
International Journal on Document Analysis and Recognition manuscript No. (will be inserted by the editor)

A computer-based system to support forensic studies on handwritten documents

Katrin Franke, Mario Köppen

Fraunhofer Institute for Production Systems and Design Technology, Department Pattern Recognition, Pascalstr. 8-9, 10587 Berlin, Germany. E-mail: {katrin.franke, mario.koeppen}@ipk.fhg.de

Received: December 4, 2000 / Revised version: date

Abstract. Computer-based forensic handwriting analysis requires sophisticated methods for the pre-processing of digitized paper documents, in order to provide high-quality digitized handwriting that represents the original handwritten product as faithfully as possible. Since a huge number of different document types has to be processed, neither a standardized queue of processing stages, fixed parameter sets nor fixed image operations qualify for such pre-processing methods. We therefore present an open layered framework that covers adaptation abilities at the parameter, operator and algorithm level. Moreover, an embedded module, which uses genetic programming, can generate specific filters for background removal on the fly. The framework is realized as an assistance system for forensic handwriting experts and has been in use by the Bundeskriminalamt, the federal police bureau in Germany, for one and a half years. In the following, the layered framework is presented, fundamental document-independent filters for the removal of textured and homogenous backgrounds and for foreground removal are described, and aspects of the implementation are discussed. Also, results of the framework application are given.

1 Introduction

Handwriting examination or identification is frequently used in crime investigation and in the prosecution and sentencing of criminal offenders. Important fields of interest are, for example, financial fraud, extortion, threat, terrorism, drug-related crime, pornography and racism. Aims, methods and techniques of forensic handwriting examination differ fundamentally from those of graphology. A forensic handwriting expert compares handwriting on the basis of well-defined sets of characteristics and does not make any relation between handwriting characteristics and personality traits.

Forensic handwriting examination has become an established part of forensic science. As in other related fields of forensic science, like fingerprint identification or the examination of toolmarks and bullets, handwriting examination is primarily based upon the knowledge and experience of the forensic expert. Although much effort has been put into the training of the examiners, the methodology of forensic handwriting analysis is being criticized, in particular for its lack of proven validation. Because of non-objective measurements and non-reproducible decisions, attempts have been made to support the traditional methods, like visual inspection and expert rating, with computerized semi-automatic and interactive systems. Two systems operating in forensic labs for such a purpose are the FISH-system [21] and the SCRIPT-system [4]. Although there is some promising research in this area [7], [8], [24], it is rarely applied in forensic labs.

Fig. 1. Poor separation of handwriting from textured background: (a) original signature image, (b) signature image after processing

Computer-supported analysis of handwriting, using methods of digital image processing and pattern recognition, requires sophisticated pre-processing methods to extract the handwriting specimens from documents, in particular to remove the document background as well as other imprints that are not the subject of the handwriting examination. Unfortunately, the systems used in forensic labs [21] [4] could not fulfill this demand properly (figure 1 and figure 2). On request of the handwriting examination group of the Bundeskriminalamt (the German federal police bureau), an open layered framework was developed to process handwritten documents.

The key idea of the model is to provide distinguished levels of user interaction during the configuration of the system for document processing, ranging from a simple parameter adjustment up to the initialization of an automated filter generation. In contrast to known approaches that focus on parallel processing [1], the layers of the proposed model represent abstraction levels of potential user interaction. Of course, the overall processing of a certain document is carried out automatically.

Fig. 2. Poor threshold binarization: (a) original signature image, (b) signature image after processing

In the following section, the layered model of the proposed framework is motivated and explained as a whole. Then, section 3 focuses on the single methods of handwriting pre-processing that are applied within the framework. Methods that have not been published yet are presented in detail. Section 4 provides some facts about the implementation. Results of the framework's application are stated in section 5, followed by conclusions and further studies in section 6.

2 System overview

Within this section, the developed framework is presented from various perspectives. At first, a brief description of the major elements of the user interface is given, so that the reader becomes familiar with the available tools. The second point of view comprises the on- and off-line phases of the framework application, whereas the third view focuses on the abstract architecture of the framework to explain the user interaction abilities.

Before this, the framework approach will be motivated.

2.1 State at the project beginning and requirements

At the beginning of our project in 1997, no suitable solution was available. The systems [21] [4] employed at this time were not able to provide high-quality images of digitized handwriting. Major drawbacks were only partially eliminated noise signals, cases where a correct separation of handwriting from textured background was impossible (figure 1) and cases where parts of the handwriting were lost (figure 2), in particular those with low contrast to the background. These problems revealed the need for specific methods that deal with handwritten documents and take the typicality of handwriting into account. Such methods should aim at a true representation of the original handwriting, as it is required for forensic examinations. (Technical factors, however, limit the scope of true handwriting representation. Handwritten documents are scanned with 200-600 dpi, depending on the examination goals as well as performance goals of the computer-based processing.) Probably a main reason for the mentioned problems is that, at the time when the employed systems [21] [4] were developed, the technical premises were not yet available. (In the case of the FISH-system [21] the development started in 1975.) Today, however, this is no longer the case. The time has come to consider the further development of traditional systems, reflecting on new technical possibilities.

Fig. 3. A tool for specifying processing operators and their parameters [figure: SIC Tool dialog with a list of main and auxiliary functions, the mode of handwriting representation after the processing, the mode of background removal including parameters, the mode of foreground removal, and a stack for clamping the SIC Tool settings during pattern-document definition]

Hence, the Bundeskriminalamt (the German central police bureau) initiated a research and development project. The aim was to design and realize a new framework for the elimination of noise signals from digitized handwritten documents that are subject to forensic examination. Moreover, the new framework should be able to be integrated into an existing system environment like the FISH-system [21], and it should be usable as a stand-alone application as well. The practical relevance of the project can be pointed out with some facts. The FISH-system [21] was conceptualized to handle 10,000 investigation cases per year. In the worst case, this means 10,000 different types of document; there are no restrictions on the document types, and the only thing they have in common is handwriting on them. 77,000 documents were stored in the FISH-system [21] until 1995, and this number increases every day. The scientific challenge is caused by this huge variety of documents, which can be assumed to be practically unbounded. At the same time, high-quality images of handwriting must be provided even if there are low-contrast strokes and/or textured background.

2.2 System architecture

The description of the user interface is limited to those elements that enable the adaptation of the operators and parameters and that support the archiving of control sequences in so-called pattern documents as well as of processing protocols, to match the required proof of processing and validation.

Fig. 4. Tool for using HTML to store process protocols [figure: SIC Html dialog with controls to process and save the processing protocol, save/load of pattern documents, document ID, worker's name and date, name and path of the index file and the image folder, further comments (e.g. case ID), a stack for clamping the SIC Html settings, and the generated process protocol]

The SIC Tool is the central element of the user interface. It supports the specification of the operators and numerical parameters for document processing. From the chosen operators, the resulting processing algorithm is derived automatically. Within the off-line phase of the framework application (compare section 2.3.1), the SIC Tool is applied for the manual definition of pattern documents as well as for the interactive processing of questioned documents without a specific pattern document. If additional structural parameters are used, like a mask image that defines various regions of interest within the questioned document, the SIC Tool settings are applied sequentially and clamped to each region by simple drag-and-drop. In such a case, the management of this pile of various settings is carried out by the designed document class; for the SIC Tool it is not relevant. As can be seen in figure 3, the SIC Tool enables the selection of various operators for back- and foreground removal. Among them are operators for document-independent as well as document-dependent processing (compare section 3). The document-specific processing (also section 3) is supported by the aforementioned mask images as well as by the extension of the function list at the top of the SIC Tool (see section 3.3).

Either the created pattern document or the processing protocol of an interactive processing can be stored by employing the SIC Html tool (figure 4). The SIC Html tool uses the hypertext markup language (HTML) to relate the ASCII data that specify the operators and the numerical parameters to images, like the questioned document image or the mask image. Moreover, the HTML standard was chosen so that protocol files and pattern documents can be displayed and exchanged very easily with a usual web browser; a proprietary editor is really not state of the art. Also, the HTML files might be provided via the intranet within a lab, e.g. for validation purposes or for expert ratings. The SIC Html tool also supports the inclusion of administrative data, like the case number, the name of the expert and the date of the file creation. This information is important to meet the requirements of quality management in forensic handwriting examination.

For fundamental digital image processing, like the definition of the mask images, common tools should be used, too.

Fig. 5. Specific methods, taking into account the typicality of handwriting, realized as a Plug-In for Adobe™ Photoshop™

Within the forensic labs, Adobe™ Photoshop™ is widely in use. So, some routines for inter-application communication support the data exchange between Photoshop™ and the presented framework. Moreover, a Plug-In was developed to extend Photoshop™ with sophisticated methods for handwriting pre-processing (figure 5). Hence, the forensic experts can experiment with the provided document-independent operators, e.g. for parameter adaptations, or the Plug-In can be employed in small forensic labs that are not able to install the whole system.

2.3 Framework for document processing

From the authors' point of view, the realization of fully document-independent handwriting processing will stay a scientific challenge for a while. Although there is a universal processing flow, which comprises document-layout recognition and document segmentation, handwriting segmentation and handwriting quality enhancement, a standardized queue of processing stages cannot be used as a framework concept. With respect to the expected number of different document types being subject to forensic analysis, and all their inhomogeneous structures and characteristics, the classical serial processing has to be replaced by an iterative and parallel one. Hence, the processing operations have to be selected by considering the prevailing document and the specific processing target. Moreover, an iterative and parallel processing concept allows intermediate data to be taken into account to improve the final processing operations. It is understood that fixed parameter sets for the processing are not useful either. And even the number of different image operations should be extendable, because there are many approaches promising good processing results [6] [17] [18] [3] [28] [27].

Fig. 6. Flow chart of document processing [figure: a digitized document either leads to the definition of a pattern document (stored in a database of pattern documents), is processed by using a pattern document (document-specific, document-dependent), or is processed without a priori information (document-independent); the output is the processed document plus a process protocol]

The best solution seems to be an open layered framework concept to realize an assistance system for forensic handwriting analysis. It has to comprise a functional kernel providing basic processing operations for the most frequent document types as well as opportunities to adapt or extend the functionality by user interaction. The open layered architecture of the framework that was developed by the authors can be seen in figure 7. Each layer stands for an abstraction level of procedures working on the documents. Also, the layers support the understanding of how the framework operates and how user interaction might adapt or extend the functionality to specific demands.

2.3.1 The application of the framework

The framework can be applied in two separate cycles (see figure 6); once a pattern document is prepared, it can be used for automatic control later on.

The so-called on-line phase works as follows: From a collection of pre-defined pattern documents, a human user selects the suitable one, which includes all the information needed to process the questioned document(s) correctly. The information stored in the pattern document includes the structural and numerical parameters, the involved image operators as well as the processing flow. So, the on-line phase operates autonomously. Documents with their corresponding pattern documents might be processed on a remote machine or as a batch job overnight.

Fig. 7. Layer model of the framework for document processing [figure: layers for image and file handling, parameter specification, operator adaptation, algorithm specification and layout recognition; user interaction spans process documents and reference documents, processed documents, process protocols and pattern documents, basic image operations with default parameter sets, and adapted image operations, specific parameter sets, adapted processing and newly generated algorithms]

In the off-line phase, the mentioned pattern documents can be generated, and small numbers of documents can be processed interactively. In either case, a protocol is provided in the same style as a usual pattern document. Moreover, in the off-line phase the user can also generate new image operators, which are adapted to special demands that were not predictable during the framework development. A detailed description of all the layers supporting user interaction in the off-line phase follows.

2.3.2 The layered model architecture

The presented framework, whose block diagram is shown in figure 7, is made of the IFH (image and file handling), the PS (parameter specification), the OA (operator adaptation) and the AS (algorithm specification) layers. With respect to the huge number of different documents, the LR (layout recognition) layer does not work autonomously; here, the user's assistance is still required.

The IFH layer for image and file handling: Dealing with digitized documents requires managing them. Users want to scan, load, view, browse, save and process document images. A huge database is not part of the presented framework. Depending on the application, it is readily available, as in the case of the FISH-system, or it can be added as an extension. The mentioned pattern documents as well as the protocol files are stored as hypertext markup language (HTML) files. Of course, these HTML files are generated automatically. The advantage of using them is the fixed relation of the document images to all other parameters; moreover, the files can be distributed and displayed easily.

The PS layer for parameter specification: Assigning different image operators to various regions of interest within a document is common practice. Within the PS layer, these structural parameters (see section 3.3.1), also called mask images, can be defined interactively. Then, the specific parameter sets, the operators and the algorithms have to be assigned to each region. There are also some adjustable numerical parameters, e.g. the maximal noise ratio.
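As a minimal sketch of this mechanism (our own numpy illustration, not the framework's actual API; the helper name and calling convention are assumptions), a label mask can clamp one operator and parameter set to each region of interest:

    import numpy as np

    # Hypothetical helper: operators maps a region label in the mask image
    # to a callable(image) -> image; each operator is clamped to its region.
    def apply_mask_regions(img, mask, operators):
        out = img.copy()
        for region_label, op in operators.items():
            region = (mask == region_label)
            out[region] = op(img)[region]   # result restricted to the region
        return out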

The OA layer for operator adaptation: To extend the functional kernel, a module for the automated generation of image processing operators is embedded within the framework. This LUCIFER-module [15] [10] (see figure 18 and section 3.3) uses genetic programming to design texture filters, which can be integrated into the document processing for background filtering. The user only has to provide a so-called goal image as well as a small part of the original image from which the handwriting cannot be extracted using the other background filters. The goal image is made by a human, by tracing the foreground information, in particular the handwriting. In this way, the goal image can verify the filters generated by the LUCIFER-module. A detailed description is given in section 3.3.

The AS layer for algorithm specification: The open architecture of the framework allows the use of various approaches to document processing. Unfortunately, there is no truly intelligent layout recognition yet that would be able to process this huge variety of documents. Therefore, a human user has to do this job once in advance. The image operators implemented in the framework are chosen in such a way that users are, hopefully, not overwhelmed. They can switch on/off single operations like:

– Homogenous background + grayvalue output image
– Textured background + lines + binary output image
– Generated filter XYZ + machine-print + noise + lines + binary output image
– Empty reference dropout (reference ABC) + noise + grayvalue output image

Even if the framework is extended, this procedure will hold. The open layered architecture of the proposed framework is able to accommodate upcoming operators, too. Within each layer of the framework, new approaches might be included, and the users of the framework might interact with them. The algorithms of document processing that are included in the framework up to now are presented in section 3.

3 Methods for document pre-processing

The operators for document processing can be grouped into algorithms for document-layout analysis, like the automatic location of text blocks; algorithms for background texture and/or image cleaning, such as histogram- and threshold-based techniques; algorithms for foreground cleaning, such as guideline and pre-printed data removal; and algorithms for handwriting reconstruction and enhancement. There are different strategies for the cleaning of background and foreground. Some approaches use separate processing [5] [18] [6] [2], others use combined cleaning operators for fore- and background [20] [17].

Although this grouping is fundamental, the authors want to focus on a further aspect. As mentioned before, it is necessary to handle various distinct document types, and therefore differing amounts of a-priori knowledge about the characteristics of a document type are to be expected. Hence, the operators for document processing must be able to handle this varying a-priori knowledge. Not considering given information and not adapting operators and parameters may lead to a document quality that is just moderate; expecting too much information could mean that some documents cannot be processed at all. The authors propose a grouping of the processing operators that is derived from the given a-priori knowledge of a document type.

In general, the document types being examined in the daily forensic casework can be sorted into three groups. Document types that are examined very often represent group one. For this group it should be easy to get an empty reference document, e.g. a driver's license, a passport or a standardized bank check. The second document group includes documents that occur more or less often, such as a registration paper of a certain hotel or an insurance contract. In some cases it might be difficult to obtain an empty reference document; however, it would be worthwhile to create a pattern document, because the document type could be examined repeatedly. The third document group covers all those document types that appear only a few times and/or for which there is no way to get an empty reference, e.g. an aged testament or a piece of paper that was washed. From these three document groups the required operations were derived.

Fig. 8. Types of document processing [figure: flexibility of operators versus link to a reference; document-independent processing is characterized only by the kind of back- and foreground (i.e. homogenous, textured, lines, stamps), document-specific processing is more specified by parameters (i.e. using mask images), and document-dependent processing is linked to an empty reference]

The processing operators and the derived generic types of document processing differ in their consideration of a-priori knowledge (e.g. empty reference, layout information and/or structural and numeric parameters). In the following, the types of document processing are listed in increasing order of the strength of their connection to a reference:

– document-independent (without a-priori knowledge)
– document-specific (by using adapted parameters)
– document-dependent (by using an empty reference)

The required processing quality of the handwritten documents and the flexibility of the whole framework are opposing targets (see figure 8). Therefore, the operators to be included in the framework have to be selected carefully. Also, there can be diverse operators available, which may be chosen alternatively. The processing operators chosen by the authors cover local and global image filters, textural and structural as well as syntactical and layout-driven approaches. More specialized, document-independent operators were provided for the removal of homogeneous and textured backgrounds and for the foreground removal of lines, machine-prints and noise. The document-specific processing is realized by using special parameter sets as an additional input for the aforementioned document-independent operators. Moreover, an embedded module (see figure 18) allows the automated generation of document-specific background filters. The document-dependent processing is realized by using the extended approach of morphological subtraction [20] [10].

3.1 Document-dependent processing

Usually, for the removal of nearly homogenous backgrounds, different kinds of histogram and threshold techniques are used. In contrast, it was pointed out by Okada and Shridhar [20] that enhanced inter-image subtraction, the so-called morphological subtraction, provides acceptable cleaning results for textured and/or image check backgrounds. For document-dependent processing, the authors provide a method for background and foreground cleaning [10] that was derived from the morphological subtraction. In contrast to the fundamental approach, a blank reference document is filtered only once, in the off-line phase, and is then used for grayvalue comparison during the background cleaning process [10]. For the comparison, a segmented 2D-histogram is used for looking up. Hence, it is possible to provide qualitatively comparable processing results with a smaller amount of computational time. Concomitant with the introduction of 2D-histogram segmentation, which will be referred to as 2D-Lookup (for further details see the appendix), the basic question is about the origins of the two images that are used for the look-up. So far, they have been the treated, empty reference document and the filled-in document, usually untreated (figure 9).

Fig. 9. Application of the 2D-Lookup for document-dependent handwriting segmentation [figure: a blank reference document, its treated, empty version and filled-in documents index 2D-histograms over the grayvalue range 0-255; the segmented 2D-histogram yields the corresponding cleaned check image]
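To make the comparison step concrete, the following minimal sketch (our numpy reading, not the authors' ANSI C kernel; the function names, the segmentation rule and the threshold are illustrative assumptions) builds the 2D-histogram of an aligned reference/filled-in image pair and derives a binary lookup matrix from it:

    import numpy as np

    # Count grayvalue pairs of two aligned 8-bit images and segment the
    # 2D-histogram into a binary 2D-Lookup matrix; sparsely populated pairs
    # are assumed to stem from handwriting (hypothetical rule).
    def segmented_2d_histogram(ref, filled, thresh=8):
        hist = np.zeros((256, 256), dtype=np.int64)
        np.add.at(hist, (ref.ravel(), filled.ravel()), 1)
        return np.where(hist < thresh, 0, 255).astype(np.uint8)

    def lookup_2d(img1, img2, matrix):
        return matrix[img1, img2]   # per-pixel table lookup (see the appendix)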

3.2 Document-independent processing

3.2.1 Textured background removal

The processing of textured documents was studied intensively in the past [11] [10] [9]. Textured background removal on grayvalue images is a challenge: handwriting cannot be separated by global grayvalue intensities or empty-reference subtractions [10]. Only the consideration of local structures and local intensities, by applying adapted morphological operators, provided acceptable results (compare figure 10). The first implementation of this filter was done empirically by the authors; later it was confirmed by the LUCIFER-module [10] (section 3.3). Also, it was shown [10] that the 2D-Lookup can be derived as an abstraction of the former inter-image subtraction, which gives a more flexible approach to inter-image filtering. This is caused by the fact that a blank check image is not mandatory for the graylevel lookup. It is also possible to use two differently filtered images of one filled-in check image.

In this manner, the difficult procedure of positional adjustment between the filled-in and the blank document image can be avoided. The fundamental problem for a practical and efficient usage of the 2D-Lookup is the selection of filter operations that provide significant differences between background and user-entered information. In an intermediate step, the 2D-histogram has to be segmented. The result of this histogram analysis depends on both the freely chosen image operators and the quality of the labeling in the histogram. The application of the 2D-Lookup, as it is employed for textured background removal of various types of textures, like Eurocheck and American Express Traveler Check, is shown in figure 10. It has to be considered that in this application case of the 2D-Lookup only such image elements can be eliminated as are included in the labeled segment of the 2D-histogram. Further image elements, like guidelines, that show characteristics comparable to the handwriting (e.g. the same line thickness) and that are therefore not influenced by the filtering, have to be eliminated by an additional foreground cleaning.

For any further requests, like the background removal of a new passport, applying the LUCIFER-module (compare section 3.3) to determine a suitable filter operation seems to be more convenient.

Fig. 11. Local contrast enhancement: (a) original signature image, (b) signature image after processing

3.2.2 Homogenous background removal

In the following, methods for the elimination of homogenous background are described in detail. Independent of the writing material (e.g. kind of ink, kind of paper and paper color) and the writing pad, these methods deliver a binary image of high quality without neglecting relevant parts of the writing product. It is expected that the background has no texture and that there is a small contrast difference between handwriting and background. The procedure includes two major steps, which were optimized for the special case of handwriting.

Local contrast enhancement: Local image operators, along with other operators of mathematical morphology, are used for the purpose of contrast enhancement. In this context, "local" means that the parameter assessment of the image operators is performed in the neighborhood of every pixel. This is done by a mask-like structuring element, which is positioned at every image position (it slides over the image). The parameter choice at every position depends on a computation over the other image points underneath the mask. This procedure allows a better representation of local disturbances of the intensity distribution. Moreover, it reduces the possibility of errors resulting from global image properties, such as a dark background causing the loss of low-contrast strokes.

The size of the structuring element should not exceed the average stroke width of the writing trace. Previous research [11] established that the stroke width of handwriting in a document, for a broad range of writing devices such as ballpoint pen, fountain pen or roller pen, digitized with a resolution of 300 dpi, is about seven to ten pixels. Due to this fact, five pixels are chosen for the structuring element of the contrast enhancement.
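A minimal sketch of such a locally adaptive enhancement (our assumption of a plain local min/max stretch; the authors' actual operator is described in [9] and may differ) could look as follows:

    import numpy as np
    from scipy.ndimage import grey_dilation, grey_erosion

    # Stretch every pixel against the local extrema under a 5-pixel mask, so
    # low-contrast strokes survive even on a globally dark background.
    def local_contrast_enhance(img, size=5):
        img = img.astype(np.float64)
        lo = grey_erosion(img, size=(size, size))    # local minimum (morphology)
        hi = grey_dilation(img, size=(size, size))   # local maximum (morphology)
        stretched = (img - lo) / np.maximum(hi - lo, 1.0)
        return (255 * stretched).astype(np.uint8)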

Fig. 12. Functional principle of the locally adaptive binarization [14] [figure: a stroke of width w crossing position (x, y), surrounded by the neighborhood points P0-P7 and their opposite counterparts P'i]

Fig. 10. Application of the 2D-Lookup for document-independent textured background cleaning [figure: the original signature image and a morphologically filtered signature image index a 2D-histogram (axes: grayvalues of the original image versus grayvalues of the morphologically filtered image, each 0-255); the lookup yields the signature image without background texture and, after foreground cleaning, the fully cleaned signature image]

Figure 11 shows the result of the application of contrast enhancement and grayvalue normalization to handwriting. Here, even fine, small details are preserved in the resulting image. Hence, by applying local-adaptive contrast enhancement, the inclusion of fine details is ensured. A detailed description of the operator is given elsewhere [9].

Local adaptive binarization: The subsequent binarization step should not result in broad lines or unexpected alterations to the fine lines of the original image. For this reason, a direction-oriented binarization, which considers image content as well as grayvalue contrast, is advised. For the special case of handwriting, the slant and the width of every stroke are also considered. This is performed by local image operators with a structuring-element size of seven pixels.

$$F_b(x,y) = \begin{cases} 1 & \text{if } \bigvee_{i=0}^{3}\left[\,L(P_i) \wedge L(P'_i) \wedge L(P_{i+1}) \wedge L(P'_{i+1})\,\right] \text{ is true} \\ 0 & \text{otherwise} \end{cases} \qquad (1)$$

with

$w$: pre-defined maximal stroke width
$P'_i = P_{(i+4) \bmod 8}$, for $i = 0,\dots,7$
$L(P) = \mathrm{ave}(P) - g(x,y) > T$
$T$: pre-defined threshold
$(P_x, P_y)$: coordinates of point $P$
$g(P_x, P_y)$: grayvalue at point $P$

This fundamental binarization process (compare figure 12 and equation 1) was introduced by Kamel and Zhao in 1993 [14] and further extended by replacing the average filtering with a Gaussian filtering [19].
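A runnable sketch of equation (1) in the spirit of this logical level technique (our numpy rendering; the averaging window, the threshold T, the stroke width w and the wrap-around border handling via np.roll are illustrative assumptions) could be:

    import numpy as np
    from scipy.ndimage import uniform_filter

    # P0..P7: circular unit directions (dy, dx) around the center pixel.
    OFFSETS = [(-1,0),(-1,1),(0,1),(1,1),(1,0),(1,-1),(0,-1),(-1,-1)]

    def binarize_logical_level(img, w=7, T=20):
        img = img.astype(np.float64)
        ave = uniform_filter(img, size=3)            # ave(P): local average
        # L(P_i): the average at the point w pixels away in direction i
        # exceeds the center grayvalue by more than T (dark stroke).
        L = [np.roll(ave, shift=(-dy * w, -dx * w), axis=(0, 1)) - img > T
             for dy, dx in OFFSETS]
        out = np.zeros(img.shape, dtype=bool)
        for i in range(4):                           # opposite pairs, eq. (1)
            out |= L[i] & L[(i + 4) % 8] & L[i + 1] & L[(i + 5) % 8]
        return out.astype(np.uint8)                  # 1 = stroke, 0 = background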

3.2.3 Reconstruction and noise removal

A binarized image might contain distortions resulting from the poor quality of the document or from the scanning process. The evaluation of a so-called seed image, followed by a reconstruction of the binary image, solves this problem. Here, a morphological reconstruction operation is employed [23]. The seed image is obtained by clearing all obscure image contents, such as strokes with a width below a given threshold. Figure 13 shows the seed image of a handwritten signature. In the reconstruction step, all image segments that do not contain at least one pixel of the seed image are cleared. The result is a binary image of high quality (figure 13). After these image operations, other algorithms are used to remove imprints like lines and machine printing. A further, smoother noise filtering is realized by the well-known analysis of connected components. The corresponding parameter controls the size of the image elements to be removed.
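A minimal sketch of this step (assumed scipy/skimage re-implementation, not the paper's kernel; deriving the seed by a plain erosion instead of the authors' stroke-width criterion is our simplification):

    import numpy as np
    from scipy.ndimage import binary_erosion
    from skimage.morphology import reconstruction, remove_small_objects

    def reconstruct_and_denoise(binary, min_size=20):
        seed = binary_erosion(binary, iterations=1)     # drop thin/obscure parts
        restored = reconstruction(seed.astype(np.uint8),
                                  binary.astype(np.uint8),
                                  method='dilation').astype(bool)
        # connected-component noise filter: remove tiny surviving segments
        return remove_small_objects(restored, min_size=min_size)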

3.2.4 Stamp and imprint removal

The removal of stamps and other machine imprints employs an extended connected-component analysis. Besides the segment sizes, the frequency of occurrence of uniform segments in a row is also considered. This approach works quite well (figure 14). However, problems arise if the stamp segments are connected to the handwriting. Here, further research and development is required.
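One plausible reading of this extended analysis (entirely our guess at the criterion; the run length, the size tolerance and the row test are hypothetical) is to remove components that sit on one line together with several similarly sized peers, as machine print does:

    import numpy as np
    from scipy.ndimage import label, find_objects

    def remove_uniform_runs(binary, min_run=4, size_tol=0.25):
        labeled, n = label(binary)
        boxes = find_objects(labeled)
        heights = [b[0].stop - b[0].start for b in boxes]
        centers = [(b[0].start + b[0].stop) / 2.0 for b in boxes]
        out = binary.copy()
        for i in range(n):
            # peers: components on roughly the same text line with similar
            # height (the count includes the component itself)
            peers = sum(1 for j in range(n)
                        if abs(centers[j] - centers[i]) < heights[i]
                        and abs(heights[j] - heights[i]) <= size_tol * heights[i])
            if peers >= min_run:        # a row of uniform segments: imprint
                out[labeled == i + 1] = 0
        return out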


Fig. 13. Reconstruction and noise removal: (a) seed image derived from the original image in figure 11, (b) final, high-quality binary image of the signature

Fig. 14. Removal of stamps and other imprints: (a) original signature image with overlapping stamp, (b) signature image after stamp removal

Fig. 15. Line reconstruction by using morphological filtering (length of the structuring element = 5 pixels; angle = π/4): (a) binarized image, (b) stroke image after line removal, (c) reconstructed stroke, (d) result image

3.2.5 Line removal and stroke reconstruction

There are two kinds of operators for horizontal and/or vertical line removal. The first one is applied to grayvalue images of handwritten documents. This approach is inspired by the Otsu method. Here, the operator acts like a background filter, by simply subtracting the average pixel value per image row/column from the pixel value at the actual image position. Afterwards, the filter for homogenous background removal is applied to the rest. This approach works quite well for lined and squared paper, but the results for guidelines are unsatisfying.
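A minimal sketch of this first operator (our numpy illustration of the row/column mean subtraction; the renormalization is an assumption):

    import numpy as np

    # Subtract the per-row (axis=1) or per-column (axis=0) average grayvalue,
    # which suppresses horizontal resp. vertical ruling on lined/squared paper.
    def remove_ruling(img, axis=1):
        img = img.astype(np.float64)
        flattened = img - img.mean(axis=axis, keepdims=True)
        flattened -= flattened.min()              # shift back to >= 0
        return (255 * flattened / max(flattened.max(), 1e-6)).astype(np.uint8)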

Fig. 16. Proposed line reconstruction: (a) binarized image, (b) searching for line candidates, (c) detected line, (d) line removal with concomitant stroke marking, (e) reconstructed line, (f) result image

The second, more sophisticated algorithm is based on a structural analysis of the elements of the binary image. Most effort was spent on the reconstruction of the handwritten strokes during line removal.

Fig. 17. Stroke marking: detail of figure 16 [figure: a handwritten stroke crosses a deleted line; the broken stroke is marked at the crossing points 1-4, which are then grouped for the stroke reconstruction]

Contrary to the often-stated assumption that handwritten strokes may be reconstructed by applying morphological filtering, figure 15 shows that in the case of a small cutting angle and a thick guideline the reconstruction fails. No matter whether binary or grayvalue morphology is used, if the cutting angle is smaller than the angle of the structuring element (also called dynamic kernels [27]), the upper and lower stroke cannot be connected. Furthermore, this procedure leads to distortions in the case of connected strokes.

The proposed method for stroke reconstruction combines handwritten stroke tracing and line removal in one step (figure 16). After a line has been detected within the image, it is removed carefully by checking the local neighborhood of the line for crossing strokes. All upper and lower crossing points are marked (figure 17). Then, the marks are grouped and connected by straight segments. A following look-up with the original image excludes stroke distortions. The results are encouraging; however, in the case of handwritten strokes that touch the line, the grouping works just moderately.

Fig. 18. Embedded LUCIFER-module for background filter generation [figure: a user-supplied original image and goal image feed the filter generator, whose filter settings are encoded as a bit string for the evolutionary search]

3.3 Document-specific processing

The document-specific pre-processing is realized in two manners. The first one uses the available operators: depending on the user's selection, specific operators with their corresponding parameters are clamped to the pre-defined regions of interest. The second approach supports the generation of new, adapted background filters that take the characteristics of a certain document type into account.

3.3.1 Application of mask images

To realize an efficient document processing, the application of so-called mask images is quite common. Such a mask image is employed to define regions of interest, like a number field or an alphanumeric field, e.g. to support optical character recognition tools. This approach is also useful for the separation of handwriting. A document may comprise various regions with quite different background textures, as in the case of a passport. A specific mask image then helps to apply the correct filter operation to each region and to provide sophisticated cleaning results at the end.

3.3.2 LUCIFER II framework

For solving the problem of determining a suitable filter operation that distinguishes between document background and user-entered information, the LUCIFER II framework for filter design is used. LUCIFER II was developed for textural filter design in the context of surface inspection, and it uses evolutionary algorithms for its adaptation [15]. The feasibility of using the LUCIFER II framework for the design of specific background filters is presented in the following.

General overview: The framework is composed of the (user-supplied) original image, the filter generator, the filter output images 1 and 2, the result image, the (user-supplied) goal image, the 2D-Lookup matrix, the comparing unit and the filter design signal.

An evolutionary algorithm maintains a population of individuals, each of which specifies a setting of the framework. By applying the resulting 2D-Lookup and measuring the quality of coincidence between goal and result image, a fitness value can be assigned to each individual. These fitness measures are used for the standard genetic operations of an evolutionary algorithm. The 2D-Lookup algorithm, the fitness measure, the node and terminal functions of the individuals' expression trees and the setting of the 2D-Lookup matrix are briefly described in the next subsections. A more comprehensive introduction to the framework is given elsewhere [15].

Fitness function: A fitness measure is given by the degree of coincidence of the goal image and the result image, both of which are binary images. The fitness measure, which is computed by the comparing unit, is a weighted sum of three single measures: the quota of white pixels in the result image that are also white in the goal image (whitegoalok), the quota of black pixels in the result image that are also black in the goal image (blackgoalok), and the quota of black pixels of the goal image that are also black in the result image (blackresok). Note that blackgoalok and blackresok are different. The multiple objective here is to increase these measures simultaneously. After performing some experiments with the framework, it was decided to use the following weighted sum for these three objectives:

$$f = 0.1\,\mathit{blackresok} + 0.5\,\mathit{whitegoalok} + 0.4\,\mathit{blackgoalok}.$$

This fitness function was designed in order to direct the genetic search according to the schemata theorem, by which higher-weighted objectives are fulfilled first.
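The three quotas translate directly into code; the following sketch is our numpy reading of the definitions above (result and goal as boolean arrays, True = white):

    import numpy as np

    def fitness(result, goal):
        # quota of white result pixels that are also white in the goal
        white_goal_ok = (result & goal).sum() / max(result.sum(), 1)
        black_result, black_goal = ~result, ~goal
        # quota of black result pixels that are also black in the goal
        black_goal_ok = (black_result & black_goal).sum() / max(black_result.sum(), 1)
        # quota of black goal pixels that are also black in the result
        black_res_ok = (black_goal & black_result).sum() / max(black_goal.sum(), 1)
        return 0.1 * black_res_ok + 0.5 * white_goal_ok + 0.4 * black_goal_ok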

Fig. 19. Sneak preview of the major user interface

Genetic design of filter operations: One essential part of the framework is the pair of image processing operations. In order to generate them, genetic programming is used [16]. Genetic programming maintains a population of expression trees (shortly: trees); by its structure, every tree equals a program. For the design of the trees, the following operators were used as terminal functions: Move, Convolution, Ordered Weighted Averaging (OWA) [26], fuzzy integral [12] [25] and texture numbers. The node functions were taken from the set of operations minus, minimum, maximum, square and evaluate. In all tests, a maximum number of 50 generations was used. For details, and also for the relaxation-based technique for the setting of the 2D-Lookup matrix, consider [15].
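To illustrate the expression-tree representation (a hedged sketch: only the generic node functions named above are modeled, and the terminal operators are abstracted as an image-producing leaf; the encoding is our assumption, not LUCIFER's):

    import numpy as np

    # An individual is a nested tuple; evaluation maps it to an image.
    def eval_tree(node, img):
        op, *children = node
        if op == 'input':                      # terminal: the original image
            return img.astype(np.float64)
        args = [eval_tree(c, img) for c in children]
        if op == 'minus':   return args[0] - args[1]
        if op == 'minimum': return np.minimum(args[0], args[1])
        if op == 'maximum': return np.maximum(args[0], args[1])
        if op == 'square':  return args[0] ** 2
        raise ValueError('unknown node function: %s' % op)

    # example individual encoding max(I, I - I^2) for an input image I
    tree = ('maximum', ('input',), ('minus', ('input',), ('square', ('input',))))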

4 Implementation

The functional framework kernel, which includes all modules for digital image processing, was implemented as a library written in ANSI C. Up to now, it runs under the UNIX (Solaris 2.5.1 and SunOS 4.3), Windows NT 4.0, Windows 2000 and OS/2 Warp 4.0 operating systems. Due to a platform- and operating-system-independent software design, it can also be used on a Power Macintosh under MacOS 8.0. The major part of the graphical user interface (see section 2.3 and figure 19) was implemented in Smalltalk using ParcPlace VisualWorks with C-Connect, running on various platforms, too. Besides the kernel function calls and the inter-application communication, the management of process and mask images and the handling of the HTML protocol files are covered by the major interface. Due to the software design, with its strict separation of functional kernel and user interface, further applications adapted to special demands of the daily forensic casework were implemented. The first one allows the processing of huge amounts of documents and is realized as a simple console application running on a remote machine. For small forensic labs with limited resources and small amounts of documents to be processed, an Adobe™ Photoshop™ Plug-In was implemented (compare section 2.2). With respect to the programmer interface of Photoshop™, the Plug-In covers only the document-independent processing functions for homogenous and textured backgrounds as well as for line, imprint and noise removal. Beyond complete applications, the functional kernel was distributed as a software library with a programmer interface and is now operating in automated check processing systems.

5 Discussion and results

At the launch of the research project, 212 different document types were provided by the Bundeskriminalamt. The document types were selected from various backgrounds and cover memos, diverse bank formularies, passports, contracts, delivery notes, invoices, applications for a work permit, hotel registrations etc. The majority of documents was sized between DIN A6 and DIN A4.

A flatbed scanner supporting 256 grayvalues and a resolution of 300 dpi is used to digitize the handwritten documents. Grayvalue images were chosen with respect to the expected high-quality images and the limited hardware resources. Therefore, up to now, the proposed framework includes only grayvalue image processing operators.

The total processing time for a document differs due to varying conditions. Average processing times for some samples, processed on a Pentium II with 266 MHz and 196 MByte RAM, are listed in table 1. Note that currently the homogeneous background removal is more sophisticated and able to keep low-contrast strokes, whereas the textured background removal extracts handwriting from textured background with comparably lesser quality. That is why the computational effort for homogenous background removal is higher.

Table 1. Processing times for background removal of various documents with 256 graylevels.

    document       size                  time
    Homogenous background
    A4 page        2478 × 3469 pixels    40.959 sec
    bank cheque    1771 × 1006 pixels     8.592 sec
    snippet         672 × 227 pixels      0.640 sec
    Textured background
    A4 page        2478 × 3469 pixels    17.085 sec
    bank cheque    1771 × 1006 pixels     3.435 sec
    snippet         672 × 227 pixels      0.220 sec

6 Conclusion and further work

A framework for the automated pre-processing of documents for forensic handwriting analysis was presented. Fulfilling the stated requirements, this framework can be used in the daily casework of forensic experts, and it has been in use for more than two years now. The framework covers functions for the digitalization of documents, their pre-processing, in particular the extraction and qualitative improvement of handwriting, and the archiving of processing protocols and processing parameters. The open architecture of the functional kernel supports the adaptation to further demands.

The document pre-processing itself follows a new concept that considers different kinds of bindings of a-priori knowledge to the processed documents. The concept distinguishes between document-independent processing, which does not consider a-priori knowledge; document-specific processing, with a binding of parameter sets to a document; and document-dependent processing, with a strict binding of a reference to a document. To provide a wide variability of processing filters, the basic functional kernel might be extended by user-generated filters using the LUCIFER framework [15] [10], which works as an embedded module.


The current restriction to graylevel image processing, caused by limited hardware resources, seems likely to become irrelevant in the future. To improve the processing quality, color image processing [3] has to be considered in framework updates and extensions.

Acknowledgments

The authors wish to thank Yu-Nong Zhang, Xiufen Liu and Wensheng Ye for their contributions in coding and testing. Also, thanks to Gerhard Grube and all other forensic handwriting examiners who were involved in verifying the framework. This work was sponsored in part by the Bundeskriminalamt, section handwriting examination (GZ.: IX/B-2310297-Ge).

The authors also wish to thank the anonymous reviewers for their comments, which helped to improve the paper's quality.

Appendix: The 2D-Lookup Algorithm

The 2D-Lookup algorithm stems from mathematical morphology [22], [13]. It was primarily intended for the segmentation of color images. However, the algorithm can be generalized to grayvalue images as well.

To start off the 2D-Lookup algorithm, the two operation images 1 and 2, which are of equal size, need to be provided. The 2D-Lookup algorithm goes over all common positions of the two operation images. For each position, the two pixel values at this position in operation images 1 and 2 are used as indices for looking up the 2D-Lookup matrix. The matrix element found there is used as the pixel value for this position of the result image. If the matrix is bi-valued, the resultant image is a binary image.

Let $I_1$ and $I_2$ be two grayvalue images, defined by their image functions $g_1$ and $g_2$ over their common domain $P \subset \mathbb{N} \times \mathbb{N}$:

$$g_1 : P \to \{0,\dots,g_{max}\}, \qquad g_2 : P \to \{0,\dots,g_{max}\}. \qquad (2)$$

The 2D-Lookup matrix is also given as an image function $l$, but its domain is not the set of all image positions but the set of tuples of possible grayvalue pairs $\{0,\dots,g_{max}\} \times \{0,\dots,g_{max}\}$:

$$l : \{0,\dots,g_{max}\} \times \{0,\dots,g_{max}\} \to S \subseteq \{0,\dots,g_{max}\}. \qquad (3)$$

Then, the resultant image function is given by

$$r : P \to S, \qquad r(x,y) = l(g_1(x,y), g_2(x,y)). \qquad (4)$$

In standard applications, every grayvalue is coded by eight bits, resulting in a maximum grayvalue of 255. Also, the domain of the image function is a rectangle. In this case, the 2D-Lookup is performed by the following loop (the original pseudo-code, rendered here as runnable Python):

    # 2D-Lookup: the matrix entry addressed by the two grayvalues
    # becomes the pixel value of the result image.
    for x in range(img_width):
        for y in range(img_height):
            v1 = g1[x, y]          # grayvalue in operation image 1
            v2 = g2[x, y]          # grayvalue in operation image 2
            out[x, y] = l[v1, v2]  # look up the 2D-Lookup matrix

To give a simple example of the 2D-Lookup procedure, $g_{max} = 3$ is assumed in the following. Let

    g1 :  0 1 2      and   g2 :  2 3 1
          0 3 3                  2 3 2

be the two input images and the 2D-Lookup matrix be given by (rows indexed by $g_2$, columns by $g_1$):

    l :  g2\g1   0 1 2 3
         0       0 0 1 1
         1       0 1 2 2
         2       1 2 3 3
         3       2 3 3 2

Then, the resultant image is

    r :  l(0,2) l(1,3) l(2,1)   =   1 3 2
         l(0,2) l(3,3) l(3,2)       1 2 3

Since the goal image is supplied as a binary one, and in order to keep the user instruction as simple as possible, the 2D-Lookup matrix used in the LUCIFER framework contains only the binary entries black (0) and white (1).

References

1. R. A. Brooks, A robust layered control system for a mobile robot, IEEE Journal of Robotics and Automation, 2 (1986), pp. 14–23.

2. M. Cheriet, J. Said, and C. Suen, A formal model for document processing of business forms, Montreal, Canada, 1995.

3. C. Cracknell and A. Downton, A color classification approach to form dropout, in Proceedings 6th International Workshop on Frontiers in Handwriting Recognition (IWFHR), Taejon, Korea, 1998.

4. W. C. de Jong, L. N. K. van der Kooij, and D. P. Schmidt, Computer aided analysis of handwriting, the NIFO-TNO approach, in 4th European Handwriting Conference for Police and Government Handwriting Experts, 1994.

5. G. Dimauro, S. Impedovo, G. Pirlo, and A. Salzo, Automatic bankcheck processing: a new engineered system, International Journal of Pattern Recognition and Artificial Intelligence, 11 (1997), pp. 467–504.

6. S. Djeziri, F. Nouboud, and R. Plamondon, Extraction of items from checks, in Proceedings of the 4th International Conference on Document Analysis and Recognition (ICDAR), Ulm, Germany, 1997.

7. D. Doermann and A. Rosenfeld, Recovery of temporal information from static images of handwriting, International Journal of Computer Vision, 15 (1995), pp. 143–164.

8. B. Found, The Forensic Analysis of Behavioural Artefacts: Investigations of Theoretical and Analytical Approaches to Handwriting Identification, PhD thesis, LaTrobe University, Bundoora, 1997.

9. K. Franke and G. Grube, The automatic extraction of pseudodynamic information from static images of handwriting based on marked grayvalue segmentation, Journal of Forensic Document Examination, 11 (1998), pp. 17–38.

10. K. Franke and M. Köppen, Towards an universal approach to background removal in images of bankchecks, in Proceedings 6th International Workshop on Frontiers in Handwriting Recognition (IWFHR), Taejon, Korea, 1998.

11. K. Franke, M. Köppen, B. Nickolay, and S. Unger, Machbarkeits- und Konzeptstudie Automatisches System zur Unterschriftenverifikation [feasibility and concept study: automatic signature verification system], Tech. Rep., Fraunhofer IPK, Berlin, 1996.

12. S. Grossert, M. Köppen, and B. Nickolay, A new approach to fuzzy morphology based on fuzzy integral and its application in image processing, in Proceedings ICPR'96, Vienna, Austria, 1996, pp. 625–630.

13. J. Serra, ed., Image Analysis and Mathematical Morphology. Volume 2: Theoretical Advances, Academic Press, London, 1988.

14. M. Kamel and A. Zhao, Extraction of binary character/graphics images from grayscale document images, CVGIP: Graphical Models and Image Processing, 55 (1993), pp. 203–217.

15. M. Köppen, M. Teunis, and B. Nickolay, A framework for the evolutionary generation of 2D-lookup based texture filters, in Methodologies for the Conception, Design and Application of Soft Computing, Proceedings of the 5th International Conference on Soft Computing and Information/Intelligent Systems (IIZUKA'98), T. Yamakawa and G. Matsumoto, eds., 1998, pp. 965–970.

16. J. Koza, Genetic Programming: On the Programming of Computers by Means of Natural Selection, MIT Press, 1992.

17. L. Lee, M. Lizarraga, N. Gomes, and A. Koerich, A prototype for Brazilian bankcheck recognition, International Journal of Pattern Recognition and Artificial Intelligence, 11 (1997), pp. 549–570.

18. K. Liu, C. Suen, M. Cheriet, J. Said, C. Nadal, and Y. Tang, Automatic extraction of baselines and data from check images, International Journal of Pattern Recognition and Artificial Intelligence, 11 (1997), pp. 675–697.

19. L. Lohmann, Bildanalysesystem zur robusten Erkennung von Kennzeichen an Fahrzeugen [image analysis system for the robust recognition of vehicle license plates], PhD thesis, Technical University Berlin, 1999.

20. M. Okada and M. Shridhar, Extraction of user entered components from a personal bankcheck using morphological subtraction, International Journal of Pattern Recognition and Artificial Intelligence, 11 (1997), pp. 549–570.

21. M. Philipp, Fakten zu FISH, Das Forensische Informations-System Handschriften des Bundeskriminalamtes: Eine Analyse nach über 5 Jahren Wirkbetrieb [facts on FISH, the forensic information system for handwriting of the Bundeskriminalamt: an analysis after more than five years of operation], Tech. Rep., Kriminaltechnisches Institut 53, Bundeskriminalamt, Thaerstrasse 11, 65173 Wiesbaden, 1996.

22. J. Serra, Image Analysis and Mathematical Morphology, Academic Press, London, 1982.

23. P. Soille, Morphological Image Analysis: Principles and Applications, Springer-Verlag, Berlin, 1999.

24. Y. Solihin and C. Leedham, Preparing handwriting images for forensic examination, in Proceedings of the Eighth Biennial Conference of the International Graphonomics Society, 1997, pp. 125–126.

25. M. Sugeno, Theory of fuzzy integral and its applications, PhD thesis, Tokyo Institute of Technology, 1974.

26. R. Yager, On ordered weighted averaging aggregation operators in multi-criteria decision making, IEEE Trans. SMC, 18 (1988), pp. 183–190.

27. X. Ye, M. Cheriet, C. Suen, and K. Liu, Extraction of bankcheck items by mathematical morphology, International Journal on Document Analysis and Recognition, 2 (1999), pp. 53–66.

28. K. Yoshikawa, Y. Adachi, and M. Yoshimura, Extracting the signatures from travelers' checks, in Proc. 6th International Workshop on Frontiers in Handwriting Recognition (IWFHR), Taejon, Korea, 1998.