Fast communication

Combining pores and ridges with minutiae for improved fingerprint verification

Mayank Vatsa a, Richa Singh a, Afzel Noore b,*, Sanjay K. Singh c

a Indraprastha Institute of Information Technology, New Delhi, India
b Lane Department of Computer Science and Electrical Engineering, West Virginia University, USA
c Department of Computer Engineering, Institute of Technology, Banaras Hindu University, India

Signal Processing 89 (2009) 2676–2685. Received 7 September 2007; received in revised form 11 May 2009; accepted 12 May 2009; available online 21 May 2009. doi:10.1016/j.sigpro.2009.05.009

Keywords: Fingerprint verification; Active contour; Delaunay triangulation; Information fusion

* Corresponding author. E-mail addresses: [email protected] (M. Vatsa), [email protected] (R. Singh), [email protected] (A. Noore), [email protected] (S.K. Singh).

Abstract

This paper presents a fast fingerprint verification algorithm using level-2 minutiae and level-3 pore and ridge features. The proposed algorithm uses a two-stage process to register fingerprint images. In the first stage, a Taylor series based image transformation is used to perform coarse registration, while in the second stage, a thin plate spline transformation is used for fine registration. A fast feature extraction algorithm is proposed using Mumford–Shah functional curve evolution to efficiently segment contours and extract the intricate level-3 pore and ridge features. Further, a Delaunay triangulation based fusion algorithm is proposed to combine level-2 and level-3 information, providing structural stability and robustness to small changes caused by extraneous noise or non-linear deformation during image capture. We define eight quantitative measures using level-2 and level-3 topological characteristics to form a feature supervector. A 2ν-support vector machine performs the final classification of genuine or impostor cases using the feature supervectors. Experimental results and statistical evaluation show that the feature supervector yields discriminatory information and higher accuracy compared to existing recognition and fusion algorithms. Published by Elsevier B.V.

1. Introduction

Fingerprint features have been widely used for verifying the identity of an individual. Automatic fingerprint verification systems use ridge flow patterns and general morphological information for broad classification, and minutiae information for verification [1]. The ridge flow patterns and morphological information are referred to as level-1 features, while ridge endings and bifurcations, also known as minutiae, are referred to as level-2 features. With the availability of high resolution fingerprint sensors, it is now feasible to capture more intricate features such as ridge path deviation, ridge edge features, ridge width and shape, local ridge quality, distance between pores, size and shape of pores, position of pores on the ridge, permanent scars, incipient ridges, and permanent flexure creases. These fine details are characterized as level-3 features [2] and play an important role in matching and improving the verification accuracy.

Researchers have proposed several algorithms for level-2 fingerprint recognition [1]. However, limited research has been performed on level-3 feature extraction and matching [3–7]. The recognition performance of these algorithms is good, but they have certain limitations. The algorithm proposed in [3] requires manual intervention for fingerprint alignment, whereas the algorithms proposed in [4,5] require very high resolution (∼2000 ppi) images. Meenen et al. [6] proposed a pore extraction algorithm



that suffers from elastic distortion and misclassification of pore features. Jain et al. [7] proposed an automated algorithm that extracts minutiae information for alignment. Level-3 features are then extracted and efficient matching is performed using the iterative closest point algorithm. However, the level-3 feature extraction and matching algorithm is computationally expensive.

The objective of this research is to develop a fast and accurate automated fingerprint verification algorithm that incorporates both level-2 and level-3 features. We propose a fast level-3 feature extraction and matching algorithm using the Mumford–Shah curve evolution approach. The gallery and probe fingerprint images are registered using a two-stage registration process and the features are extracted using an active contour model. To improve the fingerprint verification performance, we further propose a fusion algorithm for level-2 and level-3 features that uses Delaunay triangulation for generating a feature supervector and a support vector machine (SVM) for classification. Experimental results and statistical tests on a database of 550 classes show the effectiveness of the proposed algorithms. In the next section, we describe the fingerprint feature extraction algorithm; Section 3 presents the proposed fusion algorithm.

2. Proposed fingerprint feature extraction and matching

In the proposed fingerprint verification algorithm, we first extract level-2 features, which are used in the two-stage registration process. Minutiae are extracted from a fingerprint image using the ridge tracing and minutiae extraction algorithm [9]. The extracted minutiae are matched using a dynamic bounding box based matching algorithm [10]. The details of the minutiae extraction and matching algorithms are found in [9] and [10], respectively. We next register the gallery and probe images using the two-stage registration algorithm and then extract level-3 features for fusion and matching. Fig. 1 illustrates the various stages of the proposed algorithm. In this section, we describe the two-stage registration and level-3 feature extraction and matching algorithms in detail.

Fig. 1. Steps illustrating the proposed fingerprint verification algorithm.

2.1. Two-stage non-linear registration algorithm

Fingerprint images are subjected to non-linear deformation due to varying pressure applied by a user when the image is captured. These deformations affect the unique spatial distribution and characteristics of fingerprint features. To address these non-linearities and accurately register the probe fingerprint image with respect to the gallery fingerprint image, a two-stage non-linear registration algorithm is proposed. The registration process uses two existing registration algorithms: Taylor series transformation [6] and thin plate spline based ridge curve correspondence [8]. The Taylor series based algorithm performs an image transformation that takes fingerprint images along with their minutiae coordinates as input and computes the registration parameters using a Taylor series. We use this algorithm to coarsely register the gallery and probe images in the first stage. In the next stage, we use the more detailed and accurate thin plate spline based ridge curve correspondence algorithm [8] for fine registration of fingerprint images. The thin plate spline algorithm is used at the second stage because it requires coarsely registered features as input. Fingerprint images contain a large number of level-3 features that increase the complexity of the thin plate spline algorithm; we therefore use only level-2 features to perform fine registration of the gallery and probe images. The non-linear two-stage approach thus registers the gallery and probe fingerprint images with respect to rotation, scaling, translation, feature positions, and local deformation. Fig. 2 illustrates the results obtained with the proposed two-stage registration algorithm. Note that Fig. 2(c) shows that a segment of level-2 ridge curves and level-3 pores obtained from both the gallery and probe images are perfectly registered and appear superimposed.

Fig. 2. Representative results of the two-stage registration algorithm. (a) Fingerprint ridge curves (level-2) and pore (level-3) samples captured from two different gallery and probe images of the same individual, (b) registration using Taylor series [6], and (c) result of the two-stage registration algorithm. Although the registration algorithm uses only level-2 ridge and minutia features, level-3 pores are also effectively registered.

Fig. 3. Images illustrating the intermediate steps of the level-3 feature extraction algorithm: (a) input fingerprint image, (b) stopping term φ computed using Eq. (3), and (c) extracted fingerprint contour.
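The coarse registration stage can be illustrated with a simplified sketch. Instead of the Taylor series transformation of [6], the example below fits a plain affine transform to corresponding minutiae coordinates by linear least squares; the function names and the affine simplification are our illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def coarse_register(gallery_pts, probe_pts):
    """Estimate a 2-D affine transform mapping probe minutiae onto
    gallery minutiae by linear least squares (a simplified stand-in
    for the Taylor series coarse registration stage)."""
    P = np.hstack([probe_pts, np.ones((len(probe_pts), 1))])  # homogeneous coords
    # Solve P @ A ~ G for the 3x2 affine parameter matrix A
    A, *_ = np.linalg.lstsq(P, gallery_pts, rcond=None)
    return A

def apply_transform(pts, A):
    """Apply the estimated affine transform to a set of points."""
    P = np.hstack([pts, np.ones((len(pts), 1))])
    return P @ A
```

With exact minutiae correspondences under a rigid motion, the least-squares fit recovers the transform exactly; in practice the correspondences come from the minutiae matcher and are noisy, which is why the paper follows this coarse stage with thin plate spline refinement.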

1 More mathematical details of the curve evolution parameterization and stopping term can be found in [11,12].

2.2. Level-3 pore and ridge feature extraction and matching

The proposed level-3 feature extraction algorithm uses curve evolution with the fast implementation of the Mumford–Shah functional [11,12]. Mumford–Shah curve evolution efficiently segments the contours present in the image. In this approach, feature boundaries are detected by evolving a curve and minimizing an energy based segmentation model as defined in the following equation [11]:

Energy(C, c₁, c₂) = α ∬_Ω φ ‖C̄′‖ dx dy + β ∬_in(C) |I(x, y) − c₁|² dx dy + λ ∬_out(C) |I(x, y) − c₂|² dx dy    (1)

where C̄ is the evolution curve such that C̄ = {(x, y) : ψ̄(x, y) = 0}, C is the curve parameter, φ is the weighting function or the stopping term, Ω represents the image domain, I(x, y) is the fingerprint image, c₁ and c₂ are the average values of pixels inside and outside C̄, respectively, and α, β, and λ are positive constants such that α + β + λ = 1 and α < β ≤ λ. Further, Chan and Vese [12] parameterize the energy equation (Eq. (1)) by an artificial time t ≥ 0 and deduce the associated Euler–Lagrange equation, which leads to the following active contour model¹:

ψ̄ₜ = α φ (ν̄ + κ̄) |∇ψ̄| + ∇φ · ∇ψ̄ + β δ_ψ̄ (I − c₁)² + λ δ_ψ̄ (I − c₂)²    (2)

where ν̄ is the advection term and κ̄ is the curvature based smoothing term, ∇ is the gradient, and δ = 0.5/(π(x² + 0.25)). The stopping term φ is set to

φ = 1 / (1 + |∇I|²)    (3)

This gradient based stopping term ensures that at the strongest gradient the speed of the curve evolution becomes zero, and therefore it stops at the edges of the image.
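As a minimal sketch of how the stopping term of Eq. (3) and the smoothed Dirac function behave, consider the following implementation. The function names are ours; the full evolution scheme with the advection and curvature terms is described in [11,12].

```python
import numpy as np

def stopping_term(I):
    """Gradient based stopping term phi = 1 / (1 + |grad I|^2) of Eq. (3).
    Close to 1 in flat regions and near 0 at strong edges, so the curve
    evolution speed vanishes on ridge/pore boundaries."""
    gy, gx = np.gradient(I.astype(float))
    return 1.0 / (1.0 + gx**2 + gy**2)

def smoothed_dirac(psi):
    """Smoothed Dirac delta = 0.5 / (pi * (psi^2 + 0.25)) used in the
    region terms of the active contour update, Eq. (2)."""
    return 0.5 / (np.pi * (psi**2 + 0.25))
```

On a synthetic step edge, the stopping term is 1 in the flat interior and drops sharply at the intensity jump, which is exactly the behavior visible in Fig. 3(b).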

Fig. 4. Tracing of fingerprint contour and categorization of level-3 pore and ridge features (panels: contour tracing, ridge contours after tracing, pore coordinates after tracing).

Fig. 5. Level-3 pore features obtained after tracing.

Fig. 6. Level-3 ridge features obtained after tracing.

The initial contour is initialized as a grid with 750 blocks over the fingerprint image and the boundary of each feature is computed. The size of the grid is chosen to balance the time required for curve evolution against correct extraction of all the features. Fig. 3 shows examples of feature extraction from fingerprint images. It shows that, due to the stopping term (Fig. 3b), the noise present in the fingerprint image has very little effect on contour extraction. Once the contour extraction algorithm provides the final fingerprint contour ψ̄, it is scanned using the standard contour tracing technique [13] from top to bottom and left to right consecutively. Tracing and classification of level-3 pore and ridge features are performed based on the standards defined by the ANSI/NIST committee for the extended fingerprint feature set (CDEFFS) [2]. During tracing, the algorithm classifies the contour information into pores and ridges:

(1) A blob of size greater than 2 pixels and less than 40 pixels² is classified as a pore.² Therefore, noisy contours, which are sometimes wrongly extracted, are not included in the feature set. A pore is approximated with a circle and the center is used as the pore feature.

(2) An edge of a ridge is defined as the ridge contour. Each row of the ridge feature represents the x, y coordinates of the pixel and the direction of the contour at that pixel.

2 In our experiments, we found that the range of 2–40 pixels correctly classifies a blob as a pore. This analysis is consistent with the findings by Jain et al. [7].
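The blob-size rule in step (1) can be sketched as a connected-component scan over the traced contour image. The flood-fill helper and the thresholds below (3–39 pixels, i.e. strictly between 2 and 40) are our illustrative reading of the rule, not the paper's code.

```python
import numpy as np
from collections import deque

def extract_pores(binary, min_px=3, max_px=39):
    """Classify connected blobs of a traced binary contour image as
    pores: a blob larger than 2 pixels and smaller than 40 pixels^2
    is kept, and its centroid (the center of an approximating circle)
    is returned as the pore feature."""
    visited = np.zeros_like(binary, dtype=bool)
    pores = []
    h, w = binary.shape
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not visited[sy, sx]:
                # flood-fill one 4-connected blob
                blob, q = [], deque([(sy, sx)])
                visited[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    blob.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                            visited[ny, nx] = True
                            q.append((ny, nx))
                if min_px <= len(blob) <= max_px:
                    ys, xs = zip(*blob)
                    pores.append((sum(ys) / len(ys), sum(xs) / len(xs)))
    return pores
```

Blobs outside the size range, such as single-pixel noise or long ridge contours, are simply skipped, which is how the text excludes wrongly extracted noisy contours from the feature set.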

As shown in Fig. 4, fine details and structures present in the fingerprint contour are traced and categorized into pores and ridge contours depending on their shape and attributes. Additional examples of level-3 feature extraction are shown in Figs. 5 and 6. The feature set comprises two fields separated by a unique identifier, e.g. {(x^P_1, y^P_1), …, (x^P_n, y^P_n), (x^R_1, y^R_1, θ^R_1), …, (x^R_m, y^R_m, θ^R_m)}, where n and m represent the number of extracted pore and ridge features, respectively. Here, the first field contains all the pore information while the second field contains all the ridge information. The size of the feature set depends on the size of the fingerprint image and the features present in it.

For matching, the features pertaining to the gallery and probe images are represented as row vectors, f̄_g and f̄_p, and the Mahalanobis distance D(f̄_g, f̄_p) is computed:

D(f̄_g, f̄_p) = √( (f̄_g − f̄_p)ᵗ S⁻¹ (f̄_g − f̄_p) )    (4)

where f̄_g and f̄_p are the two feature vectors,³ and S is the positive definite covariance matrix of f̄_g and f̄_p. The Mahalanobis distance D(f̄_g, f̄_p) is used as the match score of level-3 features. It ensures that features having high variance do not contribute to the distance, and hence decreases the false reject rate for a fixed false accept rate (FAR).

3 f̄_g = {(x^P_g1, y^P_g1), …, (x^P_gn, y^P_gn), (x^R_g1, y^R_g1, θ^R_g1), …, (x^R_gm, y^R_gm, θ^R_gm)} and f̄_p = {(x^P_p1, y^P_p1), …, (x^P_pn, y^P_pn), (x^R_p1, y^R_p1, θ^R_p1), …, (x^R_pm, y^R_pm, θ^R_pm)}.
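Eq. (4) maps directly to a few lines of linear algebra. The sketch below (our naming) shows both the distance itself and the variance-weighting effect described above: inflating the variance of a dimension shrinks its contribution to the score.

```python
import numpy as np

def mahalanobis(f_g, f_p, S):
    """Mahalanobis distance of Eq. (4): sqrt((fg - fp)^t S^-1 (fg - fp)).
    Down-weights feature dimensions with high variance so they do not
    dominate the level-3 match score."""
    d = np.asarray(f_g, float) - np.asarray(f_p, float)
    return float(np.sqrt(d @ np.linalg.solve(S, d)))
```

With an identity covariance the measure reduces to the ordinary Euclidean distance; with a larger variance on one dimension, differences along that dimension count for less.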

3. Information fusion using Delaunay triangulation and SVM classification

Researchers have theoretically and experimentally shown that, in general, fusion of two or more biometric sources yields better performance compared to a single biometric [14]. Fusion can be performed at different levels such as raw data or image level, feature level, match score level, and decision level. In fingerprint biometrics, image fusion and feature fusion have been performed using level-2 minutiae and mosaicing techniques [15]. Further, level-2 and level-3 match score fusion algorithms have been proposed in [7,16]. In this paper, we propose an algorithm for fusing level-2 and level-3 information such that it is resilient to minor deformations in features and to missing information. Fingerprint features are extracted using the feature extraction algorithms described in Section 2.

Delaunay triangulation has been used with level-2 features for fingerprint indexing and identification [17–19]. However, in the proposed fusion algorithm, Delaunay triangulation is used to generate a feature supervector that contains both level-2 and level-3 features. A Delaunay triangle is formed using minutiae information as follows:

(1) Given n minutiae points, the Voronoi diagram is computed, which decomposes the minutiae points into different regions.

(2) The Voronoi diagram is used to compute the Delaunay triangulation by joining the minutiae coordinates present in the neighboring Voronoi regions. Fig. 7(a) and (b) show an example of the Voronoi diagram and Delaunay triangulation of fingerprint minutiae.

Each triangle in the Delaunay triangulation is used as a minutiae triplet [20]. Fig. 7(c) shows an example of a minutiae triplet along with level-3 features. Tuceryan and Chorzempa [17] found that the Delaunay triangulation has the best structural stability, and hence minutiae triplets computed from the Delaunay triangulation are able to sustain the variations due to fingerprint deformation [18]. Further, Bebis et al. [18] and Ross and Mukherjee [19] have shown that any local variation due to noise or insertion/deletion of feature points affects the Delaunay triangulation only locally.

Fig. 7. Example of (a) Voronoi diagram of fingerprint minutiae, (b) Delaunay triangulation of fingerprint minutiae, and (c) minutiae triplet.

From the minutiae triplets generated using the Delaunay triangulation, the feature supervector that includes minutia, pore, and ridge information is computed. Jain et al. [7] have shown that pore features are less reliable compared to minutiae and ridge features. On the other hand, CDEFFS [2] has defined reliable and discriminatory level-2 and level-3 features that can be used for verification. Based on these studies, the feature supervector in the proposed fusion algorithm is composed of eight elements, described as follows:

(1) Average cosine angle in minutiae triplet (A): Let α_min and α_max be the minimum and maximum angles in a minutiae triplet. The average cosine angle of the minutiae triplet is defined as

A = cos(α_avg)    (5)

where α_avg = (α_min + α_max)/2. The average cosine angle vector is computed for all the minutiae triplets in a given fingerprint.

(2) Triangle orientation (O): According to Bhanu and Tan [20], the triangle orientation is defined as φ = sign(z₂₁ × z₃₂), where z₂₁ = z₂ − z₁, z₃₂ = z₃ − z₂, z₁₃ = z₁ − z₃, and zᵢ = xᵢ + jyᵢ is computed from the coordinates (xᵢ, yᵢ), i = 1, 2, 3, of the minutiae triplet. The triangle orientation vector O is then computed for all the minutiae triplets in a given fingerprint.

(3) Triplet density (D): If there are n′ minutiae in a Voronoi region centered at a minutia, then the minutiae density is n′. For a minutiae triplet, we define the triplet density as the average minutiae density of the three minutiae that form the triplet. If n′₁, n′₂, and n′₃ are the minutiae densities of the three minutiae, then the triplet density is (n′₁ + n′₂ + n′₃)/3. We compute the triplet density for all the triplets in the fingerprint image to form the triplet density vector D.

(4) Edge ratio in minutiae triplet (E): The ratio of the longest and shortest edges in each minutiae triplet forms the edge ratio vector E.

(5) Min–Max distance between minutia points and k-nearest neighbor pores (P_m): For every minutia point, we first compute the distances between the minutia and its k-nearest neighboring pores which are on the same ridge. Out of these k distances, the minimum and maximum distances are used as the fusion parameters. The Min–Max distance vector P_m is then generated from all the minutiae in a fingerprint image.

(6) Average distance of k-nearest neighbor pores (P_avg): The average distance vector of k-nearest neighbor pores, P_avg, is formed by computing the average distance of the k-nearest neighbor pores of all the minutiae in a triplet.

(7) Average ridge width (R_w): The ridge width is computed for each minutiae triplet using the tracing technique [13]. The average ridge width vector R_w is computed by taking the average of the ridge width of each minutiae triplet.

(8) Ridge curve parameters (R_C): Ross and Mukherjee [19] proposed the use of ridge curve parameters for fingerprint identification. We modified the ridge curve parameters for the proposed fusion algorithm. Each ridge can be parameterized as y = ax² + bx + c, where a, b, and c are the parameterized coefficients. The ridge curve parameter for a ridge is [a/(a+b+c), b/(a+b+c), c/(a+b+c)]. In a minutiae triplet, each minutia is associated with a ridge; therefore the ridge curve parameters of a minutiae triplet are [aᵢ/(aᵢ+bᵢ+cᵢ), bᵢ/(aᵢ+bᵢ+cᵢ), cᵢ/(aᵢ+bᵢ+cᵢ)] where i = 1, 2, and 3. The ridge curve parameter vector R_C is similarly computed for all the minutiae triplets.

A feature supervector I(A, O, D, E, P_m, P_avg, R_w, R_C) is formed by concatenating these eight elements (vectors) and includes information pertaining to both level-2 and level-3 fingerprint features. This feature supervector is used for verification.
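To make elements (1), (2), and (4) concrete, the sketch below computes them for a single minutiae triplet. The helper name is ours, and the cross-product form of the orientation sign is our reading of Bhanu and Tan's complex-number definition; the remaining elements require pore and ridge data and are omitted here.

```python
import numpy as np

def triplet_measures(p1, p2, p3):
    """Compute three supervector elements for one minutiae triplet:
    average cosine angle A (Eq. (5)), triangle orientation O, and
    edge ratio E (longest over shortest edge)."""
    pts = [np.asarray(p, float) for p in (p1, p2, p3)]
    # interior angles of the triangle
    angles = []
    for i in range(3):
        u = pts[(i + 1) % 3] - pts[i]
        v = pts[(i + 2) % 3] - pts[i]
        cosang = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
        angles.append(np.arccos(np.clip(cosang, -1.0, 1.0)))
    A = np.cos((min(angles) + max(angles)) / 2)
    # orientation: sign of the cross product of (z2 - z1) and (z3 - z2)
    z21, z32 = pts[1] - pts[0], pts[2] - pts[1]
    O = np.sign(z21[0] * z32[1] - z21[1] * z32[0])
    # edge ratio: longest over shortest side of the triangle
    edges = [np.linalg.norm(pts[i] - pts[(i + 1) % 3]) for i in range(3)]
    E = max(edges) / min(edges)
    return A, O, E
```

In the full algorithm this computation runs once per Delaunay triangle, and the per-triangle values are stacked into the vectors A, O, and E of the supervector.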

3.1. Matching feature supervectors using SVM

For matching two feature supervectors, I₁(A, O, D, E, P_m, P_avg, R_w, R_C) and I₂(A, O, D, E, P_m, P_avg, R_w, R_C), pertaining to the gallery and probe fingerprints, respectively, we propose the use of a support vector machine. The first step of the matching algorithm is to compute the Mahalanobis distance between each element of the gallery and probe feature supervectors using Eq. (4). Let d(i) be the Mahalanobis distance associated with the individual elements of the feature supervector, where i = 1, 2, …, 8. The octuple Mahalanobis distance vector d is used as input to the SVM classification algorithm.

The support vector machine, proposed by Vapnik [21], is a powerful methodology for solving problems in non-linear classification, function estimation, and density estimation [22]. SVM starts with the goal of separating the data with a hyperplane and extends this to non-linear decision boundaries. SVM is thus a classifier that performs classification by constructing hyperplanes in a multidimensional space, separating the data points into different classes. To construct an optimal hyperplane, SVM uses an iterative training algorithm that maximizes the margin between two classes [22]. In our previous research, we have shown that a variant of the support vector machine known as the dual ν-SVM (2ν-SVM) is useful for classification in biometrics [22,23]. For more details of 2ν-SVM, readers can refer to [23,24].

The 2ν-SVM decision algorithm is divided into two stages: (1) training and (2) classification.

Training: The 2ν-SVM classifier is first trained on the labeled training Mahalanobis distance vectors computed from the training fingerprint images. Using the standard training procedure [23,24] and a non-linear radial basis kernel function, the 2ν-SVM is trained to perform classification between genuine and impostor Mahalanobis distance vectors.

Classification: The trained 2ν-SVM is used to perform classification on the probe dataset. The Mahalanobis distance vector between gallery and probe data, d, is given as input to the non-linear 2ν-SVM classifier, and the output is a signed distance from the separating hyperplane. A decision of accept or reject is made using a decision threshold T, as defined in Eq. (6):

Decision = accept if SVM output ≥ T, reject otherwise    (6)

4. Database and algorithms used for validation

The performance of the proposed level-3 feature extraction and fusion algorithms is evaluated and compared with existing level-3 feature based fingerprint recognition algorithms and existing fusion algorithms on a 1000 ppi fingerprint database. In this section, we briefly describe the database and the algorithms used for validation.

4.1. Fingerprint database

A fingerprint database obtained from a law enforcement agency is used to validate the proposed algorithms. This database contains 5500 fingerprint images from 550 classes (10 images per class) captured using an optical scanner. For each class, there are four rolled fingerprint images, four slap fingerprint images, and two partial fingerprint images with fewer than 10 minutiae. The resolution of the fingerprint images is 1000 ppi to facilitate the extraction of both level-2 and level-3 features. Images in the database follow all the standards defined by CDEFFS [2]. Further, all fingerprints are captured in the standard upward direction and, in general, deformation is only due to the pressure applied by the user. From each class, three fingerprints are selected for training the fusion algorithm and the 2ν-SVM classifier, and the remaining seven images per class are used as the gallery and probe dataset. For performance evaluation, we use cross validation with 20 trials. Three images are randomly selected for training and the remaining images are used as the test data to evaluate the algorithms. This train–test partitioning is repeated 20 times and receiver operating characteristic (ROC) curves are generated by computing the genuine accept rates (GAR) over these trials at different false accept rates. For each cross validation trial, we have 11,550 genuine matches (550 × 7 × 6/2) and 7,397,775 impostor matches (550 × 7 × 549 × 7/2) using the gallery and probe database.

Fig. 8. ROC plot for the proposed level-3 feature extraction and comparison with existing level-3 feature based algorithms [4,5,7] and the minutiae based algorithm.
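The genuine and impostor pair counts follow directly from the protocol arithmetic (7 test images per class across 550 classes, each unordered pair counted once); a quick check:

```python
classes, per_class = 550, 7

# genuine: all within-class pairs among the 7 gallery/probe images
genuine = classes * (per_class * (per_class - 1) // 2)

# impostor: all cross-class pairs, each unordered pair counted once
impostor = (classes * per_class) * ((classes - 1) * per_class) // 2

print(genuine, impostor)  # -> 11550 7397775
```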

4.2. Existing algorithms

To compare the performance of the proposed level-3 algorithm (Section 2.2) with existing algorithms, we have implemented two algorithms. The algorithm by Kryszczuk et al. [4,5] extracts pore information from high resolution fingerprint images by applying different techniques such as correlation based alignment, Gabor filtering, binarization, morphological filtering, and tracing. The match score obtained from this algorithm is a normalized similarity score in the range [0, 1]. The algorithm proposed by Jain et al. [7] uses Gabor filtering and the wavelet transform for pore and ridge feature extraction, and the iterative closest point algorithm for matching.

To compare the performance of the proposed fusion algorithm, we use three existing fusion algorithms. Feature fusion using concatenation [25] compares the performance at the feature level. Further, the sum rule [14] and SVM match score fusion [22] compare the performance with match score level fusion algorithms. The match scores obtained by matching level-2 features and level-3 features (Section 2.2) are combined using the match score fusion algorithms.

5. Experimental evaluation

The performance of the proposed algorithms is evaluated both experimentally, by computing the average verification accuracy at 0.01% false accept rate, and statistically, by computing the half total error rate (HTER) [27]. Experimental results are obtained using the cross validation approach. We perform two experiments: (1) evaluation of the proposed level-3 feature extraction algorithm and (2) evaluation of the proposed Delaunay triangulation and SVM classification based fusion algorithm.

Performance evaluation of level-3 feature extraction algorithm: We first compute the verification performance of the proposed level-3 feature extraction algorithm and compare it with existing level-2 [9,10] and level-3 feature based verification algorithms [4,5,7]. The ROC plots in Fig. 8 and verification accuracies in Table 1 summarize the results of this experiment. The proposed level-3 feature extraction algorithm yields a verification accuracy of 91.41%, which is 2–7% better than existing algorithms. The existing level-2 minutiae based verification algorithm is optimized for 500 ppi fingerprint images, and its performance does not improve when 1000 ppi images are used. Its performance also decreases significantly if there are few minutiae in the fingerprint image (< 10). The level-3 pore matching algorithm [4,5] requires very high quality fingerprint images (∼2000 ppi) for feature extraction and matching, and its performance suffers when the fingerprint quality is poor or the amount of pressure applied during scanning affects the pore information. The algorithm proposed by Jain et al. [7] is efficient with rolled and slap fingerprint images; however, the verification accuracy decreases with partial fingerprints when the number of minutiae is limited. The experimental results show that the proposed level-3 feature extraction algorithm provides good recognition performance even with a limited number of minutiae. The Mumford–Shah functional based feature extraction algorithm is robust to irregularities due to noise and efficiently compensates for variations in pressure during fingerprint capture by incorporating both ridge and pore features. The Mahalanobis distance measure also reduces false reject cases, thus improving the verification accuracy.

We further compare the performance of existing algorithms with and without the registration process. In this experiment, the verification accuracy of existing algorithms is first computed without the two-stage registration technique and then compared with the accuracy obtained by applying the two-stage registration technique. Note that the existing algorithms already have built-in techniques for fingerprint registration; e.g., Jain et al. [7] use iterative closest point matching, which can address minor deformation in fingerprints. This experimental setup evaluates and compares the effect of the two-stage registration technique on existing algorithms. The two-stage registration technique minimizes the spatial differences or elastic deformation between gallery-probe image pairs, thus reducing the intra-class variations. As evident from the results in Table 1, it significantly improves the verification accuracy of all the existing feature extraction, matching, and fusion algorithms. This result is also consistent with the findings of other researchers [1,8].

In our experiments, we observe that, in general, level-2 and level-3 algorithms provide correct results when both

ARTICLE IN PRESS

Table 1
Average verification performance of fingerprint verification and fusion algorithms.

Recognition and fusion algorithms                         Accuracy without  Accuracy with  HTER            HTER     Time (s)
                                                          registration      registration   [Max, Min]      average
Level-2 minutiae [9,10]                                   84.56             89.11          [8.92, 4.57]    5.45     03
Level-3 Kryszczuk et al. [4,5]                            79.49             84.03          [19.54, 4.89]   7.99     12
Level-3 Jain et al. [7]                                   85.62             88.07          [6.33, 4.21]    5.97     34
Level-3 proposed                                          -                 91.41          [4.57, 4.08]    4.30     09
Feature fusion (concatenation) [25]                       85.13             90.08          [7.23, 3.92]    4.96     13
Score fusion (sum rule) [14]                              90.23             91.99          [6.79, 3.45]    4.01     12
Score fusion (SVM) [22]                                   92.07             95.12          [3.12, 1.84]    2.44     13
Proposed Delaunay triangulation and SVM
  classification fusion                                   -                 96.37          [1.88, 1.76]    1.82     15

Verification accuracy is computed at 0.01% false accept rate (FAR).

Fig. 9. Three sample probe cases: (a) both level-2 and level-3 feature based algorithms provide the correct result, (b) the level-2 feature based algorithm fails to provide the correct result whereas the level-3 feature based algorithm provides an accurate decision, and (c) both algorithms fail to perform. For these three probe cases, good quality rolled fingerprint images are used as gallery.

[ROC curves: genuine accept rate (%) from 90 to 100 versus false accept rate (%) on a logarithmic scale from 10^-2 to 10^0.]

Fig. 10. ROC plot for the proposed Delaunay triangulation and SVM classification based fusion algorithm and comparison with the existing feature concatenation algorithm [25], sum rule [14], and SVM match score fusion algorithm [22].

M. Vatsa et al. / Signal Processing 89 (2009) 2676–2685 2683

the gallery and probe are rolled fingerprint images. Similarly, when the images are good quality slap or partial prints, both level-2 and level-3 features yield correct results (Fig. 9(a)). However, several slap and partial samples in the database are very difficult to recognize using level-2 features because of the inherent fingerprint structure. At 0.01% FAR, one such example is shown in Fig. 9(b), where the level-2 feature extraction algorithm does not yield the correct result but the proposed level-3 feature extraction algorithm does. Further, there are cases involving partial fingerprints in which both level-2 and level-3 algorithms fail to perform (Fig. 9(c)) because of sensor noise and low pressure. Such cases require image quality assessment and enhancement algorithms so that the feature extraction algorithm can extract useful features for verification.

Performance evaluation of proposed fusion algorithm: We next compute the verification accuracy of the proposed fusion and matching algorithm. As shown in Fig. 10 and Table 1, the Delaunay triangulation and SVM classification based fusion algorithm improves the verification accuracy by at least 4.96% over the level-2 and proposed level-3 feature extraction algorithms used individually. The proposed fusion approach also outperforms the existing feature fusion algorithm [25], which performs feature normalization, feature selection, and concatenation. That algorithm yields good performance when compatible feature sets are fused; however, in our experiments, level-2 and level-3 features are incompatible feature sets that require different matching schemes. After



fusion, the concatenated feature vector is matched using the Euclidean distance measure, which, in some cases, is not able to discriminate between genuine and impostor fused features. We also compare the proposed fusion algorithm with the sum rule [14] and the SVM based match score fusion algorithm [22]. While sum rule and SVM match score fusion perform better than feature fusion using concatenation [25], the proposed fusion algorithm outperforms the sum rule by 4.38% and the SVM match score fusion algorithm by 1.25%. The results of both the existing and the proposed fusion algorithms also show that level-2 and level-3 features provide complementary information and that combining multiclassifier match scores improves the verification accuracy. However, the linear statistical sum rule does not guarantee improved performance in all cases. On the other hand, SVM match score fusion and the proposed fusion algorithm use non-linear decision functions for improved performance. As shown in Fig. 10, the Delaunay triangulation and SVM classification based fusion is better than SVM match score fusion at low FAR because the proposed fusion algorithm computes the most discriminative fingerprint features and classifies them using the non-linear 2n-SVM decision algorithm.
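The sum-rule baseline [14] compared above can be sketched in a few lines. Note that the min-max normalization step is illustrative; the normalization actually applied before the sum rule is not specified in this section.

```python
def min_max_normalize(scores):
    """Map raw match scores to [0, 1] (a common pre-fusion step)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def sum_rule_fusion(scores_a, scores_b):
    """Sum-rule score fusion: average the normalized scores of two matchers.

    scores_a and scores_b are the per-probe match scores produced by
    two independent matchers (e.g., level-2 and level-3) for the same
    gallery-probe pairs.
    """
    na = min_max_normalize(scores_a)
    nb = min_max_normalize(scores_b)
    return [(a + b) / 2.0 for a, b in zip(na, nb)]
```

Because the rule is a fixed linear combination, it cannot adapt to cases where one matcher is unreliable, which is consistent with the observation above that the linear sum rule does not guarantee improved performance in all cases.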

The algorithms are also statistically validated using HTER [26,27]. The half total error rate is defined as

\[ \mathrm{HTER} = \frac{\mathrm{FAR} + \mathrm{FRR}}{2} \tag{7} \]

Confidence intervals (CI) are computed around the HTER as \( \mathrm{HTER} \pm \sigma \cdot Z_{\alpha/2} \), where \( \sigma \) and \( Z_{\alpha/2} \) are computed using Eqs. (8) and (9) [27]:

\[ \sigma = \sqrt{ \frac{\mathrm{FAR}(1-\mathrm{FAR})}{4 \cdot NI} + \frac{\mathrm{FRR}(1-\mathrm{FRR})}{4 \cdot NG} } \tag{8} \]

\[ Z_{\alpha/2} = \begin{cases} 1.645 & \text{for 90\% CI} \\ 1.960 & \text{for 95\% CI} \\ 2.576 & \text{for 99\% CI} \end{cases} \tag{9} \]

Here, NG is the total number of genuine scores and NI is the total number of impostor scores. Table 1 summarizes the results of the statistical evaluation in terms of the minimum, maximum, and average HTER over 20 cross-validation trials. The proposed Delaunay triangulation and SVM classification based fusion algorithm yields the lowest and most stable HTER, whereas the HTER values of the other algorithms vary significantly across cross-validation trials. For example, SVM match score fusion yields maximum and minimum HTERs of 3.12 and 1.84, respectively, whereas for the proposed fusion algorithm these values are 1.88 and 1.76. Further, at 99% confidence, the best confidence interval of 1.82 ± 0.22% is observed for the proposed fusion algorithm. The statistical results therefore further validate the effectiveness of the proposed Delaunay triangulation and SVM classification based fusion algorithm and corroborate the experimental results.
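Eqs. (7)-(9) translate directly into code. The sketch below mirrors the definitions above (the function names are ours); error rates are given as fractions, e.g. 0.01% FAR = 0.0001.

```python
import math

def hter(far, frr):
    """Half total error rate, Eq. (7): the mean of FAR and FRR."""
    return (far + frr) / 2.0

def hter_confidence_interval(far, frr, n_impostor, n_genuine, z=2.576):
    """Return (HTER, half-width) of the CI HTER +/- sigma * Z_{alpha/2}.

    sigma follows Eq. (8); z follows Eq. (9) and defaults to 2.576
    (99% CI). Use z=1.645 for a 90% CI and z=1.960 for a 95% CI.
    n_impostor is NI and n_genuine is NG in the paper's notation.
    """
    sigma = math.sqrt(far * (1.0 - far) / (4.0 * n_impostor)
                      + frr * (1.0 - frr) / (4.0 * n_genuine))
    return hter(far, frr), z * sigma
```

For instance, `hter_confidence_interval(0.0001, 0.02, n_impostor, n_genuine)` reports the 99% interval for a system operating at 0.01% FAR with 2% FRR.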

We also evaluate the average verification time of the different algorithms on a Pentium IV 3.2 GHz machine with 1 GB RAM using Matlab. The proposed level-3 pore and ridge feature extraction and matching algorithm takes 9 s, Kryszczuk's algorithm [4,5] requires 12 s, and Jain's algorithm [7] requires 34 s. Further, the fusion algorithm takes only 15 s for verification, which includes feature extraction, computation of the feature supervector, and 2n-SVM classification.

6. Conclusion

With the availability of high resolution fingerprint sensors, salient features such as pores and ridge contours are prominently visible. In this paper, these conspicuous level-3 features are combined with the commonly used level-2 minutiae to improve the verification accuracy. The gallery and probe fingerprint images are registered using a two-stage registration process. A feature extraction algorithm is proposed to extract detailed level-3 pore and ridge features using the Mumford–Shah curve evolution approach. The paper further proposes fusing level-2 and level-3 features using a Delaunay triangulation algorithm that computes a feature supervector containing eight topological measures. Final classification is performed using a 2n-SVM learning algorithm. The proposed algorithm tolerates minor deformation of features and non-linearity in the fingerprint information. Experimental results and statistical evaluation on a high resolution fingerprint database show the efficacy of the proposed feature extraction and fusion algorithms, which perform better than existing recognition and fusion algorithms.

We plan to extend the proposed Delaunay triangulation and SVM classification based fusion algorithm by incorporating image quality measures and studying the effect of including other level-3 features, such as dots, incipient ridges, and scars, in a structured manner. Future work will also characterize the performance of level-3 features on a comprehensive large-scale database containing fingerprint images of varying size and quality.

Acknowledgment

This research is supported in part through a grant (Award no. 2003-RC-CX-K001) from the Office of Science and Technology, National Institute of Justice, Office of Justice Programs, United States Department of Justice. The authors thank the reviewers for providing useful suggestions and raising thoughtful questions.

References

[1] D. Maltoni, D. Maio, A.K. Jain, S. Prabhakar, Handbook of Fingerprint Recognition, Springer, Berlin, 2003.

[2] Working draft of CDEFFS: the ANSI/NIST committee to define an extended fingerprint feature set, 2008. Available at <http://fingerprint.nist.gov/standard/cdeffs/index.htm>.

[3] A.R. Roddy, J.D. Stosz, Fingerprint features: statistical analysis and system performance estimates, Proceedings of the IEEE 85 (9) (1997) 1390–1421.

[4] K. Kryszczuk, A. Drygajlo, P. Morier, Extraction of level 2 and level 3 features for fragmentary fingerprints, in: Proceedings of the Second COST275 Workshop, 2004, pp. 83–88.

[5] K. Kryszczuk, P. Morier, A. Drygajlo, Study of the distinctiveness of level 2 and level 3 features in fragmentary fingerprint comparison, in: Proceedings of the ECCV International Workshop on Biometric Authentication, 2004, pp. 124–133.


[6] P. Meenen, A. Ashrafi, R. Adhami, The utilization of a Taylor series-based transformation in fingerprint verification, Pattern Recognition Letters 27 (14) (2006) 1606–1618.

[7] A.K. Jain, Y. Chen, M. Demirkus, Pores and ridges: high resolution fingerprint matching using level 3 features, IEEE Transactions on Pattern Analysis and Machine Intelligence 29 (1) (2007) 15–27.

[8] A. Ross, S. Dass, A.K. Jain, Fingerprint warping using ridge curve correspondences, IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (1) (2006) 19–30.

[9] X.D. Jiang, W.Y. Yau, W. Ser, Detecting the fingerprint minutiae by adaptive tracing the gray level ridge, Pattern Recognition 34 (5) (2001) 999–1013.

[10] A. Jain, R. Bolle, L. Hong, Online fingerprint verification, IEEE Transactions on Pattern Analysis and Machine Intelligence 19 (4) (1997) 302–314.

[11] A. Tsai, A. Yezzi Jr., A. Willsky, Curve evolution implementation of the Mumford–Shah functional for image segmentation, denoising, interpolation, and magnification, IEEE Transactions on Image Processing 10 (8) (2001) 1169–1186.

[12] T. Chan, L. Vese, Active contours without edges, IEEE Transactions on Image Processing 10 (2) (2001) 266–277.

[13] T. Pavlidis, Algorithms for Graphics and Image Processing, Computer Science Press, 1982.

[14] A. Ross, K. Nandakumar, A.K. Jain, Handbook of Multibiometrics, Springer, Berlin, 2006.

[15] A. Ross, S. Shah, J. Shah, Image versus feature mosaicing: a case study in fingerprints, in: Proceedings of the SPIE Conference on Biometric Technology for Human Identification III, 2006, pp. 620208-1–620208-12.

[16] M. Vatsa, R. Singh, A. Noore, M.M. Houck, Quality-augmented fusion of level-2 and level-3 fingerprint information using DSm theory, International Journal of Approximate Reasoning 50 (1) (2009) 51–61.

[17] M. Tuceryan, T. Chorzempa, Relative sensitivity of a family of closest-point graphs in computer vision applications, Pattern Recognition 24 (5) (1991) 361–373.

[18] G. Bebis, T. Deaconu, M. Georgiopoulos, Fingerprint identification using Delaunay triangulation, in: Proceedings of the IEEE International Conference on Intelligence, Information and Systems, 1999, pp. 452–459.

[19] A. Ross, R. Mukherjee, Augmenting ridge curves with minutiae triplets for fingerprint indexing, in: Proceedings of the SPIE Conference on Biometric Technology for Human Identification IV, vol. 6539, 2007, p. 65390C.

[20] B. Bhanu, X. Tan, Fingerprint indexing based on novel features of minutiae triplets, IEEE Transactions on Pattern Analysis and Machine Intelligence 25 (5) (2003) 616–622.

[21] V.N. Vapnik, The Nature of Statistical Learning Theory, Springer, Berlin, 1995.

[22] R. Singh, M. Vatsa, A. Noore, Intelligent biometric information fusion using support vector machine, in: Soft Computing in Image Processing: Recent Advances, Springer, Berlin, 2006, pp. 327–350 (Chapter 12).

[23] R. Singh, M. Vatsa, A. Noore, Improving verification accuracy by synthesis of locally enhanced biometric images and deformable model, Signal Processing 87 (11) (2007) 2746–2764.

[24] H.G. Chew, C.C. Lim, R.E. Bogner, An implementation of training dual-n support vector machines, in: L. Qi, K.L. Teo, X. Yang (Eds.), Optimization and Control with Applications, Kluwer, Dordrecht, 2004, pp. 157–182.

[25] A. Ross, R. Govindarajan, Feature level fusion using hand and face biometrics, in: Proceedings of the SPIE Conference on Biometric Technology for Human Identification II, 2005, pp. 196–204.

[26] E.S. Bigun, J. Bigun, B. Duc, S. Fischer, Expert conciliation for multimodal person authentication systems by Bayesian statistics, in: Proceedings of Audio- and Video-based Biometric Person Authentication, 1997, pp. 291–300.

[27] S. Bengio, J. Mariethoz, A statistical significance test for person authentication, in: Proceedings of Odyssey: The Speaker and Language Recognition Workshop, 2004, pp. 237–244.