

Multimodal Biometric Recognition using Iris, Knuckleprint and Palmprint

A Thesis Submitted

in Partial Fulfillment of the Requirements

for the Degree of

Doctor of Philosophy

by

ADITYA NIGAM

to the

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

INDIAN INSTITUTE OF TECHNOLOGY KANPUR, INDIA

August 2014


Synopsis

Personal authentication is one of the basic requirements of our modern day society. Almost all financial as well as personal security related applications expect an automated, efficient, near real time and highly accurate access control mechanism. Traditional methods of authentication are based on tokens and/or knowledge. Biometrics is an alternative to these traditional methods because it is harder to circumvent. Biometric based authentication systems use an individual's characteristics, which are either behavioral (voice, signature, gait, etc.) or physiological (face, iris, palmprint, fingerprint, knuckleprint, ear, etc.). These characteristics are hard to circumvent because they cannot be lost or forgotten like tokens or knowledge. Some well known identification systems make use of fingerprint, face, iris, palmprint, finger-knuckleprint, ear, gait, etc. But each biometric trait has its own set of challenges and trait specific issues. Hence, no single trait can be considered the best one; the choice depends on the type of application where it has to be deployed.

The performance of any unimodal biometric system depends on factors like environment, atmosphere and sensor precision. There are also several trait specific challenges, such as pose, expression and aging for face recognition, occlusion and acquisition related issues for iris, and poor quality and social acceptance related issues for fingerprint. Hence, fusing more than one biometric sample, trait or algorithm is an alternative way to achieve better performance, and is termed in the literature as multi-biometrics or multimodal biometrics. In this thesis, we have considered three different biometric traits, viz. iris, knuckleprint and palmprint. Iris can be considered the best biometric in terms of performance, and it is fused with the two other traits to obtain an efficient multimodal biometric system.

The iris is a thin circular diaphragm between the cornea and the lens; it is a ring of tissue that controls the light entering the eye. It has an abundance of micro-texture such as crypts, furrows, ridges, corona, freckles and pigment spots. These textures are randomly distributed and hence believed to be unique. On the other hand, knuckleprint and palmprint do not have such a rich anatomical structure. But they possess rich line-like patterns (knuckle-lines, palm-lines, wrinkles) in vertical, horizontal as well as diagonal directions, which can be very useful in conjunction with iris samples because the iris has mostly radial features. It has been observed experimentally that such a fusion of orthogonal biometric modalities helps the system reject imposters confidently.

Utilizing multimodal information can help achieve high performance while working over a large database. The spoof vulnerability is much lower than that of a unimodal system; hence it is well suited to unattended deployment in uncontrolled outdoor environments. However, not much work has been done in this area, mainly due to the non-availability of multimodal datasets.

Any biometric based personal authentication system consists of several steps: sample acquisition, ROI extraction, sample normalization, preprocessing, feature extraction and template matching. Each step affects the overall performance of the system; segmentation is one of the most critical steps, since wrong segmentation renders the subsequent steps meaningless.

This thesis proposes an efficient multimodal authentication system which fuses iris, knuckleprint and palmprint images. To the best of our knowledge, this is the first effort in which iris, knuckleprint and palmprint samples are fused. In this work we have proposed new iris, knuckleprint and palmprint ROI extraction algorithms. Several quality parameters for the iris and knuckleprint modalities are proposed. Each biometric trait is enhanced and transformed using the proposed local enhancement and LGBP transformation. A tracking based matching algorithm that uses corner features and the CIOF dissimilarity measure is proposed to perform biometric sample identification/recognition.

There are eight chapters in this thesis. Chapter 1 introduces a general biometric system, the stages involved, modes of operation, and traits and their properties; it also describes the motivation and performance parameters along with the key contributions. Chapter 2 presents a detailed literature review for each of the three traits considered in this work. The image processing and computer vision techniques used in designing the systems are discussed in Chapter 3.

In Chapter 4, an iris based recognition system is proposed. Iris segmentation is done efficiently using an improved circular Hough transform for inner iris boundary (i.e. pupil) detection. A robust integro-differential operator that makes use of the pupil location is used to detect the outer iris boundary. The quality of the acquired iris sample is estimated using the proposed quality assessment parameters, and if it falls below a predefined threshold the sample is recaptured. This early quality assessment is crucial for handling poor quality and non-ideal imagery. The segmented iris is normalized to polar coordinates (i.e. a rectangular strip) and preprocessed using the proposed LGBP (Local Gradient Binary Pattern) transformation to obtain robust features. KLT based corners are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available CASIA 4.0 Interval and Lamp iris databases consisting of 2,639 and 16,212 images respectively. The CRR (Rank 1 accuracy) of the proposed system is 100% and 99.87% for the Interval and Lamp databases respectively, while its EER for Interval and Lamp is 0.109% and 1.3% respectively.
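As a concrete illustration of the inner-boundary step, the sketch below runs a stock circular Hough transform with OpenCV. It is a minimal stand-in for the improved variant proposed in the thesis, and the radius bounds and thresholds are assumptions chosen for typical NIR eye images, not the thesis's values.

```python
import cv2
import numpy as np

def locate_pupil(eye_gray):
    """Rough pupil localization with a stock circular Hough transform.

    A simplified stand-in for the improved variant in the thesis:
    smooth the image, then vote for dark circular boundaries.
    """
    blurred = cv2.medianBlur(eye_gray, 7)            # suppress lashes/noise
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=2,
                               minDist=100,
                               param1=100,           # Canny high threshold
                               param2=30,            # accumulator threshold
                               minRadius=20, maxRadius=80)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)  # strongest circle first
    return int(cx), int(cy), int(r)

# usage: eye = cv2.imread("iris.png", cv2.IMREAD_GRAYSCALE); print(locate_pupil(eye))
```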

In Chapter 5, a knuckleprint based recognition system is proposed. Knuckleprint segmentation is done by estimating the central knuckle-line using a new modified version of the Gabor filter called the curvature Gabor filter. The quality of the acquired knuckleprint sample is estimated using the proposed quality assessment parameters. The segmented knuckleprint ROI is preprocessed using the proposed LGBP transformation to obtain robust features. KLT based corners are extracted and matched using the proposed CIOF dissimilarity measure. The proposed system has been tested over the publicly available PolyU knuckleprint database consisting of 7,920 images, achieving a CRR of 99.79% with an EER of 0.93%.
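For intuition about the knuckle-line step, the sketch below applies a conventional Gabor kernel, which the thesis's curvature Gabor filter extends with a curvature term. The kernel parameters are illustrative assumptions, not the thesis's values.

```python
import cv2
import numpy as np

def knuckle_line_column(roi_gray):
    """Locate vertical line-like structure with a conventional Gabor filter.

    The thesis's curvature Gabor filter bends this kernel to follow the
    central knuckle line; the plain OpenCV kernel stands in for it here.
    """
    kernel = cv2.getGaborKernel(ksize=(31, 31), sigma=4.0, theta=0,
                                lambd=10.0, gamma=0.5, psi=0,
                                ktype=cv2.CV_32F)    # theta=0: vertical stripes
    response = cv2.filter2D(roi_gray.astype(np.float32), -1, kernel)
    # the column with the strongest summed response approximates the
    # central knuckle line around which the ROI is cropped
    return int(np.argmax(np.abs(response).sum(axis=0)))
```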

In Chapter 6, a palmprint based recognition system is presented. Palmprint segmentation is done by locating two valley points and clipping a square shaped ROI with respect to them. The segmented palmprint is preprocessed using the proposed LGBP transformation, and the proposed CIOF dissimilarity measure is used for matching. The system has been tested over the publicly available CASIA and PolyU palmprint databases consisting of 4,528 and 7,720 images respectively. The CRR (Rank 1 accuracy) of the proposed system is 100% and 99.95% for the CASIA and PolyU palmprint databases respectively, while its EER for CASIA and PolyU is 0.15% and 0.41% respectively.
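The valley-point construction can be pictured geometrically: the segment joining the two valley points fixes a local coordinate frame, and a square is clipped at a fixed offset below it. A minimal sketch, with the offset, ROI size and axis orientation as assumed values rather than the thesis's exact geometry:

```python
import cv2
import numpy as np

def clip_palm_roi(img, v1, v2, offset=0.3, size=128):
    """Clip a square palmprint ROI anchored on two finger-valley points.

    The segment v1->v2 fixes a local x-axis; the ROI centre sits
    `offset` segment-lengths below its midpoint (illustrative values).
    """
    v1 = np.asarray(v1, np.float32)
    v2 = np.asarray(v2, np.float32)
    d = v2 - v1
    length = float(np.hypot(d[0], d[1]))
    x_axis = d / length
    y_axis = np.array([-x_axis[1], x_axis[0]], np.float32)  # toward the palm
                                                            # (sign depends on point order)
    center = (v1 + v2) / 2 + offset * length * y_axis
    h = size / 2.0
    # three ROI corners mapped into image coordinates; OpenCV resamples
    src = np.float32([center - h * x_axis - h * y_axis,
                      center + h * x_axis - h * y_axis,
                      center - h * x_axis + h * y_axis])
    dst = np.float32([[0, 0], [size - 1, 0], [0, size - 1]])
    return cv2.warpAffine(img, cv2.getAffineTransform(src, dst), (size, size))
```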

In Chapter 7, the multimodal recognition system and all stages involved in it are discussed. The major focus is on the creation of chimeric multimodal databases and their experimental analysis. Three multimodal systems are discussed: iris and knuckleprint; knuckleprint and palmprint; and finally iris, knuckleprint and palmprint. Several testing strategies, such as inter-session matching, one training and one testing sample, and multiple training and multiple testing samples, have been considered. Two publicly available iris databases (CASIA Interval and Lamp) are fused with two public palmprint databases (CASIA, PolyU), while for knuckleprint the largest publicly available PolyU database is used. Hence, four tri-modal databases are generated for testing. It is observed that the trimodal system shows almost perfect behavior (i.e. CRR = 100% and EER = 0%) under the various testing strategies.
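For readers unfamiliar with score-level fusion, a standard min-max normalized sum rule is sketched below. This rule is assumed purely for illustration; Chapter 7 details the actual scheme used in the thesis.

```python
import numpy as np

def fuse_scores(score_lists):
    """Min-max normalized sum-rule fusion of per-modality match scores.

    `score_lists` holds one score array per modality, all computed
    against the same gallery (a textbook rule, not the thesis's).
    """
    fused = np.zeros(len(score_lists[0]), dtype=float)
    for scores in score_lists:
        s = np.asarray(scores, dtype=float)
        span = s.max() - s.min()
        fused += (s - s.min()) / span if span > 0 else 0.0
    return fused / len(score_lists)

# e.g. fused = fuse_scores([iris_scores, knuckle_scores, palm_scores])
```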

The last chapter concludes the work carried out in the thesis. It is shown through experimental analysis that orthogonal feature based fusion of iris with other modalities like knuckleprint and palmprint can be very useful while working on large databases, and that very high accuracy can be achieved. The proposed system can perform a single verification of an individual within a second, which is fast enough for real-time applications.


Acknowledgements

It gives me immense pleasure to express my profound sense of gratitude to my thesis supervisor, Prof. Phalguni Gupta. I am highly indebted and deeply grateful to him for his inspiring guidance, constant encouragement, valuable suggestions and ingenious ideas. His enormous knowledge and understanding have greatly helped me in better understanding various aspects of biometrics. His expertise in the area of biometrics and his enthusiasm in research have been of great value to me.

Aditya Nigam


Dedicated to
My Beloved Parents, All My Respected Teachers,
My Wife Mrs. Jyoti Nigam and Son Master Bhavya Nigam


Contents

List of Tables xx
List of Figures xxiv
List of Algorithms xxxi
List of Abbreviations xxxiii

1 Introduction 1
1.1 Biometric Traits 4
1.1.1 Physiological based Traits 4
1.1.2 Behavioral 7
1.2 Modes of Biometric System 9
1.2.1 Enrollment 9
1.2.2 Verification 10
1.2.3 Identification 11
1.3 Multi-biometric System 12
1.3.1 Types of Multi-biometric System 12
1.3.2 Fusion Levels 15
1.4 Motivation of Thesis 16
1.4.1 Iris 16
1.4.2 Knuckleprint 17
1.4.3 Palmprint 18
1.4.4 Multi-biometrics 18
1.5 Performance Analysis Metrics 20
1.5.1 Verification Performance Metrics 20
1.5.2 Identification Performance Metrics 23
1.6 Thesis Contribution 24
1.6.1 ROI Extraction 25
1.6.2 Quality Estimation 25
1.6.3 Biometric Feature Enhancement and Transformation 26
1.6.4 Biometric Feature Extraction and Matching 26
1.6.5 Multimodal Biometric System 26
1.7 Thesis Organization 27

2 Literature Review 29
2.1 Iris based Biometric System 29
2.2 Knuckleprint based Biometric System 36
2.3 Palmprint based Biometric System 40
2.4 Multi-modal Biometric System 43

3 Preliminaries 45
3.1 Local Binary Pattern (LBP) 46
3.2 Corner Point Detection 49
3.3 Sparse Point Tracking (KLT) 50
3.4 Phase Only Correlation (POC) 53

4 Iris Recognition System 57
4.1 Iris ROI Extraction 57
4.1.1 Iris Inner Boundary Localization 58
4.1.2 Iris Outer Boundary Localization 62
4.1.3 Proposed Segmentation Algorithm 64
4.1.4 Iris Normalization 65
4.2 Iris Quality Estimation 68
4.2.1 Focus (F) based Quality Attribute 69
4.2.2 Motion Blur (MB) based Quality Attribute 70
4.2.3 Contrast and Illumination (CI) based Quality Attribute 72
4.2.4 Dilation (D) based Quality Attribute 72
4.2.5 Specular Reflection (SR) based Quality Attribute 73
4.2.6 Occlusion (O) based Quality Attribute 73
4.2.7 Quality Class Determination 79
4.3 Iris Image Preprocessing 80
4.3.1 Iris Image Enhancement 81
4.3.2 Iris Image Transformation 82
4.4 Iris Feature Extraction and Matching 84
4.4.1 Corners having Inconsistent Optical Flow (CIOF) 84
4.4.2 Matching Algorithm 85
4.5 Databases 87
4.5.1 Databases Used 87
4.5.2 Testing Strategy 88
4.6 Experimental Results 89
4.6.1 Iris Segmentation Analysis 89
4.6.2 Threshold Selection 91
4.6.3 Impact of Enhancement on System Performance 93
4.6.4 Comparative Analysis 94

5 Knuckleprint Recognition System 99
5.1 Knuckleprint ROI Extraction 100
5.1.1 Knuckle Area Detection 100
5.1.2 Central Knuckle-Line Detection 102
5.1.3 Central Knuckle-Point Detection 104
5.2 Knuckleprint Image Quality Extraction 106
5.2.1 Focus (F) 108
5.2.2 Clutter (C) 108
5.2.3 Uniformity based Quality Attribute (S) 109
5.2.4 Entropy based Quality Attribute (E) 110
5.2.5 Reflection based Quality Attribute (Re) 111
5.2.6 Contrast based Quality Attribute (Con) 112
5.2.7 Quality Class Determination 113
5.3 Knuckleprint Image Preprocessing 114
5.4 Knuckleprint Feature Extraction and Matching 115
5.4.1 Corners having Inconsistent Optical Flow (CIOF) 115
5.5 Database Specifications 117
5.5.1 Testing Strategy 118
5.6 Performance Analysis 119
5.6.1 Knuckleprint Segmentation 119
5.6.2 Threshold Selection 119
5.6.3 Comparative Analysis 121

6 Palmprint Recognition System 123
6.1 Palmprint ROI Extraction 123
6.2 Palmprint Preprocessing 125
6.3 Palmprint Feature Extraction and Matching 126
6.3.1 Corners having Inconsistent Optical Flow (CIOF) 127
6.4 Database 128
6.4.1 Testing Strategy 131
6.5 Performance Analysis 131

7 Multimodal Based Recognition System 135
7.1 Knuckleprint and Palmprint Fusion 136
7.1.1 Multi-modal Databases Creation 136
7.1.2 Testing Strategy 138
7.1.3 Performance Analysis 140
7.2 Iris and Knuckleprint Fusion 144
7.2.1 Multimodal Database Creation 145
7.2.2 Testing Strategy 146
7.2.3 Performance Analysis 147
7.3 Iris, Knuckleprint and Palmprint Fusion 149
7.3.1 Multimodal Database Creation 151
7.3.2 Testing Strategy 152
7.3.3 Performance Analysis 152

8 Conclusions 159

References 164

Publications 177


List of Tables

1.1 Biometric Properties (L = Low; M = Medium; H = High) 19
4.1 Quality Parameters and Quality Class for Images in Fig. 4.19 80
4.2 Iris Database Specifications 88
4.3 Database and Testing Strategy Specifications 88
4.4 Values of Various Parameters used in Algorithms 4.1 and 4.2 89
4.5 Variation in Threshold needed for Correct Segmentation 90
4.6 Required Parametric Variations ('-': no change; '↑': increase; '↓': decrease the parameter value) 91
4.7 Segmentation Errors (rows below images: errors occurred in the Interval and Lamp databases) 91
4.8 Threshold Selection for the CASIA V4 Interval and Lamp Iris Databases using only vcode Matching over a Validation Set of the First 100 Subjects 92
4.9 Enhancement based Performance Boost-up over the Iris Interval Database (fusion refers to vcode+hcode) 93
4.10 Performance Analysis of the Proposed System 95
4.11 Comparative Performance Analysis over the CASIA V4 Interval and Lamp Iris Databases 96
5.1 Database Specifications 119
5.2 Parameterized Performance Analysis of the Proposed System on the PolyU Knuckleprint Database (only vcode matching) 120
5.3 Comparative Performance Analysis over PolyU Knuckleprint (results as reported in [27]) 121
6.1 Database Specifications 130
6.2 Palmprint Results (L = left hand; R = right hand; ALL = L+R) 132
6.3 Comparative Performance with Other Systems 134
7.1 Database Specifications (L = left hand; R = right hand; ALL = L+R) 139
7.2 Testing Strategy Specifications (L = left; R = right; ALL = L+R) 141
7.3 Consolidated Results (L = left hand; R = right hand; ALL = L+R) 142
7.4 Database Specifications 146
7.5 Performance Analysis of the Proposed System over the Iris Interval and Lamp Databases along with the Knuckleprint PolyU Database (fusion refers to vcode+hcode) 149
7.6 Database Specifications of the Four Self-Created Tri-Modal Databases using Iris, Knuckleprint and Palmprint 150
7.7 Database Specifications for Interval-CASIA (db1), containing images from 349 subjects in (3+4) = 7 different poses; the first 3 images are used for training and the last 4 for testing 153
7.8 Database Specifications for Interval-PolyU (db2), containing images from 349 subjects in (3+4) = 7 different poses; the first 3 images are used for training and the last 4 for testing 154
7.9 Database Specifications for Lamp-CASIA (db3), containing images from 566 subjects in (4+4) = 8 different poses; the first 4 images are used for training and the last 4 for testing 155
7.10 Database Specifications for Lamp-PolyU (db4), containing images from 386 subjects in (6+6) = 12 different poses; the first 6 images are used for training and the last 6 for testing 156


List of Figures

1.1 Some Biometric Traits 2
1.2 Flow Diagram of Enrollment Process 10
1.3 Flow Diagram of Authentication Process 11
1.4 Flow Diagram of Identification Process 12
1.5 Fingerprint Sensors 13
1.6 Two Fingerprint Matchers (Multi-algorithm) 14
1.7 Samples of the Same Fingerprint (Multi-instance) 14
1.8 Five Different Biometric Traits (Multi-modal) 15
1.9 Iris, Knuckle and Palmprint Anatomy 17
1.10 Graphical Representation of FAR and FRR 21
1.11 Graphical Representation of EER and Accuracy 22
1.12 Genuine and Imposter Score Distributions (here $\int_0^t s(x \mid H_G)\,dx$ represents the genuine similarity score distribution) 23
1.13 Genuine vs Imposter Best Score Graph 24
2.1 Images taken from [19, 18] 30
2.2 Mapping of radial rays into a polar representation; the pupil and iris boundaries become vertical edges [13] 31
2.3 (a) Most significant bit-plane 7, (b) bit-plane 6, (c) bit-plane 5, (d) bit-plane 4, (e) bit-plane 3, (f) bit-plane 2, (g) bit-plane 1, (h) least significant bit-plane 0 [11] 31
2.4 Iris Deformable Model [50] 32
2.5 Iris Image Preprocessing [50]: (a) Original Image, (b) Detected Inner Boundary, (c) Detected Outer Boundary, (d) Lower Half of Iris, (e) Normalized, (f) Eyelid Masking and (g) Enhanced Image 33
2.6 Overview of a Non-ideal Iris Recognition System [22] 33
2.7 Off-Angle Iris Segmentation [2] 34
2.8 Two Most Popular Iris Systems 35
2.9 Steps Involved in Feature Vector Extraction [51] 36
2.10 Ordinal Measures for Iris Recognition [66] 37
2.11 Knuckleprint Acquisition System [77]: (a) Device, (b) Raw Sample, (c) ROI Localized, (d) Cropped 37
2.12 Ensemble of Local and Global Features [80] 38
2.13 Improvement over Gabor based CompCode: (a) Original Image, (b) Orientation Code (ImCompCode), (c) Magnitude Code (MagCode) (images taken from [79]) 39
2.14 Local Feature Extraction 40
2.15 Partitioning for the Zernike based Palmprint System [7] 41
2.16 Partitioning for the Phase based Palmprint System [9] 42
2.17 Partitioning for the Stockwell based Palmprint System [6] 42
2.18 Partitioning for the DCT based Palmprint System [8] 43
3.1 LBP Computation 47
3.2 Application of LBP in Face Recognition 48
3.3 Corner Features shown in Red 50
3.4 Hand Tracking Application 53
3.5 Phase Only Correlation (POC) between two translated images; (c) shows the POC value for each spatial location 54
4.1 Overall Architecture of the Proposed Iris Recognition System 58
4.2 Iris Segmentation 58
4.3 Iris Pupil Segmentation (Thresholding) 59
4.4 Iris Pupil Segmentation (Gradient based Thresholding using Sobel) 60
4.5 For any radius r, a point (x, y) having two pupil centers (c_{x1}, c_{y1}) and (c_{x2}, c_{y2}) 61
4.6 Segmented Iris Pupil 61
4.7 All Steps in Iris Pupil Segmentation 62
4.8 Application of the Integro-differential Operator 62
4.9 Iris Localization (1st row: original iris; 2nd row: segmented iris) 65
4.10 Normalization of the Cropped Iris 67
4.11 Focus Filter Response 70
4.12 JNB Edge Width 71
4.13 Occlusion in Normalized Iris 74
4.14 Application of Region-Growing 75
4.15 Eyelid Regions: Arrows Denote the Direction of Region-Growing 75
4.16 Determination of the Eyelash Region 78
4.17 Reflection Detection by Thresholding 78
4.18 Overall Occlusion Mask 79
4.19 Quality Estimation 80
4.20 Iris Texture Enhancement 81
4.21 LGBP Transformation (red: negative gradient; green: positive gradient; blue: zero gradient) 82
4.22 Original and Transformed (vcode, hcode) Iris ROIs 83
4.23 Iris Images 87
4.24 Segmentation Error Hierarchy (the number for each sub-category gives the number of images of that category in the Interval database) 90
4.25 Parameterized ROC Curve of the Proposed System for the Iris Interval Database (only vcode matching) 93
4.26 Parameterized ROC Analysis of the Proposed System for the Iris Lamp Database (only vcode matching) 94
4.27 Enhancement based Performance Boost-up in terms of Receiver Operating Characteristic Curves for the Proposed System 95
4.28 Comparative Analysis of the Proposed System over the Interval Database 97
4.29 Comparative Analysis of the Proposed System over the Lamp Database 97
5.1 Overall Architecture of the Proposed Knuckleprint Recognition System 100
5.2 Knuckleprint ROI Annotation 101
5.3 Knuckle Area Detection 101
5.4 Conventional Gabor Filter 102
5.5 Curvature Knuckle Filter 103
5.6 Knuckle Filter 104
5.7 Knuckleprint ROI Detection: (a) knuckle filter response superimposed over the knuckle area in blue, (b) fully annotated and (c) FKP ROI 105
5.8 Some Segmented Knuckleprints 106
5.9 Poor Quality Knuckleprint ROI Samples 107
5.10 Defocus based Quality Attributes F and C 109
5.11 Uniformity based Quality Attribute (S) 110
5.12 Entropy based Quality Attribute E 111
5.13 Reflection based Quality Attribute Re 112
5.14 Knuckleprint Quality Estimation (top, middle and bottom images for each proposed quality parameter) 113
5.15 Enhanced Knuckleprint 114
5.16 Original and Transformed (vcode, hcode) Knuckleprint ROIs 114
5.17 Illumination Invariance of the LGBP Transformation (first row: the same image under five different illumination conditions; second row: the corresponding vcodes) 115
5.18 Knuckleprint Recognition 116
5.19 Some Images from the Knuckleprint Database [56] used in this Work 117
5.20 Failed Knuckle ROI Detection 120
5.21 Parameterized ROC Analysis of the Proposed System for the PolyU Knuckleprint Database (left index images of the first 50 subjects; only vcode matchings) 121
5.22 Performance of the Proposed System over the Knuckleprint PolyU Database 122
6.1 Overall Architecture of the Proposed Palmprint Recognition System 124
6.2 Palmprint ROI Extraction (images taken from [9]) 124
6.3 Palmprint Image Enhancement 125
6.4 Original and Transformed (vcode, hcode) Palmprint ROI 126
6.5 Steps of the Palmprint Recognition System 127
6.6 Sample Palmprint Images from the CASIA and PolyU Databases 130
6.7 Vertical and Horizontal Information Fusion based Performance Boost-up for the CASIA Palmprint Database (x-axis in log scale) 132
6.8 Vertical and Horizontal Information Fusion based Performance Boost-up for the PolyU Palmprint Database (x-axis in log scale) 133
7.1 Sample Images from all Databases (Five Distinct Users) 137
7.2 Genuine vs Imposter Best Matching Score Separation Graphs 143
7.3 ROC Comparison of the Unimodal vs Multimodal Databases for the Proposed System 150

List of Algorithms

4.1 Pupil Segmentation 63
4.2 Iris Segmentation 66
4.3 Eyelid Detection 76
4.4 Eyelash Detection 78
4.5 Occlusion Mask Determination 79
4.6 CIOF(Iris_a, Iris_b) 86
5.1 Knuckleprint ROI Detection 105
5.2 Uniformity based Quality Attribute (S) 110
5.3 CIOF(knuckle_a, knuckle_b) 118
6.1 Palmprint ROI Extraction 125
6.2 CIOF(palm_a, palm_b) 129


List of Abbreviations

CLAHE Contrast Limited Adaptive Histogram Equalization

AUC Area Under ROC Curve

CMC Cumulative Matching Characteristics

CRR Correct Recognition Rate

EER Equal Error Rate

EUC Error Under ROC Curve

FAR False Acceptance Rate

FMR False Match Rate

FNMR False Non-Match Rate

FRR False Rejection Rate

GAR Genuine Acceptance Rate

ICA Independent Component Analysis

IITK Indian Institute of Technology Kanpur

PCA Principal Component Analysis

ROC Receiver Operating Characteristics

ROI Region of Interest

SIFT Scale Invariant Feature Transform

SURF Speeded Up Robust Features

LDA Linear Discriminant Analysis

DCT Discrete Cosine Transform


POC Phase Only Correlation

BLPOC Band Limited Phase Only Correlation

LGBP Local Gradient Binary Pattern

LOG Laplacian of Gaussian

DI Discriminative Index

GvI Genuine vs Imposter Graph

FTE Failure to Enroll Rate

LBP Local Binary Pattern

EBGM Elastic Bunch Graph Matching

NIR Near Infra-Red

SVM Support Vector Machine

DFT Discrete Fourier Transform

IDFT Inverse Discrete Fourier Transform

HMM Hidden Markov Model

CCD Charge Coupled Device

SOTA State of the Art

vcode Vertical code

hcode Horizontal code

CG Curvature Gabor filter

JNB Just Noticeable Blur

CIOF Corners having Inconsistent Optical Flow


Chapter 1

Introduction

Personal authentication plays an important role in society. It requires at least some level of security to assure the identity of a user. Security can be realized at one of three levels.

1. Level 1 [Possession]: The user possesses something which must be produced at the time of authentication, for example the key of a car or a room.

2. Level 2 [Knowledge]: The user knows something which is used for authentication, for example a PIN (personal identification number), a password, or a credit card CVV (card verification value).

3. Level 3 [Biometrics]: The user owns certain unique physiological and behavioral characteristics, known as biometrics, which are used for authentication, for example face, iris, fingerprint, signature and gait.

In some cases, more than one level of security is used to enhance the accuracy of an authentication system. However, Level 1 and Level 2 security have drawbacks: keys or smart-cards may be lost or mishandled, while passwords or PINs may be forgotten or guessed. Since neither possession nor knowledge is an intrinsic property of the user, they are difficult for the user to manage. This is not the case with Level 3 security, which is based on biometrics: the science of personal authentication using the physiological (e.g. fingerprint, face, iris) and behavioral (e.g. signature, gait, voice) characteristics of human beings. Examples of some well known biometric traits are shown in Fig. 1.1.

Figure 1.1: Some Biometric Traits: (a) Physiological, (b) Behavioral

Any biometrics based authentication system is better than a traditional possession or knowledge based system for the following reasons.

• Biometric traits are intrinsically related to the user. They cannot be lost, forgotten or misplaced; hence they are easy to manage.

• The physical presence of the trait is required at the time of authentication.

• The characteristic features are unique.

Some of the vital properties that any biometric trait should possess are given below.

1. Uniqueness: Characteristics associated with the biometric trait should be different for everyone.

2. Universality: The biometric trait should be owned by everyone and should not be lost.

3. Circumvention: The biometric trait should not be easy to spoof or forge.

4. Collectability: The biometric trait should be acquirable by some digital sensor.

5. Permanence: Characteristics associated with the biometric trait should be time invariant (temporally stable).

6. Acceptability: The biometric trait should be accepted by society without any objection.

Biometric based personal authentication is a multi-stage process. In the initial stage, a raw image is captured using an acquisition sensor. This stage is critical because the accuracy of any biometric system is highly dependent on image quality. In the second stage, the desired part of the image, termed the region of interest (ROI), is extracted from the acquired image. The third stage estimates the quality of the ROI; if the quality is poor, the image may be re-acquired. In the next stage, the ROI is preprocessed using some enhancement technique, and transformations are also applied to obtain a robust ROI. Discriminative features are then extracted from the enhanced ROI. Finally, the features of a query image are matched against those of the image(s) in the database to authenticate the claim.
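These stages compose naturally as a pipeline. The sketch below is purely schematic: every stage function is a caller-supplied placeholder for the trait-specific algorithm, not the thesis's actual API.

```python
from typing import Callable, Sequence

def authenticate(raw_image, gallery: Sequence,
                 extract_roi: Callable, estimate_quality: Callable,
                 preprocess: Callable, extract_features: Callable,
                 match: Callable, quality_threshold: float = 0.5):
    """Schematic staging of the multi-stage process described above."""
    roi = extract_roi(raw_image)                     # ROI extraction
    if estimate_quality(roi) < quality_threshold:    # quality gate
        return None                                  # ask for re-acquisition
    features = extract_features(preprocess(roi))     # enhance, then features
    return max(match(features, t) for t in gallery)  # best similarity score
```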


1.1 Biometric Traits

No biometric trait satisfies all the desired properties strictly. For example, facial features are not permanent throughout the life span, and the fingerprints of people doing hard manual work may be worn and barely visible. However, several well known biometric traits satisfy more or less all the biometric properties. Biometric traits can be divided into those based on physiological and those based on behavioral characteristics.

1.1.1 Physiological based Traits

This subsection discusses some of the well known biometric traits that are based on physiological characteristics: face, fingerprint, ear, iris, palmprint and knuckleprint.

• Face: It is one of the most common and well known biometric traits. Images can be captured from a distance and can even be extracted from video frames. In a typical face recognition system, the face is first detected in a test image and features are extracted. These features are matched against those stored in the database, and an appropriate matching algorithm is used to obtain the matching score. Most commonly, geometric distances between facial key-points such as the eyes, nose and mouth are used. Face recognition is employed in several real time applications such as surveillance, criminal identification and access control management. It is non-intrusive, but due to several challenges like pose, illumination, occlusion, aging and expression, its performance is found to be restricted.

– Advantages: Face is the most common and non-intrusive trait and hence can be captured easily using cheap sensors. It is widely accepted in society.

– Challenges: The face pose varies with the viewing angle, along with illumination. Facial expressions can deform the face significantly, while partial occlusion may hide some facial regions. Aging is a big challenge to deal with because facial features do not vary in a specific pattern.

• Fingerprint: Fingerprints have been used for personal authentication for quite a long time. A typical fingerprint is made up of ridges, and carries a huge amount of discriminative texture and patterns such as loops, arches and whorls. Ridge endings and ridge bifurcations are known as minutiae features; these are assumed to be unique and stable. Several minutiae based fingerprint matching algorithms have been proposed, which use the coordinates of each minutia point along with its orientation. Fingerprint sensors are easily available, but acquiring good quality fingerprints is a major challenge. Poor quality fingerprints may be due to a poor quality sensor or external factors such as dirt, oil and sweat. It is also observed that laborers possess poor quality fingerprints due to the nature of their work.

– Advantages: Fingerprints are unique. Less user cooperation is required for acquisition, and they can be captured with cheap sensors.

– Challenges: The major challenge is to get good quality fingerprints.

• Ear: Like other biometric traits, the ear contains robust, unique and discriminative line based features. In an ear recognition system, the ear is segmented from a raw profile face image, and features obtained from it are matched against those stored in the database. The major disadvantage of the ear is occlusion, which occurs due to hair or foreign bodies such as ear rings, caps and ear phones.

– Advantages: Similar to the face, the ear can be acquired non-intrusively using cheap sensors. It is universal and has a robust shape that does not vary much. The social acceptance of ear biometrics is high.

– Challenges: Ear recognition performance suffers under varying illumination, pose and translation. Scale and occlusion by external bodies like hair and ear rings are other challenges to deal with.

• Iris: The iris is considered one of the best known biometric traits. It is basically a doughnut shaped annular region bounded by the sclera and the pupil. Discriminative textures exist within the iris in the form of furrows, ridges and crypts. It is difficult to capture the iris in visible light as it is sensitive to light; hence iris images are acquired at high resolution under NIR (Near Infra-Red) illumination. The iris is segmented and normalized into a fixed size rectangular strip. Several texture based techniques exist to extract binary features, which are matched using the Hamming distance for authentication. Iris based recognition systems are usually found to be very accurate and robust, but the iris can be spoofed using contact lenses. The major problem with the iris is that it requires stringent user cooperation during acquisition; it is also very sensitive to external stimuli, which are hard to control.

– Advantages: The iris possesses highly discriminative unique texture that is naturally well protected. It is difficult to alter and fast to process. Iris images can be acquired touchlessly.

– Challenges: Accurate iris segmentation under varying illumination is very challenging. Eyelids and eyelashes, along with motion blur and specular reflection, are further issues. Off-angle iris recognition is also a very challenging problem.


• Palmprint: The inner part of the palm is considered the palmprint. It includes ridges, minutiae, principal lines, delta points and rich palmprint texture in abundance. These features are assumed to be stable and unique. Binary features are extracted using several texture based as well as statistical and structural properties. The stability of palmprint features has not yet been critically studied.

– Advantages: The palmprint can be captured using low cost sensors in a touch-less manner. The extracted palm ROI is large and contains discriminative and unique features.

– Challenges: Variation of illumination, rotation and translation, and handling the problem of occlusion are the major challenges.

• Knuckleprint: The outer part of a finger is considered the finger knuckle. It can be acquired with any ordinary sensor. Line based structural features can be extracted from it and are assumed to be discriminative. Several Gabor based texture and orientation based techniques are used to extract binary features.

– Advantages: It is naturally well protected and can be acquired using low cost sensors. Unique and discriminative features are available over the knuckleprint ROI.

– Challenges: The major challenges are handling illumination variation as well as rotation and translation.

1.1.2 Behavioral

Some of the most popular behavioral biometric characteristics are discussed below.

• Gait: Gait is a behavioral biometric trait that considers the characteristics of the way a person walks. Gait data can be acquired using moving light displays or video streams. Further, there are sensors that can record crucial parameters such as pressure and step patterns that can be used for identification.

– Advantages: It can be acquired non-intrusively and from a distance; hence its user acceptance is good.

– Challenges: The major challenges are handling variations due to background, clothing and walking surface.

• Signature: The hand-written signature of a person is a common instrument of personal verification. It is generally used to verify the owner of bank cheques and other off-line documents. From a signature, orientation and coordinate based features in the X and Y directions are extracted and matched with the features already stored in the database for the claimed identity. Features such as writing angle, breakpoints and curvatures are termed static, while pen speed, writing time and applied pressure are termed dynamic. Such systems work in two modes, on-line and off-line: an off-line signature image is obtained by scanning a handwritten signature, while an on-line (digital) signature is acquired through a digital signature pad or tablet. The signature is not universal, as illiterate persons do not know how to sign; it is not permanent, as it can vary with time; and it can be spoofed/forged easily.

– Advantages: It is well accepted socially.

– Challenges: A signature can vary due to aging, emotion or the writing surface, posing a big challenge to automated signature recognition.

• Voice: This trait considers how a person sounds while speaking. It is believed that every person has his or her own natural texture and tonal quality that depends on nasal tone, cadence and inflection. Several voice based characteristics are extracted and used for voice based authentication. The features are not permanent and can be imitated/spoofed easily. Voice is not universal, as mute persons cannot speak.

– Advantages: It is well accepted by society.

– Challenges: Voice suffers severely from aging, emotion and other environmental variations.

1.2 Modes of Biometric System

A biometric system can be operated in three different modes: enrollment, verification and identification. Every user of the system first needs to be enrolled by providing images of the biometric trait; features are extracted from the images and stored in the database. In verification or identification, features of the query image are matched against those of the enrolled users.

1.2.1 Enrollment

In this step, a user enrolls or registers in the existing database. A biometric sensor is used to acquire an image from the user and its quality is evaluated. If the quality is above a threshold set a priori, features are extracted and a unique identification number is assigned; otherwise the image is re-captured. This capture process is repeated until an image of the desired quality is obtained. The enrollment subsystem is shown in Fig. 1.2.
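The enrollment logic amounts to a quality-gated capture loop; a minimal sketch, with the sensor and processing stages as hypothetical callables:

```python
import uuid

def enroll(capture, estimate_quality, extract_features, database,
           quality_threshold=0.5, max_attempts=5):
    """Quality-gated enrollment loop; the callables stand in for the
    sensor and processing stages described above (all hypothetical)."""
    for _ in range(max_attempts):
        sample = capture()
        if estimate_quality(sample) >= quality_threshold:
            user_id = str(uuid.uuid4())        # unique identification number
            database[user_id] = extract_features(sample)
            return user_id
    return None                                # failure to enroll (FTE)
```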


Figure 1.2: Flow Diagram of Enrollment Process

1.2.2 Verification

Verification is also known as one-to-one (i.e. 1:1) matching. In this mode, the user claims an identity and the system verifies the correctness of the claim. Features from the biometric trait provided by the user are matched with the features of the claimed identity stored in the existing database. If the similarity score is more than a pre-computed threshold, the claim is verified and the user is considered genuine; otherwise the claim is rejected and the user is an imposter. The flow diagram of the verification subsystem is shown in Fig. 1.3.

Suppose, F1 and F2 are the feature vectors of a biometric trait of two subjects,

s1 and s2. By the term “Matching between s1 and s2”, we mean the similarity or

dissimilarity between their feature vectors F1 and F2. One can make use of any

distance measure to obtain the similarity or the dissimilarity score between the two

feature vectors. Without any loss of generality, let us assume that the distance mea-

sure provides the similarity score. If the score is greater than a predefined threshold

then we can conclude that the two images are matched (genuine); otherwise they


are not matched (imposter).

Figure 1.3: Flow diagram of Authentication Process
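
A minimal sketch of the verification decision follows, assuming feature vectors are compared with some similarity function; cosine similarity is used here purely as an example and is not the measure proposed in this thesis.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two feature vectors (an assumed example measure)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_features, claimed_template, threshold=0.9):
    """1:1 matching: accept the claim only if the score crosses the threshold."""
    score = cosine_similarity(probe_features, claimed_template)
    return "genuine" if score >= threshold else "imposter"
```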

1.2.3 Identification

It is also known as One-to-Many (i.e. 1:N) matching. In this mode, the system

may not have any information other than the presented biometric trait. It attempts

to determine the correct identity of that user; hence it is termed as Identification.

The ROI of biometric image is preprocessed and features are extracted. The feature

vector is matched with feature vectors of all users in the database and the top best

matches are obtained.

More clearly, let F1 be the probe features of a biometric trait. The matching score

may be computed using a variety of distance metrics to obtain the similarity or

the dissimilarity scores between F1 and all feature vectors that are stored in the

database. Out of all these scores, the best N matching images are reported as N

probable matches. The identification subsystem is shown in Fig. 1.4.


Figure 1.4: Flow Diagram of Identification Process
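
A minimal sketch of 1:N identification under the same assumptions, returning the N best matches as a rank list:

```python
def identify(probe, gallery, similarity, n_best=5):
    """1:N matching: score the probe against every enrolled template and
    return the identities with the N best (highest) similarity scores."""
    scores = {uid: similarity(probe, template) for uid, template in gallery.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:n_best]  # the N probable matches (rank list)
```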

1.3 Multi-biometric System

The performance of any unimodal biometric system is often restricted by uncontrolled environmental variations, sensor precision and reliability, as well as several trait specific challenges such as pose, expression and aging for face. Moreover, it relies on a single set of features to take the matching decision; hence it is difficult to improve its accuracy beyond a point. Hence, one can explore the possibility of fusing more than

one biometric sample, trait or algorithm. This is termed multi-biometrics [16].

There exist different types of multi-biometric system. Some of them are discussed

below.

1.3.1 Types of Multi-biometric System

Any biometric based authentication system consists of several stages and has many

challenges and limitations. The aim of any multi-biometric system is to improve the

performance of the system.


• Multi-sensor System : It considers images of the same biometric trait captured with the help of multiple sensors. Fig. 1.5 shows three

types of fingerprint scanners which can be used to build a multi-sensor bio-

metric system. These three sensors use using different technologies to acquire

data and hence the quality as well as discriminative features of their samples

are significantly different.

Figure 1.5: Fingerprint Sensors

• Multi-algorithm System : It considers multiple matching algorithms

to improve the performance of the system. Images of the selected trait are

captured using a single sensor. In Fig. 1.6, it is shown that one can use different

algorithms applied over the same image. One algorithm may be using some

global texture like orientation field features while other one may use minutia

based local features. Fusion of these matchers is expected to perform better

than any of these two algorithms.

• Multi-instance System : It considers more than one image of the same

trait per user. Multiple samples are collected. In Fig. 1.7, three samples

of the same finger collected under controlled environment are shown. This

redundant information is useful to address the issues related to local as well


(a) Orientation Field Based Algorithm

(b) Minutia Based Algorithm

Figure 1.6: Two Fingerprint Matchers (Multi-algorithm)

as environmental condition variations.

Figure 1.7: Samples of Same Fingerprint (Multi-instance)

• Multi-modal System : It considers multiple biometric traits for authenti-

cation. Uncorrelated traits are considered to achieve better performance [34].

Also it makes system more robust against spoofing attacks as it becomes more

and more difficult to imitate all selected traits at once. Still, print-attacks and spoof-attacks may circumvent these systems [33]; hence careful trait selection along with better anti-spoofing algorithms is desirable. In Fig. 1.8, a multi-modal sys-

tem is shown in which face, fingerprint, ear, iris and palmprint samples are

used.


(a) Face (b) Fingerprint (c) Ear (d) Iris (e) Palmprint

Figure 1.8: Five different Biometric Traits (Multi-Modal)

1.3.2 Fusion Levels

There are several ways that can be used to fuse various characteristics in a multi-

biometric system. Fusion can be done at various levels which are discussed below.

• Score Level : Samples of different traits are matched with their corre-

sponding trait individually and scores are obtained. Scores are normalized to the same scale and converted into either dissimilarities or similarities. These scores

are combined to obtain the fused score. There exist several score level fusion

techniques such as max, min, average or weighted average. For example, let

a face similarity score be 91 in a scale of [0 to 100] and an iris dissimilarity

score be 0.27 in a scale of 0 to 1.0. Either face score should be scaled to [0 to

1.0] and converted into dissimilarity (i.e. 1 − 91/100 = 0.09), or the iris score should be scaled to [0 to 100] and converted into similarity (i.e. 100 − 0.27 × 100 = 73) before fusion. Hence, for the fusion of face and iris, one can use a weighted average of either 91 and 73 (similarities) or 0.09 and 0.27 (dissimilarities).

• Feature Level : All features from each trait are first extracted individ-

ually for every subject. These features should be of the same type and are

concatenated into one single multi-biometric template which is used for au-

thentication. For example, LBP based histogram features extracted from face

and iris images can be fused by concatenating the two histograms one after the other to obtain a single histogram. The χ2 dissimilarity measure can then be used

to obtain the fused multi-modal matching score.

• Decision Level : All samples of different traits are matched with their

corresponding trait to obtain individual scores. These scores are thresholded

to obtain an individual decision for each trait. The final decision is taken by

fusing them using OR, AND or other rules. For example, for face, iris and

palmprint traits let the individual decisions be Face = Accept, Iris = Reject

and Palmprint = Accept. A simple rule such as "at least two Accepts" can then be used to make the final decision. (A brief code sketch of all three fusion levels follows this list.)
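
The three fusion levels described above can be illustrated with a few lines of Python. This is a minimal, self-contained sketch using the numbers from the examples; the equal weights, the scaling step and the "at least k Accepts" rule are illustrative assumptions, not the exact formulation used later in the thesis.

```python
import numpy as np

# --- Score level: bring scores to one scale, then weighted-average them ---
def fuse_scores(face_similarity, iris_dissimilarity, w_face=0.5, w_iris=0.5):
    face_dis = 1.0 - face_similarity / 100.0      # 91 on [0, 100] -> 0.09
    return w_face * face_dis + w_iris * iris_dissimilarity

# --- Feature level: concatenate per-trait histograms into one template ---
def fuse_features(face_hist, iris_hist):
    return np.concatenate([face_hist, iris_hist])

def chi_square(s, m, eps=1e-12):
    """Chi-square dissimilarity between two (fused) histograms."""
    s, m = np.asarray(s, float), np.asarray(m, float)
    return float(np.sum((s - m) ** 2 / (s + m + eps)))

# --- Decision level: 'at least k Accepts' rule over per-trait decisions ---
def fuse_decisions(decisions, min_accepts=2):
    accepts = sum(1 for d in decisions.values() if d == "Accept")
    return "Accept" if accepts >= min_accepts else "Reject"

print(fuse_scores(91, 0.27))                                   # 0.18
print(fuse_decisions({"Face": "Accept", "Iris": "Reject",
                      "Palmprint": "Accept"}))                 # Accept
```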

1.4 Motivation of Thesis

There exist several potentially viable physiological and behavioral biometric traits.

Each biometric trait has its own unique and complex anatomical structure. The dynamics of this structure accounts for its discriminative power as well as its stability.

1.4.1 Iris

It is a ring made up of tissues. The detailed iris related anatomical features are

described in [12] and are shown in Fig. 1.9(a). The thin circular diaphragm between the cornea and the lens is called the iris; it has an abundance of micro-textures such as crypts, furrows, ridges, corona, freckles and pigment spots. These textures are randomly

distributed; hence they are believed to be unique [32]. Iris texture is very fine and

most of its details are developed during the embryonic development. Textures of

two subjects are believed to be unique and even the right eye of the same subject

is different from the left eye [21]. Iris is a naturally well-protected biometric as

compared to the other traits and is also assumed to be invariant to age.


Issues such as iris occlusion due to eyelid and eyelashes and specular reflection

remain to be addressed. Accurate iris localization is also a challenge, and a huge amount of effort is required to improve the accuracy and reliability of any iris based system. Estimating iris quality is another important issue that has to be resolved.

In this thesis, we have addressed some of these issues.

(a) Iris Anatomy (b) Knuckle Anatomy (c) Palmprint Anatomy

Figure 1.9: Iris, Knuckle and Palmprint Anatomy

1.4.2 Knuckleprint

Anatomical structure of the knuckleprint is shown in Fig. 1.9(b). Rich line-like pattern structures (i.e. knuckle lines) exist over the knuckleprint in both vertical and horizontal directions. These pattern formations are believed to be very discriminative [80]. The knuckleprint texture develops very early and lasts very long because it lies on the outer side of the hand, which is hardly used in any manual work (except, perhaps, by boxers). Negligible wear and tear, as well as negligible print quality degradation with time and age, are observed. Its failure to enroll rate is

also expected to be very low as compared to the fingerprint and it does not require

much user cooperation.

Since it is a very new trait, there exist several challenges that have to be addressed. The central knuckle line extraction is a major issue that is required to

register knuckleprints. There is a need to design a reliable feature selection as well

as knuckleprint matching algorithm. Estimating the quality of a knuckleprint is a

challenge that requires significant attention. This thesis has addressed some of these

issues.

1.4.3 Palmprint

The inner part of the hand is called the palm, and the extracted region of interest between the fingers and the wrist is termed the palmprint, as shown in Fig. 1.9(c). Even monozygotic twins are found to have different palmprint patterns [37]. Pattern formation within this region is believed to be stable as well as unique [9]. A huge amount of texture in the form of palm-lines, ridges, wrinkles etc. is available over the palmprint, as shown in Fig. 1.9(c). A prime advantage of palmprint over fingerprint is its higher social acceptance, because it has never been associated with criminals. Its larger ROI area, compared to fingerprint images, ensures an abundance of structural features including principal lines, wrinkles, creases and texture patterns. Due to the larger ROI, even low resolution palmprint images can be used, which enhances the system's speed and reduces the sensor cost.

There are several challenges to be addressed. Accurate palmprint localization is one of the key issues required to extract registered palmprints. One needs to design reliable feature selection as well as palmprint matching algorithms. Computing the quality of a palmprint is another challenge that requires

significant attention. This thesis has dealt with some of these issues.

1.4.4 Multi-biometrics

A comparative study between iris, knuckleprint and palmprint is presented in Table 1.1. It has been observed experimentally that the fusion of two or more biometric modalities enables the system to reject imposters much more confidently, and hence boosts the overall system performance significantly. Most of the state-of-the-art uni-modal biometric authentication systems perform block-by-block, pair-wise matching [6], [7], [9], [36], [65], [76]; hence they require pre-registered images, yet they still cannot produce highly accurate systems.

Property        Meaning                                            Iris  Palmprint  Knuckleprint
Universality    Every individual must possess the trait             M       M           M
Uniqueness      Features should be distinct across individuals      H       M           M
Permanence      Features should be constant over a long period      H       H           H
Collectability  Trait can be easily acquired                        L       M           H
Performance     Possesses high performance                          H       M           M
Acceptability   Acceptable to a large percentage of the population  L       M           H
Circumvention   Difficult to mask or manipulate                     M       M           H

Table 1.1: Biometric Properties (L = Low; M = Medium; H = High)

Any multi-modal system makes use of multiple biometric traits to enhance the system's performance. Multimodal systems are more relevant when the number of enrolled users is very large: the false acceptance rate grows rapidly with the size of the database [5], hence multiple traits are used to achieve better performance. Multi-modal systems also enable us to deal with a missing trait. Their spoof vulnerability is much lower than that of any unimodal system; hence they are well suited for outdoor and non-controlled environments. To the best of our knowledge, not much work has been done in this area, mainly due to the non-availability of multi-modal datasets. Iris, knuckleprint and palmprint traits have uncorrelated features; hence this thesis has considered these traits for fusion to enhance the system performance. Further, these traits have never been used together to design a multimodal system in the past. All of them


are well protected and also their textures cannot be altered or deformed easily.

1.5 Performance Analysis Metrics

It is necessary to analyze the performance of any biometric system. The inferences drawn from such analysis are used to judge its suitability for a given application.

There exist several performance measures to analyze a verification or identification

system.

1.5.1 Verification Performance Metrics

Like any pattern recognition system, there are two types of errors, viz. False Ac-

ceptance Rate (FAR) and False Rejection Rate (FRR). When two feature vectors

are matched, it generates a matching score. This score is either dissimilarity or

similarity score. For a dissimilarity (similarity) score, if it is less (greater) than a

predefined threshold, we assume that the two feature vectors are matched. FAR is the probability of wrongly accepting an imposter as a genuine user. More clearly, if we perform N distinct imposter matchings and M of them are wrongly accepted as genuine matchings, then FAR is given by:

FAR = (M / N) × 100% (1.1)

Similarly, FRR is defined as the probability of wrongly rejecting a genuine user. That means, if we perform N distinct genuine matchings and M of them are wrongly rejected, then FRR is given by:

FRR = (M / N) × 100% (1.2)
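
Assuming similarity scores (higher means more alike), FAR and FRR at a given threshold follow directly from Eqs. 1.1 and 1.2, as in this small sketch:

```python
import numpy as np

def far_frr(imposter_scores, genuine_scores, threshold):
    """FAR and FRR (Eqs. 1.1 and 1.2) for similarity scores at one threshold."""
    imposter_scores = np.asarray(imposter_scores, float)
    genuine_scores = np.asarray(genuine_scores, float)
    far = np.mean(imposter_scores >= threshold) * 100.0  # imposters wrongly accepted
    frr = np.mean(genuine_scores < threshold) * 100.0    # genuines wrongly rejected
    return far, frr
```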

For various thresholds, if we plot FAR or FRR, we get a curve which is known


(a) Hypothetical FAR curve (b) Hypothetical FRR curve

Figure 1.10: Graphical Representation of FAR and FRR

as FAR or FRR curve. Sample FAR and FRR curves are shown in Fig. 1.10.

[a] EER : Equal error rate (EER) is the value of FAR for which FAR and

FRR are equal. That means, EER is the point of intersection of the FAR and FRR

curves. Also, if we draw a curve of FAR vs FRR for all thresholds and draw a line

at 45◦ from origin then EER is the point of intersection of that line with the FAR

vs FRR curve. It is shown in Fig. 1.11(a).

[b] Accuracy : If T is the threshold for which (FAR + FRR)/2 is minimum over all thresholds, then we define the accuracy at T as

Accuracy = (100 − (FAR_T + FRR_T)/2)% (1.3)

where FAR_T and FRR_T are the FAR and FRR at threshold T. The threshold (T) at

which the combination of FAR and FRR gives the highest accuracy, is considered

as the optimum threshold. It can be observed that accuracy may not be maximum

at the threshold of EER.
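
A simple threshold sweep can approximate both the EER and the optimum-threshold accuracy of Eq. 1.3. The sketch below assumes similarity scores and a user-supplied list of candidate thresholds:

```python
import numpy as np

def eer_and_best_accuracy(genuine, imposter, thresholds):
    """Sweep thresholds to approximate the EER and the accuracy of Eq. 1.3."""
    genuine, imposter = np.asarray(genuine, float), np.asarray(imposter, float)
    best_accuracy, eer, smallest_gap = 0.0, None, np.inf
    for t in thresholds:
        far = np.mean(imposter >= t) * 100.0
        frr = np.mean(genuine < t) * 100.0
        if abs(far - frr) < smallest_gap:          # point where FAR and FRR cross
            smallest_gap, eer = abs(far - frr), (far + frr) / 2.0
        best_accuracy = max(best_accuracy, 100.0 - (far + frr) / 2.0)
    return eer, best_accuracy
```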

[c] Receiver Operating Characteristics (ROC) Curve : It is a graph


(a) Hypothetical EER curve (b) Hypothetical Accuracy curve

Figure 1.11: Graphical Representation of EER and Accuracy

plotting FAR against FRR at various thresholds. It helps to analyze the behavior of FAR

against FRR as shown in Fig. 1.11(a). It quantifies the discriminative power of the

system between genuine and imposter’s score. An ideal ROC curve would include a

point at FRR = 0, FAR = 0 which signifies EER = 0. The curve provides a good

way to compare the performance of two biometric systems. The lower the ROC curve lies (towards both coordinate axes), the better the system, since the area under the curve (i.e. the error) is smaller.

[d] Error under ROC Curve (EUC) : It is a scalar quantity defined as the

area under the ROC curve. It estimates the amount of error incurred while one

makes decision on genuine and imposter matchings.

[e] Decidability Index : It measures the separability between the imposter and genuine matching scores and is defined by:

d′ = |µ_G − µ_I| / sqrt((σ²_G + σ²_I) / 2) (1.4)

where µ_G and µ_I are the means and σ_G and σ_I are the standard deviations of the


Figure 1.12: Graph Showing Genuine and Imposter Score Distributions (here ∫_0^t s(x|H_G) dx represents the genuine similarity score distribution)

genuine and imposter scores respectively. In Fig. 1.12, the genuine and the imposter

score distributions are shown. The higher the value of d′, the better the separation between the two distributions and hence the lower the error.
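
Eq. 1.4 translates directly into code; the sketch below uses the population variance of the observed score samples:

```python
import numpy as np

def decidability_index(genuine_scores, imposter_scores):
    """d' of Eq. 1.4 from observed genuine and imposter matching scores."""
    g = np.asarray(genuine_scores, float)
    i = np.asarray(imposter_scores, float)
    return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)
```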

1.5.2 Identification Performance Metrics

For identification, performance analysis is done by correct recognition rate, CRR

and genuine vs imposter best match graph.

[a] CRR : The correct recognition rate, CRR, is also known as the Rank 1

accuracy. It is defined as the ratio of the number of correct (non-false) top best matches to the total number of matchings performed over the query set. More clearly, if we have N images in the test set and M of them obtain a non-false top best match, then CRR is given by:

CRR = (M / N) × 100% (1.5)
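
Given the rank-1 identity returned for each probe, Eq. 1.5 becomes:

```python
def crr(top_match_identities, true_identities):
    """Rank-1 accuracy (Eq. 1.5): percentage of probes whose best match is correct."""
    correct = sum(top == true for top, true in zip(top_match_identities, true_identities))
    return 100.0 * correct / len(true_identities)
```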

[b] Genuine vs Imposter Best Match Graph (GvI Graph) : The graph

which shows separation of genuine vs imposter best matching scores plots the best


genuine and the best imposter scores for all probe images. From the graph, the separation between the best genuine and the best imposter matching scores can be analyzed visually. In Fig. 1.13, one such plot is shown, from which one can observe that the genuine matching scores are well separated from the imposter scores; the overlapping scores correspond to errors.

Figure 1.13: Graph showing Genuine vs Imposter Best Score Graph

1.6 Thesis Contribution

This thesis deals with the problem of designing some efficient biometric systems. It

has considered three biometrics traits viz. iris, knuckleprint and palmprint. Finally,

it has proposed an efficient multimodal biometric system which makes use of three

traits. Score level fusion is performed to obtain the matching score of the multi-

modal system. The contribution of this thesis spans over all stages involved in the

development of any biometric system.


1.6.1 ROI Extraction

The data acquisition system captures the biometric samples. It is required to extract

the region of interest (ROI) from the acquired samples. We have proposed efficient

algorithms to extract the ROI from iris, knuckle and palm samples.

1. Iris ROI Extraction : It requires the exact localization of inner as well as outer

iris boundary. An improved Hough transform and an integro-differential operator are used in such a way that they complement each other to extract the inner and outer boundaries respectively. The iris ROI is thus extracted efficiently using the modified Hough transform and a sector restricted integro-differential transform.

2. Knuckleprint ROI Extraction : The knuckleprint contains different types of texture and structure. The Gabor filter is modified into a Curvature Gabor (CG) filter to model the knuckleprint ROI; it is used as a template to localize the central phalangeal joint and to segment the knuckleprint ROI.

3. Palmprint ROI Extraction : The palmprint has a very peculiar and well

defined structure. Some landmark key-points such as valley and hill points

are used to segment the palmprint ROI.

1.6.2 Quality Estimation

The quality of the extracted biometric ROI plays a significant role in the overall

performance of any system. Hence, several general as well as trait specific qual-

ity parameters are proposed to estimate the quality of any iris, knuckleprint and

palmprint sample.

1. Iris Quality Estimation : The quality of an iris sample is modeled as a function

of six attributes, namely focus, motion blur, occlusion, contrast and illumina-

tion, dilation and specular reflection.


2. Knuckleprint Quality Estimation : The knuckleprint image quality is obtained

by computing the amount of well-focused edges, the amount of clutter, the distribution of focused edges, the block-wise entropy of focused edges, the reflection caused by a light source or camera flash, and the amount of contrast.

3. Palmprint Quality Estimation : The palmprint image quality is obtained by

estimating the amount of three primary features, namely principal palm lines,

ridges and wrinkles.

1.6.3 Biometric Feature Enhancement and Transformation

An enhancement algorithm which can be used to obtain robust iris, knuckle and

palmprint ROI has been proposed. ROIs are transformed using the proposed local

gradient based binary pattern to obtain highly discriminative as well as robust

texture representation.

1.6.4 Biometric Feature Extraction and Matching

A feature extraction and matching algorithm is proposed to perform biometric

sample identification/recognition. The discriminative corner features are used for

matching. The matching algorithm uses the concept of sparse point tracking under three constraints, viz. vicinity, correlation and patch-wise error bounds. The proposed matching algorithm is parameterized and fine-tuned for matching.

1.6.5 Multimodal Biometric System

We have proposed an efficient multimodal biometric system using three traits, viz.

iris, knuckleprint and palmprint. Since the same matching algorithm is used to match iris, knuckle and palm samples, simple score level fusion is performed. To analyze the performance


of the system, we have created several chimeric multi-modal biometric databases.

The system is optimized to perform efficiently so that it can be considered as a

real-time application. In this thesis we have shown that uncorrelated features of

different modalities can be fused to achieve better performance.

1.7 Thesis Organization

This thesis contains eight chapters. A general biometric system with its properties

has been discussed in the first chapter, along with the motivation of the thesis.

The literature review on each of the three traits viz. iris, knuckleprint, palmprint

is presented in Chapter 2, which also presents some well known work on various multi-modal biometric systems. Some image processing and computer

vision based techniques which are used to design our systems have been presented

in the next chapter.

In Chapter 4, an efficient iris based recognition system has been proposed. Each

iris is efficiently segmented using an improved circular Hough transform for inner iris boundary (i.e. pupil) detection. A robust integro-differential operator, which makes use of the pupil location, is used to detect the outer iris boundary. If the quality of the

acquired iris sample is less than a predefined threshold then the image is recaptured.

This early quality assessment is very crucial to handle the problem of poor qual-

ity and non-ideal imagery. The segmented iris is normalized to polar coordinates

(i.e. rectangular strips) and LGBP (Local Gradient Binary Pattern) algorithm is

proposed to obtain robust features. The corner features are extracted and matched

using the proposed dissimilarity measure CIOF (Corners having Inconsistent Op-

tical Flow). The system has been tested over publicly available CASIA 4.0 Interval

and Lamp iris databases which consist of 2,639 and 16,212 images respectively.

In the next chapter, a knuckleprint based recognition system has been proposed.


The segmentation of the knuckleprint is done by estimating the central knuckle-line.

A new modified version of the Gabor filter is used to extract the knuckleprint. We have proposed a technique to estimate the quality of the acquired knuckleprint sample; if it is less than a predefined threshold, the sample is recaptured. The segmented

knuckleprint ROI is preprocessed using the proposed LGBP (Local Gradient Binary

Pattern) to obtain robust features. The corner features are extracted and matched

using the proposed dissimilarity measure, CIOF (Corners having Inconsistent Op-

tical Flow). The proposed system has been tested over the publicly available PolyU

knuckleprint database consisting of 7,920 images.

An effective palmprint based recognition system has been proposed in Chapter

6. The palmprint segmentation is done by obtaining the two valley points and then

a square shaped ROI is extracted. The quality of the acquired palmprint sample

is estimated using the proposed quality assessment parameters and if the quality

is less than a predefined threshold, it is recaptured. The segmented palmprint is

preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain

robust features. The KLT based corner features are extracted and matched us-

ing the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical

Flow). The proposed system has been tested over publicly available CASIA and

PolyU palmprint databases which consist of 4,528 and 7,720 images respectively.

In Chapter 7, an efficient multi-modal based recognition system has been pro-

posed. Four different fused multimodal systems viz. iris and knuckleprint, knuck-

leprint and palmprint, palmprint and iris and finally iris, knuckleprint and palmprint

have been proposed and these systems are analyzed on the chimeric databases cre-

ated by us. Conclusions along with the future scope of work have been presented

in the last chapter.


Chapter 2

Literature Review

This chapter presents the literature survey of the work carried out on the three traits considered in this thesis.

2.1 Iris based Biometric System

The iris ROI should be accurately localized, which is challenging due to a lot of variation in illumination and other extrinsic factors. The iris sample also suffers from dimensional inconsistency among eye images, caused by pupil dilation, head movement, eye movement etc. Hence, the ROI is normalized to a rectangular strip

of fixed size.

The pioneering work in iris recognition was done by Daugman [19]. The integro-differential operator is used to find circular boundaries by detecting a sharp jump or drop in the summation of pixel intensities over a circle. Multi-scale quadrature 2D Gabor wavelet coefficients are then used to generate a 256-byte binary feature vector (i.e. the iriscode), shown in Fig. 2.1(b). Feature vectors obtained from two iris images are matched using the Hamming distance.
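
The Hamming distance between two binary codes is simply the fraction of disagreeing bits. The sketch below is a simplified version that omits the occlusion masks and the rotation (circular shift) compensation used in Daugman's actual matcher:

```python
import numpy as np

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two binary codes (0 = identical).
    Simplified sketch: the actual matcher also applies occlusion masks and
    evaluates several circular shifts to compensate for eye rotation."""
    a = np.asarray(code_a, dtype=bool)
    b = np.asarray(code_b, dtype=bool)
    return float(np.mean(a ^ b))
```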

Wildes [69] has used the circular Hough transform, which finds the circle that fits max-


(a) Localized Iris (b) iriscode

Figure 2.1: Images are taken from [19, 18]

imum edge points. Huang et al. [31] have used a rescaled image to find the iris boundaries and have used it to guide the search on the original image, improving the approach presented in [69]. The algorithm proposed by Liu et al. [42] segments

the iris image by ignoring high intensity edge pixels around specular reflections. Lili

and Mei [41] have found a coarse location of the iris, based on the assumption that the image histogram has three main peaks corresponding to the pupil, iris and sclera. He and Shi [29] have proposed an approach in which the image is binarized to locate the pupil; edge detection and the circular Hough transform are then used to find the limbic

boundary. Feng et al. [25] have used a coarse-to-fine strategy to find the iris boundary and have suggested using the lower part of the pupil as it is stable even under occlusion. Xu et al. [72] have divided the image into grids and have used the minimum mean intensity across the grids as a threshold to locate the pupil and subsequently the limbic boundary. Zaim et al. [74] have used a split and merge algorithm to detect the pupil as a connected region of uniform intensity.

Camus and Wildes [13] have presented an algorithm which is based on Daug-

man’s integro-differential operator that searches in cubic space of (x, y, r). Instead

of applying the operator directly with all points as candidate center points, they have used local minima of intensity as seed points, making the process 3.5 times faster as compared to Daugman's algorithm [18]. The actual iris center is identified by measuring the image patch gradient information. Uniformity is also measured across eight rays passing from the assumed potential center at an angle difference of π/4, as shown in Fig. 2.2. The method gives an accuracy of 99.5% without glasses and 66.6% with glasses.

(a) Incorrect Center (b) Correct Center

Figure 2.2: Mapping of radial rays into a polar representation; pupil and iris boundaries become vertical edges [13]

Bonney et al. [11] have extracted the pupil by using the least significant bit-plane along with erosion and dilation operations. Using the pupil area, the standard deviation in the horizontal and vertical directions is computed to search for the limbic boundary; both boundaries are modeled as ellipses, as shown in Fig. 2.3. This method does not rely on circular edge detection; hence it is not restricted to circular irises.

Figure 2.3: (a) Most Significant bit-plane 7 (b) Bit-plane 6 (c) Bit-plane 5 (d) Bit-plane 4 (e) Bit-plane 3 (f) Bit-plane 2 (g) Bit-plane 1 (h) Least Significant bit-plane 0 [11].


In [30], He et al. have proposed a Viola and Jones style cascade of classifiers [67] for detecting the presence of the pupil region and for optimizing the pupil boundary. Miyazawa and Ito [50] have thresholded the iris image and used the center of gravity of the binary image to find the pupil center, followed by ellipse fitting to determine the pupil boundary. The iris boundary is modeled as an ellipse and is found

using the integro-differential operator.

(a) Changing Iris (b) 10 parameter based deformable iris model

Figure 2.4: Iris Deformable Model [50]

The two ellipses, as shown in Fig. 2.4, are extracted by computing a parameter based deformable iris model, as shown in Fig. 2.4(b).

Segmented iris is used to extract features and matching is performed using band

limited phase only correlation (BLPOC). Preprocessing of an iris image is shown

in Fig. 2.5.

The recent trend in iris segmentation deals with off-angle images, i.e. a person looking at some angle rather than straight ahead. Dorairaj et al. [22] have proposed

the global ICA encoding for non-ideal iris based recognition. It uses an initially available estimate of the angle of rotation, after which an elliptical integro-differential operator is used for detection and refinement of the result. The off-angle image is projected to a front-view image. The overview of the proposed scheme is shown in

Fig. 2.6(b).


Figure 2.5: Iris Image Preprocessing [50] : (a) Original Image, (b) Detected Inner Boundary, (c) Detected Outer Boundary, (d) Lower Half of Iris, (e) Normalized, (f) Eyelid Masking and (g) Enhanced Image

(a) Non-Ideal Iris Samples (b) Overview

Figure 2.6: Overview of Non-ideal Iris Recognition System [22]

Li et al. [40] have fitted an ellipse to pupil boundary and have used rotation

and scaling to transform it to circular boundary. It is shown that this calibration

improves the intra-class and inter-class separation that is achieved by Daugman-like

recognition algorithm [19].

One of the major factors that reduces the iris quality significantly is off-angle.


It is almost impossible to capture iris data without any off-angle. Abhyankar et al.

[2] have proposed an approach involving bi-orthogonal wavelet networks for non-

ideal imagery. Non-ideal and poor quality iris imaging may cause irrecoverable loss of information as iris segmentation starts to behave erroneously. The circle fitting

algorithm performs very poorly when one attempts to segment off-angle images, as

shown in Fig. 2.7.

Figure 2.7: Off Angle Iris Segmentation [2]

Some projective geometry and affine transformations are used to distribute the radial resolution uniformly in all quadrants. Synthetic iris samples

are generated by rotating them at angles of 10◦ and 20◦. Active shape model

(ASM) [3] is used to find elliptical iris boundaries from off-angle images.

In [19], Gabor wavelet responses are quantized to generate the feature vector and matching is done using the Hamming distance. The phase demodulation process is used

to encode iris patterns. Local regions of an iris are projected onto quadrature 2D

Gabor wavelets. The angle of each phase is quantized to one of the four quadrants

by setting two bits of phase information as shown in Fig. 2.8(a).

In [69], the Hough transform is used for iris localization and the Laplacian of Gaussian

(LOG) is used for matching. In [45], the vanishing and appearing of important image structures are considered as key local variations, and dyadic wavelets are used to transform 2D image signals into 1D signals to obtain unique features. The positions of key variations are located accurately using easily computable dyadic wavelets.

The overview of the system and the dyadic wavelet transformation of any 2D signal


(a) Phasor [19] (b) Overview [45]

Figure 2.8: Two Most Popular Iris Systems

(S(X)) are shown in Fig. 2.8(b). The response is converted into a 1D binary sequence and the dissimilarity score is computed using the Hamming distance for matching.

In [24], Gabor wavelets with elastic graph matching are used for iris recognition. In [51], Discrete Cosine Transform (DCT) coefficients are extracted from non-overlapping rectangular blocks at various angles and of variable sizes and are

quantized. A binary feature for each rectangular block is computed: averaging is done across the width to get a 1D vector, which is windowed using the Hanning window to suppress spectral leakage. The DCT is applied over it and the DCT coefficients are used as features. The differences between the DCT coefficients of adjacent patches are binarised using zero crossings to obtain a binary code. Steps in-

volved in feature extraction are given in Fig. 2.9. The hamming distance based

matching is performed for score computation.

In [50], Phase Only Correlation (POC) and Band Limited Phase Only Corre-

lation (BLPOC) are used for accurate iris recognition. In [60], a variational model

is applied to localize the iris while a modified contribution selection algorithm (MCSA) is used for iris feature ranking.

Figure 2.9: Steps Involved for Feature Vector Extraction [51]

In [66], compact and highly efficient ordinal mea-

sures are applied for iris recognition. The relationship between iris patches selected

over the complete normalized iris is considered to be robust and is shown in Fig.

2.10(a). Hence, such an ordinal relationship can be used as a key signature of an iris patch and is encoded as a binary feature. These ordinal relationships between iris patches are efficiently computed using multi-lobe differential filters (MLDF) as shown in Fig. 2.10(b). Also, there are some significant

contributions like the application of Principal Component Analysis (PCA) and Inde-

pendent Component Analysis (ICA) in iris recognition [17], [31]. A comprehensive

survey on iris biometrics is available in [12].

2.2 Knuckleprint based Biometric System

The finger knuckleprint is a relatively new trait and only a limited amount of work has been done on it. In [77], Zhang et al. have extracted the knuckleprint ROI using convex direction coding, as shown in Fig. 2.11.

The correlation between two knuckleprints is used for identification. It is calcu-


(a) Ordinal Relationship (b) Multi-lobe Differential Filters (MLDF Filters)

Figure 2.10: Ordinal Measures for Iris Recognition [66]

Figure 2.11: Knuckleprint Acquisition System [77], (a) Device, (b) Raw Sample, (c) ROI Localized, (d) Cropped.

lated using band limited phase only correlation (BLPOC). The knuckleprints are assumed to possess a small range of frequencies; hence the Fourier frequency band is restricted and the phase only correlation (POC) is calculated. The POC value between two images, obtained from the cross spectrum of the Fourier transforms of both images, can be used as a similarity score.

In [52], knuckleprints are enhanced using CLAHE to address non-uniform re-

flection because they can be affected due to noise and improper illumination. There


can be some variation because of scale and rotation; hence, scale invariant feature transform (SIFT) key-points can be used for matching. In [71], a knuckleprint based recognition system that extracts features using local Gabor binary patterns has been proposed; a Gabor filter is applied over a pixel and its neighborhood, and a discriminating local pattern is extracted to represent that pixel.

Figure 2.12: Ensemble of Local and Global Features [80]

In [80], local and global features are fused to achieve better performance. Global

features are extracted by band limited phase only correlation (BLPOC) [77] while

local features are found using a Gabor filter [35], which is a Gaussian envelope modulated by a sinusoidal plane wave. This ensures that the major contribution in the convolution response comes from a small local patch. The local orientation of a pixel is estimated by applying six Gabor filters spaced at an angle of π/6; the maximally responsive Gabor filter is used to encode that pixel into a three-bit code. Hamming distance is used for matching, and the local and global scores are fused to get a better result.

In [79], a bank of six Gabor filters spaced at an angle of π/6 is applied to extract features for those pixels that have varying Gabor responses. Not all pixels contribute equally to discrimination, as some of them may lie in a similar background patch and hence may not possess any dominant orientation. Such pixels are ignored and the orientation code is computed as shown in Fig. 2.13(b). Similarly, a magnitude code is obtained from the real and imaginary parts of the Gabor filters as shown in Fig. 2.13(c). The orientation and magnitude information are fused to achieve better

results.

(a) Original (b) ImCompCode (c) MagCode

Figure 2.13: Improvement over Gabor based Compcode. (a) Original Image, (b) Orientation Code (ImCompCode), (c) Magnitude Code (MagCode) (Images taken from [79])

In [78], three local features viz. phase congruency, local orientation and local

phase, as shown in Fig. 2.14, are extracted. All of them are computed using a quadrature pair filter and are fused at the score level. Finally, all are fused along with

the Gabor based local features and BLPOC based global features [80] to achieve better performance.


Figure 2.14: Local Feature Extraction

2.3 Palmprint based Biometric System

Palmprint recognition systems are broadly based on structural and statistical fea-

tures. In [28], line-like structural features are extracted by applying morphological

operations over edge-maps. In [23], structural features such as points on principal lines and some isolated points are utilized for palmprint authentication.

Gabor based directional filtering is used to develop several palmprint based

recognition systems such as [35], [36], [65], [76]. In [76], a single fixed orientation Gabor filter is applied over the palmprint and the resulting Gabor phase is binarized using zero crossings. In [36], a bank of elliptical Gabor filters with different orienta-

tions is employed to extract the phase information of the palmprint image and is

merged according to a fusion rule to produce feature vector. In [35], the palmprint

is processed using the bank of Gabor filters with different orientations. The highest

filter response is preserved as features. Further, to improve the performance, a mod-

ified algorithm using fuzzy C-means is used to cluster the orientation of each Gabor

filter in [73]. In [65], the palmprint is processed using the bank of orthogonal Gabor

filters. Their differences are considered as palmprint features. All these systems

use hamming distance for matching two palmprints. Statistical techniques such as

Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), In-

dependent Component Analysis (ICA) and their combinations are also applied to

palmprints to achieve better performance [43],[62],[70].

Several other techniques such as Stockwell [6], Zernike moments [7], Discrete

Cosine Transforms (DCT) [8] and Fourier [9] transforms are also applied to achieve


good performance. The palmprint samples are partitioned in several different ways to complement the transform or the method that is finally used to extract the

features.

Figure 2.15: Partitioning for Zernike based Palmprint System [7]

In [7], each palmprint is partitioned into non-overlapping blocks as shown in Fig.

2.15. From each block, Zernike moment based features are computed. The order of the moment signifies the degree of detail captured: the higher the order, the more the information. Lower order moments are used to compute features as they coarsely represent the image information. Blocks are also weighted based on their entropy, and the binarised feature vector is matched using the Hamming distance.

In [9], palmprint samples are divided into overlapping square blocks as shown

in Fig. 2.16. Each 2D block is converted into two 1D signals by performing an averaging operation in the horizontal and vertical directions. The phase difference between the horizontal and vertical signals is binarised using zero-crossings to obtain the binary

vector. The hamming distance is used for matching.

In [6], the palmprint is divided into circular strips. The radius of the strip is a

variable and is optimized for performance as shown in Fig. 2.17. Similar to Fourier

transform, the Stockwell transform can also be used for spectral decomposition.

Figure 2.16: Partitioning for Phase based Palmprint System [9]

Figure 2.17: Partitioning for Stockwell based Palmprint System [6]

However, the instantaneous phase used by the Stockwell transform produces a feature vector that encodes both phase and time, which is more useful than the phase or magnitude alone obtained from the Fourier transform. Hence, Stockwell transform features are computed from an annular ring, as shown in Fig. 2.17. The 2D annular ring is converted into a 1D average vector over which the Stockwell transform is applied. These features are binarised and the Hamming distance is used for matching.

In [8], the palmprint samples are divided into oriented rectangular boxes of

variable size and orientation, as shown in Fig. 2.18. The 2D DCT is widely used for image compression and hence can represent the energy of the image efficiently.


Figure 2.18: Partitioning for DCT based Palmprint System [8]

The DCT based feature for each rectangular block is computed. For each rectangle,

average in the vertical direction is computed to get a 1D vector that is windowed

using the hanning window to suppress the spectral leakage. On each windowed

signal, the DCT is applied and the coefficients are used as features. Features of

the adjacent block are subtracted and binarised to obtain the binary feature vector.

The hamming distance is used to determine its matching score.

2.4 Multi-modal Biometric System

Not much work has been reported in this area, largely because of the non-availability of multi-modal biometric databases. In [64], 2D discrete wavelets have been used to

extract low dimensional features from iris and face. A reduced joint feature vector

set is obtained using Direct Linear Discriminant Analysis (DLDA) which is finally

used for classification. In [58], face and iris (both left and right) are fused using SIFT feature vectors; the classification is done using nearest neighborhood ratio matching. In [81], iris and face are considered; PCA coefficients and Daugman's Gabor filter approach are used for the face and iris images respectively.

Scores are fused after min-max normalization.


In [59], scores obtained by eigenfinger and eigenpalm are fused while in [39] hand

shape and palm features are fused. In [38], finger geometry and dorsal finger sur-

face information are fused to improve the performance as compared to its unimodal

version. In [49], features which are automatically detected by tracking are encoded

using efficient directional coding while Ridgelet transformation is used for feature

matching and these scores are fused using SVM. In [48], 1D gabor filters are used to

extract features from knuckle and palmprint. In [82], fusion of knuckle and palm-

print information is done at score level. Sharp edge based knuckleprint features are

denoised using wavelets. Corner features with their local descriptors are considered

for palmprint images. Finally, matching is done by cosine similarity function and

hierarchical hand metric. In [53], Radon and Haar transforms are used for feature extraction from knuckleprints of multiple fingers, and a nonlinear Fisher transformation is applied for dimensionality reduction. Matching is done using Parzen window

classification. In [47], score level fusion is performed on palm and knuckleprints

using phase only correlation (POC) function.


Chapter 3

Preliminaries

There exist several techniques for extracting features from iris, knuckle and palm-

print. These techniques can broadly be divided into two categories: global and

local. A global feature extraction technique considers the whole image to obtain features, while local techniques consider only a few neighboring pixels or fixed size local patches. Some of the well known

global extraction techniques are designed with the help of Gabor filters, local bi-

nary patterns, Radon based transforms, Gaussian based ordinal measures, Wavelets,

Stockwell transform, Zernike moment and discrete cosine transform (DCT). Simi-

larly, there are several local feature extraction techniques, such as PCA, LDA, Gabor-PCA, LBP-PCA, DFT etc. An advantage of global features is that they can be computed at once for the whole image; hence they are easy to compute and fast, although approximating the full image may cause under-estimation. Local features are extracted for each pixel, block or key point; hence they are computationally intensive, but performance-wise they are found to be better because of the effective use of a smaller neighborhood and similar patch approximation.

Generally, features of iris, knuckle and palmprint are converted into binary vectors


and hamming distance based measures are used to compute matching scores between

two feature vectors. One of the disadvantages of such a matching strategy is that

if the images are not well registered, the performance of the system gets severely

affected. This is because it never attempts to compute the pixel or patch level

correspondence.

This chapter discusses some well known feature extraction techniques which are

used to design biometric systems based on Iris, Knuckleprint and Palmprint. These

techniques are local binary pattern (LBP) [54], Phase Only Correlation (POC) [80],

KLT Corner Extraction [63] and LK-tracking algorithms [44].

3.1 Local Binary Pattern (LBP)

It is a well known texture operator [54] which is simple, computationally efficient and effective. It assumes that any image texture has some pattern with an associated strength. The operator is robust to monotonic illumination changes, and since the resulting code lies in the range 0 to 255, the LBP response can be visualized as a gray-scale image. It is based on the assumption that a pixel's relative gray value with respect to its 8-neighborhood can be more stable than its own intensity value. An 8-bit local binary pattern is computed by thresholding the 8 neighbors with respect to the center pixel and representing the ordinal relation as a binary code, as shown in Fig. 3.1.
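
A basic 3 × 3 LBP operator can be implemented in a few lines. Note that the bit ordering and the starting neighbor vary across implementations, so the sketch below fixes one arbitrary but consistent convention:

```python
import numpy as np

def lbp_image(img):
    """Basic 3x3 LBP: threshold each pixel's 8 neighbors against the center
    and pack the ordinal relations into an 8-bit code (0..255)."""
    img = np.asarray(img, dtype=np.int32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    # clockwise neighbor offsets starting at the top-left (one possible convention)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    center = img[1:h-1, 1:w-1]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1+dy:h-1+dy, 1+dx:w-1+dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out
```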

The LBP features of each image can be computed in the form of LBP his-

tograms. These histograms are saved in the form of 1-D vectors and are used to

compute distance. Let S and M be the histograms of the probe image and the

gallery image respectively. To discriminate histogram features, there exist several

possible dissimilarity measures as discussed below.


Figure 3.1: LBP Computation

• Histogram intersection:

D(S, M) = Σ_i min(S_i, M_i) (3.1)

• Log-likelihood statistic:

L(S, M) = − Σ_i S_i log M_i (3.2)

• Chi square statistic (χ²):

χ²(S, M) = Σ_i (S_i − M_i)² / (S_i + M_i) (3.3)

where S_i and M_i are the i-th bins of histograms S and M respectively.

Figure 3.2: Application of LBP in Face Recognition ((a)/(d) Original, (b)/(e) Key Region, (c)/(f) LBP Histogram)

The LBP based histogram features can be used for face recognition. Let us assume that we have two facial images, probe and gallery, between which we have to find the dissimilarity score. Regions like the left eye, right eye, nose and lips of each facial image are divided into 8 × 8 = 64 blocks. For each block, a histogram is calculated using the decimal values of the binary patterns as labels. The concatenation of the histograms over all blocks in all regions acts as the feature vector of that image (say $H_{probe}$ and $H_{gallery}$ for the probe and gallery respectively). This feature vector is used for calculating the dissimilarity between the probe and the gallery images. The Chi square ($\chi^2$) measure can be used to obtain the dissimilarity between the probe and gallery images as

$$\chi^2(H_{probe}, H_{gallery}) = \sum_{i=0}^{255} \frac{(H^i_{probe} - H^i_{gallery})^2}{H^i_{probe} + H^i_{gallery}} \qquad (3.4)$$

where $H^i_{probe}$ and $H^i_{gallery}$ are the values for the $i$th label of the probe and gallery histograms respectively. The overall dissimilarity score for the probe and gallery images can be expressed as


$$D(probe, gallery) = \sum_{\forall Region} \; \sum_{\forall Blocks} \chi^2(H_{probe}, H_{gallery}) \qquad (3.5)$$

It can be noted that since D provides dissimilarity information between two samples, the lower the value of D, the better the match.
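A compact sketch of this block-wise matching (Eqs. 3.4 and 3.5), assuming the per-region, per-block histograms have already been computed, could look as follows:

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square dissimilarity between two 256-bin LBP histograms
    (Eq. 3.4); eps guards against empty bins."""
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def face_dissimilarity(probe_blocks, gallery_blocks):
    """Eq. 3.5: sum of chi-square scores over all regions and blocks.
    Inputs map region name -> list of 256-bin histograms."""
    return sum(
        chi_square(hp, hg)
        for region in probe_blocks
        for hp, hg in zip(probe_blocks[region], gallery_blocks[region])
    )
```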

3.2 Corner Point Detection

Key points that are visually significant lie on edges or sharp discontinuities. But not all points on an edge can be considered as key feature points, because they all look similar along that edge. Corners have strong derivatives in two orthogonal directions and can provide robust information for tracking. We have considered corner points as features because of their repeatability and discrimination.

The autocorrelation matrix M [63] can be used to detect corner points having strong orthogonal derivatives. The matrix M can be defined for the pixel at the $i$th row and $j$th column of an image as:

$$M(i, j) = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \qquad (3.6)$$

where


$$
\begin{aligned}
A &= \sum_{-K \le a,b \le K} w(a,b)\, I_x^2(i+a, j+b) \\
B &= \sum_{-K \le a,b \le K} w(a,b)\, I_x(i+a, j+b)\, I_y(i+a, j+b) \\
C &= \sum_{-K \le a,b \le K} w(a,b)\, I_y(i+a, j+b)\, I_x(i+a, j+b) \\
D &= \sum_{-K \le a,b \le K} w(a,b)\, I_y^2(i+a, j+b)
\end{aligned}
\qquad (3.7)
$$

and w(a, b) is the weight given to the neighborhood, while $I_x(i+a, j+b)$ and $I_y(i+a, j+b)$ are the partial derivatives sampled within a patch of size (2K+1) × (2K+1) centered at the pixel (i, j). Note that all neighbors need not have the same weight.

The matrix M has two eigenvalues λ1 and λ2 such that λ1 ≥ λ2, with e1 and e2 as the corresponding eigenvectors. All pixels having λ2 ≥ T (smaller eigenvalue greater than a threshold) are considered as corner points. An example is shown in Fig. 3.3.

Figure 3.3: Corner Features shown in Red ((a) Original, (b) Extracted Corners)
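This smallest-eigenvalue criterion is what the Shi-Tomasi detector available in OpenCV implements; a minimal sketch follows, where the file name and threshold values are illustrative assumptions:

```python
import cv2

# Load a gray-scale image; the path is illustrative.
img = cv2.imread("iris_roi.png", cv2.IMREAD_GRAYSCALE)

# Shi-Tomasi detector: keeps pixels whose smaller eigenvalue of the
# autocorrelation matrix M exceeds a quality threshold (lambda_2 >= T).
corners = cv2.goodFeaturesToTrack(
    img,
    maxCorners=500,      # cap on the number of corners (assumed)
    qualityLevel=0.01,   # fraction of the strongest corner's lambda_2
    minDistance=5,       # minimum spacing between corners (assumed)
)
print(f"{len(corners)} corner features extracted")
```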

3.3 Sparse Point Tracking (KLT)

The feature point correspondence problem is a key issue in authentication. We have used corner points as the feature points; the corner point correspondence issue can be tackled with the help of a suitable tracking algorithm. The KL-tracking algorithm [44] localizes the set of feature points of one image in another image when the two images are taken of the same object but at different times. This tracking algorithm estimates the optical flow vector for each sparsely populated feature point and uses that flow vector to estimate the final position of that feature in the subsequent image.

Let there be a feature at location (x, y) at a time instant t with intensity I(x, y, t), and let this feature move to the location (x + δx, y + δy) at the time instant t + δt. Three basic assumptions are used by KL tracking to perform tracking successfully:

• Brightness Consistency: Features in a frame do not change much for a small change in δt, i.e.

$$I(x, y, t) \approx I(x + \delta x, y + \delta y, t + \delta t) \qquad (3.8)$$

• Temporal Persistence: Features in a frame move only within a small neighborhood; it is assumed that features do not move much for a small change in δt. Using a Taylor series expansion of I(x + δx, y + δy, t + δt) and neglecting the higher order terms, one obtains

$$\frac{\partial I}{\partial x}\delta x + \frac{\partial I}{\partial y}\delta y + \frac{\partial I}{\partial t}\delta t = 0 \qquad (3.9)$$

Dividing both sides of Eq. 3.9 by δt one gets

$$I_x V_x + I_y V_y = -I_t \qquad (3.10)$$

where $V_x$, $V_y$ are the respective components of the optical flow velocity for the pixel I(x, y, t), and $I_x$, $I_y$ and $I_t$ are the derivatives in the corresponding directions.

• Spatial Coherency: In Eq. 3.10, there are two unknown variables $V_x$ and $V_y$ for every feature point. Hence finding a unique $(V_x, V_y)$ for every feature point is an ill-posed problem¹. The spatial coherency assumption is used to solve this problem: it assumes that a local mask of pixels moves coherently. Therefore, one can estimate the motion of the central pixel by assuming locally constant flow. KL tracking uses a non-iterative method, considering the flow vector $(V_x, V_y)$ to be constant within a 5 × 5 neighborhood (i.e. the 25 neighboring pixels $P_1, P_2, \ldots, P_{25}$) around the current feature point (center pixel) in order to estimate its optical flow. This assumption is fair, as all pixels within a 5 × 5 mask can be expected to move coherently. Hence, one obtains an over-determined system of 25 linear equations which can be represented by

$$
\underbrace{\begin{bmatrix} I_x(P_1) & I_y(P_1) \\ \vdots & \vdots \\ I_x(P_{25}) & I_y(P_{25}) \end{bmatrix}}_{C}
\times
\underbrace{\begin{bmatrix} V_x \\ V_y \end{bmatrix}}_{V}
= -
\underbrace{\begin{bmatrix} I_t(P_1) \\ \vdots \\ I_t(P_{25}) \end{bmatrix}}_{D}
\qquad (3.11)
$$

where the rows of the matrix C contain the derivatives of image I in the x and y directions, and those of D contain the temporal derivatives at the 25 neighboring pixels. This system of linear equations is used to compute the estimated flow V of the current feature point using the least squares method. The 2 × 1 matrix V is given by

$$V = (C^T C)^{-1} C^T (-D) \qquad (3.12)$$

¹There are two variables to evaluate using only one equation.


The final location F of any feature point is estimated with the help of its initial position vector I and the estimated flow vector V as

$$F = I + V \qquad (3.13)$$

Figure 3.4: Hand Tracking Application ((a) Hand Image, (b) Hand Tracking / feature correspondence)

A hand tracking system can make use of this tracking algorithm, as shown in Fig. 3.4. It estimates the sparse optical flow using the KL-tracking algorithm. The optical flow, which is simply the direction of feature motion (i.e. the motion vectors), is computed for each feature over all video frames. Hence, feature correspondence can be used to track a hand in a video while it is moving freely.
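The detect-then-track pipeline is directly available in OpenCV; a minimal sketch, in which the frame file names and window size are assumptions:

```python
import cv2

prev = cv2.imread("frame_0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_1.png", cv2.IMREAD_GRAYSCALE)

# Corner features in the first frame (Section 3.2).
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=300, qualityLevel=0.01,
                             minDistance=5)

# Pyramidal LK tracking: solves the least-squares system of Eq. 3.11
# inside a 5x5 window around every feature, per pyramid level.
p1, status, err = cv2.calcOpticalFlowPyrLK(
    prev, curr, p0, None, winSize=(5, 5), maxLevel=2)

flow = (p1 - p0)[status.ravel() == 1]   # V = F - I for tracked points
print("mean flow vector:", flow.reshape(-1, 2).mean(axis=0))
```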

3.4 Phase Only Correlation (POC)

The POC can play an important role in registering two images. Let f and g be two M × N images. Assume $M = 2M_0 + 1$, $N = 2N_0 + 1$ and that the image center is at location (0, 0). If F and G are the Discrete Fourier Transforms (DFT) of the f and g images respectively, then

$$F(u,v) = \sum_{m=-M_0}^{M_0} \sum_{n=-N_0}^{N_0} f(m,n)\, e^{-j2\pi\left(\frac{mu}{M} + \frac{nv}{N}\right)} \qquad (3.14)$$
$$= A_F(u,v)\, e^{j\theta_F(u,v)} \qquad (3.15)$$
$$G(u,v) = \sum_{m=-M_0}^{M_0} \sum_{n=-N_0}^{N_0} g(m,n)\, e^{-j2\pi\left(\frac{mu}{M} + \frac{nv}{N}\right)} \qquad (3.16)$$
$$= A_G(u,v)\, e^{j\theta_G(u,v)} \qquad (3.17)$$

The Cross Phase Spectrum between G and F is given by:

$$R_{GF}(u,v) = \frac{G(u,v) \times F^*(u,v)}{|G(u,v) \times F^*(u,v)|} = e^{j(\theta_G(u,v) - \theta_F(u,v))} \qquad (3.18)$$

Figure 3.5: Phase Only Correlation (POC) between two translated images ((a) Original Face, (b) Translated Face, (c) Sharp Peak). The image in (c) represents the corresponding POC values for each spatial location.

The Phase Only Correlation (POC) is the IDFT of $R_{GF}(u,v)$ and is given by:

$$P_{gf}(m,n) = \frac{1}{MN} \sum_{u=-M_0}^{M_0} \sum_{v=-N_0}^{N_0} R_{GF}(u,v)\, e^{j2\pi\left(\frac{mu}{M} + \frac{nv}{N}\right)} \qquad (3.19)$$

From [50], we have the following: when two images are "same", the POC function $P_{gf}(m,n)$ becomes the Kronecker delta function δ(m,n). If two images are "similar", the POC function gives a distinct sharp peak; otherwise, the peak value drops significantly. Hence the peak height of $P_{gf}(m,n)$ can be considered as a similarity measure.

The image in Figure 3.5(b) is a translated version of the image in Figure 3.5(a). One can clearly observe from Fig. 3.5(c) that there exists a sharp distinct peak which points to the location $(t_x, t_y)$ at which the translation is most likely. In other words, the image in Fig. 3.5(b) can be registered by translating the image in Fig. 3.5(a) by $t_x$ units in the x-direction and $t_y$ units in the y-direction.
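POC amounts to a few lines of FFT arithmetic; a minimal numpy sketch (the eps guard is an assumption added to avoid division by zero):

```python
import numpy as np

def poc(f, g, eps=1e-10):
    """Phase Only Correlation surface between two same-sized images.
    The peak location gives the translation (tx, ty); the peak height
    serves as a similarity measure."""
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    R = (G * np.conj(F)) / (np.abs(G * np.conj(F)) + eps)  # Eq. 3.18
    return np.real(np.fft.ifft2(R))                        # Eq. 3.19

# Usage: register g against f.
# p = poc(f, g)
# ty, tx = np.unravel_index(np.argmax(p), p.shape)
```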


Chapter 4

Iris Recognition System

This chapter deals with the problem of designing an efficient iris based recognition system. Two state-of-the-art techniques (integro-differential and Hough transformation) are applied in designing the system so that they complement each other for efficient iris segmentation. The iris contains a good amount of texture, but it is required to be enhanced. Identification/authentication is performed by tracking corner features. Like any other biometric system, the iris based recognition system consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed iris based recognition system is shown in Fig. 4.1.

4.1 Iris ROI Extraction

This section proposes an efficient iris segmentation technique. It approximates the pupil and limbic boundaries by circles. It extracts the iris ROI by applying Hough and integro-differential transformations. A modified Hough transform is used to locate the inner boundary, and then a sector based integro-differential operator is applied to localize the outer iris boundary. The detected iris is projected onto a rectangular strip as shown in Fig. 4.2.

Figure 4.1: Overall Architecture of the Proposed Iris Recognition System

Figure 4.2: Iris Segmentation ((a) Original Iris, (b) Donut Shaped, (c) Cropped Iris, (d) Normalized Iris)

4.1.1 Iris Inner Boundary Localization

The pupil of an eye can be modeled as a dark circular region within the iris. Each eye image is scaled down for faster processing and thresholded on image brightness to filter out the pixels of the pupil, giving a binary image $I_t$. This restricts the computation for the pupil boundary (i.e. the iris inner boundary) to only the dark pixels in the image. However, there may be some eyelashes, eyebrows or shadow points which can act as noise in inner boundary detection. Also, specular reflection on the pupil may cause incorrect detection of the boundary. Hence, the morphological operation of flood-filling with four neighbors is applied to the binary image $I_t$: pixels which cannot be reached by flood-filling from the background form holes and are removed. Thus, specular reflections inside the pupil region and other dark spots caused by eyelashes are removed, as shown in Figure 4.3(b); the result is denoted $I_{tf}$.

Figure 4.3: Iris Pupil Segmentation (Thresholding): (a) Original $I_1$, (b) Thresholded $I_{1tf}$

Sobel filters in both the horizontal and vertical directions are applied on the iris image to obtain the horizontal and vertical gradient images. The Sobel derivative is an approximation to the image intensity gradient in a given direction. The gradient magnitude image $I_g$ is obtained by

$$I_g(x,y) = \sqrt{I_{gh}^2(x,y) + I_{gv}^2(x,y)} \qquad (4.1)$$

where $I_{gh}$ and $I_{gv}$ are the horizontal and vertical gradient images.

The gradient magnitude image $I_g$ is thresholded to generate a binary image $I_{gb}$, as shown in Figure 4.4(b), representing only strong edges. The edge orientation at any pixel (x, y) in the generated binary image is obtained by:

$$\theta(x,y) = \tan^{-1}\left(\frac{I_{gv}(x,y)}{I_{gh}(x,y)}\right) \qquad (4.2)$$

Figure 4.4: Iris Pupil Segmentation (Gradient based Thresholding using Sobel): (a) Original $I_1$, (b) Sobel $I_{1gb}$

An improved circular Hough transform [10] is applied over $I_{tf}$ to detect the pupil boundary. The top left-most corner is considered as the origin. The original Hough transform for circle detection has three parameters; hence the parametric space is 3D, viz. the x and y coordinates of the center and the radius r. In order to reduce the 3D parametric search space, the Hough transform is modified so that the search can be performed over a 1D space (i.e. only over the radius r).

Modified Hough Transform: Suppose an edge pixel (x, y) in the generated binary image lies on a circle. For any fixed radius, there can be only two potential pupil centers $(c_1^x, c_1^y)$ and $(c_2^x, c_2^y)$. They must lie on opposite sides of the tangent line to the circle at that point, along the normal line, as shown in Fig. 4.5.

These potential centers are calculated for different radii r using the normal to the orientation angle θ(x, y) by the following equations:

$$c_1^x = x + r \cdot \sin(\theta(x,y) - \pi/2) \qquad (4.3)$$
$$c_1^y = y - r \cdot \cos(\theta(x,y) - \pi/2) \qquad (4.4)$$
$$c_2^x = x - r \cdot \sin(\theta(x,y) - \pi/2) \qquad (4.5)$$
$$c_2^y = y + r \cdot \cos(\theta(x,y) - \pi/2) \qquad (4.6)$$

Figure 4.5: For any radius r, any edge point (x, y) has two potential pupil centers $(c_1^x, c_1^y)$ and $(c_2^x, c_2^y)$

Figure 4.6: Segmented Iris Pupil ((a) Original $I_1$, (b) Pupil $I_{1ps}$)

A voting scheme is devised in which each edge pixel casts a vote for its likely circle. A 3-D array A is created for storing the votes for every possible parameter set, where A(i, j, r) represents the number of edge pixels voting for the circle having center at pixel (i, j) and radius r. For a fixed radius r, each edge pixel can vote for only the two circles whose centers lie at a distance r from it on either side, as shown in Fig. 4.5. The values $A(c_1^x, c_1^y, r)$ and $A(c_2^x, c_2^y, r)$ are incremented by 1 for each edge point (x, y), provided the candidate points $(c_1^x, c_1^y)$ and $(c_2^x, c_2^y)$ lie within the boundary of the generated binary image. The parameter set with the maximum votes gives the coordinates of the center and the radius of the iris inner boundary, as shown in Figure 4.6(b). The algorithm for inner boundary localization is described in Algorithm 4.1. Results obtained at each step of the algorithm are shown in Figure 4.7.

Figure 4.7: All Steps in Iris Pupil Segmentation ((a) Original $I_2$, (b) Thresholded $I_{2tf}$, (c) Sobel $I_{2gb}$, (d) Pupil $I_{2ps}$)
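A minimal sketch of this 1-D radius voting scheme, assuming the binary edge map and per-pixel orientations of Eq. 4.2 are already available (bounds are checked before each vote):

```python
import numpy as np

def modified_hough_pupil(edges, theta, r_min, r_max):
    """Every edge pixel votes for its two candidate centers along the
    gradient normal (Eqs. 4.3-4.6); the most-voted (cx, cy, r) wins."""
    h, w = edges.shape
    votes = np.zeros((h, w, r_max + 1), dtype=np.int32)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        ang = theta[y, x] - np.pi / 2
        for r in range(r_min, r_max + 1):
            for sign in (+1, -1):
                cx = int(round(x + sign * r * np.sin(ang)))
                cy = int(round(y - sign * r * np.cos(ang)))
                if 0 <= cx < w and 0 <= cy < h:
                    votes[cy, cx, r] += 1     # vote casting
    cy, cx, r = np.unravel_index(np.argmax(votes), votes.shape)
    return cx, cy, r
```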

4.1.2 Iris Outer Boundary Localization

Figure 4.8: Application of the Integro-differential Operator ((a) Integro-differential Operator, (b) Area of iris considered for outer boundary localization)

The contrast across the outer iris boundary is low compared to that across the inner boundary. Thus, any edge detector may fail to detect the true edges. To tackle this issue, the circular integro-differential operator [19], which uses raw derivative information, is used.


Algorithm 4.1 Pupil Segmentation

Require: Iris image I of dimension m × n; prmin: minimum pupil radius; prmax: maximum pupil radius; t: binary threshold.
Ensure: Pupil center $c_p$ with coordinates $(c_p^x, c_p^y)$; pupil radius $p_r$.

1: $I_t$ ← threshold(I, t) // generate binary image after thresholding
2: $I_{tf}$ ← remove_specular_reflection($I_t$) // flood filling
3: $I_g$, $I_{gh}$, $I_{gv}$ ← Sobel($I_{tf}$) // Sobel edge detection
4: $I_{gb}$ ← Threshold_Gradient($I_g$) // choose the best edge points
5: E ← white pixels in $I_{gb}$ // collect the edge points in $I_{gb}$
6: for all edge pixels (x, y) ∈ E do
7:     θ(x, y) ← tan⁻¹($I_{gv}(x,y)$ / $I_{gh}(x,y)$) // edge orientation at a point
8: end for
9: A(m, n, prmax) ← 0 // 3-D array initialization for voting
10: for all edge pixels (x, y) ∈ E do
11:     for r = prmin to prmax do
12:         compute $(c_1^x, c_1^y)$, $(c_2^x, c_2^y)$ by putting (x, y) and θ(x, y) in Eqs. (4.3)-(4.6)
13:         if point $(c_1^x, c_1^y)$ lies within the $I_{gb}$ image then
14:             $A(c_1^x, c_1^y, r)$ ← $A(c_1^x, c_1^y, r)$ + 1 // vote casting
15:         end if
16:         if point $(c_2^x, c_2^y)$ lies within the $I_{gb}$ image then
17:             $A(c_2^x, c_2^y, r)$ ← $A(c_2^x, c_2^y, r)$ + 1 // vote casting
18:         end if
19:     end for
20: end for
21: $c_p^x$ ← argmax(i) A(i, j, k)
22: $c_p^y$ ← argmax(j) A(i, j, k)
23: $p_r$ ← argmax(k) A(i, j, k)

Integro-differential operator: For any circle with center $(c_x, c_y)$ and radius r, the integro-differential operator adds the intensity values over this circle and calculates the difference between this summation and that over the concentric circle of higher radius (r + δr). The circle having the maximum jump in this summation gives the outer iris boundary, as shown in Fig. 4.8(a). The integro-differential operator can be expressed as:

$$G_\sigma(r) * \left( \frac{\sum_\theta \left[ I\big(c_x - (r+\delta r)\sin\theta,\; c_y + (r+\delta r)\cos\theta\big) - I\big(c_x - r\sin\theta,\; c_y + r\cos\theta\big) \right]}{2\pi r} \right) \qquad (4.7)$$

where $G_\sigma(r)$ is a radial Gaussian operator with standard deviation σ and ∗ is the convolution operator. The arguments $(c_x, c_y)$ and r for the maximum response of this operator over the image give the coordinates of the iris center and the iris radius respectively.
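For a single candidate center, the operator reduces to finding the radius with the largest jump in circular summation; a minimal sketch, assuming pre-selected occlusion-free sector angles and omitting the Gaussian smoothing and bounds checking for brevity:

```python
import numpy as np

def circular_sum(img, cx, cy, r, angles):
    """Sum of intensities sampled on the circle (cx, cy, r) over the
    given angular sectors (inner summation of Eq. 4.7)."""
    xs = np.round(cx - r * np.sin(angles)).astype(int)
    ys = np.round(cy + r * np.cos(angles)).astype(int)
    return img[ys, xs].sum()

def best_radius(img, cx, cy, r_min, r_max, angles):
    """Radius with the maximum jump in circular summation for one
    candidate center; the loop over candidate centers is done outside."""
    sums = np.array([circular_sum(img, cx, cy, r, angles)
                     for r in range(r_min, r_max + 1)])
    return r_min + 1 + int(np.argmax(np.diff(sums)))
```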

4.1.3 Proposed Segmentation Algorithm

The image is smoothed using a 2-D Gaussian filter to remove any stray noise. The iris center and the pupil center may not be concentric, but they are usually close to each other. Hence the iris center is searched within a W × W window around the pupil center $(c_p^x, c_p^y)$. For each point within this window, the intensity values over an arc of (−π/4, π/6) ∪ (5π/6, 5π/4) radian angular sectors (with respect to the horizontal, as shown in Figure 4.8(b)) are summed for each candidate radius. We have chosen only these sectors because occlusion is empirically found to be minimum there as compared to other iris areas. The gradient of this sum is calculated over all radii. The parameters of the circle with the maximum gradient are taken as the coordinates of the iris center and the iris radius.

The algorithm for outer iris boundary localization is explained in Algorithm 4.2. It uses:

(a) the coordinates of the pupil center $(c_p^x, c_p^y)$ and the original image of dimension m × n,

(b) the iris radius range $(ir_{min}, ir_{max})$,

(c) the search window of size W × W around the pupil center within which the iris center is assumed to be located,

(d) an occlusion free range of sector angles $\alpha_{range}$.

Figure 4.9: Iris Localization (1st Row: Original Iris, 2nd Row: Segmented Iris)

The original image is smoothed to remove noise, and the circular integro-differential operator is applied over the occlusion free sectors using the candidate iris center points around the pupil center and the radii in the given range. The circular summation of intensity is computed for all candidate centers and all radii. The center coordinates and radius for which the maximum difference is obtained are considered as the iris center and radius. Algorithm 4.2 finds the parameters of the outer iris boundary. Some typical eye images after iris localization are shown in Figure 4.9.

4.1.4 Iris Normalization

The iris samples suffer from dimensional inconsistencies among eye images mainly

due to the following reasons:

• Iris stretching caused by pupil dilation due to varying illumination

• Varying imaging distance

• Rotation of the camera

• Head tilt

• Rotation of the eye within the eye socket.

Algorithm 4.2 Iris Segmentation

Require: Iris image I of dimension m × n; irmin: minimum iris radius; irmax: maximum iris radius; $(c_p^x, c_p^y)$: pupil center; $p_r$: pupil radius; W: search window; $\alpha_{range}$: angular range defining the occlusion free sectors.
Ensure: Iris center $c_i = (c_i^x, c_i^y)$; iris radius ir.

1: $I_s$ ← GaussSmooth(I, σ = 0.5, k = 3) // k: kernel size, Gaussian noise removal
2: maxdiff ← 0 // the maximum change in contour summation
3: for all points $(c^x, c^y)$ ∈ [W × W] window around $(c_p^x, c_p^y)$ do
4:     prevsum ← 0 // previous circular summation of intensity values
5:     startflag ← True // no circle has yet been summed up
6:     for r = irmin to irmax do
7:         csum ← 0
8:         for all α ∈ $\alpha_{range}$ do
9:             csum ← csum + I($c^x$ − r sin α, $c^y$ + r cos α) // sector-wise summation
10:         end for
11:         diffsum ← csum − prevsum // difference of sums
12:         prevsum ← csum
13:         if diffsum > maxdiff and startflag ≠ True then
14:             maxdiff ← diffsum
15:             $c_i^x$ ← $c^x$, $c_i^y$ ← $c^y$, ir ← r // update the parameters
16:         end if
17:         startflag ← False // a circle has been summed up
18:     end for
19: end for

The iris normalization process produces iris regions of constant dimension. Hence, two images of the same iris acquired under different environmental conditions have their unique and random characteristic features at more or less the same spatial locations. Therefore, the iris can be transformed into a rectangular strip of fixed dimension, as shown in Fig. 4.10(b). For normalization of the segmented iris, the rubber sheet model [20], which assumes the iris area to be stretched like rubber, is used. It handles the above mentioned inconsistencies by mapping the iris ring points from Cartesian to polar coordinates, using the following equations:

$$I(x(r,\theta), y(r,\theta)) \rightarrow I(r,\theta) \qquad (4.8)$$
$$x(r,\theta) = (1-r) \cdot x_i(\theta) + r \cdot x_o(\theta) \qquad (4.9)$$
$$y(r,\theta) = (1-r) \cdot y_i(\theta) + r \cdot y_o(\theta) \qquad (4.10)$$

where I denotes the segmented iris image, and $(x_i(\theta), y_i(\theta))$ and $(x_o(\theta), y_o(\theta))$ are the coordinates of the inner and outer boundaries at an angle θ (w.r.t. the horizontal direction) around the pupil and iris centers respectively. The radius r ranges within the interval [0, 1]. The circularly segmented iris region is divided into m equi-spaced concentric circles and n sectors to obtain an m × n rectangular iris strip, as shown in Fig. 4.10(b). Each intersection point is mapped to polar coordinates using equations (4.8)-(4.10).

Figure 4.10: Normalization of the Cropped Iris ((a) Segmented Iris ROI, (b) Normalized Iris ROI)
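A minimal sketch of this rubber-sheet sampling, assuming circular inner and outer boundaries and nearest-neighbor sampling (both simplifying assumptions):

```python
import numpy as np

def rubber_sheet(img, pupil, iris, m=64, n=512):
    """Rubber-sheet normalization (Eqs. 4.8-4.10): sample the iris ring
    on m concentric circles and n sectors. pupil and iris are
    (cx, cy, radius) triples for the inner and outer boundaries."""
    strip = np.zeros((m, n), dtype=img.dtype)
    thetas = np.linspace(0, 2 * np.pi, n, endpoint=False)
    # boundary points at each angle for inner (i) and outer (o) circles
    xi = pupil[0] + pupil[2] * np.cos(thetas)
    yi = pupil[1] + pupil[2] * np.sin(thetas)
    xo = iris[0] + iris[2] * np.cos(thetas)
    yo = iris[1] + iris[2] * np.sin(thetas)
    for row, r in enumerate(np.linspace(0, 1, m)):
        x = np.clip(((1 - r) * xi + r * xo).astype(int), 0, img.shape[1] - 1)
        y = np.clip(((1 - r) * yi + r * yo).astype(int), 0, img.shape[0] - 1)
        strip[row] = img[y, x]
    return strip
```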


4.2 Iris Quality Estimation

The quality of an image plays an important role in any identification system: the higher the quality, the lower the false accept and false reject rates. The quality of an iris image is affected by the presence of eyelashes and eyelids over the iris region, lack of focus over the iris region, improper lighting conditions, dilation of the iris etc. In this section, an algorithm is proposed to classify iris images into quality classes lying between 1 and 5, where a higher class index indicates better quality. The quality of the iris image has been modeled as a function of the following six attributes: Focus (F), Motion Blur (MB), Occlusion (O), Contrast and Illumination (CI), Dilation (D) and Specular Reflection (SR). Hence the overall quality of the iris image can be represented by

$$Quality = f(F, MB, O, CI, D, SR) \qquad (4.11)$$

where f is a function which is learned from the training data using a Support Vector

Machine (SVM). Quality attributes from the training set of iris images along with

their respective true quality labels are used to train the SVM classifier. This creates

a model based classifier which can be used to predict the quality of an iris image.

The six quality attributes computed are:

1. Focus (F): the amount of blur due to defocus in the image.

2. Motion Blur (MB): the blurring effect caused by the movement of the camera relative to the subject, or vice-versa, during image acquisition.

3. Occlusion (O): one of the major hurdles in iris recognition; it occurs due to eyelids, eyelashes, specular reflection and shadows.

4. Contrast and Illumination (CI): defined as the range of intensity levels; a high contrast image generally has a uniformly distributed histogram.

5. Dilation (D): the natural process of expansion of the pupil area due to the lack of proper lighting in the surroundings, which reduces the amount of visible iris area.

6. Specular Reflection (SR): caused by the light source used to illuminate the iris; the iris is very reflective, so the light reflection is visible in the acquired image.

4.2.1 Focus (F) based Quality Attribute

The focus of an image refers to the amount of blur due to defocus. It is the blurring introduced when the focal point falls outside the depth of field of the object being captured; the greater the distance, the higher the amount of defocus blur. The 2D Fourier spectrum of a well focused image is uniform, but for a defocused image the spectrum is concentrated towards the low frequencies. The focus of an image can therefore be estimated by measuring the high frequency components of the image spectrum. This is done by convolving the image with the proposed 6 × 6 kernel:

$$K = \begin{bmatrix}
-1 & -1 & -1 & -1 & -1 & -1 \\
-1 & -1 & -1 & -1 & -1 & -1 \\
-1 & -1 & 8 & 8 & -1 & -1 \\
-1 & -1 & 8 & 8 & -1 & -1 \\
-1 & -1 & -1 & -1 & -1 & -1 \\
-1 & -1 & -1 & -1 & -1 & -1
\end{bmatrix} \qquad (4.12)$$

This kernel approximates a high frequency band pass filter in the 2D Fourier spectrum. The higher the response from the image, the more focused the image is. The filter responses corresponding to the proposed 6 × 6 kernel are shown in Figure 4.11. One can observe that well focused images have higher responses than defocused images.

Figure 4.11: Focus Filter Response ((a) Focused Image, (b) Response, (c) Defocused Image, (d) Response)
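A minimal sketch of this focus measure; the mean-absolute-response normalization is an assumption made for illustration:

```python
import cv2
import numpy as np

# The proposed 6x6 high-frequency kernel (Eq. 4.12).
K = -np.ones((6, 6), dtype=np.float32)
K[2:4, 2:4] = 8.0

def focus_measure(img):
    """Mean absolute response to the band-pass kernel; higher values
    indicate a better focused image."""
    resp = cv2.filter2D(img.astype(np.float32), ddepth=-1, kernel=K)
    return float(np.mean(np.abs(resp)))
```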

4.2.2 Motion Blur (MB) based Quality Attribute

Iris acquisition often gets affected by motion blur because iris movement is very hard to control. The iris image becomes blurred whenever there is relative movement between the acquisition sensor and the subject's eye, and the amount of blur is directly proportional to the amount of motion.

The edge width at each edge pixel is used to compute the motion blur. The edge pixels in an image are computed using Sobel operators. The edge width at an edge pixel is estimated as the number of pixels between the local extremities on either side of the edge pixel, as shown in Fig. 4.12. It is assumed that blurred edges have a larger width than sharper edges.

In [26], the amount of motion blur is computed by defining the Just Noticeable Blur (JNB), which is the minimum amount of blur perceivable by humans. The edge width from which the blur becomes noticeable is called $JNB_{width}$. It is determined by the intensity range within a small neighborhood as given by

$$JNB_{width} = \begin{cases} 5 & \text{if } Intensity > 50 \\ 3 & \text{if } Intensity \le 50 \end{cases} \qquad (4.13)$$

Figure 4.12: JNB Edge Width

The motion blur of an image is computed by dividing it into blocks of size 64 × 64. Blocks having more edge pixels than a pre-defined threshold are called edge blocks, and only these blocks are considered for computation. For every edge pixel in an edge block, the probability of blur is defined by a psychometric function

$$P(e_{ij}) = 1 - e^{-\left(\frac{edgewidth(e_{ij})}{JNB_{width}(B)}\right)^\beta} \qquad (4.14)$$

where B is an edge block, $P(e_{ij})$ is the probability of blur, $edgewidth(e_{ij})$ is the edge width at $e_{ij}$ and $JNB_{width}(B)$ is the just noticeable blur width for the block B. The parameter β is empirically set to 3.6. As the edge width increases, the probability of blur increases. The motion blur becomes noticeable when $edgewidth(e_{ij})$ is greater than or equal to $JNB_{width}(B)$ for that block. From Eq. (4.14), it can be seen that the probability of blur rises to about 63.21% when $edgewidth(e_{ij})$ becomes equal to $JNB_{width}(B)$.

Hence any edge pixel with a blur probability less than 64% is considered to lie on a sharp edge; otherwise, the pixel is considered a blurred pixel. The motion blur parameter (MB) for an image is defined as the ratio of the total number of pixels belonging to blurred edges to the total number of edge pixels.
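A minimal sketch of this attribute, assuming per-edge-pixel width estimates are produced elsewhere:

```python
import numpy as np

def blur_probability(edge_width, jnb_width, beta=3.6):
    """Psychometric blur probability of an edge pixel (Eq. 4.14)."""
    return 1.0 - np.exp(-(edge_width / jnb_width) ** beta)

def motion_blur(edge_widths, jnb_widths, thresh=0.64):
    """MB attribute: fraction of edge pixels whose blur probability
    exceeds the threshold. Inputs are per-edge-pixel arrays."""
    p = blur_probability(np.asarray(edge_widths, dtype=float),
                         np.asarray(jnb_widths, dtype=float))
    return float(np.mean(p >= thresh))
```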


4.2.3 Contrast and Illumination (CI) based Quality Attribute

The contrast of an image is defined by the range of its intensity levels. A high contrast image generally obeys a uniform intensity distribution, whereas a poorly illuminated image becomes either too dark or too light. This attribute predicts whether the image is uniformly illuminated or not. The contrast can be estimated after removing the extreme gray values, which are basically noise. The range of pixel intensities is divided into three groups: (0, 35), (36, 220) and (221, 255). Intensities in the region (36, 220) indicate moderate intensity levels. The ratio of the number of pixels in this region to the total number of pixels is used to quantify the amount of illumination uniformity in the image.
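This ratio is a one-liner in practice; a minimal sketch:

```python
import numpy as np

def contrast_illumination(img):
    """CI attribute: fraction of pixels with moderate intensity (36-220);
    extreme gray values are treated as noise and excluded."""
    moderate = np.count_nonzero((img >= 36) & (img <= 220))
    return moderate / img.size
```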

4.2.4 Dilation (D) based Quality Attribute

Dilation of the pupil is a natural process of expansion of the pupil area due to the lack of proper lighting in the surroundings, which reduces the amount of visible iris area. The dilation metric is defined as the ratio of the available iris region to the total area of the iris outer boundary circle. Dilation severely affects the normalization of the iris image: the higher the dilation, the more pixel value redundancy appears in the normalized image. It also causes the iris texture pattern to shift its spatial position within the normalized iris image. Thus, the dilation attribute can be estimated by

$$dilation = \frac{\text{Area of iris}}{\text{Area of iris outer circle}} = 1 - \frac{r_p^2}{r_i^2} \qquad (4.15)$$

where $r_i$ and $r_p$ are the radii of the iris outer boundary and the pupil boundary respectively.


4.2.5 Specular Reflection (SR) based Quality Attribute

Specular reflections are caused by the light sources used to acquire a well illuminated iris image. Since the iris is very reflective, the light reflection is visible in the acquired image. Reflection causes white spots in the image which affect the matching score due to missing or wrong data at those spots. Specular reflection can be identified using adaptive thresholding. The acquired image is first thresholded with a high pixel intensity threshold to identify the most obvious reflection points. Some area around each reflection may still not be captured, so the threshold is lowered by some value and the number of reflection pixels is counted again. If the number of pixels increases only slightly, the area of the specular reflection patch has grown only a little. The threshold is further reduced and the process is repeated until the amount of detected specular reflection saturates, ensuring detection of all pixels affected by reflection.

4.2.6 Occlusion (O) based Quality Attribute

One of the major hurdles in iris recognition is occlusion (hiding of the iris), which occurs due to eyelids, eyelashes, specular reflection and shadows, as shown in Figure 4.13. Occlusion hides useful iris texture and introduces irrelevant parts like eyelids and eyelashes which are not an integral part of the iris. Occlusion cannot be ignored, because it may make it difficult to match intra-class irides and may also introduce false matches with other irides. Hence, it is important to locate and segregate occlusion in the normalized image. Occlusion is detected from the normalized image, instead of the original iris image, to reduce the working area, which makes the detection efficient. It is done in three steps: eyelid detection followed by eyelash and reflection detection.

Figure 4.13: Occlusion in Normalized Iris

[A] Eyelid Detection:

The major portion of the occluded area in an iris image is constituted by the lower and upper eyelids. Statistically, the upper eyelid is found at the center of the left half, and the lower eyelid at the center of the right half, of the normalized image. One cannot heuristically discard a fixed area of eyelids from the normalized image, because their sizes differ with the degree of occlusion and their locations are not perfectly determined. Eyelids have an almost uniform texture and a boundary flooded with eyelashes and shadows. Traditional techniques of parabola/ellipse fitting over eyelids may fail due to noisy edge information and the non-standard shape of eyelids. Eyelids may also extend up to the pupil, which means that their upper boundary may not be visible in a normalized image. These challenges have motivated us to use a region-growing approach to determine the eyelids. It uses only texture information to separate the eyelid region from the rest.

Region growing is a morphological flooding operation which helps to find objects of uniform texture. Since eyelids have uniform texture, this operation helps to find the eyelid region. In [4], a set S of seed points is selected from the image, lying in the region Sd that is required to be detected. All pixels in the 8-neighborhood of any pixel in S are checked for their intensity difference with the mean intensity of S. Pixels having this difference less than a certain threshold are added to S. This process is iterated until no pixel can be added further. Finally, S covers the desired region Sd. Figure 4.14 shows the result of the region-growing algorithm on an image after a few iterations.

Figure 4.14: Application of Region-Growing ((a) Start of growing a region, (b) Growing process after a few iterations)

Eyelid detection from the normalized iris strip of size r × c requires two seed points for region-growing, one each for the lower and upper eyelid. They are selected as (r, c/4) and (r, 3c/4) for the upper and lower eyelid respectively, as shown in Fig. 4.14. These two seed points are chosen because, after normalization, the upper and lower eyelids are centered mostly at angles of π/2 and 3π/2 w.r.t. the x-axis. Region-growing begins with these seeds using a low threshold and expands the region until a dissimilar region is encountered. This gives the expected lower and upper eyelid regions. Detected eyelids are shown in Figure 4.15. To obtain both the lower and upper eyelid regions, Algorithm 4.3 can be used with different seed points. Region-growing overcomes the shape irregularity of eyelids and gives the exact area occluded by eyelids. It fails when the eyelid boundary does not have good contrast, in which case it grows outside the eyelid region. To prevent this, region-growing is repeated with a lower threshold so that it does not grow outside the statistical bounds for eyelid regions. If the region grows beyond a limit, it indicates that there is no eyelid. Finally, a binary mask is generated in which all eyelid pixels are set to 1. An example is shown in Figure 4.15.

Figure 4.15: Eyelid Regions ((a) Normalized Image, (b) Eyelid Mask); arrows denote the direction for region-growing


Algorithm 4.3 Eyelid Detection

Require: Normalized iris image NI of dimension m × n; s = (sx, sy): initial seed point in NI; t: region-growing threshold.
Ensure: Eyelid region LID; eyelid mask Maskeyelid.

1: S ← {s} // seed point is added to the eyelid set S
2: M ← NI(sx, sy) // mean of set S
3: while True do
4:     min ← Infinity, minPoint ← φ
5:     for all unallocated neighboring pixels (x, y) of S do
6:         if |NI(x, y) − M| < min then
7:             min ← |NI(x, y) − M| // absolute difference with the region's mean
8:             minPoint ← (x, y) // keep track of the best point
9:         end if
10:     end for
11:     if min > t or size(S) == size(NI) then
12:         break // stop region-growing
13:     end if
14:     S ← S ∪ {minPoint} // add the best point to the set
15:     M ← mean(S) // update the mean value
16: end while
17: LID ← S
18: Maskeyelid ← NI(LID) // eyelid mask

[B] Eyelash Detection:

There are two types of eyelashes: separable and multiple. Separable eyelashes are like thin threads, whereas multiple eyelashes constitute a shadow-like region. Eyelashes have lower intensity compared to the iris texture, but a predefined threshold that separates them from the rest of the iris is difficult to determine, because it changes with illumination conditions. Also, since the proportion of eyelashes in the image is not constant, histogram-based thresholding cannot be used. In this section, a new eyelash detection approach that detects both types of eyelashes is proposed.

Eyelashes have high contrast with their surrounding pixels but low intensity. As a result, the standard deviation of gray values within a small region around separable eyelashes is high. The standard deviation for every pixel in a normalized image is computed using its 8-neighborhood; it is high in areas containing separable eyelashes. Multiple eyelashes may also have a high standard deviation, but they additionally have dark intensity values; hence, low gray value intensity is also given some weight. The computed standard deviation for each pixel is normalized using max-min normalization and saved in a 2D array SD. If SD alone is used for segregating eyelash regions, multiple eyelashes may not be detected, and iris texture with a large standard deviation at some points gets wrongly classified as eyelashes. Hence for each pixel a fused value F(i, j) is computed, which considers both the standard deviation and the gray value intensity of that pixel, defined as

$$F(i,j) = 0.5 \times SD(i,j) + 0.5 \times (1 - N(i,j)) \qquad (4.16)$$

where N(i, j) is the normalized gray intensity value (0-1) and SD(i, j) is the standard deviation computed from the 8-neighborhood pixel intensities of pixel (i, j). This fused value F(i, j) widens the gap between the eyelash and non-eyelash parts. The image histogram FH of F has two distinct clusters: a cluster of low values of F consisting of iris pixels, and a second cluster of high values of F representing eyelash pixels. To identify the two clusters, Otsu thresholding is applied on the histogram FH to obtain the binary eyelash mask. It determines the two clusters by considering all possible pairs of clusters and choosing the clustering threshold that minimizes the intra-cluster variance, thus separating the eyelash portion from the iris portion. The detected eyelashes of an iris image are shown in Figure 4.16. The algorithm for eyelash detection is presented in Algorithm 4.4.

Algorithm 4.4 Eyelash Detection

Require: Normalized iris image NI of dimension r × c.
Ensure: Eyelash mask Maskeyelash.

1: SD ← filter2D(NI, std(3, 3)) // 2D filtering with a 3 × 3 standard deviation filter
2: SD ← SD / max(SD) // normalize w.r.t. the maximum value
3: N ← NI / 255 // normalized intensity values (0-1) of NI
4: F ← 0.5 × SD + 0.5 × (1 − N) // fusion of std. deviation and intensity values
5: FH ← imhist(F) // image histogram
6: Thresh ← Otsu(FH) // determine the Otsu threshold
7: Maskeyelash ← threshold(F, Thresh) // Otsu thresholding: Maskeyelash(x, y) is set only if (x, y) is an eyelash pixel

Figure 4.16: Determination of Eyelash Region ((a) Normalized Image, (b) Eyelash Mask)

[C] Reflection Detection:

Pixels which exceed a threshold value in the gray-scale image are declared as reflections, because reflections are very bright in every acquisition setting. Also, since occlusion due to reflection is not a major component, complex computation to remove reflection is avoided. Detected reflection from a sample image is shown in Figure 4.17. A binary mask Maskreflection (reflection mask) is generated in which pixels affected by reflection are set to 1.

Figure 4.17: Reflection Detection by Thresholding ((a) Normalized Image, (b) Reflection Mask)
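A minimal sketch of the fused eyelash map of Eq. 4.16 and its Otsu binarization; computing the 3 × 3 local standard deviation via box filters is an implementation assumption:

```python
import cv2
import numpy as np

def eyelash_mask(ni):
    """Fused eyelash map (Eq. 4.16) thresholded with Otsu's method.
    ni is the normalized iris strip as an 8-bit gray image."""
    f = ni.astype(np.float32)
    # local 3x3 standard deviation via E[x^2] - E[x]^2
    mean = cv2.blur(f, (3, 3))
    sq_mean = cv2.blur(f * f, (3, 3))
    sd = np.sqrt(np.maximum(sq_mean - mean * mean, 0))
    sd /= sd.max() + 1e-10            # max normalization
    n = f / 255.0                     # intensity in (0, 1)
    fused = 0.5 * sd + 0.5 * (1 - n)  # Eq. 4.16
    fused8 = np.uint8(255 * fused)
    _, mask = cv2.threshold(fused8, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask
```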

The final occlusion mask is generated by addition (logical OR-ing) of the binary masks of the eyelid, eyelash and reflection. The algorithm to determine the occlusion mask Maskocclusion from a normalized image is described in Algorithm 4.5. The detected occlusion of a sample image is shown in Figure 4.18.

Algorithm 4.5 Occlusion Mask Determination

Require: Normalized iris image NI of dimension m × n; eyelid mask Maskeyelid; eyelash mask Maskeyelash; specular reflection mask Maskreflection.
Ensure: Occlusion mask Maskocclusion.

1: Maskocclusion ← Maskeyelid | Maskeyelash | Maskreflection // logical OR-ing
2: return Maskocclusion

Figure 4.18: Overall Occlusion Mask ((a) Normalized Image, (b) Eyelid Mask, (c) Eyelash Mask, (d) Reflection Mask, (e) Occlusion Mask)

4.2.7 Quality Class Determination

The Support Vector Machine (SVM) is trained using a set of 1000 iris images from the CASIA Lamp database. The actual quality classes (i.e. the ground truth) of the training images are assigned manually. The set of quality attributes, along with the quality class label of each iris image, is used to train the SVM to learn the function f defined in Eq. (4.11). The trained classifier based model is used for future prediction of quality. When a new iris image is given to the system, all quality parameters are normalized to the range (0, 1) and passed to the trained SVM classifier, which provides the quality class corresponding to the iris image.
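A minimal sketch of this train-then-predict step using scikit-learn; the file names, array shapes and kernel choice are assumptions, and the data loading is stubbed:

```python
import numpy as np
from sklearn.svm import SVC

# X: one row of six attributes (F, MB, O, CI, D, SR) per training image,
# already normalized to (0, 1); y: manually assigned classes 1..5.
X_train = np.load("quality_attributes.npy")   # shape (1000, 6), assumed
y_train = np.load("quality_labels.npy")       # shape (1000,), assumed

clf = SVC(kernel="rbf")       # kernel choice is an assumption
clf.fit(X_train, y_train)     # learn f of Eq. 4.11

def predict_quality(attrs):
    """Predict the quality class (1-5) for one iris image from its
    six normalized quality attributes."""
    return int(clf.predict(np.asarray(attrs).reshape(1, -1))[0])
```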

Figure 4.19: Quality Estimation ((a) Image 1 through (e) Image 5)

In Fig. 4.19, some images from the database are shown. The quality parameters of these images and the final quality score are given in Table 4.1. It shows that as the quality of an image degrades, its overall quality score also reduces. A poor score is assigned to the image with higher occlusion.

Image    Focus    Blur     Occlusion  Contrast  Dilation  Reflection  Quality
Image 1  0.2034   0.6783   0.8834     0.8873    0.8348    0.9865      5
Image 2  0.1985   0.6138   0.7289     0.8915    0.8517    0.9937      4
Image 3  0.1536   0.4974   0.7049     0.7760    0.7739    0.9103      3
Image 4  0.1648   0.5088   0.6790     0.7954    0.7856    1.0000      2
Image 5  0.1156   0.4067   0.2819     0.6659    0.7840    0.9589      1

Table 4.1: Quality Parameters and Quality Class for Images in Fig. 4.19

4.3 Iris Image Preprocessing

The extracted region of interest (ROI) of the iris contains texture information but is generally of poor contrast. A suitable image enhancement technique is therefore applied to the ROI. In order to obtain a robust representation that can tolerate a small amount of illumination variation, the iris images are also transformed. In this section, the technique we have used to enhance and transform the normalized iris images is discussed.


4.3.1 Iris Image Enhancement

Figure 4.20: Iris Texture Enhancement ((a) Original Iris, (b) Estimated Illumination, (c) Uniform Illumination, (d) Wiener Filtering)

The iris texture is enhanced in such a way that both its richness and its discriminative power increase. The iris ROI is divided into blocks, and the mean of each block is considered as the coarse illumination of that block. This mean is expanded to the original block size as shown in Fig. 4.20(b). The selection of block size plays an important role: it should be such that the mean of the block truly represents the illumination effect of the block, so a larger block may produce an improper estimate. We have found that a block size of 8 × 8 is the best choice for our experiments. The estimated illumination of each block is subtracted from the corresponding block of the original image to obtain the uniformly illuminated ROI shown in Fig. 4.20(c). The contrast of the resultant ROI is enhanced using Contrast Limited Adaptive Histogram Equalization (CLAHE) [55], which removes the artificially induced blocking effect using bilinear interpolation and enhances the contrast of the image without introducing much external noise. Finally, a Wiener filter [68] is applied to reduce constant power additive noise, yielding the enhanced iris texture shown in Figure 4.20(d).
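A minimal sketch of this three-stage pipeline using OpenCV and SciPy; the CLAHE clip limit, tile grid and Wiener window are illustrative assumptions:

```python
import cv2
import numpy as np
from scipy.signal import wiener

def enhance_iris(roi, block=8):
    """Illumination correction + CLAHE + Wiener filtering."""
    f = roi.astype(np.float32)
    h, w = f.shape
    # coarse illumination: per-block mean, expanded back to full size
    small = cv2.resize(f, (w // block, h // block),
                       interpolation=cv2.INTER_AREA)
    illum = cv2.resize(small, (w, h), interpolation=cv2.INTER_NEAREST)
    uniform = cv2.normalize(f - illum, None, 0, 255,
                            cv2.NORM_MINMAX).astype(np.uint8)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    contrast = clahe.apply(uniform)
    return wiener(contrast.astype(np.float64), (3, 3))
```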


4.3.2 Iris Image Transformation

Figure 4.21: LGBP Transformation (Red: −ve gradient; Green: +ve gradient; Blue: zero gradient): (a) Original, (b) Transformed (kernel = 3), (c) Transformed (kernel = 9)

The transformation converts the iris ROI into a representation that can provide robust corner features. The gradient (approximated by pixel differences) of an edge pixel is positive if it lies on an edge created by a light to dark shade (i.e. high to low gray value) transition. Hence all edge pixels can be divided into three classes of +ve, −ve and zero gradient values, as shown in Fig. 4.21. The Sobel kernel fails to hold rotational symmetry; hence the more consistent Scharr kernels [61], which are obtained by minimizing the angular error, are applied.

$$Scharr_x = \begin{bmatrix} 3 & 0 & -3 \\ 10 & 0 & -10 \\ 3 & 0 & -3 \end{bmatrix} \qquad
Scharr_y = \begin{bmatrix} 3 & 10 & 3 \\ 0 & 0 & 0 \\ -3 & -10 & -3 \end{bmatrix}$$

The Scharr x-direction kernels of size 3 × 3 and 9 × 9 are applied to obtain the transformed images of Fig. 4.21(a); the resultant images are shown in Figs. 4.21(b) and 4.21(c). A bigger kernel produces coarser level features, as shown in Fig. 4.21(c). This gradient augmented information for each edge pixel can be more discriminative and robust. The transformation uses this information to calculate an 8-bit code for each pixel: it uses the gradient values in the x-direction and y-direction of its 8 neighboring pixels to obtain the vcode and hcode respectively, as discussed below.

The vcode and hcode Generation: Let $P_{i,j}$ be the (i, j)th pixel of any biometric image P, and let Neigh[k], k = 1, 2, ..., 8 be the gradients of the 8 neighboring pixels centered at pixel $P_{i,j}$, obtained by applying the x-direction or y-direction Scharr kernel (yielding the vcode or hcode respectively). Then the kth bit of the 8-bit code (termed the lgbp code) is given by

$$lgbp\_code[k] = \begin{cases} 1 & \text{if } Neigh[k] > 0 \\ 0 & \text{otherwise} \end{cases} \qquad (4.17)$$

In the vcode or hcode, every pixel is represented by its lgbp code, as shown in Figs. 4.22(d) and 4.22(g). The pattern of edges within a neighborhood is assumed to be robust; hence each pixel's lgbp code, which is just an encoding of the edge pattern in its 8-neighborhood, is used. Also, the lgbp code of any pixel considers only the sign of the derivative within its specified neighborhood, which ensures the robustness of the proposed transformation under illumination variation.

Figure 4.22: Original and Transformed (vcode, hcode) Iris ROIs ((a)-(c) Original Iris, (d)-(f) Iris vcode, (g)-(i) Iris hcode)
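A minimal vectorized sketch of this encoding for the 3 × 3 case; the neighbor ordering is an assumption, and larger kernel sizes would need a custom filter in place of cv2.Scharr:

```python
import cv2
import numpy as np

def lgbp_code_image(img, direction="x"):
    """vcode/hcode generation: 8-bit code per pixel built from the signs
    of the Scharr gradients of its 8 neighbors (Eq. 4.17)."""
    dx, dy = (1, 0) if direction == "x" else (0, 1)
    grad = cv2.Scharr(img, cv2.CV_32F, dx, dy)
    sign = (grad > 0).astype(np.uint8)    # 1 where gradient is positive
    h, w = img.shape
    code = np.zeros((h, w), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (di, dj) in enumerate(offsets):
        # bring neighbor (di, dj)'s sign bit to the center position
        shifted = np.roll(np.roll(sign, -di, axis=0), -dj, axis=1)
        code |= (shifted << k)
    return code
```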


4.4 Iris Feature Extraction and Matching

The corner features [63] are extracted from both the vcode and hcode obtained from an iris ROI. KL tracking [44] is used to track the corner features between the corresponding images in order to match two iris ROIs.

KL tracking makes use of three assumptions, namely brightness consistency, temporal persistence and spatial coherency. Hence its performance depends entirely on how well these three assumptions are satisfied. It can safely be assumed that these assumptions are more likely to hold when tracking is performed between features of the same subject (genuine matching) and degrade substantially otherwise (imposter matching). Therefore, one can infer that the performance of the KL tracking algorithm is good for genuine matchings as compared to imposter ones.

4.4.1 Corners having Inconsistent Optical Flow (CIOF)

The direction of pixel motion, termed the optical flow of that pixel, can be computed by the KL-tracking algorithm. A dissimilarity measure, CIOF (Corners having Inconsistent Optical Flow), has been proposed to estimate the KL-tracking performance. It evaluates two geometrical quantities for each potential matching feature pair given by the KL-tracking algorithm.

[a] Vicinity Constraint: The Euclidean distance between any corner and its estimated tracked location should be less than or equal to an empirically selected threshold ($T_d$), which depends upon the amount of translation and rotation in the sample images. A high threshold value signifies more translation, and vice-versa.

[b] Patch-wise Dissimilarity: The tracking error is defined as the pixel-wise sum of absolute differences between a local patch centered at the current corner and the patch at its estimated tracked location. This error should be less than or equal to an empirically selected threshold ($T_e$), which ensures that matching corners have similar neighborhood patches around them.

However, not all tracked corner features may be true matches, because of noise, local non-rigid distortions in the biometric samples, and the fact that inter-class differences can be small while intra-class variations can be large. Hence, the direction of pixel motion (i.e. the optical flow) of each pixel is used to prune out some of the falsely matching corners.

Consistent Optical Flow: It can be noted that true matches have an optical flow which is aligned with the actual affine transformation between the two images being matched. The estimated optical flow angle is quantized into eight directions, and the most consistent direction is the one having the largest number of successfully tracked corner features. Any corner matching pair (i.e. a corner and its corresponding corner) having an optical flow direction other than the most consistent direction is considered a false matching pair and is discarded.

4.4.2 Matching Algorithm

Given the vcode and hcode of two irises, Iris_a and Iris_b, Algorithm 4.6 can be used to compute a dissimilarity score using the CIOF measure. The vcodes of Iris_a and Iris_b are matched to obtain the vertical matching score, while the respective hcodes are matched to generate the horizontal matching score. The corner features having their tracked position and local patch dissimilarity within the thresholds are considered successfully tracked. Since both Iris_a to Iris_b and Iris_b to Iris_a matchings are considered, four sets of successfully tracked corners are computed, viz. $stc^v_{AB}$, $stc^h_{AB}$, $stc^v_{BA}$ and $stc^h_{BA}$. Out of these four corner sets, two correspond to vertical matching while the other two correspond to horizontal matching. The optical flow, i.e. the pixel motion direction, is computed for each successfully tracked corner and is quantized into eight bins at an interval of $\pi/8$. Four histograms (of eight bins


each) are obtained, one for each set of successfully tracked corners, represented by $H^v_{AB}$, $H^h_{AB}$, $H^v_{BA}$ and $H^h_{BA}$. The maximum value in each histogram represents the total number of corners having consistent optical flow; these are represented as $cof^v_{AB}$, $cof^h_{AB}$, $cof^v_{BA}$ and $cof^h_{BA}$. Finally, they are normalized by the total number of corners and converted into horizontal and vertical dissimilarity scores. The final score, $CIOF(Iris_a, Iris_b)$, is obtained by applying the sum rule to the horizontal and vertical matching scores. Such a fusion can significantly boost the performance of the proposed system because some images have more discrimination in the vertical direction while others have it in the horizontal direction.

Algorithm 4.6 CIOF(Iris_a, Iris_b)

Require:
(a) The vcodes $I^v_A$, $I^v_B$ of the two biometric sample images Iris_a, Iris_b respectively.
(b) The hcodes $I^h_A$, $I^h_B$ of the two biometric sample images Iris_a, Iris_b respectively.
(c) $N^v_a$, $N^v_b$, $N^h_a$ and $N^h_b$, the numbers of corners in $I^v_A$, $I^v_B$, $I^h_A$ and $I^h_B$ respectively.
Ensure: Return CIOF(Iris_a, Iris_b).
1: Track all the corners of vcode $I^v_A$ in vcode $I^v_B$ and those of hcode $I^h_A$ in hcode $I^h_B$.
2: Obtain the sets of corners successfully tracked in vcode tracking ($stc^v_{AB}$) and hcode tracking ($stc^h_{AB}$), i.e. those having their tracked position within Td and their local patch dissimilarity under Te.
3: Similarly, compute the successfully tracked corners of vcode $I^v_B$ in vcode $I^v_A$ ($stc^v_{BA}$) as well as of hcode $I^h_B$ in hcode $I^h_A$ ($stc^h_{BA}$).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at an interval of $\pi/8$) and obtain four histograms $H^v_{AB}$, $H^h_{AB}$, $H^v_{BA}$ and $H^h_{BA}$ using the four corner sets $stc^v_{AB}$, $stc^h_{AB}$, $stc^v_{BA}$ and $stc^h_{BA}$ respectively.
5: For each histogram, out of the 8 bins, the bin (i.e. direction) having the maximum number of corners is considered the consistent optical flow direction. The maximum value obtained from each histogram is termed the number of corners having consistent optical flow, represented as $cof^v_{AB}$, $cof^h_{AB}$, $cof^v_{BA}$ and $cof^h_{BA}$.
6: $ciof^v_{AB} = 1 - cof^v_{AB}/N^v_a$; [Corners with Inconsistent Optical Flow (vcode)]
7: $ciof^v_{BA} = 1 - cof^v_{BA}/N^v_b$; [Corners with Inconsistent Optical Flow (vcode)]
8: $ciof^h_{AB} = 1 - cof^h_{AB}/N^h_a$; [Corners with Inconsistent Optical Flow (hcode)]
9: $ciof^h_{BA} = 1 - cof^h_{BA}/N^h_b$; [Corners with Inconsistent Optical Flow (hcode)]
10: return $CIOF(Iris_a, Iris_b) = (ciof^v_{AB} + ciof^h_{AB} + ciof^v_{BA} + ciof^h_{BA})/4$;
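A compact sketch of one directional term of this computation, using OpenCV's pyramidal KL tracker, is given below. Note that OpenCV's returned per-feature error is a per-pixel average of the patch difference, so the Te scale differs from a raw sum of absolute differences; the corner-detector settings and the eight full-circle direction bins are likewise assumptions of this sketch.

import cv2
import numpy as np

def ciof_directional(code_a, code_b, Td=10, Te=25.0, win=5):
    """One direction (A -> B) of Algorithm 4.6 on a pair of vcodes/hcodes."""
    pts = cv2.goodFeaturesToTrack(code_a, maxCorners=500,
                                  qualityLevel=0.01, minDistance=3)
    if pts is None:
        return 1.0                                    # no corners: maximal dissimilarity
    tracked, status, err = cv2.calcOpticalFlowPyrLK(
        code_a, code_b, pts, None, winSize=(win, win))
    flow = (tracked - pts).reshape(-1, 2)
    dist = np.hypot(flow[:, 0], flow[:, 1])
    ok = (status.ravel() == 1) & (dist <= Td) & (err.ravel() <= Te)
    if not ok.any():
        return 1.0
    ang = np.arctan2(flow[ok, 1], flow[ok, 0]) % (2 * np.pi)
    hist = np.bincount((ang // (np.pi / 4)).astype(int), minlength=8)
    return 1.0 - hist.max() / len(pts)                # 1 - cof/N (step 6)

def ciof(va, vb, ha, hb):
    """Symmetric CIOF: average of the four directional scores (step 10)."""
    return (ciof_directional(va, vb) + ciof_directional(vb, va)
            + ciof_directional(ha, hb) + ciof_directional(hb, ha)) / 4.0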


4.5 Databases

Two publicly available iris databases have been used to analyze the proposed iris system. All possible inter-session matchings are performed in the various experiments.

4.5.1 Databases Used

CASIA V4 Interval Database [14]: It contains 2,639 iris images collected from 249 subjects. It has 395 distinct irises, with at most 7 images per iris. The left and right eyes of the same individual are considered as two distinct irises, since their iris patterns are distinct. Iris images are acquired in two sessions under a controlled environment, with a time gap of at least one month between the sessions. The acquired images have a resolution of 320 × 280 and very clear iris texture.

Figure 4.23: Iris Images. (a) CASIA Interval Iris Database; (b) CASIA Lamp Iris Database.

CASIA V4 Lamp [14]: It consists of 16,212 images acquired from 411 subjects. It contains 819 distinct irises and at most 20 images of each iris. These iris images are obtained in a single session under a controlled environment with a lamp


switched on/off. The image resolution is 640 × 480. These images are challenging for our study because nonlinear deformations exist due to the variation of visible illumination. Also, the size of this database is huge, and it is a challenge to get good results on any large database because the number of false acceptances grows very fast with the increase in database size [5]. For both iris databases, image specifications are given in Table 4.2 and sample images are shown in Fig. 4.23.

Dataset          Traits           Subject  Pose  Total Imgs
CASIA-Interval   Left+Right Eye   395      7     2,639
CASIA-Lamp       Left+Right Eye   819      20    16,212

Table 4.2: Iris Database Specifications

4.5.2 Testing Strategy

CASIA V4 Interval database: Images of the first session are used for training while the remaining images are used for testing. Hence a total of 3,657 genuine and 1,272,636 imposter matchings are performed for Interval database testing. CASIA V4 Lamp database: The first 10 images are used for training while the rest are used for testing. Hence there are 78,300 genuine and 61,230,600 imposter matchings for the Lamp database.

                          Subject         Pose  Total   Training  Testing  Genuine    Imposter
                                                                           Matching   Matching
Casia V4 Interval (Iris)  249 (395 Iris)  7     2,639   First 3   Rest     3,657      1,272,636
Casia V4 Lamp (Iris)      411 (819 Iris)  20    16,212  First 10  Last 10  78,300     61,230,600

Table 4.3: Database and Testing Strategy Specifications


Default Parameters
Db        s    t      prmin  prmax  irmin  irmax  W   αrange (radians)
Interval  0.5  0.41   20     90     80     130    15  (−π/4, π/6) ∪ (5π/6, 5π/4)
Lamp      0.5  0.125  16     70     65     120    11  (−π/3, 0) ∪ (π, 4π/3)

Table 4.4: Values of Various Parameters used in Algorithms 4.1 and 4.2

Each database along with the testing strategy specifications is given in Table

4.3. One can observe that a large number of genuine as well as imposter matchings

are considered to evaluate the performance of the proposed system.

4.6 Experimental Results

The experimental results of the proposed iris based authentication system are dis-

cussed in this section.

4.6.1 Iris Segmentation Analysis

The proposed iris segmentation algorithm has been used to segment the iris. The segmentation accuracy is found to be 94.5% and 94.63% for the Interval and Lamp databases respectively. The parameters used for the different databases are given in Table 4.4. Errors on these databases can be classified broadly into three categories:

1. Occlusion :

(a) Eyelid occlusion (affects outer boundary localisation)

(b) Eyelash occlusion (affects inner and outer boundary localisation)

2. Noise :

(a) Specular reflection (reflection on pupil boundary)

(b) Pupil Boundary Noise (Non-circular pupil shape or noise points)


Figure 4.24: Segmentation Error Hierarchy (the number mentioned for each subcategory specifies the number of images of that category for the Interval database).

                      Threshold parameter (t) value
Sub-Category          Mean  Minimum  Maximum  Standard Deviation
Eyelid                0.39  0.36     0.41     0.022
Eyelash               0.39  0.3      0.43     0.040
Specular Reflection   0.44  0.36     0.5      0.066
Pupil Boundary Noise  0.4   0.26     0.5      0.056
Bright image          0.46  0.35     0.52     0.06
Dark image            0.36  0.25     0.51     0.052

Table 4.5: Variation in Threshold needed for Correct Segmentation

3. Illumination :

(a) Bright image (excess illumination)

(b) Dark image (lack of illumination)

The hierarchy of errors is shown in Figure 4.24. Only two critical parameters, viz. the threshold (t) and the angular range (αrange), need to be adjusted, as suggested in Table 4.5 and Table 4.6, for accurate segmentation. The erroneous segmentations are critically analyzed and corrected by adjusting these parameters. This adjustment helps to achieve an accuracy of more than 99.6% for both databases. Some very critically occluded images (< 0.4%) are segmented manually. Some of the images on which the proposed algorithm has failed are shown in Table 4.7.


Sub-Category          t    αrange
Eyelid                -    ↓
Eyelash               ↓    ↓
Specular Reflection   ↑    -
Pupil Boundary Noise  ↑↓   -
Bright image          ↑    -
Dark image            ↓    -

Table 4.6: Required Parametric Variations: '-' stands for no change, '↑' for increasing and '↓' for decreasing the parameter value

Error Type            Interval  Lamp
Eyelid Occlusion      4         300
Eyelash Occlusion     15        288
Specular Reflection   5         68
Pupil Noise           71        160
Bright Image          11        1
Dark Image            39        53

Table 4.7: Segmentation Errors (number of erroneous images of each category in the Interval and Lamp databases; the sample images shown in the thesis are omitted here).

4.6.2 Threshold Selection

Matching performance has been obtained by varying the two thresholds used to compute the CIOF dissimilarity measure. For each database, these values are selected so that the performance of the system is maximized over a validation set, using only vcode matching. The validation set contains images of only the first 100 subjects of that database.


Casia V4 Interval Iris Database
Td   Te   DI      EER(%)    Accuracy(%)  EUC       CRR(%)
6    500  1.596   0.87499   99.368       0.2496    99.8359
7    550  1.8290  0.4102    99.7221      0.06821   99.9179
8    500  1.7033  0.24606   99.766       0.0138    100
8    550  1.8843  0.2187    99.8250      0.00912   100
9    500  1.7247  0.2734    99.7830      0.01551   100
9    550  1.9120  0.2187    99.8191      0.011963  100
9    600  2.0519  0.2187    99.80        0.0048    100
10   500  1.7351  0.2460    99.7836      0.010     100
10   550  1.9272  0.1677    99.832       0.0068    100
10   600  2.069   0.1659    99.837       0.0052    100
11   500  1.7398  0.1958    99.804       0.0029    100
11   600  2.077   0.2151    99.798       0.0071    100
12   500  1.7412  0.2461    99.7975      0.0040    100
12   550  1.9361  0.2450    99.8096      0.0051    100

Casia V4 Lamp Iris Database
Td   Te   DI      EER(%)    Accuracy(%)  EUC       CRR(%)
5    600  1.999   0.7619    99.4087      0.1772    100
5    700  2.2760  0.8571    99.3399      0.1729    99.8412
5    800  2.3272  1.2388    98.9611      0.2888    99.6825
7    600  2.0072  0.741992  99.4653      0.1418    100
7    700  2.2787  0.8714    99.3229      0.1592    99.8412
7    800  2.3062  1.3491    99.7938      0.2904    99.3650

Table 4.8: Threshold Selection for the CASIA V4 Interval and Lamp Iris Databases using only vcode Matching over a Validation Set of the First 100 Subjects.

One threshold, Te (used for the patch-wise dissimilarity), depends upon the patch size, while the other, Td (used for the vicinity constraint), depends upon the amount of affine transformation between the images of a database; hence it varies across databases.

The typical value of Td for the system with maximum CRR and minimum EER is 7 for the Lamp and 10 for the Interval database, with Te = 600 and a patch size of 5 × 5. The results are shown in Fig. 4.25 and Fig. 4.26 along with Table 4.8. This analysis suggests that the Interval database has more affine transformation between iris images of the same subject than the Lamp database.


Figure 4.25: Parameterized ROC Curve of the Proposed System for the Iris Interval Database (only vcode matching)

Description      DI      EER(%)  Accuracy(%)  EUC     CRR(%)
vcode            1.9663  0.4375  99.657       0.0433  99.917
Enhanced vcode   2.069   0.1659  99.837       0.0052  100
hcode            1.387   2.761   97.481       0.4782  99.343
Enhanced hcode   1.701   0.6835  99.357       0.0539  99.7538
fusion           1.8637  0.3280  99.719       0.026   99.917
Enhanced fusion  2.0182  0.1093  99.910       0.0009  100

Table 4.9: Enhancement based Performance boost-up over the Iris Interval Database (in this table, fusion refers to vcode + hcode)

4.6.3 Impact of Enhancement on System Performance

This section analyses the impact of enhancement on the performance boost-up of the system over the CASIA V4 Iris Interval [14] database, to justify the use of the enhancement algorithm.


Figure 4.26: Parameterized ROC Analysis of the Proposed System for the Iris Lamp Database (only vcode matching)

It is observed that the enhancement method significantly improves the iris texture, as is evident from the graphs in Fig. 4.27 and from Table 4.9. From the ROC curves, it is seen that for vcode, hcode and fusion alike, the performance of the system improves significantly after enhancement. One can clearly observe how the proposed enhancement boosts the dissimilarity scores of imposters, which reduces the errors effectively. From Table 4.9, it can be seen that the EER obtained using hcode alone is reduced by a factor of about four (i.e. from 2.761% to 0.684%), and similar improvements are observed for the vcode and fusion approaches.

4.6.4 Comparative Analysis

It is seen from Table 4.10 that the CRR (Rank 1 accuracy) of the proposed system is 100% and 99.87% for the Interval and Lamp databases respectively. Further, the EERs for Interval and Lamp are 0.109% and 1.3% respectively. It can also be observed


Figure 4.27: Enhancement based Performance boost-up in terms of Receiver Operating Characteristic Curves for the Proposed System

          Performance Parameters
Database  DI     CRR%   EER%   Accuracy%  EUC
Interval  2.018  100    0.109  99.91      0.0009
Lamp      1.50   99.87  1.30   98.85      0.240

Table 4.10: Performance Analysis of the Proposed System

that the genuine and imposter scores are well separated, as the decidability index is found to be 2.02 and 1.50 for the Interval and Lamp databases respectively. The accuracy and the error under the curve (EUC) for Interval and Lamp are observed to be 99.91, 0.001 and 98.85, 0.24 respectively, which can be considered good for any verification system.

The proposed system has been compared with state-of-the-art iris recognition systems [19], [45], [46], [60]. The systems proposed in [19] and [46] have been implemented by us, while for the systems in [45] and [60] the results stated in [60] are used.


Interval
Systems        DI      CRR%   EER%
Daugman [19]   1.96    99.46  1.88
Li Ma [60]     -       95.54  2.07
Masek [46]     1.99    99.58  1.09
K. Roy [60]    -       97.21  0.71
Proposed       2.02    100    0.109

Lamp
Daugman [19]   1.2420  98.90  5.59
Proposed       1.50    99.87  1.300

Table 4.11: Comparative Performance Analysis over the CASIA V4 Interval and Lamp Iris Databases

The comparison of the proposed system with other state-of-the-art systems is shown in Table 4.11. The performance of the proposed system is found to be better than that of the reported systems. For both the Interval and Lamp iris databases, the Receiver Operating Characteristic (ROC) curves of the proposed system are shown in Fig. 4.28 and Fig. 4.29 respectively. The ROC curves of the proposed system are compared with the open source Masek's system (Log-Gabor) [46] and Daugman's system (Gabor) [19]. It is observed that the vcode results are always better than the hcode results. Also, the fusion of vcode and hcode performs much better than vcode alone. This performance gain occurs because several images are mis-classified as genuine or imposter by the vcode features while their corresponding hcodes contain discriminative information; hence the discriminative ability of the fused score is enhanced. Masek's system has not been tested on the Lamp database, as its optimal parameters are not known and it performed very poorly with the default set of parameters.


Figure 4.28: Comparative Analysis of Proposed System over Interval Database

Figure 4.29: Comparative Analysis of Proposed System over Lamp Database


Chapter 5

Knuckleprint Recognition System

This chapter deals with the problem of designing an efficient knuckleprint based recognition system. The knuckleprint ROI is extracted by applying a modified version of the Gabor filter to estimate the central knuckle line and point, as shown in Fig. 5.2(b). The central knuckle point is used to extract the knuckleprint ROI consistently from any image. The vertical and horizontal knuckle line based features may be of poor quality; hence they need to be enhanced and transformed to achieve robustness against varying illumination. The system consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed knuckleprint based recognition system is shown in Fig. 5.1. The publicly available PolyU knuckleprint database [56] is used to test the proposed system. The PolyU database contains images of resolution 384 × 288, acquired using a normal web-cam mounted in an indigenous capturing device. The device allows the user to place only one finger at a time and acquires horizontally aligned images. Therefore, at the time of ROI extraction, raw knuckleprint images are assumed to be horizontal and to contain a single finger knuckle.


Figure 5.1: Overall Architecture of the Proposed Knuckleprint Recognition System

5.1 Knuckleprint ROI Extraction

This section proposes an efficient knuckleprint ROI extraction technique. The prime objective of any ROI extraction technique is to segment the same region of interest consistently from all images. The central knuckle point, as shown in Figure 5.2(b), can be used to segment any knuckleprint. Since the knuckleprint is aligned horizontally, one can easily extract from it the central region of interest, which contains rich and discriminative texture as shown in Fig. 5.2(c). The proposed ROI extraction algorithm performs three steps: detection of the knuckle area, the central knuckle-line and the central knuckle-point.

5.1.1 Knuckle Area Detection

In this step, the whole knuckle area is segmented from the background. The acquired knuckleprint may be of poor quality. Each such


Figure 5.2: Knuckleprint ROI Annotation. (a) Raw knuckleprint; (b) Annotated knuckleprint; (c) Knuckleprint ROI.

knuckleprint is enhanced using contrast limited adaptive histogram equalization (CLAHE) [55] to obtain a better edge representation, which helps to detect the knuckle area. CLAHE divides the whole image into blocks of size 8 × 8 and applies histogram equalization over each block. The enhanced image is binarized using Otsu thresholding, which segments the image into two clusters (knuckle region and background region) based on their gray values. Such a binary image is shown in Fig. 5.3(b). It can be observed that the obtained knuckle region may not be accurate because of sensor noise and background clutter; the knuckle boundary is therefore recovered using Canny edge detection. A resultant image is shown in Fig. 5.3(c), and the largest connected component is considered as the required knuckle boundary. The detected boundary is eroded to smoothen it as well as to remove any discontinuity, as shown in Fig. 5.3(d). Finally, all pixels within the convex hull of the knuckle boundary are considered as the knuckle area; Figure 5.3(e) shows such a knuckle area. Some top and bottom rows are assumed to be background and are discarded from the raw image.

Figure 5.3: Knuckle Area Detection. (a) Raw; (b) Binary; (c) Canny; (d) Boundary; (e) FKP area.
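A plausible OpenCV sketch of this knuckle area detection pipeline is given below; the CLAHE clip limit, the Canny thresholds, and the use of a closing operation (rather than erosion) to smooth the thin edge boundary are pragmatic assumptions of the sketch.

import cv2
import numpy as np

def knuckle_area_mask(raw):
    """CLAHE -> Otsu -> Canny -> largest edge component -> convex hull."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(raw)                       # block-wise (8x8) equalization
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    largest = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()  # biggest non-background blob
    boundary = np.uint8(labels == largest) * 255
    boundary = cv2.morphologyEx(boundary, cv2.MORPH_CLOSE,
                                np.ones((5, 5), np.uint8))  # smooth, close gaps
    ys, xs = np.nonzero(boundary)
    hull = cv2.convexHull(np.column_stack((xs, ys)).astype(np.int32))
    mask = np.zeros_like(raw)
    cv2.fillConvexPoly(mask, hull, 255)               # knuckle area = hull interior
    return mask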


5.1.2 Central Knuckle-Line Detection

The central knuckle line is defined as that column of the image with respect to which the knuckle can be considered symmetric; it can be observed in Figure 5.2(b). This line is used to extract the knuckleprint ROI. A very specific and symmetric texture is observed around the central knuckle line, which is used for its detection. To perceive such a specific texture, a knuckle filter is created by modifying the Gabor filter, as defined below.

Figure 5.4: Conventional Gabor Filter. (a) 2D Gaussian; (b) 2D Sinusoid; (c) 2D Gabor; (d) 2D Gabor Filter.

[A] Knuckle Filter: The conventional Gabor filter is created when a complex sinusoid is multiplied with a Gaussian envelope, as defined in Eq. (5.1) and shown in Figure 5.4(c).

$$G(x, y; \gamma, \theta, \psi, \lambda, \sigma) = \underbrace{e^{-\frac{X^2 + Y^2\gamma^2}{2\sigma^2}}}_{\text{Gaussian Envelope}} \times \underbrace{e^{i\left(\frac{2\pi X}{\lambda} + \psi\right)}}_{\text{Complex Sinusoid}} \qquad (5.1)$$

where x and y are the spatial co-ordinates of the filter and X, Y are obtained by

rotating x, y by an angle θ using the following equations :

X = x ∗ cos(θ) + y ∗ sin(θ) (5.2)

Y = −x ∗ sin(θ) + y ∗ cos(θ) (5.3)


Figure 5.5: Curvature Knuckle Filter. (a) Curved knuckle lines; (b) Curvature knuckle filter.

In order to model the curved convex knuckle lines, a knuckle filter is obtained by introducing a curvature parameter into the conventional Gabor filter. The basic Gabor filter equation remains the same (as in Eq. (5.1)); only the X and Y co-ordinates are modified as follows:

X = x ∗ cos(θ) + y ∗ sin(θ) + c ∗ (−x ∗ sin(θ) + y ∗ cos(θ))² (5.4)

Y = −x ∗ sin(θ) + y ∗ cos(θ) (5.5)

The curvature of the Gabor filter can be modulated by the curvature parameter. This curved Gabor filter with parameters (γ = 1, θ = π, ψ = 1, λ = 20, σ = 20) is used for knuckle filter creation. The value of the curvature parameter is varied as shown in Fig. 5.6, and its optimal value for our database is selected heuristically. The proposed knuckle filter is obtained by concatenating two such curved Gabor filters (f1, f2), ensuring that the distance between them is d. The first filter (f1) is obtained using the above mentioned parameters, while the second filter ($f_2 = f_1^{flip}$) is just the vertically flipped version of the first, because knuckleprints are vertically symmetric. In Figure 5.6, several knuckle filters with varying curvature and distance parameters are shown. One can observe that increasing the curvature parameter (c) introduces more and more curvature in the filter. Finally, c = 0.01 and d = 30 are used in the knuckle filter ($F^{0.01,30}_{kp}$).
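The curved Gabor filter of Eqs. (5.1), (5.4) and (5.5), and the two-lobe knuckle filter built from it, can be sketched in NumPy as below; the filter support size and the interpretation of d as the column gap between the two lobes are assumptions of the sketch (only the real part of the complex sinusoid is used here).

import numpy as np

def curved_gabor(size=61, gamma=1.0, theta=np.pi, psi=1.0,
                 lam=20.0, sigma=20.0, c=0.01):
    """Real part of the curved Gabor filter, Eqs. (5.1), (5.4), (5.5)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    Y = -x * np.sin(theta) + y * np.cos(theta)               # Eq. (5.5)
    X = x * np.cos(theta) + y * np.sin(theta) + c * Y**2     # Eq. (5.4)
    envelope = np.exp(-(X**2 + (Y * gamma)**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * X / lam + psi)      # Eq. (5.1), real part

def knuckle_filter(c=0.01, d=30, size=61):
    """Two curved Gabor lobes, the second a vertical flip of the first."""
    f1 = curved_gabor(size=size, c=c)
    f2 = np.fliplr(f1)                       # f2 = flipped f1 (vertical symmetry)
    return np.hstack([f1, np.zeros((size, d)), f2])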

[B] Knuckle Line Extraction: All pixels belonging to the knuckle area are convolved with the knuckle filter $F^{0.01,30}_{kp}$.


Figure 5.6: Knuckle Filters with Varying Parameters. (a) c = 0.00, d = 0; (b) c = 0.00, d = 30; (c) c = 0.01, d = 0; (d) c = 0.01, d = 30; (e) c = 0.04, d = 0; (f) c = 0.04, d = 30.

Pixels on the central knuckle line must have a higher response than others because of the filter's shape and construction. The filter response at each pixel is binarized using the threshold f ∗ max, where max is the maximum knuckle filter response and f ∈ (0, 1) is a fractional value. The binarized filter response is shown in Fig. 5.7(a), superimposed in blue over the knuckle area. The column-wise sum of the filter response is computed for each column, and the central knuckle line is taken as the column having the maximum knuckle filter response, as shown in Fig. 5.7(b).

5.1.3 Central Knuckle-Point Detection

The central knuckle point is required to crop the knuckleprint ROI and must lie on the central knuckle line. Hence the top and bottom points on the central knuckle line belonging to the knuckle area are computed, and their mid-point is considered


Figure 5.7: Knuckleprint ROI Detection. (a) Knuckle filter response, superimposed in blue over the knuckle area; (b) Fully annotated; (c) FKP ROI ($FKP_{ROI}$).

as the central knuckle point, as shown in Figure 5.7(b). The required knuckleprint ROI is extracted as a region of size (2w + 1) × (2h + 1) with the central knuckle point as its center, as shown in Fig. 5.7(c).

Algorithm 5.1 Knuckleprint ROI Detection

Require: Raw knuckleprint image I of size m × n.
Ensure: The knuckleprint ROI $FKP_{ROI}$, of size (2w + 1) × (2h + 1).
1: Enhance the FKP image I to I_e using CLAHE;
2: Binarize I_e to I_b using Otsu thresholding;
3: Apply Canny edge detection over I_b to get I_cedges;
4: Extract the largest connected component in I_cedges as the raw FKP boundary, $FKP^{raw}_{Bound}$;
5: Erode the detected boundary $FKP^{raw}_{Bound}$ to obtain a continuous and smooth FKP boundary, $FKP^{smooth}_{Bound}$;
6: Extract the knuckle area K_a as all pixels of I lying within ConvexHull($FKP^{smooth}_{Bound}$);
7: Apply the knuckle filter $F^{0.01,30}_{kp}$ over all pixels ∈ K_a;
8: Binarize the filter response using f ∗ max as the threshold;
9: Assign the central knuckle line (ckl) as the column having the maximum knuckle filter response;
10: Define the central knuckle point (ckp) as the mid-point of the top and bottom boundary points over ckl ∈ K_a;
11: Extract the knuckle ROI ($FKP_{ROI}$) as the region of size (2w + 1) × (2h + 1) from the raw knuckleprint image I, with ckp as its center point.

Given a raw knuckleprint image I, Algorithm 5.1 can be used to extract the knuckleprint ROI ($FKP_{ROI}$) from it. Some segmented PolyU knuckleprint


Figure 5.8: Some Segmented Knuckleprints. (a) Subject A; (b) Subject B.

database images are shown in Fig. 5.8. One can observe that the proposed algorithm extracts the ROI consistently across the images of the database.

5.2 Knuckleprint Image Quality Extraction

The images acquired from sensors inevitably have a wide distribution of quality. Hence quality assessment should be done during early phases such as data acquisition, so that one can discard bad quality images and recapture better ones. Quality parameters can also reveal the type of deficiency in the image, which can be used to apply a suitable enhancement technique to reduce its effect. The image quality of a knuckleprint is degraded mostly due to fewer or blurred line-like features, defocus, poor contrast, high or low illumination, and reflection produced by the camera flash, as shown in Fig. 5.9. In this section an algorithm is proposed for knuckleprint quality assessment.

Quality of the knuckleprint image has been modeled as a function of the following six attributes: Focus (F), Clutter (C), Uniformity (S), Entropy (E), Reflection (Re), and Contrast and Illumination (Con). Hence the overall quality of the knuckleprint


Figure 5.9: Poor Quality Knuckleprint ROI Samples. (a) Defocus; (b) Less features; (c) High reflection; (d) Poor uniformity.

image can be represented by

Quality = f(F,C, S,E,Re, Con) (5.6)

where f is a function which is learned from the training data using a Support Vector Machine (SVM). Quality attributes from the training set of knuckleprints, along with their respective true quality labels, are used to train the SVM classifier. This creates a model based classifier which can be used to predict the quality of a knuckleprint image from its quality attributes. The six quality attributes are the amount of well focused edges F, the amount of clutter C, the distribution of focused edges S, the block-wise entropy of focused edges E, the reflection caused by the light source and camera flash Re, and the amount of contrast Con.

The most important and vital features in knuckleprint images are the vertical long edges (vle), as shown in Fig. 5.10(a); hence most of the quality parameters begin by analysing vle for knuckleprint quality assessment. The set of pixels corresponding only to strong vertical edges is computed from the input image (I) using the Sobel y-direction kernel, and its connected components are computed. Finally, out of all connected components, only the long ones (longer than an empirically selected threshold tcc) are retained; these constitute the set of pixels which are "long" and vertical, termed vle.
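A sketch of this vle computation follows; the edge-strength threshold, the minimum component height used as tcc, and the use of the x-derivative Sobel response (which reacts to vertical lines) are assumptions of the sketch.

import cv2
import numpy as np

def vertical_long_edges(img, t_edge=100, t_cc=20):
    """Strong vertical edge pixels whose connected component is 'long'."""
    gx = cv2.Sobel(img, cv2.CV_32F, 1, 0, ksize=3)     # responds to vertical lines
    strong = (np.abs(gx) > t_edge).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(strong)
    vle = np.zeros_like(strong)
    for i in range(1, n):                              # skip background label 0
        if stats[i, cv2.CC_STAT_HEIGHT] >= t_cc:       # keep only long components
            vle[labels == i] = 1
    return vle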


5.2.1 Focus (F)

The defocus blurring effect occurs when the focal point of the sensor's lens is not at the reference object during image acquisition, as shown in Fig. 5.9(a). Frequency analysis of defocused images reveals that their 2D Fourier spectrum is usually dense towards lower frequencies, while a well focused image possesses a uniform spectrum. The number of well focused edge pixels is considered for quality assessment. It is computed by convolving the image with the proposed 6 × 6 kernel K defined in Eq. (5.7), which approximates a high-frequency band-pass filter of the 2D Fourier spectrum. Only those pixels are retained that are well focused (i.e. have a convolved value of more than an empirically selected threshold tf); they constitute the set of well focused pixels, termed wf, as shown in Fig. 5.10(b).

$$K = \begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & -8 & -8 & 1 & 1 \\
1 & 1 & -8 & -8 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix} \qquad (5.7)$$

The set of pixels Fmap, obtained by intersecting the pixel sets vle and wf as shown in Fig. 5.10(c), is later used as the most significant region. Finally, the focus quality parameter F is defined as the number of well focused, vertically aligned long edge pixels, computed by counting the number of pixels in Fmap.
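Combining the kernel of Eq. (5.7) with the vle set, the focus attribute can be sketched as below; the value of the threshold tf is an assumption.

import numpy as np
from scipy.signal import convolve2d

K = np.ones((6, 6))
K[2:4, 2:4] = -8                      # the band-pass kernel of Eq. (5.7)

def focus_attribute(img, vle, t_f=200.0):
    """F = |Fmap|, where Fmap = wf AND vle (well focused long vertical edges)."""
    response = convolve2d(img.astype(float), K, mode='same')
    wf = response > t_f               # well focused pixels
    fmap = wf & vle.astype(bool)      # intersection with long vertical edges
    return int(fmap.sum()), fmap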

5.2.2 Clutter (C )

The short vertical edge pixels that are well focused can be considered as clutter, as shown in Fig. 5.10(d), because they can degrade the quality of the image. They are usually present due to abrupt discontinuities in the edge structure caused by several possible extrinsic factors.


Figure 5.10: Defocus based Quality Attributes F and C. (a) vle; (b) wf; (c) Fmap (AND of vle and wf); (d) Clutter (short edges).

Clutter creates false features that can confuse any recognition algorithm. The quality parameter clutter (C) is defined as the ratio of short vertically aligned strong edge pixels to the longer ones. It is inversely proportional to the image quality.

5.2.3 Uniformity based Quality Attribute (S)

In any good quality image, the texture based features should be distributed uniformly throughout the whole image. Several images in the database have only their left or right half well focused; some are shown in Fig. 5.9(d). The focus parameter F may indicate good quality even though half of the image is of very poor quality. Hence uniformity in the texture distribution should be given some importance. The quality attribute S is proposed, which is directly proportional to the uniformity of the texture distribution, as shown in Fig. 5.11. The pixel set Fmap, as defined for the focus parameter, is clustered using the K-means algorithm with K = 2, because knuckleprint images have some symmetry along the Y-axis. Some statistical and geometrical parameters of the two clusters obtained from this 2-means clustering are used to obtain the value of S, as described in Algorithm 5.2.


Algorithm 5.2 Uniformity based Quality Attribute (S)

Require: The vle and wf pixel sets for the input image I of size m × n.
Ensure: Return the value S for the input image I.
1: Fmap = and(wf, vle); [focus mask]
2: M1, M2 = mid-points of the left half (n/2, n/2) and the right half ((m+n)/2, n/2) of the input image I;
3: Apply 2-means clustering over the pixel set Fmap;
4: C1, C2, nc1, nc2, std1, std2 = mean location, number of pixels and standard deviation of the left and right clusters respectively;
5: d1, d2 = Euclidean distances between C1 and M1, and between C2 and M2, respectively;
6: d = 0.7 ∗ max(d1, d2) + 0.3 ∗ min(d1, d2);
7: pr = max(nc1, nc2) / min(nc1, nc2); [cluster point ratio]
8: stdr = max(std1, std2) / min(std1, std2); [cluster standard deviation ratio]
9: combr = 0.8 ∗ pr + 0.2 ∗ stdr;
10: Dstd = 1 − d / sqrt(std1² + std2²);
11: Dnc = 1 − d / sqrt(nc1² + nc2²);
12: S = 0.5 ∗ d + 0.2 ∗ combr + 0.15 ∗ Dstd + 0.15 ∗ Dnc

Figure 5.11: Uniformity based Quality Attribute (S). (a) Non-uniform texture (0.221); (b) Uniform texture (0.622).
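A near-literal transcription of Algorithm 5.2 into Python is sketched below, with scikit-learn's KMeans standing in for the 2-means step. The reference midpoints M1, M2 follow the (extraction-recovered) expressions of Algorithm 5.2, and the thesis presumably also normalizes the distance term so that S falls in (0, 1), which is not reproduced here.

import numpy as np
from sklearn.cluster import KMeans

def uniformity_attribute(fmap):
    """Uniformity attribute S of Algorithm 5.2 over the Fmap pixel set."""
    m, n = fmap.shape
    ys, xs = np.nonzero(fmap)
    pts = np.column_stack((xs, ys)).astype(float)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(pts)
    clusters = [pts[labels == k] for k in (0, 1)]
    clusters.sort(key=lambda p: p[:, 0].mean())        # left cluster first
    centers = [p.mean(axis=0) for p in clusters]
    nc = [len(p) for p in clusters]
    std = [p.std() for p in clusters]
    mids = [np.array([n / 2.0, n / 2.0]),              # M1, M2 as in Algorithm 5.2
            np.array([(m + n) / 2.0, n / 2.0])]
    d1, d2 = [np.linalg.norm(centers[k] - mids[k]) for k in (0, 1)]
    d = 0.7 * max(d1, d2) + 0.3 * min(d1, d2)
    pr = max(nc) / min(nc)                             # cluster point ratio
    stdr = max(std) / min(std)                         # cluster std ratio
    combr = 0.8 * pr + 0.2 * stdr
    Dstd = 1 - d / np.hypot(std[0], std[1])
    Dnc = 1 - d / np.hypot(nc[0], nc[1])
    return 0.5 * d + 0.2 * combr + 0.15 * Dstd + 0.15 * Dnc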

5.2.4 Entropy based Quality Attribute (E)

The most common statistical measure used to quantify the amount of information in any gray scale image I is the entropy value, defined by:

$$e = -\sum_{i=0}^{255} hist[i] \cdot \log_2\big(hist[i]\big) \qquad (5.8)$$


where hist[i] is the i-th element of the 256-bin gray level histogram, hist, of the input image I. The image is divided into blocks of size 5 × 5 and the block-wise entropy is calculated using Eq. (5.8), as shown in Figs. 5.12(b) and 5.12(e). Since all blocks do not carry the same amount of importance, only blocks having more well focused, long, vertically aligned edge pixels (using Fmap as defined in Section 5.2.1) than a predefined, empirically selected threshold tfm are considered significant. Finally, the entropy based quality attribute (E) is obtained by summing up the entropy values of all significant blocks, as shown in Figs. 5.12(c) and 5.12(f).

Figure 5.12: Entropy based Quality Attribute E. (a) Low informative sample, (b) its block-wise entropy, (c) its significant-block entropy; (d) high informative sample, (e) its block-wise entropy, (f) its significant-block entropy.
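A sketch of the block-wise computation of E follows; the significance threshold tfm and the treatment of Eq. (5.8)'s histogram as a normalized (probability) histogram are assumptions of the sketch.

import numpy as np

def block_entropy(block):
    """Entropy of one gray-scale block, following Eq. (5.8)."""
    hist = np.bincount(block.ravel().astype(np.int64), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_attribute(img, fmap, t_fm=5, bs=5):
    """E = sum of entropies of the 'significant' bs x bs blocks."""
    E = 0.0
    for r in range(0, img.shape[0] - bs + 1, bs):
        for c in range(0, img.shape[1] - bs + 1, bs):
            if fmap[r:r + bs, c:c + bs].sum() > t_fm:  # enough Fmap pixels
                E += block_entropy(img[r:r + bs, c:c + bs])
    return E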

5.2.5 Reflection based Quality Attribute (Re)

High reflection can be caused by the light source or camera flash, and it creates a patch of very high intensity gray values. The unique line based information within this patch is completely ruined, leading to severe image quality degradation. This reflection patch is identified using adaptive thresholding, and it can be ignored while matching. The sample knuckleprint is repeatedly thresholded to estimate the most accurate reflection patch intensity level, starting from a high gray level and


gradually reducing it. After each thresholding step, the number of pixels having a gray value above the threshold is calculated. This count keeps changing significantly as long as some of the nearby area around the reflection patch is not yet captured by the previous threshold. The thresholding procedure terminates when the count saturates (i.e. when the difference in the count before and after thresholding is less than an empirically selected value tr). After termination, the full reflection patch has been identified, as shown in Fig. 5.13(b). The reflection based quality attribute (Re) is defined as the fraction of pixels belonging to the reflection patch; hence it is inversely proportional to the image quality.

Figure 5.13: Reflection based Quality Attribute Re. (a) Original image; (b) Computed reflection patch.
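The adaptive thresholding loop for Re can be sketched as follows; the starting gray level, the step size and the saturation margin tr are assumed values.

import numpy as np

def reflection_attribute(img, start=250, step=5, t_r=50):
    """Re = fraction of pixels in the reflection patch (count saturation)."""
    prev = int((img > start).sum())
    thr = start - step
    while thr > 0:
        count = int((img > thr).sum())
        if count - prev < t_r:        # count saturated: patch fully covered
            break
        prev, thr = count, thr - step
    return prev / img.size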

5.2.6 Contrast based Quality Attribute (Con)

The quality of the knuckleprint image often gets severely affected by very poor or very strong lighting conditions. Large illumination variations can suppress the discriminative line based features and hence degrade the overall uniqueness of biometric images. The contrast of the input image (I) gives information about the dynamic gray level range present in the image; hence it can be used to infer whether the image is too dark or too bright. Basically, it estimates the uniformity of illumination throughout the image. The whole gray level range is divided into three groups: (0, 75), (76, 235) and (236, 255). The contrast based quality attribute (Con) is defined as the fraction of pixels belonging to the mid gray level range (i.e. (76, 235)), as this indicates a moderate intensity range.
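This attribute reduces to a one-line computation, sketched below.

import numpy as np

def contrast_attribute(img):
    """Con = fraction of pixels in the moderate gray range 76..235."""
    return float(((img >= 76) & (img <= 235)).mean())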


Figure 5.14: Knuckleprint Quality Estimation. Top, middle and bottom quality images for each proposed quality parameter: (a) Focus; (b) Clutter; (c) Uniformity; (d) Entropy; (e) Specular Reflection; (f) Contrast.

5.2.7 Quality Class Determination

Similar to the iris quality attribute fusion discussed in Section 4.2.7, a Support Vector Machine (SVM) is trained using images of the first 100 subjects from the database. The actual quality classes (i.e. the ground truth) of the training images are assigned manually. This set of quality attributes, along with the quality class label of each knuckleprint image, is used to train the SVM to learn the function f defined in Eq. (5.6). The trained classifier model is then used for quality prediction: whenever a new knuckleprint image is given to the algorithm, all quality parameters are normalized to the range (0, 1) and passed to the trained SVM classifier, which returns the quality class of the knuckleprint image. In Fig. 5.14 some top, middle and bottom quality images with respect to each proposed quality parameter are shown.
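A compact sketch of this training and prediction step is given below, with scikit-learn standing in for the SVM implementation (the thesis does not name a particular library, and the kernel choice here is a default assumption).

import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

def train_quality_model(X, y):
    """X: rows of the six attributes [F, C, S, E, Re, Con];
    y: manually assigned quality-class labels (the ground truth)."""
    scaler = MinMaxScaler().fit(X)               # normalize attributes to (0, 1)
    clf = SVC().fit(scaler.transform(X), y)      # learn f of Eq. (5.6)
    return scaler, clf

def predict_quality(scaler, clf, attrs):
    """Predict the quality class of a new knuckleprint from its attributes."""
    return clf.predict(scaler.transform(np.atleast_2d(attrs)))[0]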


5.3 Knuckleprint Image Preprocessing

The extracted region of interest (ROI) of the knuckleprint is generally of poor contrast. Hence the image enhancement algorithm discussed in Section 4.3.1 is applied over the ROI. The enhanced knuckleprint image possesses better quality texture, as shown in Fig. 5.15.

Figure 5.15: Enhanced Knuckleprint. (a) Original; (b) Background illumination; (c) Uniform illumination; (d) Enhanced; (e) Noise removal.

In order to obtain robust representations (vcode and hcode) that can tolerate a small amount of illumination variation, the images are transformed using the proposed LGBP transformation discussed in Section 4.3.2. An original knuckle along with its vcode and hcode is shown in Fig. 5.16.

Figure 5.16: Original and Transformed (vcode, hcode) Knuckleprint ROIs. (a) Original knuckleprint; (b) Knuckle vcode; (c) Knuckle hcode.

In Fig. 5.17, one raw knuckleprint image is considered under varying illumination and is shown along with the corresponding vcodes. One can observe that the original knuckleprint (Fig. 5.17(a)) has undergone very severe illumination variation (Figs. 5.17(b)-5.17(f)), but the corresponding vcodes vary little (Figs. 5.17(g)-5.17(l)). This justifies the use of the proposed transformation and demonstrates its robustness against varying illumination.


Figure 5.17: Illumination Invariance of the LGBP Transformation. The first row ((a)-(f)) shows the same image under the original and five different illumination conditions; the second row ((g)-(l)) shows their corresponding vcodes.

5.4 Knuckleprint Feature Extraction and Matching

The feature extraction and matching algorithm used in the iris based recognition system can be used for knuckleprints as well. The corner based features [63] are extracted from both the vcode and hcode obtained from any knuckleprint ROI, and KL tracking [44] is used to track the corner features in the corresponding images for matching two knuckleprint ROIs. The proposed CIOF dissimilarity measure is used to estimate the performance of the KL tracking in order to differentiate between genuine and imposter matchings; the performance of the KL tracking algorithm is good for genuine matchings and poor for imposter ones. All the steps applied to a raw knuckleprint image for its recognition are shown in Fig. 5.18.

5.4.1 Corners having Inconsistent Optical Flow (CIOF)

The direction of pixel motion is termed the optical flow of that pixel. It can be computed using the KL-tracking algorithm. The dissimilarity measure CIOF (Corners having Inconsistent Optical Flow) proposed in Section 4.4.1 is used to estimate the KL-tracking performance.


Figure 5.18: Knuckleprint Recognition. (a) Original (raw); (b) Localized; (c) Cropped; (d) Uniform illumination; (e) Enhanced; (f) vcode; (g) hcode; (h) Features.

Apart from the vicinity constraint and the patch-wise dissimilarity, one more statistical constraint, viz. the correlation bound, is evaluated for each potential matching feature pair given by the KL-tracking algorithm; it is defined as follows:

Correlation Bound: The phase only correlation (POC) [47] (as defined in Section 3.4) between a local patch centered at any feature and the patch at its estimated tracked location should be at least equal to an empirically selected threshold Tcb. This bound ensures that the local patches around each potential matching feature pair are correlated.
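For reference, a minimal NumPy sketch of the POC peak used by this bound is given below; it follows the standard phase-only correlation definition (inverse FFT of the normalized cross-power spectrum), with the small constant guarding against division by zero being an implementation detail of the sketch.

import numpy as np

def poc_peak(patch_a, patch_b):
    """Peak of the phase-only correlation surface between two equal-size
    patches; the matcher compares this value against Tcb."""
    Fa = np.fft.fft2(patch_a)
    Fb = np.fft.fft2(patch_b)
    cross = Fa * np.conj(Fb)
    cross /= np.abs(cross) + 1e-12        # keep phase information only
    return float(np.real(np.fft.ifft2(cross)).max())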

Given the vcode and hcode of two images, knuckle_a and knuckle_b, Algorithm 5.3 can be used to compute a dissimilarity score using the CIOF measure. The vcodes are matched to obtain the vertical matching score, while the respective hcodes are matched to generate the horizontal matching score. The corner features whose tracked positions satisfy the three constraints mentioned above (viz. the vicinity constraint, patch-wise dissimilarity and correlation bound) are considered successfully tracked. Consistent global corner optical flow is further used to prune out some of the false matching corners, as mentioned in Section 4.4.1: the estimated optical flow angle of each potential matching pair is quantized into eight directions, the most consistent direction is chosen, and any corner matching pair with a different optical flow direction is discarded. Finally, the ratio of unsuccessfully


tracked corners to the total number of corners is considered as the dissimilarity measure, CIOF. Algorithm 5.3 illustrates all the steps. Some properties of the CIOF distance measure computed between two images A, B are shown below:

1. CIOF (A,B) = CIOF (B,A).

2. CIOF (A,A) = 0.

3. CIOF (A,B) always lies in the range [0, 1].

4. CIOF (A,B) will be high if A and B belong to different subjects.

5.5 Database Specifications

The largest publicly available knuckleprint database has been used to analyze the performance of the proposed knuckleprint system. Inter-session matchings are performed in the various experiments.

Figure 5.19: Some Images from Knuckleprint Database [56] used in this work

PolyU [56]: This is a huge knuckleprint database consisting of 7,920 FKP sample images obtained from 165 subjects in two sessions. On average, the time interval between the two sessions is 25 days. In each session, 6 images of each of 4 fingers (the distinct index and middle fingers of both hands) are collected. Hence, data of a total of 165 × 4 = 660 distinct knuckles is collected. Out of the 165 subjects, 143 belong to the age group 20-30 and the others to the age group 30-50.


Algorithm 5.3 CIOF(knuckle_a, knuckle_b)

Require:
(a) The vcodes $I^v_A$, $I^v_B$ of the two knuckleprint images knuckle_a, knuckle_b respectively.
(b) The hcodes $I^h_A$, $I^h_B$ of the two knuckleprint images knuckle_a, knuckle_b respectively.
(c) $N^v_a$, $N^v_b$, $N^h_a$ and $N^h_b$, the numbers of corners in $I^v_A$, $I^v_B$, $I^h_A$ and $I^h_B$ respectively.
Ensure: Return CIOF(knuckle_a, knuckle_b).
1: Track all the corners of vcode $I^v_A$ in vcode $I^v_B$ and those of hcode $I^h_A$ in hcode $I^h_B$.
2: Obtain the sets of corners successfully tracked in vcode tracking ($stc^v_{AB}$) and hcode tracking ($stc^h_{AB}$), i.e. those having their tracked position within Td, their local patch dissimilarity under Te, and a patch-wise correlation at least equal to Tcb.
3: Similarly, compute the successfully tracked corners of vcode $I^v_B$ in vcode $I^v_A$ ($stc^v_{BA}$) as well as of hcode $I^h_B$ in hcode $I^h_A$ ($stc^h_{BA}$).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at an interval of $\pi/8$) and obtain four histograms $H^v_{AB}$, $H^h_{AB}$, $H^v_{BA}$ and $H^h_{BA}$ using the four corner sets $stc^v_{AB}$, $stc^h_{AB}$, $stc^v_{BA}$ and $stc^h_{BA}$ respectively.
5: For each histogram, out of the 8 bins, the bin (i.e. direction) having the maximum number of corners is considered the consistent optical flow direction. The maximum value obtained from each histogram is termed the number of corners having consistent optical flow, represented as $cof^v_{AB}$, $cof^h_{AB}$, $cof^v_{BA}$ and $cof^h_{BA}$.
6: $ciof^v_{AB} = 1 - cof^v_{AB}/N^v_a$; [Corners with Inconsistent Optical Flow (vcode)]
7: $ciof^v_{BA} = 1 - cof^v_{BA}/N^v_b$; [Corners with Inconsistent Optical Flow (vcode)]
8: $ciof^h_{AB} = 1 - cof^h_{AB}/N^h_a$; [Corners with Inconsistent Optical Flow (hcode)]
9: $ciof^h_{BA} = 1 - cof^h_{BA}/N^h_b$; [Corners with Inconsistent Optical Flow (hcode)]
10: return $CIOF(knuckle_a, knuckle_b) = (ciof^v_{AB} + ciof^h_{AB} + ciof^v_{BA} + ciof^h_{BA})/4$;


5.5.1 Testing Strategy

For testing, all 6 images of the first session are used for training while the images of the second session are used for testing. Hence, a total of 23,760 genuine and 15,657,840 imposter matchings are considered (660 knuckles × 6 × 6 image pairs give 23,760 genuine matchings, and 660 × 659 cross-knuckle pairings × 36 give 15,657,840 imposter matchings). The database along with the testing strategy specifications is given in Table 5.1, from which one can observe that a large number of genuine as well as imposter matchings is needed to evaluate the performance of the proposed knuckleprint system.


Table 5.1: Database Specifications

                      Subject             Pose  Total  Training  Testing  Genuine   Imposter
                                                                          Matching  Matching
PolyU (Knuckleprint)  165 (660 Knuckles)  12    7,920  First 6   Last 6   23,760    15,657,840


5.6 Performance Analysis

5.6.1 Knuckleprint Segmentation

The PolyU knuckleprint database contains 12 images of each of the 660 knuckles, out of which 7,484 images are correctly segmented. Therefore, the proposed segmentation algorithm performs with an accuracy of 94.49% over the PolyU [56] knuckleprint database; one can observe that the proposed algorithm extracts the ROI consistently. Some images for which the proposed algorithm fails to segment are shown in Fig. 5.20. The algorithm fails mainly due to poor image quality. Other reasons for segmentation failure include the lack of the assumed horizontal knuckle alignment and missing symmetric knuckle texture, which is what the knuckle filter captures. Multiple finger knuckles in an image also lead to improper segmentation. Such images are segmented manually.

5.6.2 Threshold Selection

The matching performance of the proposed CIOF dissimilarity measure depends

on three thresholds viz. Te, Td and Tcb. The values of these parameters are selected


Figure 5.20: Failed Knuckle ROI Detection. (a) Poor quality; (b) Multiple knuckles; (c) Not horizontal; (d) Improperly placed.

to maximize the system performance over a validation set containing only the left index images of the first 50 subjects, using only vcode matchings. Several sets of thresholds are considered during testing; the best parametric set in terms of performance is selected and reported, along with a few other details, in Table 5.2.

Table 5.2: Parameterized Performance Analysis of the Proposed System on the PolyU Knuckleprint Database (only vcode matching)

PolyU Knuckleprint Database
Td   Te    Tcb     DI     EER(%)  Accuracy(%)  EUC    CRR(%)
22   2000  0.17    3.50   1.51    98.83        0.60   99.69
24   2000  0.17    3.51   1.51    98.83        0.60   99.69
22   2000  0.3     3.37   1.31    99.24        0.33   99.79
24   2000  0.3     3.379  1.313   99.24        0.337  99.79
22   2000  0.4     2.56   1.01    99.29        0.376  99.79
22   2200  0.3     3.279  1.11    99.29        0.291  99.79
20   2200  0.3     3.274  1.11    99.24        0.306  99.79
17   2200  0.3     3.268  1.212   99.19        0.384  99.79
31   2600  0.4508  2.183  1.212   99.141       0.381  99.79

The threshold values for which the proposed knuckleprint system achieves maximum CRR and minimum EER are Te = 2000 with a patch size of 11 × 11, Td = 22, and Tcb = 0.4 over the PolyU knuckleprint database. They are shown in Fig. 5.21 and in Table 5.2. The patch size and the Td value are bigger for knuckleprint images than for iris images because knuckleprint images have long, line-like, coarser features; also, they are not normalized as in the case of the iris.


Figure 5.21: Parameterized ROC Analysis of the Proposed System for the PolyU Knuckleprint Database. The left index images of the first 50 subjects are used (only vcode matchings).

5.6.3 Comparative Analysis

Algorithm                     Equal Error Rate
Compcode [35]                 1.386
BOCV [1]                      1.833
ImCompcode and MagCode [71]   1.210
MoriCode [27]                 1.201
MtexCode [27]                 1.816
MoriCode and MtexCode [27]    1.0481
vcode                         1.5151
hcode                         4.2929
vcode + hcode                 0.934343

Table 5.3: Comparative Performance Analysis over the PolyU Knuckleprint Database (results as reported in [27])

The performance of the proposed system over the PolyU knuckleprint database is


compared with some well known systems; the comparison is given in Table 5.3. The EER values as reported in [27] are used for the comparison, as they have adopted the same testing strategy. The ROC curve for the performance analysis of the proposed system over the PolyU database is shown in Fig. 5.22. One can observe from Table 5.3 and Fig. 5.22 that the individual hcode performance is not as impressive as that of vcode, but the fusion of both (vcode and hcode) significantly boosts the system performance. Such a fusion is very useful because some images have more discrimination in the vertical direction while others have it in the horizontal direction. Also, it is observed that the lowest EER is achieved with the proposed fusion, which is much better than the fusion of MoriCode and MtexCode [27].

Figure 5.22: Performance of the Proposed System over the PolyU Knuckleprint Database


Chapter 6

Palmprint Recognition System

This chapter deals with the problem of designing an efficient palmprint based recognition system. The palmprint ROI is extracted using the key-point segmentation algorithm proposed in [9]. Features like the principal palm lines, wrinkles and ridges are used for matching two palmprints. Like any other biometric system, the palmprint based recognition system consists of five major tasks, viz. ROI extraction, quality estimation, ROI preprocessing, feature extraction and matching. The overall architecture of the proposed system is shown in Fig. 6.1. Two publicly available palmprint databases, CASIA [15] and PolyU [57], are used to analyse the performance of the proposed system.

6.1 Palmprint ROI Extraction

The algorithm proposed in [9] for palmprint ROI extraction has been used. The hand images are thresholded to obtain a binarized image, from which the hand contour is extracted. Four key-points (X1, X2, V1, V2) on the hand contour are computed as shown in Fig. 6.2(b), where X1, X2 are the left- and right-most hill points and V1, V2 the left- and right-most valley points respectively. Two more key-points, C1 and C2, are then computed:


Figure 6.1: Overall Architecture of the Proposed Palmprint Recognition System

point of the hand contour and the line passing from V1 with a slope of 45◦, and C2 is the intersection point of the hand contour and the line passing from V2 with a slope of 60◦, as shown in Fig. 6.2(c). Finally, the midpoints of the line segments V1C1 and V2C2 are joined; this segment is considered as one side of the square ROI. The final extracted palmprint ROI is shown in Fig. 6.2(d). The various steps involved in ROI extraction are shown in Fig. 6.2, and Algorithm 6.1 can be used to extract the ROI from any palmprint image.

(a) Original (b) Contour (c) Key Points (d) Palmprint ROI

Figure 6.2: Palmprint ROI Extraction (Images are taken from [9])


Algorithm 6.1 Palmprint ROI Extraction

Require: Full acquired palmprint image I of dimension m × n as shown in Fig. 6.2(a).
Ensure: The cropped palmprint ROI as shown in Fig. 6.2(d).
1: Threshold the palmprint image I to extract the hand contour C.
2: Over the hand contour C, find the coordinates of the four key points X1, X2, V1, V2 as shown in Fig. 6.2(b).
3: Compute C1 as the intersection point of the hand contour and the line passing from V1 with a slope of 45◦.
4: Compute C2 as the intersection point of the hand contour and the line passing from V2 with a slope of 60◦.
5: Join the midpoints of the line segments V1C1 and V2C2; the resulting segment forms one side of the required square palmprint ROI.
6: Extract the required square palmprint ROI as shown in Fig. 6.2(d).
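A minimal Python/OpenCV sketch of these steps is given below. The valley key points V1 and V2 are assumed to be already available (e.g., from the key-point detection of [9]); the ray/contour intersection and the angle convention are simplifications, not the exact formulation of [9].

    import cv2
    import numpy as np

    def hand_contour(gray):
        # Step 1: binarize the hand image (Otsu) and keep the largest contour.
        _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
        return max(contours, key=cv2.contourArea).reshape(-1, 2).astype(float)

    def ray_contour_point(contour, origin, angle_deg):
        # Steps 3-4 (approximate): farthest contour point lying close to the
        # ray cast from `origin` at `angle_deg` (angle convention is assumed).
        theta = np.deg2rad(angle_deg)
        d = np.array([np.cos(theta), np.sin(theta)])
        rel = contour - origin
        t = rel @ d                                   # signed distance along the ray
        off = np.linalg.norm(rel - np.outer(t, d), axis=1)
        idx = np.where((t > 5) & (off < 2.0))[0]      # points ahead of and near the ray
        if idx.size == 0:
            raise ValueError("ray does not meet the contour")
        return contour[idx[np.argmax(t[idx])]]

    def extract_roi(gray, v1, v2, side=128):
        # Steps 5-6: midpoints of V1C1 and V2C2 give one side of the square
        # ROI, which is then warped to a fixed-size patch.
        c = hand_contour(gray)
        m1 = (np.asarray(v1, float) + ray_contour_point(c, np.asarray(v1, float), 45.0)) / 2
        m2 = (np.asarray(v2, float) + ray_contour_point(c, np.asarray(v2, float), 60.0)) / 2
        u = m2 - m1
        n = np.array([-u[1], u[0]])                   # perpendicular of equal length
        src = np.float32([m1, m2, m1 + n])
        dst = np.float32([[0, 0], [side - 1, 0], [0, side - 1]])
        return cv2.warpAffine(gray, cv2.getAffineTransform(src, dst), (side, side))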

6.2 Palmprint Preprocessing

The extracted region of interest (ROI) of palmprint is generally of poor contrast.

The image enhancement algorithm discussed in Section 4.3.1 is applied over the

ROI. The enhanced palmprint has better quality texture as shown in Fig. 6.3. It

uses the local block average as an estimate of the background illumination, which is subtracted from the original ROI to obtain the uniformly illuminated ROI shown in Fig. 6.3(c). Finally, the image is enhanced using CLAHE [55] and noise is removed using Wiener filtering [68]. The enhanced palmprint ROI is shown in Fig. 6.3(e).

(a) Original (b) Bg Illum. (c) Uni. Illum. (d) Enhanced (e) Noise Removal

Figure 6.3: Palmprint Image Enhancement
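A compact sketch of this enhancement chain follows; the block size, the CLAHE settings and the Wiener window are illustrative values, not the thesis's tuned parameters.

    import cv2
    import numpy as np
    from scipy.signal import wiener

    def enhance_palm_roi(roi, block=16, clip=2.0):
        # Background illumination as the local block average, upsampled back
        # to the ROI size and subtracted (Fig. 6.3(b)-(c)).
        h, w = roi.shape
        bg = cv2.resize(cv2.resize(roi, (w // block, h // block),
                                   interpolation=cv2.INTER_AREA),
                        (w, h), interpolation=cv2.INTER_CUBIC)
        uniform = cv2.normalize(roi.astype(np.int16) - bg.astype(np.int16),
                                None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        # CLAHE enhancement followed by Wiener denoising (Fig. 6.3(d)-(e)).
        clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=(8, 8))
        enhanced = clahe.apply(uniform)
        return wiener(enhanced.astype(float), (5, 5)).clip(0, 255).astype(np.uint8)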

In order to obtain robust representation (vcode and hcode) that can tolerate


small amounts of illumination variation, the images are transformed using the LGBP transformation. An original palm along with its vcode and hcode is shown in Fig. 6.4. The transformation uses the sign of the local gradient (vertical and horizontal) around each pixel to obtain an 8-bit LGBP code, which is robust to small illumination variations.

(a) Orig. Palmprint (b) Palm vcode (c) Palm hcode

Figure 6.4: Original and Transformed (vcode, hcode) for Palmprint ROI
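The exact LGBP definition follows Section 4.3.1; the sketch below is only an LBP-style approximation of the stated idea (an 8-bit code built from gradient signs over the 8-neighborhood), so its details should be treated as assumptions.

    import numpy as np

    def lgbp_code(img, axis):
        # axis=1 differentiates horizontally (responds to vertical edges),
        # axis=0 vertically; which corresponds to vcode/hcode is an assumption.
        g = np.gradient(img.astype(float), axis=axis)
        sign = (g > 0).astype(np.uint8)
        padded = np.pad(sign, 1)
        h, w = img.shape
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                   (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros((h, w), dtype=np.uint8)
        for bit, (dy, dx) in enumerate(offsets):
            # One bit per neighbour: set if the neighbour's gradient sign is positive.
            code |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w] << bit
        return code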

6.3 Palmprint Feature Extraction and Matching

The feature extraction and matching algorithm used for knuckleprint in Section 5.4

is also used for palmprint, since both traits possess similar types of features. In order to match two palmprint ROIs, corner features [63] are extracted from both vcode and hcode and are tracked using the KL tracking algorithm [44] in the corresponding images. The KL tracking is expected to perform well for genuine matchings and poorly for imposter matchings. Hence, a dissimilarity measure, Corners having Inconsistent Optical Flow (CIOF), can be used to estimate the performance of KL tracking and thereby differentiate between genuine and imposter matchings. Steps

that are applied over any raw palmprint image for its recognition are shown in Fig.

6.5.


(a) Original (Raw) (b) Localized (c) Cropped (d) Uni. Illu.

(e) Enhanced (f) vcode (g) hcode (h) Features

Figure 6.5: Steps of the Palmprint Recognition System

6.3.1 Corners having Inconsistent Optical Flow (CIOF)

All corners are tracked using KL-tracking. The direction in which any corner has

moved is termed its optical flow direction. A dissimilarity measure, CIOF (Corners having Inconsistent Optical Flow), has been proposed to estimate the KL-tracking performance. The CIOF measure is defined using three constraints, viz. vicinity, patch-wise dissimilarity and correlation bound:

[a] Vicinity Constraint: The Euclidean distance between any corner and its estimated tracked location should not be more than an empirically selected threshold Td. The parameter Td depends upon the amount of translation and rotation in the sample images; a high Td signifies more translation and vice versa.

[b] Patch-wise Dissimilarity: The tracking error is defined as the pixel-wise sum of absolute differences between a local patch centered at the current corner and the patch at its estimated tracked location. This error should not be more than an empirically selected threshold Te. The parameter Te ensures that matching corners have similar neighborhood patches around them.

[c] Correlation Bound: The phase only correlation (POC) [47] between a local patch centered at any feature and the patch at its estimated tracked location should be at least equal to an empirically selected threshold Tcb. This bound ensures that the local patches around each potential matching feature pair are correlated.
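A minimal sketch of these three per-corner checks ([a]-[c]) is given below, assuming equal-sized grayscale patches and the standard POC formulation; the tuned thresholds from Section 6.5 (e.g. Td = 18, Te = 750 for 5×5 patches, Tcb = 0.4) can be passed in.

    import numpy as np

    def passes_constraints(p, q, patch_a, patch_b, Td, Te, Tcb):
        # [a] Vicinity: the tracked location q must stay within Td of corner p.
        if np.linalg.norm(np.asarray(p, float) - np.asarray(q, float)) > Td:
            return False
        # [b] Patch-wise dissimilarity: sum of absolute differences below Te.
        if np.abs(patch_a.astype(float) - patch_b.astype(float)).sum() > Te:
            return False
        # [c] Correlation bound: peak of the phase-only correlation at least Tcb.
        fa = np.fft.fft2(patch_a.astype(float))
        fb = np.fft.fft2(patch_b.astype(float))
        cross = fa * np.conj(fb)
        poc = np.fft.ifft2(cross / (np.abs(cross) + 1e-8)).real
        return poc.max() >= Tcb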

Given vcode and hcode of palma and palmb, Algorithm 6.2 is used to compute

a dissimilarity score using the CIOF measure. The vcodes are matched to obtain the

vertical matching score, while the respective hcodes are matched to generate the

horizontal matching score. The corners having their tracked position satisfying the

above mentioned three constraints (viz. vicinity constraint, patch-wise dissimilarity

and correlation bound) are considered as successfully tracked. Consistent global

corner optical flow is further used to prune out some of the false matching corners

as discussed in Section 4.4.1. The estimated optical flow direction (i.e. angle in

degrees) for each potential matching pair is quantized into eight directions and the

most consistent direction is chosen. Any corner matching pair with a different optical

flow direction is discarded. Finally, the ratio of unsuccessfully tracked corners to

the total number of corners is considered as the dissimilarity measure between two

palmprints.

6.4 Database

The proposed system has been tested on two publicly available databases, viz. CA-

SIA [15] and PolyU [57]. All possible inter-session matchings are performed to

analyze the system.

CASIA [15] : The CASIA palmprint database has 5502 palmprints taken from 312 subjects (i.e. 624 distinct palms, from the left and right hands). Around 8 images per hand are collected from each subject. The acquisition device uses a CMOS based sensor and is peg-free; hence anyone can place the hand easily. However, there are a few subjects who have less than 8 images, and they are discarded from the experiment.

PolyU [57] : The PolyU palmprint database has 7752 palmprints of 193


Algorithm 6.2 CIOF(palma, palmb)

Require:
(a) The vcodes I^v_A, I^v_B of the palmprint images palma, palmb respectively.
(b) The hcodes I^h_A, I^h_B of the palmprint images palma, palmb respectively.
(c) N^v_a, N^v_b, N^h_a and N^h_b, the number of corners in I^v_A, I^v_B, I^h_A and I^h_B respectively.
Ensure: Return CIOF(palma, palmb).
1: Track all the corners of vcode I^v_A in vcode I^v_B and those of hcode I^h_A in hcode I^h_B.
2: Obtain the sets of corners successfully tracked in the vcode tracking (stc^v_AB) and the hcode tracking (stc^h_AB), i.e. those whose tracked position lies within Td, whose local patch dissimilarity is under Te and whose patch-wise correlation is at least Tcb.
3: Similarly, compute the successfully tracked corners of vcode I^v_B in vcode I^v_A (stc^v_BA) as well as of hcode I^h_B in hcode I^h_A (stc^h_BA).
4: Quantize the optical flow direction of each successfully tracked corner into eight directions (i.e. at intervals of π/8) and obtain 4 histograms H^v_AB, H^h_AB, H^v_BA and H^h_BA using the four corner sets stc^v_AB, stc^h_AB, stc^v_BA and stc^h_BA respectively.
5: For each histogram, out of the 8 bins, the bin (i.e. direction) having the maximum number of corners is considered as the consistent optical flow direction. The maximum value obtained from each histogram is termed the number of corners having consistent optical flow, represented as cof^v_AB, cof^h_AB, cof^v_BA and cof^h_BA.
6: ciof^v_AB = 1 − cof^v_AB / N^v_a    [Corners with Inconsistent Optical Flow (vcode)]
7: ciof^v_BA = 1 − cof^v_BA / N^v_b    [Corners with Inconsistent Optical Flow (vcode)]
8: ciof^h_AB = 1 − cof^h_AB / N^h_a    [Corners with Inconsistent Optical Flow (hcode)]
9: ciof^h_BA = 1 − cof^h_BA / N^h_b    [Corners with Inconsistent Optical Flow (hcode)]
10: return CIOF(palma, palmb) = (ciof^v_AB + ciof^h_AB + ciof^v_BA + ciof^h_BA) / 4
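A compact functional sketch of steps 4-10 (the histogram voting and the final score), assuming the successfully tracked corner sets and their flow angles are already available:

    import numpy as np

    def ciof_term(flow_angles, n_corners, n_bins=8):
        # Steps 4-6: quantize flow directions into pi/8 bins, vote, and return
        # the fraction of corners disagreeing with the most consistent direction.
        if n_corners == 0:
            return 1.0
        angles = np.asarray(flow_angles, float)
        if angles.size == 0:
            return 1.0
        bins = (angles // (np.pi / n_bins)).astype(int) % n_bins
        cof = np.bincount(bins, minlength=n_bins).max()
        return 1.0 - cof / n_corners

    def ciof(v_ab, v_ba, h_ab, h_ba, n_va, n_vb, n_ha, n_hb):
        # v_ab etc. hold the flow angles (radians) of corners that passed the
        # Td / Te / Tcb tests; n_* are the total corner counts (step 10).
        return (ciof_term(v_ab, n_va) + ciof_term(h_ab, n_ha) +
                ciof_term(v_ba, n_vb) + ciof_term(h_ba, n_hb)) / 4.0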

subjects (i.e. 386 distinct palms, from the left and right hands). From each hand of every subject, 20 images are collected in two sessions (10 images per session). The acquisition device uses a CCD based sensor with a spatial resolution of 75 dots per inch. However, it uses pegs; hence the user has to place the hand accordingly.

There are some palms in both CASIA and PolyU databases with incomplete

or missing data. These palms are also discarded for this experiment. Some of the

sample images taken from both databases are shown in Fig. 6.6 and specifications

of each image are given in Table 6.1. Testing is done over the left and the right


Figure 6.6: Sample Palmprint Images from the CASIA and PolyU Databases (panels (a)-(e): CASIA A-E; panels (f)-(j): PolyU A-E)

Table 6.1: Database Specifications

Database                          Subjects    Pose  Total  Training  Testing  Genuine Matchings  Imposter Matchings
CASIA Palmprint (Left hand)       290 palms   8     2,320  First 4   Last 4   4,640              1,340,960
CASIA Palmprint (Right hand)      276 palms   8     2,208  First 4   Last 4   4,416              1,214,400
CASIA Palmprint (Left + Right)    566 palms   8     4,528  First 4   Last 4   9,056              5,116,640
PolyU Palmprint (Left hand)       193 palms   20    3,860  First 10  Last 10  19,300             3,705,600
PolyU Palmprint (Right hand)      193 palms   20    3,860  First 10  Last 10  19,300             3,705,600
PolyU Palmprint (Left + Right)    386 palms   20    7,720  First 10  Last 10  38,600             14,861,000

palm images of both databases to analyse the hand-wise system performance.


6.4.1 Testing Strategy

All inter-session matchings are done for performance analysis. The first four images of each palm in the CASIA [15] palmprint database are taken for training while the rest are kept for testing. Hence, there are 9,056 genuine and 5,116,640 imposter matchings. For the PolyU [57] palmprint database, the first ten images are used for training while the rest are taken for testing. Hence, a total of 38,600 genuine and 14,861,000 imposter matchings are performed.

The database along with specifications of the testing strategy is given in Table 6.1.

One can observe that a large number of genuine as well as imposter matchings are

considered to evaluate the performance of the proposed system.
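The counts in Table 6.1 follow directly from this protocol; a quick arithmetic check (the closed form is implied by the all-pairs inter-session protocol, so treat it as a reconstruction rather than the thesis's stated formula):

    # CASIA full DB: 566 palms, first 4 images for training, last 4 for testing.
    palms, n_train, n_test = 566, 4, 4
    genuine = palms * n_train * n_test                     # 566 * 16 = 9,056
    imposter = (palms * n_test) * ((palms - 1) * n_train)  # 2,264 * 2,260 = 5,116,640
    print(genuine, imposter)
    # The PolyU rows follow identically with 386 palms and 10 + 10 images.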

6.5 Performance Analysis

The palmprint ROIs are extracted using the algorithm proposed in [9]. It is ob-

served that the segmentation accuracy depends upon the key-point extraction. In both databases, the key-point detection fails for several images; for those images the key points are marked manually and the segmentation is then done. The threshold values for which the proposed palmprint system performs with maximum CRR and minimum EER are Te = 750 (with a patch size of 5×5), Td = 18 and Tcb = 0.4, for both the CASIA and PolyU palmprint databases.

The graphs in Fig. 6.7 and Fig. 6.8 show the performance achieved by fus-

ing vcode and hcode information as compared with only vcode matchings, for both

palmprint databases.

CASIA Database : In Fig. 6.7, the ROC characteristics for all different

categories (i.e. Left, Right and All) of the CASIA palmprint database are shown. Table

6.1 shows the total number of genuine as well as imposter matchings. One can see

that the proposed system has performed equally well over all three categories of

CASIA database categories and has shown a huge performance boost after fusion of vcode


Figure 6.7: Vertical and Horizontal Information Fusion based Performance Boost for the CASIA Palmprint Databases (X-axis in log scale)

and hcode. It has achieved EERs of 0.2006%, 0.1537% and 0.1551% with a CRR of 100% over the Left, Right and All categories of the CASIA palmprint database, as shown in Table 6.2.

Hand Information                        d′     CRR (%)  EER (%)
PALM (CASIA) LEFT HAND                  2.55   100      0.2006
PALM (CASIA) RIGHT HAND                 2.59   100      0.1537
PALM (CASIA) FULL DB (LEFT + RIGHT)     2.43   100      0.1551
PALM (POLYU) LEFT HAND                  2.38   99.89    0.5699
PALM (POLYU) RIGHT HAND                 2.56   100      0.2072
PALM (POLYU) FULL DB (LEFT + RIGHT)     2.46   99.95    0.4145

Table 6.2: Palmprint Results (L = LEFT Hand, R = RIGHT Hand, ALL = L+R)

PolyU Database : In Fig. 6.8 the ROC characteristics for the various PolyU palmprint database categories are shown. One can infer that the overall performance of the system with respect to EER is very good, though poorer than that over the CASIA database. This is because our ROI cropping algorithm fails to segment accurately for


Figure 6.8: Vertical and Horizontal Information Fusion based Performance Boost for the PolyU Palmprint Databases (X-axis in log scale)

some subjects. Also, the PolyU database has many more images per subject than the CASIA database; hence a much larger number of genuine and imposter matchings are performed for PolyU (as shown in Table 6.1). EERs of 0.5699%, 0.2072% and 0.4145% with a CRR of more than 99.89% have been achieved over the Left, Right and All categories of the PolyU palmprint database, as shown in Table 6.2. A huge performance boost after fusion has been observed.

The performance of the system over the right palm is observed to be better. This is mainly because most subjects are right-handed and hence are more comfortable providing their right-hand data. The overall experimental analysis reveals that vcode is more discriminative than hcode, mainly because palms have mostly vertical edges. But for the matchings where vcode fails to discriminate, the hcode information enhances the system performance, as it can reduce the number of falsely accepted imposters. For both databases,


Approach           Database       CRR (%)   EER (%)
PalmCode [76]      Palm (CASIA)   99.62     3.67
PalmCode [76]      Palm (PolyU)   99.92     0.53
CompCode [35]      Palm (CASIA)   99.72     2.01
CompCode [35]      Palm (PolyU)   99.96     0.31
OrdinalCode [65]   Palm (CASIA)   99.84     1.75
OrdinalCode [65]   Palm (PolyU)   100.00    0.08
Palm-Zernike [7]   Palm (CASIA)   99.75     2.00
Palm-Zernike [7]   Palm (PolyU)   100.00    0.2939
Proposed           Palm (CASIA)   100.00    0.1551
Proposed           Palm (PolyU)   99.95     0.4145

Table 6.3: Comparative Performance with Other Systems

one can observe a significant performance boost by fusing the vcode and hcode scores. The analysis suggests that fusion effectively reduces the false acceptance rate and hence significantly enhances the system performance.

The proposed palmprint recognition system is compared with some well known systems, viz. [7], [35], [65], [76]. It can be seen that the EER of the proposed fusion (vcode and hcode) over the CASIA palmprint database is 0.1551%, which is better than all of these systems. But for the PolyU database, the performance of Ordinal code [65] is found to be better than that of all other systems. The relatively poorer performance of the proposed system over PolyU is due to the huge number of genuine/imposter matchings (as shown in Table 6.1); also, some PolyU images are cropped inaccurately due to translation and illumination variation. Hence, the overall performance of the proposed system is found to be better than or comparable to most of the known systems.


Chapter 7

Multimodal Based Recognition

System

In this chapter, details of the proposed multimodal recognition systems are discussed. The major focus is on the creation of the multimodal databases and the corresponding experimental results. Several multimodal systems, viz. iris and knuckleprint, knuckleprint and palmprint, and finally iris, knuckleprint and palmprint, have been proposed. Testing strategies such as inter-session matching, one training and one testing, and multiple training and multiple testing are adopted. Two publicly available iris databases (CASIA Interval [15] and LAMP [14]) are fused with two public palmprint databases (CASIA [15], PolyU [57]), while for knuckleprint the publicly available PolyU [57] database is used.

Threshold Selection : In all the proposed multimodal systems, the iris, knuckleprint and palmprint matchings are performed using the CIOF dissimilarity measure proposed previously. The threshold values (such as Td, Te and Tcb) used for each database are obtained by optimizing the proposed system, in terms of performance, over a validation set of that database (consisting of only the first few subjects), as explained in Chapters 4, 5 and 6.
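In each case the per-trait scores are combined at score level. The exact normalization and weighting follow the earlier chapters; the min-max sum rule below is therefore only an illustrative assumption:

    import numpy as np

    def minmax(score, lo, hi):
        # Map a raw CIOF dissimilarity into [0, 1]; lo/hi are assumed
        # per-trait calibration bounds taken from a validation set.
        return float(np.clip((score - lo) / (hi - lo), 0.0, 1.0))

    def fused_score(trait_scores, bounds, weights=None):
        # trait_scores: {trait: raw score}; bounds: {trait: (lo, hi)}.
        # Equal weights are assumed unless supplied.
        w = weights or {t: 1.0 / len(trait_scores) for t in trait_scores}
        return sum(w[t] * minmax(s, *bounds[t]) for t, s in trait_scores.items())

    # e.g. fused_score({'palm': 0.42, 'knuckle': 0.55},
    #                  {'palm': (0.2, 0.9), 'knuckle': (0.3, 0.95)})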


7.1 Knuckleprint and Palmprint Fusion

In this section, knuckleprint and palmprint images are considered for authentication

by fusing them at score level. The proposed system is tested over various combinations of two publicly available benchmark palmprint databases, CASIA [15] and PolyU [57], along with the largest publicly available PolyU knuckleprint database [75]. The CASIA palmprint database has 5502 palmprint images taken from 312 subjects (i.e. 624 distinct palms); eight images are collected from each hand of every subject. The PolyU palmprint database has 7752 palmprint images of 193 subjects (i.e. 386 distinct palms); from each hand of every subject, 20 images are collected in two sessions (10 images per session). The PolyU knuckleprint database consists of 7920 knuckleprint images taken from 165 subjects; each subject has given 6 images per session of the left index (LI), left middle (LM), right index (RI) and right middle (RM) fingers in two sessions (i.e. 660 distinct knuckles). There are some palms in both the CASIA and PolyU databases with incomplete or missing data; such palms are discarded from this experiment. Some of the sample images taken

from each database are shown in Fig. 7.1 and detailed database specifications are

given in Table 7.1.

7.1.1 Multi-modal Databases Creation

Eight multi-modal databases (viz. A1 to A8) consisting of palm and knuckleprint

images are constructed as defined below. Specifications of these databases are given

in Table 7.1. In A1 and A2, only two modalities are fused, while in A4, A5, A7 and A8 three modalities are considered for fusion. In A3 and A6, six modalities are fused to analyze the performance boost.

Databases - A1 and A2 : The A1 dataset is created by considering the 566 palm subjects from the CASIA database along with the first 566 knuckle subjects (out of 660)


Figure 7.1: Sample Images from all Databases (Five Distinct Users); panels (a)-(e) are from CASIA and panels (f)-(o) from PolyU

from the PolyU knuckleprint database. The CASIA palmprint database consists of 8 images per subject while the knuckleprint database consists of 12 images per subject; hence, to make the A1 database consistent, the first and last 4 images per subject are considered, which also ensures inter-session matching. Therefore, the A1 dataset has 4528 palmprint and 4528 knuckleprint images collected from 566 distinct palms and knuckles, with 8 palm as well as 8 knuckle images per subject. The first 4 images per trait of every subject are considered as training images and the last 4 as testing images.

In a similar way, the A2 dataset is constructed by considering all 386 palms from the PolyU palm database and only the first 386 (out of 660) knuckles from the PolyU knuckle database. The PolyU palmprint database consists of 20 images per subject while the knuckleprint database consists of 12 images per subject; to make the A2 database consistent, the first and last 6 images per subject are considered, ensuring inter-session matching. The first 6 images per trait of every subject are considered as training images and the last 6 as testing images.


Databases - A3 and A6 : The databases A3 and A6 are constructed by considering all 6 distinct traits per person, viz. the left and right palms (LP, RP) along with the left index, left middle, right index and right middle (LI, LM, RI, RM) knuckleprints, where the palmprint images are taken from the CASIA and PolyU palmprint databases. All four distinct knuckleprints of the 165 subjects of the knuckleprint database, along with only the first 165 subjects from the CASIA and PolyU palm databases, are used to obtain A3 and A6 respectively. The first and last 4 images are considered as training and testing images respectively for the A3 database, while for the A6 database the first and last 6 images are used for training and testing respectively.

Databases - A4 and A7 : These databases are constructed by considering 3 distinct traits per subject, viz. the left and right palms (LP, RP) from the CASIA and PolyU palmprint databases, along with the knuckleprints of the first 276 and 193 subjects (out of 660) respectively. The first and last 4 images are considered as training and testing images respectively for the A4 database; for the A7 database, the first and last 6 images are used for training and testing respectively.

Databases - A5 and A8 : These databases also consider 3 distinct traits: all 330 left-hand knuckles [LI+LM] and all 330 right-hand knuckles [RI+RM], along with the palmprint data of the first 330 palms from the CASIA and PolyU palmprint databases respectively. The first and last 4 images are considered as training and testing images respectively for the A5 database, while for the A8 database the first and last 6 images are used for training and testing.

7.1.2 Testing Strategy

The proposed fusion strategy is evaluated over the databases A1 - A8 using inter-session matchings, defined as follows. For each database, only inter-session matchings are performed, to ensure that the two images participating in any matching are temporally distant. The detailed testing specifications for all databases are given in


UNIMODAL
Dataset                                  Sub   Pose  Total
PALM (CASIA) LEFT HAND                   290   8     2320
PALM (CASIA) RIGHT HAND                  276   8     2208
PALM (CASIA) FULL DB (LEFT + RIGHT)      566   8     4528
PALM (POLYU) LEFT HAND                   193   20    3860
PALM (POLYU) RIGHT HAND                  193   20    3860
PALM (POLYU) FULL DB (LEFT + RIGHT)      386   20    7720
KNUCKLE (POLYU) LEFT HAND                330   12    3960
KNUCKLE (POLYU) RIGHT HAND               330   12    3960
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT)   660   12    7920

MULTIMODAL
Dataset  Traits  Fusion Specifications                                 Sub   Pose  Total
A1       2       ALL.PALM(CASIA), ALL.KNUCKLE(POLYU)                   566   8     2×4528 = 9056
A2       2       ALL.PALM(POLYU), ALL.KNUCKLE(POLYU)                   386   12    2×4632 = 9264
A3       6       L.PALM(CASIA), R.PALM(CASIA), L.INDEX, L.MIDDLE,
                 R.INDEX and R.MIDDLE KNUCKLE(POLYU)                   165   8     6×1320 = 7920
A4       3       L.PALM(CASIA), R.PALM(CASIA), ALL.KNUCKLE(POLYU)      276   8     3×2208 = 6624
A5       3       ALL.PALM(CASIA), L.KNUCKLE(POLYU)[LI+LM],
                 R.KNUCKLE(POLYU)[RI+RM]                               330   8     3×2640 = 7920
A6       6       L.PALM(POLYU), R.PALM(POLYU), L.INDEX, L.MIDDLE,
                 R.INDEX and R.MIDDLE KNUCKLE(POLYU)                   165   12    6×1980 = 11880
A7       3       L.PALM(POLYU), R.PALM(POLYU), ALL.KNUCKLE(POLYU)      193   12    3×2316 = 6948
A8       3       ALL.PALM(POLYU), L.KNUCKLE(POLYU)[LI+LM],
                 R.KNUCKLE(POLYU)[RI+RM]                               330   12    3×3960 = 11880

Table 7.1: Database Specifications (L = LEFT Hand, R = RIGHT Hand, ALL = L+R)

Table 7.2. A matching is termed a genuine matching if both of the constituting images belong to the same subject; otherwise, it is termed an imposter matching.


The number of imposter matchings considered for the performance analysis of the proposed system ranges from half a million to 16 million, with the genuine matchings ranging from 3 to 40 thousand. One can observe that a huge number of matchings is considered to compute the performance parameters of the system. This testing strategy is adopted because most of the state-of-the-art palmprint and knuckleprint systems use the same strategy for the performance analysis of their systems.

7.1.3 Performance Analysis

All results obtained for the unimodal and multimodal systems are presented in Table 7.3. It has been found that the proposed multimodal system performs much better than the unimodal systems, which clearly justifies the fusion strategy. The CRR of the proposed multimodal system is 100.00%, with an EER of at most 0.12%, over all eight multimodal databases, which is much better than the corresponding unimodal systems.

Also, the performance of the proposed system is observed to be better than other state-of-the-art multimodal biometric systems [64], [58], [81], mainly because those systems fuse face and iris: their performance is restricted primarily by several face-specific issues and challenges (such as pose, expression, aging and illumination). In this work, palmprint and knuckleprint are considered for fusion, as both have a huge amount of unique and discriminative texture information to complement each other. Also, both are hand based, which reduces the data acquisition time and the required user cooperation, and increases user acceptance and data quality.

The EER of the proposed multimodal system for all 8 multimodal databases (A1 to A8) is found to be either zero or very low, as given in Table 7.3. Hence, all ROC curves look very similar to each other. Generally, for such highly accurate systems, in order to draw any significant conclusion, the decidability index d′ and genuine


UNIMODAL
Dataset                                  Training Images  Testing Images   Genuine Matches  Imposter Matches
PALM (CASIA) LEFT HAND                   290×4 = 1160     290×4 = 1160     4,640            1,340,960
PALM (CASIA) RIGHT HAND                  276×4 = 1104     276×4 = 1104     4,416            1,214,400
PALM (CASIA) FULL DB (LEFT + RIGHT)      566×4 = 2264     566×4 = 2264     9,056            5,116,640
PALM (POLYU) LEFT HAND                   193×10 = 1930    193×10 = 1930    19,300           3,705,600
PALM (POLYU) RIGHT HAND                  193×10 = 1930    193×10 = 1930    19,300           3,705,600
PALM (POLYU) FULL DB (LEFT + RIGHT)      386×10 = 3860    386×10 = 3860    38,600           14,861,000
KNUCKLE (POLYU) LEFT HAND                330×6 = 1980     330×6 = 1980     11,880           3,908,520
KNUCKLE (POLYU) RIGHT HAND               330×6 = 1980     330×6 = 1980     11,880           3,908,520
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT)   660×6 = 3960     660×6 = 3960     23,760           15,681,600

MULTIMODAL (traits fused per dataset as specified in Table 7.1)
Dataset  Training Images  Testing Images   Genuine Matches  Imposter Matches
A1       566×4 = 2264     566×4 = 2264     9,056            5,116,640
A2       386×6 = 2316     386×6 = 2316     13,896           5,349,960
A3       165×4 = 660      165×4 = 660      2,640            432,960
A4       276×4 = 1104     276×4 = 1104     4,416            1,214,400
A5       330×4 = 1320     330×4 = 1320     5,280            1,737,120
A6       165×6 = 990      165×6 = 990      5,940            974,160
A7       193×6 = 1158     193×6 = 1158     6,948            1,334,016
A8       330×6 = 1980     330×6 = 1980     11,880           3,908,520

Table 7.2: Testing Strategy Specifications (L = LEFT, R = RIGHT, ALL = L+R)


UNIMODAL
Dataset                                  d′     CRR (%)  EER (%)
PALM (CASIA) LEFT HAND                   2.57   99.91    0.30
PALM (CASIA) RIGHT HAND                  2.53   100      0.34
PALM (CASIA) FULL DB (LEFT + RIGHT)      2.30   99.96    0.29
PALM (POLYU) LEFT HAND                   1.72   99.89    2.09
PALM (POLYU) RIGHT HAND                  1.79   100      1.34
PALM (POLYU) FULL DB (LEFT + RIGHT)      1.71   99.95    1.5
KNUCKLE (POLYU) LEFT HAND                2.07   99.40    3.22
KNUCKLE (POLYU) RIGHT HAND               2.06   99.55    3.22
KNUCKLE (POLYU) FULL DB (LEFT + RIGHT)   2.08   99.41    3.06

MULTIMODAL (fusion specifications as in Table 7.1)
Dataset  Traits  d′     CRR (%)  EER (%)
A1       2       2.78   100      0.02
A2       2       2.43   100      0.12
A3       6       4.62   100      0.0
A4       3       3.90   100      0.0
A5       3       3.39   100      0.0
A6       6       2.89   100      0.02
A7       3       2.95   100      0.0
A8       3       2.53   100      0.04

Table 7.3: Consolidated Results (L = LEFT Hand, R = RIGHT Hand, ALL = L+R)


Figure 7.2: Genuine vs Imposter Best Matching Score Separation Graphs for the Multimodal Databases (a) A1, (b) A2, (c) A3, (d) A4, (e) A5, (f) A6, (g) A7 and (h) A8


vs imposter best matching score separation graphs are used.

Decidability Index (d′) : The decidability index d′ measures the separability of all genuine and imposter matching scores. A higher d′ signifies better separation and hence superior performance. From Table 7.3, it is observed that the d′ value for all multimodal databases is at least 2.4, and for some databases (those fusing more than two traits) it is as high as 4.6, which can be considered very good.
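For reference, a minimal computation of d′ is sketched below, assuming the standard pooled-variance (Daugman-style) definition matches the thesis's usage:

    import numpy as np

    def decidability_index(genuine, imposter):
        # d' = |mean_g - mean_i| / sqrt((var_g + var_i) / 2)
        g = np.asarray(genuine, float)
        i = np.asarray(imposter, float)
        return abs(g.mean() - i.mean()) / np.sqrt((g.var() + i.var()) / 2.0)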

Genuine vs Imposter Best Match Graph : The genuine vs imposter best matching score separation graph plots the best genuine and the best imposter score for every probe image, so their separation can be analyzed visually. Such graphs for all multimodal databases (A1 to A8) are shown in Fig. 7.2. It can be observed that only for one or two probe images are the best genuine and best imposter scores comparable; for all other probe images across the multimodal databases, a clear-cut and discriminative separation between the genuine and imposter best matchings can be seen. Hence, the overall performance of the proposed system is found to be very good.
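A sketch of how these per-probe best scores can be gathered from a dissimilarity matrix (fused CIOF scores, lower is better; the names and shapes here are assumptions):

    import numpy as np

    def best_match_scores(dissim, gallery_labels, probe_labels):
        # dissim: probes x gallery matrix of dissimilarity scores.
        S = np.asarray(dissim, float)
        same = np.asarray(probe_labels)[:, None] == np.asarray(gallery_labels)[None, :]
        best_genuine = np.where(same, S, np.inf).min(axis=1)    # lowest genuine score
        best_imposter = np.where(~same, S, np.inf).min(axis=1)  # lowest imposter score
        return best_genuine, best_imposter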

7.2 Iris and Knuckleprint Fusion

In this section, iris and knuckleprint images are considered for authentication by

fusing them at score level. The proposed system is tested on two publicly available

CASIA V4 Interval [14] and Lamp [14] iris databases along with the largest publicly

available PolyU [56] knuckleprint database. The CASIA V4 Interval database con-

tains 2, 639 iris images collected from 249 subjects having 395 distinct irises with

about 7 images per iris. The left and right eyes of the same individual are considered as different subjects, since iris patterns are formed randomly. The Interval iris images are

taken in two sessions under indoor environment. The CASIA V4 Lamp is a huge

database consisting of 16, 212 images collected from 411 subjects having 819 distinct


irises and 20 images per iris. The Lamp iris images are taken in only one session

under indoor environment with lamp on/off. Iris images in the Lamp database are

more challenging than those in the Interval database because of nonlinear deformation due to the variation of visible illumination. The PolyU Knuckleprint is a database consisting

of 7, 920 FKP images obtained from 165 subjects in two sessions. At each session,

6 images from 4 fingers (distinct index and middle fingers of both hands) are col-

lected. Hence, a total of 165 × 4 = 660 distinct knuckleprints are collected. Later, two

chimeric multimodal databases are created for testing. The database specifications

are given in Table 7.4. A few subjects in the CASIA V4 Interval and Lamp iris databases do not have sufficient images for matching (not even a single test image); such subjects are discarded from this experiment.

7.2.1 Multimodal Database Creation

Two chimeric multimodal databases are created by fusing the above mentioned iris and knuckleprint databases. The iris samples are totally uncorrelated with the knuckleprint samples; data taken from different subjects are merged and their scores are fused. Both of the created chimeric multimodal databases contain a huge number of images; hence it is a big challenge to achieve good results over them, because the false acceptance rate (FAR) grows very fast with the database size [5].

CASIA Interval and PolyU (MM1) : All of the CASIA V4 Interval database images are considered, along with the knuckleprints of the first 395 subjects from the PolyU knuckleprint database. Hence, a total of 2,639 iris images as well as 2,639 knuckleprints are used to define a multimodal database, named MM1. Specifications of this database are given in Table 7.4.

CASIA Lamp and PolyU (MM2) : All PolyU knuckleprints, along with the first and last 6 images (i.e. a total of 12 samples) of the first 660 irises from the


Table 7.4: Database Specifications

Database                                  Subjects               Pose  Total                     Training  Testing  Genuine   Imposter
CASIA V4 Interval (Iris)                  249 (395 irises)       7     2,639                     First 3   Rest     3,657     1,272,636
CASIA V4 Lamp (Iris)                      411 (819 irises)       20    16,212                    First 10  Last 10  78,300    61,230,600
PolyU (Knuckleprint)                      165 (660 knuckles)     12    7,920                     First 6   Last 6   23,760    15,657,840
MM1 (Iris Interval + Knuckleprint PolyU)  395 (iris + knuckle)   7     2,639 iris, 2,639 FKP     First 3   Rest     3,657     1,272,636
MM2 (Iris Lamp + Knuckleprint PolyU)      660 (iris + knuckle)   12    16,212 iris, 16,212 FKP   First 6   Last 6   23,760    15,657,840

CASIA V4 Lamp iris database, are considered to define this database. A total of 7,920 iris and 7,920 knuckleprint sample images from 660 distinct irises and knuckleprints are used in this multimodal database, which is named MM2. Specifications of the database are given in Table 7.4.

Specifications of the database are given in Table 7.4.

7.2.2 Testing Strategy

CASIA V4 Interval database : The images of the first session are taken as training while the remaining images are considered for testing. Hence, a total of 3,657 genuine and 1,272,636 imposter matchings are considered for the Interval database testing.

CASIA V4 Lamp database : The first 10 images are considered as training and the rest are taken for testing. Hence, a total of 78,300 genuine and 61,230,600 imposter matchings are used for the Lamp database.

PolyU Knuckleprint database : All 6 images of the first session are taken as training while the images of the second session are used for testing. This gives a total of 23,760 genuine and 15,657,840 imposter matchings for the PolyU knuckleprint database.

CASIA V4 Interval fused with PolyU database (MM1) : For testing the system over this chimeric multimodal database, the same strategy as for the Interval testing is used. The iris images of the first session are fused with the first 3 PolyU knuckleprint images and are considered as training data, while the remaining iris images of each subject are fused with the same number of PolyU knuckleprints of the second session for testing. Hence, a total of 3,657 genuine and 1,272,636 imposter matchings are considered for the MM1 database.

CASIA V4 Lamp fused with PolyU database (MM2) : For testing the system over this chimeric multimodal database, the same strategy as for the PolyU knuckleprint testing is used. The knuckleprints of the first session are fused with the first 6 Lamp iris images and are considered as training data, while the remaining knuckleprints of each subject are fused with the last 6 Lamp iris images for testing. Hence, a total of 23,760 genuine and 15,657,840 imposter matchings are considered for the MM2 database.

The database as well as the testing specifications are given in Table 7.4. One can observe that a huge number of imposter as well as genuine matchings are performed for the performance evaluation. This testing strategy has been adopted because all state-of-the-art systems [80], [60], [46] have used the same strategy.

7.2.3 Performance Analysis

The proposed multimodal system is rigorously tested over the self-created chimeric multimodal databases, viz. MM1 and MM2, obtained by fusing iris samples with knuckleprint samples. It is found that the CRR (Rank 1 accuracy) of the proposed system is 100% over both databases. For both the MM1 and MM2 databases, Receiver Operating Characteristic (ROC) curves are shown in Fig. 7.3. ROCs over all five databases, viz. CASIA V4 Interval, CASIA V4 Lamp, PolyU Knuckle, MM1 and MM2, are plotted for comparative analysis. It is evident that multimodal fusion enhances the system performance significantly. Since this is the first attempt in which iris and knuckleprint samples are fused, there does not exist any system with which it can be compared directly. Comparison with existing multimodal systems like [64], [58], [81], [59], [38], [39], [49], [48], [82], [53], [47], which fuse face and iris or knuckle and palm, is not justified, as their results are restricted by their selected biometric modalities. Moreover, all of the above referred multimodal systems lack uniformity in the selected modality, database and testing strategies. Still, the proposed system's performance is much better than all of the above stated state-of-the-art multimodal as well as unimodal systems.

One can clearly see from Fig. 7.3 and Table 7.5 how well the proposed fusion based multimodal system has performed. A system can generally be considered highly secure if it makes less than one matching error per 1,000 matchings. The verification accuracy is evaluated in terms of EER and is found to be 0.027% over MM1 and 0.083% over MM2. This means that the proposed system makes only about three and eight errors per ten thousand matchings over MM1 and MM2 respectively, which can be considered very good performance. The decidability index (d′) is found to be 2.31 and 2.25 for the MM1 and MM2 databases respectively. One can see that MM2 is a really big database; a huge number of matchings are performed over it (23,760 genuine against 15,657,840 imposter) as compared with MM1 (3,657 genuine against 1,272,636 imposter). Also, the iris images of the Lamp database are severely affected by lighting variation, but still, after fusion with knuckleprint, the system performance improves tremendously, as seen in Fig. 7.3 and Table 7.5. Apart from the performance, the system has shown its scalability, as the performance does not vary much with the increased size of the database. A negligible error under the ROC curve is obtained for both multimodal databases, viz. MM1 (i.e.


Description   d′ (DI)   EER (%)   Accuracy (%)   EUC      CRR (%)
Iris CASIA Interval Database
  vcode       2.069     0.1659    99.837         0.0052   100
  hcode       1.701     0.6835    99.357         0.0539   99.7538
  fusion      2.0182    0.1093    99.910         0.0009   100
Iris CASIA Lamp Database
  vcode       1.5799    1.5732    98.615         0.3553   99.7828
  hcode       1.3013    3.2207    97.017         0.815    99.7062
  fusion      1.5045    1.3005    98.859         0.2407   99.8722
Knuckleprint PolyU Database
  vcode       2.1712    1.5151    98.8636        0.4955   99.6464
  hcode       1.6998    4.2929    96.7424        2.8553   97.1464
  fusion      2.0374    0.9343    99.2550        0.2566   99.7979
Multimodal (Iris Interval + Knuckleprint PolyU) MM1 Database
  fusion      2.3191    0.0273    99.975         0.0006   100
Multimodal (Iris Lamp + Knuckleprint PolyU) MM2 Database
  fusion      2.2545    0.0833    99.927         0.002    100

Table 7.5: Performance Analysis of the Proposed System over the Iris Interval and Lamp Databases along with the Knuckleprint PolyU Database (in this table, fusion refers to vcode + hcode)

6 × 10−4) and MM2 (i.e. 2 × 10−3), confirming the robustness and accuracy of the proposed multimodal system.

7.3 Iris, Knuckleprint and Palmprint Fusion

In this section, iris, knuckleprint and palmprint images are considered for authenti-

cation by fusing them at score level. The proposed system is tested on two publicly

available CASIA V4 Interval [14] and Lamp [14] iris databases, two publicly avail-

able CASIA [15] and PolyU [57] palmprint databases along with the largest publicly

available PolyU [56] knuckleprint database. The CASIA V4 Interval database con-

tains 2, 639 iris images collected from 395 distinct irises while the Lamp database

contains 16, 212 images collected from 819 distinct irises. The PolyU Knuckleprint


Figure 7.3: ROC Characteristics Comparison of the Unimodal vs Multimodal Databases for the Proposed System

Database   Iris       Knuckleprint   Palmprint   Subjects   Training   Testing   Total Images
db1        Interval   PolyU          CASIA       349        3          4         2443
db2        Interval   PolyU          PolyU       349        3          4         2443
db3        Lamp       PolyU          CASIA       566        4          4         4528
db4        Lamp       PolyU          PolyU       386        6          6         4632

Table 7.6: Database Specifications of the Four Self-Created Tri-Modal Databases using Iris, Knuckleprint and Palmprint

database consists of 7, 920 FKP images obtained from 660 distinct knuckleprints.

The CASIA palmprint database has 5502 palmprints taken from 624 distinct palms

while the PolyU palmprint database has 7752 palmprints of 386 distinct palms. Later,

four multimodal databases are created for testing. Specifications of the databases

are given in Table 7.6.


7.3.1 Multimodal database creation

Two iris databases are fused with two palmprint databases and a single knuckleprint database; that is, four different multimodal databases are created, viz. db1, db2, db3 and db4, as described in Table 7.6.

db1 : The dataset is constructed by considering all iris images of the Interval database belonging to all subjects. These iris samples are fused with the first 349 subjects of the knuckleprint (PolyU) and palmprint (CASIA) databases. The first 3 images are considered for training, while the first 4 images of the second session are used for testing, for both knuckleprint and palmprint.

db2 : The dataset is constructed by considering every iris image of all Interval subjects. These iris samples are fused with the first 349 subjects of the knuckleprint (PolyU) and palmprint (PolyU) databases. The first 3 images are considered for training, while the first 4 images of the second session are considered as testing images for both knuckleprint and palmprint.

db3 : The dataset is constructed by considering the 8 palmprint images taken from all CASIA palmprint subjects. The palmprints of the first 566 subjects are fused with the knuckleprint (PolyU) and iris (Lamp) databases. The first 4 images are considered for training, while the first 4 images of the second session are used for testing, for both knuckleprint and iris.

db4 : The dataset is constructed by considering the 12 palmprint images taken from all PolyU palmprint subjects. These palmprints are fused with the first 386 subjects of the knuckleprint (PolyU) and iris (Lamp) databases. For both knuckleprint and iris, the first 6 images are used for training while the first 6 images of the second session are considered for testing.


7.3.2 Testing Strategy

In order to test the multimodal system, a testing strategy harder than the previously used inter-session matching is devised, described as follows. A deployed system typically enrolls only one image per subject during the enrollment phase; hence one training and one testing image per subject (i.e. 1-Training and 1-Testing) is a suitable testing strategy for a more general system setting. Also, 1-Training with multiple testing images per subject is used to analyze the average system behavior. Apart from single image training, a multiple training image strategy is also considered. Multiple training images facilitate identification and hence affect the CRR favorably.

On the other hand, the multiple image testing scenario depends upon the variability in the testing samples. If the database contains little variation and the subsequent testing samples are almost similar to each other, then multiple testing images may affect the system performance favorably. But if the testing images have more texture variation and differ from each other, then the performance may be adversely affected.

For every database, when only a single testing image is used, it is always the first image of the second session. For training, under either the single or the multiple image strategy, several possible combinations are considered and the average result is reported. Specifications of the databases are given in Table 7.6.

7.3.3 Performance Analysis

The proposed multimodal system is tested over the four self-created databases. The performance parameters for all three traits individually, along with their fusion, are given in Tables 7.7 - 7.10. One training and one testing, along with multiple training and multiple testing strategies, are considered, and four performance parameters are reported: FA (falsely accepted imposters), FR (falsely rejected genuines), EER (equal error rate) and CRR (correct recognition rate).
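For reference, a minimal sketch of how the EER can be obtained from raw genuine/imposter dissimilarity scores (a simple threshold sweep; the thesis's exact procedure may differ):

    import numpy as np

    def equal_error_rate(genuine, imposter, n_thresholds=1000):
        # Sweep a decision threshold and return the point where the false
        # acceptance rate (FAR) meets the false rejection rate (FRR).
        g, i = np.asarray(genuine, float), np.asarray(imposter, float)
        ts = np.linspace(min(g.min(), i.min()), max(g.max(), i.max()), n_thresholds)
        far = np.array([(i <= t).mean() for t in ts])  # imposters accepted
        frr = np.array([(g > t).mean() for t in ts])   # genuines rejected
        k = int(np.argmin(np.abs(far - frr)))
        return (far[k] + frr[k]) / 2.0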

(For each trait the four values are FA, FR, EER (%), CRR (%).)

1Tr-1Test (349 genuine, 121,452 imposter): Iris 240, 0.66, 0.19, 99.80 | Knuckle 1849, 5, 1.47, 97.80 | Palm 235.33, 0.66, 0.19, 99.90 | Fusion 0, 0, 0, 100
1Tr-4Test (1219 genuine, 424,212 imposter): Iris 476.33, 1.33, 0.11, 99.91 | Knuckle 7300, 21, 1.72, 97.62 | Palm 948.66, 2.66, 0.22, 99.83 | Fusion 0, 0, 0, 100
2Tr-1Test (698 genuine, 242,904 imposter): Iris 464, 1.33, 0.19, 100 | Knuckle 3670, 10.66, 1.51, 99.13 | Palm 464, 1.66, 0.19, 100 | Fusion 0, 0, 0, 100
2Tr-4Test (2438 genuine, 848,424 imposter): Iris 1033, 3, 0.12, 100 | Knuckle 14616, 42, 1.72, 99.23 | Palm 1971.33, 5.66, 0.23, 99.91 | Fusion 0, 0, 0, 100
3Tr-1Test (1047 genuine, 364,356 imposter): Iris 696, 2, 0.19, 100 | Knuckle 5568, 16, 1.72, 99.67 | Palm 694, 2, 0.24, 99.91 | Fusion 0, 0, 0, 100
3Tr-4Test (3657 genuine, 1,276,293 imposter): Iris 1392, 4, 0.10, 100 | Knuckle 21923, 63, 1.72, 99.67 | Palm 3007, 9, 0.24, 99.91 | Fusion 0, 0, 0, 100

Table 7.7: Database Specifications for Interval CASIA (db1) containing images from 349 subjects in (3+4) = 7 different poses. The first 3 images are considered for training and the last 4 are taken as testing.

One can observe that after fusion, the results obtained for all databases under all the different testing strategies tend to become almost perfect (i.e. CRR = 100% with EER ≈ 0%).

Moreover, one can also observe that with the increase in the number of training and testing images, the numbers of genuine and imposter matchings increase significantly, which affects the performance of any unimodal system; yet the performance of each fused system remains almost invariant. Hence, one can also infer that the fusion of multiple traits introduces a great amount of scalability with respect to system performance: more and more matchings can be considered without much performance degradation. This is achieved because several uncorrelated biometric traits are fused, which enhances the uniqueness and discriminative power of the combined sample and makes the fused score more and more discriminative.


(For each trait the four values are FA, FR, EER (%), CRR (%).)

1Tr-1Test (349 genuine, 121,452 imposter): Iris 240, 0.66, 0.19, 99.80 | Knuckle 1849, 5, 1.47, 97.80 | Palm 617.66, 1.66, 0.49, 99.33 | Fusion 0, 0, 0, 100
1Tr-4Test (1219 genuine, 424,212 imposter): Iris 476.33, 1.33, 0.11, 99.91 | Knuckle 7300, 21, 1.72, 97.62 | Palm 1564.66, 4.33, 0.36, 99.67 | Fusion 0, 0, 0, 100
2Tr-1Test (698 genuine, 242,904 imposter): Iris 464, 1.33, 0.19, 100 | Knuckle 3670, 10.66, 1.51, 99.13 | Palm 1325, 3.66, 0.53, 99.52 | Fusion 0, 0, 0, 100
2Tr-4Test (2438 genuine, 848,424 imposter): Iris 1033, 3, 0.12, 100 | Knuckle 14616, 42, 1.72, 99.23 | Palm 3188.66, 9, 0.37, 99.86 | Fusion 0, 0, 0, 100
3Tr-1Test (1047 genuine, 364,356 imposter): Iris 696, 2, 0.19, 100 | Knuckle 5568, 16, 1.72, 99.67 | Palm 2088, 6, 0.57, 99.71 | Fusion 0, 0, 0, 100
3Tr-4Test (3657 genuine, 1,276,293 imposter): Iris 1392, 4, 0.10, 100 | Knuckle 21923, 63, 1.72, 99.67 | Palm 4872, 14, 0.38, 99.91 | Fusion 0, 0, 0, 100

Table 7.8: Database Specifications for Interval PolyU (db2) containing images from 349 subjects in (3+4) = 7 different poses. The first 3 images are considered for training and the last 4 are taken as testing.

Also, as discussed under the testing strategy, some interesting observations can be

made about the unimodal results.

Iris (Interval) and Palmprint (PolyU) databases : The testing images of both these databases do not possess much variation; as a result, their individual performances are good. Owing to this low variation, the multiple image testing strategy affects the results favorably, as one can observe from Tables 7.7, 7.8 and 7.10, where the results of the multiple training and testing image strategy are better than those of the corresponding single image testing scenario.

Iris (Lamp) : The testing images of this database possess much variation, because the images of the second session are taken under lamp light, which makes it difficult for subjects to focus. Also, it is observed that the first image of the second session (the image used for testing) is of poor quality, as it is the first image under lamp illumination and every subject's eye takes some time to adjust to that light. Owing to this higher variation in the testing images, the multiple image testing strategy affects the results adversely, as one can


(For each trait the four values are FA, FR, EER (%), CRR (%).)

1Tr-1Test (566 genuine, 319,790 imposter): Iris 4970.33, 8.66, 1.54, 97.29 | Knuckle 6027.66, 10.66, 1.88, 97.17 | Palm 385.33, 0.66, 0.11, 99.94 | Fusion 0, 0, 0, 100
1Tr-4Test (2264 genuine, 1,279,160 imposter): Iris 16521.33, 29.33, 1.29, 97.76 | Knuckle 24268.66, 43, 1.8982, 97.15 | Palm 2260, 4, 0.17, 99.83 | Fusion 0, 0, 0, 100
2Tr-1Test (1132 genuine, 639,580 imposter): Iris 10735.6, 19, 1.67, 98.93 | Knuckle 12338.6, 22, 1.93, 98.70 | Palm 753.3, 1.33, 0.11, 100 | Fusion 0, 0, 0, 100
2Tr-4Test (4528 genuine, 2,558,320 imposter): Iris 33465.6, 59.33, 1.30, 99.23 | Knuckle 48589, 86, 1.89, 98.86 | Palm 4906.3, 8.66, 0.19, 99.94 | Fusion 0, 0, 0, 100
3Tr-1Test (1698 genuine, 959,370 imposter): Iris 14690, 26, 1.53, 99.38 | Knuckle 20118.5, 35.5, 2.09, 98.85 | Palm 1130, 2, 0.11, 100 | Fusion 0, 0, 0, 100
3Tr-4Test (6792 genuine, 3,837,480 imposter): Iris 51091.5, 90.5, 1.33, 99.51 | Knuckle 74019.5, 131, 1.92, 99.18 | Palm 6095, 11, 0.16, 99.97 | Fusion 0, 0, 0, 100
4Tr-1Test (2264 genuine, 1,279,160 imposter): Iris 18189, 32, 1.41, 99.64 | Knuckle 27119, 48, 2.12, 99.29 | Palm 1130, 2, 0.08, 100 | Fusion 0, 0, 0, 100
4Tr-4Test (9056 genuine, 5,116,640 imposter): Iris 66217, 117, 1.29, 99.77 | Knuckle 99902, 177, 1.95, 99.55 | Palm 7912, 14, 0.15, 100 | Fusion 0, 0, 0, 100

Table 7.9: Database Specifications for Lamp CASIA (db3) containing images from 566 subjects in (4+4) = 8 different poses. The first 4 images are considered for training and the last 4 are taken as testing.


(For each trait the four values are FA, FR, EER (%), CRR (%).)

1Tr-1Test (386 genuine, 148,610 imposter): Iris 2435, 6.33, 1.64, 97.23 | Knuckle 2181, 5.66, 1.46, 98.01 | Palm 1026.6, 2.66, 0.69, 99.13 | Fusion 0, 0, 0, 100
1Tr-6Test (2316 genuine, 891,660 imposter): Iris 11896.6, 31, 1.33, 97.85 | Knuckle 16168.6, 42, 1.81, 97.39 | Palm 5647, 14.66, 0.63, 99.4 | Fusion 0, 0, 0, 100
2Tr-1Test (772 genuine, 297,220 imposter): Iris 5135, 13.33, 1.72, 98.87 | Knuckle 4107, 10.66, 1.38, 99.22 | Palm 2053.3, 5.33, 0.69, 99.48 | Fusion 0, 0, 0, 100
2Tr-6Test (4632 genuine, 1,783,320 imposter): Iris 23873, 62, 1.33, 99.32 | Knuckle 32818, 85.33, 1.84, 98.90 | Palm 1293.6, 29.33, 0.63, 99.78 | Fusion 0.66, 0, 0.000019, 100
3Tr-1Test (1158 genuine, 445,830 imposter): Iris 4715.5, 12.25, 1.05, 99.64 | Knuckle 7122.7, 18.5, 1.59, 99.61 | Palm 4138.2, 10.75, 0.92, 99.54 | Fusion 0, 0, 0, 100
3Tr-6Test (6948 genuine, 2,674,980 imposter): Iris 36720.2, 95.5, 1.37, 99.08 | Knuckle 44253.7, 115, 1.65, 99.56 | Palm 22522.2, 58.5, 0.84, 99.8 | Fusion 103.2, 0.5, 0.0055, 100
4Tr-1Test (1544 genuine, 594,440 imposter): Iris 6288.6, 16.33, 1.05, 99.91 | Knuckle 9624, 25, 1.61, 99.65 | Palm 5580.3, 14.66, 0.94, 99.74 | Fusion 0, 0, 0, 100
4Tr-6Test (9264 genuine, 3,566,640 imposter): Iris 48894, 127, 1.37, 99.35 | Knuckle 58804.3, 152.6, 1.64, 99.78 | Palm 30028.3, 78, 0.84, 99.83 | Fusion 191.33, 0.66, 0.00628, 100
5Tr-1Test (1930 genuine, 743,050 imposter): Iris 8661.5, 22.5, 1.16, 100 | Knuckle 1551.1, 30, 1.55, 99.74 | Palm 6929, 18, 0.93, 99.74 | Fusion 0, 0, 0, 100
5Tr-6Test (11580 genuine, 4,458,300 imposter): Iris 60251, 156.5, 1.35, 99.65 | Knuckle 72765, 189, 1.63, 99.87 | Palm 37416, 97, 0.83, 99.91 | Fusion 374, 1, 0.0085, 100
6Tr-1Test (2316 genuine, 891,660 imposter): Iris 10780, 28, 1.20, 100 | Knuckle 13090, 34, 1.46, 99.74 | Palm 8065, 21, 0.9, 99.74 | Fusion 0, 0, 0, 100
6Tr-6Test (13896 genuine, 5,349,960 imposter): Iris 70833, 184, 1.32, 99.87 | Knuckle 87401, 227, 1.63, 99.87 | Palm 43504, 113, 0.81, 99.91 | Fusion 385, 1, 0.0071, 100

Table 7.10: Database Specifications for Lamp PolyU (db4) containing images from 386 subjects in (6+6) = 12 different poses. The first 6 images are considered for training and the last 6 are taken as testing.


This can be observed from Table 7.10, where the results of the multiple training and testing strategy are poorer than those of the corresponding single-testing-image scenario.

However, when some initial test images are of very poor quality, single-image testing is occasionally found to perform slightly worse, because with multiple test samples the result gets averaged out.

Palmprint (CASIA): The testing images of this database possess much variation, so the multiple-image testing strategy has an adverse effect, as can be observed from Tables 7.7 and 7.9, where the results of the multiple training and testing strategy are poorer than those of the corresponding single-image testing scenario.
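
The fusion columns in Tables 7.7-7.10 combine the per-trait match scores before thresholding. The exact normalisation and fusion rule are described earlier in Chapter 7; as a minimal sketch of score-level fusion, assuming min-max normalisation and an equal-weight sum rule (the weights here are illustrative, not the thesis's tuned values):

```python
import numpy as np

def min_max(scores):
    """Map one matcher's raw dissimilarity scores onto [0, 1] so that
    traits with different score ranges become comparable."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse_scores(iris, knuckle, palm, weights=(1/3, 1/3, 1/3)):
    """Weighted sum-rule fusion of normalised dissimilarity scores.
    Equal weights are placeholders; a deployed system would tune them
    on a validation set."""
    traits = [min_max(t) for t in (iris, knuckle, palm)]
    return sum(w * t for w, t in zip(weights, traits))
```

A genuine/imposter decision is then taken by thresholding the fused score exactly as for a single matcher.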


Chapter 8

Conclusions

This thesis has presented a multimodal recognition system that fuses the iris, knuckleprint and palmprint biometric traits. Personal authentication is required by several financial and security related applications, which expect it to run in real time and with high accuracy. Several state-of-the-art systems take a unimodal approach, using any one biometric trait such as face, iris, palmprint, fingerprint, ear or knuckleprint. However, each trait has its own challenges and trait-specific issues; hence none of them can be considered the best.

Moreover, the performance of any unimodal system is restricted by varying environmental and uncontrolled conditions, and it depends heavily on sensor precision, reliability and data quality. Apart from these, challenges like pose, expression and aging for face, or occlusion for iris, are other factors responsible for poor performance. Hence, fusion of multiple traits is proposed and tested. Multimodal systems are highly accurate and harder to compromise; they are less vulnerable to spoofing and can also deal with missing data. Fusion itself improves system performance significantly, so tuning of the individual unimodal performance is not required. It is also observed that fusion makes the system scalable; hence it can handle more and more matchings without much


performance degradation. However, this performance boost comes at the cost of additional hardware and computation time.

Chapter 4 of the thesis has proposed an iris based recognition system. The iris segmentation is done efficiently using an improved circular Hough transform for inner iris boundary (i.e., pupil) detection. A robust integro-differential operator, which makes use of the pupil location, has been used to detect the outer iris boundary.
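
A condensed sketch of this two-stage segmentation, assuming OpenCV: cv2.HoughCircles stands in for the improved circular Hough transform, the radial search approximates the integro-differential operator, and all radius limits are illustrative:

```python
import cv2
import numpy as np

def segment_iris(gray):
    """Sketch: pupil via a circular Hough transform, outer boundary via
    a coarse integro-differential search seeded at the pupil centre."""
    blur = cv2.medianBlur(gray, 5)
    circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0] // 2, param1=100,
                               param2=30, minRadius=15, maxRadius=60)
    cx, cy, pr = np.round(circles[0, 0]).astype(int)  # assumes a pupil was found

    # Integro-differential idea: the iris/sclera border maximises the
    # radial derivative of the mean intensity on circles around the pupil.
    thetas = np.linspace(0, 2 * np.pi, 64, endpoint=False)
    h, w = gray.shape
    radii = np.arange(pr + 10, min(3 * pr, min(h, w) // 2))
    means = []
    for r in radii:
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, w - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, h - 1)
        means.append(float(blur[ys, xs].mean()))
    kern = np.array([1.0, 4.0, 6.0, 4.0, 1.0]); kern /= kern.sum()
    smooth = np.convolve(means, kern, mode="same")   # Gaussian-like smoothing
    outer_r = radii[1:][np.argmax(np.abs(np.diff(smooth)))]
    return (cx, cy, pr), (cx, cy, int(outer_r))
```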

The iris segmentation accuracy is found to be 94.5% and 94.63% for the Interval and Lamp databases, respectively. The erroneous segmentations are grouped into three categories, viz. occlusion, noise and illumination. A segmentation error hierarchy is created, and parametric variations are discussed to segment the iris correctly; these parametric adjustments have helped to achieve an accuracy of more than 99.6% for both databases. The quality of an acquired iris sample is estimated using six proposed quality assessment parameters, viz. Focus (F), Motion Blur (MB), Occlusion (O), Contrast and Illumination (CI), Dilation (D) and Specular Reflection (SR). If the iris quality is found to be less than a predefined threshold, the sample is recaptured.
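
The six parameter formulations are defined in Chapter 4; purely as a sketch of the recapture gating logic (the scoring functions and thresholds below are placeholders, not the thesis's F, MB, O, CI, D and SR definitions):

```python
def pass_quality(sample, measures, thresholds):
    """Recapture gate: every quality score must clear its threshold.
    `measures` maps a parameter name (e.g. 'focus') to a scoring
    function; both the functions and the thresholds stand in for the
    thesis's actual quality-parameter definitions."""
    return all(fn(sample) >= thresholds[name]
               for name, fn in measures.items())

# while not pass_quality(capture(), measures, thresholds):
#     ...  # prompt the subject and recapture the iris
```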

This early quality assessment is crucial for tackling poor quality and non-ideal imagery. The segmented iris is normalized to polar coordinates (i.e., rectangular strips), and a local image enhancement is applied to obtain improved iris texture. The enhanced images are preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features. KLT based corner features are then extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the two publicly available CASIA 4.0 Interval and Lamp iris databases, consisting of 2,639 and 16,212 images respectively. The parameters are selected in such a way that the performance of the system is maximized over the validation set. The proposed enhancement has shown significant performance improvement in terms of EER. It is found that the CRR (Rank 1 accuracy) of the proposed system is 100% and 99.87%


for the Interval and Lamp databases respectively. Further, its EER for Interval and Lamp is 0.109% and 1.3% respectively, which is better than that of several state-of-the-art systems.
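
The precise LGBP encoding and CIOF measure are defined earlier in the thesis; the sketch below only mirrors the pipeline's shape, with a gradient-domain binary pattern standing in for LGBP, Shi-Tomasi corners for the KLT features, and forward-backward Lucas-Kanade flow consistency standing in for CIOF:

```python
import cv2
import numpy as np

def lgbp_like(gray):
    """Stand-in for LGBP: an 8-neighbour binary pattern computed on
    the Scharr gradient magnitude rather than on raw intensity."""
    gx = cv2.Scharr(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Scharr(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    out = np.zeros(mag.shape, np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(offsets):
        nb = np.roll(np.roll(mag, dy, axis=0), dx, axis=1)
        out |= (nb >= mag).astype(np.uint8) << k
    return out

def ciof_like(img_a, img_b, fb_thresh=1.0):
    """Dissimilarity in the spirit of CIOF: track Shi-Tomasi corners
    from A to B with pyramidal LK flow and report the fraction whose
    forward-backward flow is inconsistent (assumes corners are found)."""
    pts = cv2.goodFeaturesToTrack(img_a, maxCorners=300,
                                  qualityLevel=0.01, minDistance=5)
    fwd, st, _ = cv2.calcOpticalFlowPyrLK(img_a, img_b, pts, None)
    back, st2, _ = cv2.calcOpticalFlowPyrLK(img_b, img_a, fwd, None)
    err = np.linalg.norm((pts - back).reshape(-1, 2), axis=1)
    ok = (st.ravel() == 1) & (st2.ravel() == 1) & (err < fb_thresh)
    return 1.0 - ok.mean()  # smaller = more consistent = better match
```

Here ciof_like(lgbp_like(a), lgbp_like(b)) yields a dissimilarity in [0, 1]; the thesis's actual LGBP and CIOF definitions and thresholds differ in detail.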

Chapter 5 has proposed a knuckleprint based recognition system. The knuckleprint ROI is extracted by applying a modified version of the Gabor filter to estimate the central knuckle line and point; the central knuckle point is then used to extract the knuckleprint ROI from any image consistently.
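
As a rough sketch of this ROI localisation idea, assuming a roughly vertical central knuckle line and a plain (unmodified) OpenCV Gabor kernel:

```python
import cv2
import numpy as np

def central_knuckle_column(gray):
    """Sketch: pool the response of a vertically tuned Gabor filter
    column-wise; the strongest column approximates the central knuckle
    line, around which a fixed-size ROI can then be clipped."""
    # theta = 0 makes the Gabor harmonic vary along x, so the kernel
    # responds most strongly to vertical line structure.
    kern = cv2.getGaborKernel((21, 21), 4.0, 0.0, 10.0, 0.5)
    resp = cv2.filter2D(np.float32(gray), cv2.CV_32F, kern)
    return int(np.argmax(np.abs(resp).sum(axis=0)))
```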

The quality of the acquired knuckleprint sample is estimated using six proposed quality assessment parameters, viz. Focus (F), Clutter (C), Uniformity (S), Entropy (E), Reflection (Re) and Contrast and Illumination (Con). If the quality is less than a predefined threshold, the image is recaptured. The vertical and horizontal knuckle line based features may be of poor quality; hence they are enhanced to obtain better texture. The segmented knuckleprint ROI is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features, and KLT based corner features are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available PolyU knuckleprint database consisting of 7,920 images. For testing, all 6 images of the first session are taken as training, while the images of the second session are used for testing; hence a total of 23,760 genuine and 15,657,840 imposter matchings are performed.

are performed. The segmentation algorithm performs with an accuracy of 94.494%

over PolyU knuckleprint database and is observed to extract the ROI consistently

throughout the database. However, algorithm may fail due to poor image quality,

lack of assumed horizontal knuckle alignment, missing symmetric knuckle texture

and multiple finger knuckle in an image. Such images are segmented manually.

Parameters are selected in such a way that the performance of the system is max-

imized over the validation set. It is found that CRR of the proposed system is

99.79% with an EER of 0.93% over PolyU knuckleprint database which is better


than state-of-the-art systems.

Chapter 6 has proposed a palmprint based recognition system. The palmprint segmentation is done by locating two valley points and clipping a square shaped ROI using these points. The palm features based on principal lines, wrinkles and ridges are enhanced using the proposed enhancement. The segmented palmprint is preprocessed using the proposed LGBP (Local Gradient Binary Pattern) to obtain robust features, and KLT based corner features are extracted and matched using the proposed dissimilarity measure CIOF (Corners having Inconsistent Optical Flow). The proposed system has been tested over the publicly available CASIA and PolyU palmprint databases, consisting of 4,528 and 7,720 images respectively. All intersession matchings are done for performance analysis. The first 4 images of the CASIA palmprint database are taken as training while the rest are kept for testing; hence a total of 9,056 genuine and 5,116,640 imposter matchings are performed. For the PolyU palmprint database, the first 10 images are used for training while the rest are taken as testing; hence a total of 38,600 genuine and 14,861,000 imposter matchings are considered. Parameters are selected in such a way that the performance of the system is maximized over the validation set. It is found that the CRR (Rank 1 accuracy) of the proposed system is 100% and 99.95% for the CASIA and PolyU palmprint databases respectively. Further, its EER for CASIA and PolyU is 0.15% and 0.41% respectively, which is better than that of state-of-the-art systems.

Chapter 7 has discussed the multimodal recognition system. The major focus is to create multimodal databases and to perform their experimental analysis. We have evaluated different multimodal systems by fusing iris and knuckleprint; knuckleprint and palmprint; palmprint and iris; and finally iris, knuckleprint and palmprint together. Different testing strategies have been adopted, including intersession matching, single training with single testing, and multiple training with multiple testing. Two publicly available iris databases (CASIA Interval and Lamp) are fused with two public palmprint


databases (CASIA and PolyU), while for knuckleprint the publicly available PolyU database has been used. Hence, 4 tri-modal databases are generated for testing.
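
Since no true tri-modal corpus exists for these traits, virtual (chimeric) subjects are formed by pairing records across the unimodal databases. A minimal sketch, assuming subjects are simply aligned by index (the thesis's exact pairing rule may differ):

```python
def chimeric_subjects(iris_ids, knuckle_ids, palm_ids):
    """Pair the i-th subject of each unimodal database into one
    virtual tri-modal subject; the usable population is capped by
    the smallest database."""
    n = min(len(iris_ids), len(knuckle_ids), len(palm_ids))
    return list(zip(iris_ids[:n], knuckle_ids[:n], palm_ids[:n]))
```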

It has been observed that the trimodal system shows almost perfect ROC behavior (i.e., CRR = 100% and EER = 0%) for the different testing strategies.



Publications

12. Aditya Nigam, Vamshi Krishna and Phalguni Gupta, "Iris Recognition Using Block Local Binary Patterns and Relational Measures", International Joint Conference on Biometrics (IJCB 2014), Clearwater, Florida, USA, 29 September - 2 October 2014.

11. Aditya Nigam, Lovish, Amit Bendale and Phalguni Gupta, "Iris Recognition Using Relational Measures", IAPR 6th International Workshop on Computational Forensics (ICPR 2014 Workshop), Stockholm, Sweden, August 24, 2014.

10. Aditya Nigam and Phalguni Gupta, "Multimodal Personal Authentication Using Iris and Knuckleprint", 10th International Conference on Intelligent Computing (ICIC 2014), Taiyuan, China, August 3-6, Volume 8588 of LNCS, pp. 819-825, 2014.

9. Aditya Nigam and Phalguni Gupta, "Designing an Accurate Hand Biometric Based Authentication System Fusing Finger Knuckleprint with Palmprint", Neurocomputing, Elsevier (Impact Factor: 1.634), [Accepted].

8. Aditya Nigam and Phalguni Gupta, "Quality Assessment of Knuckleprint Biometric Images", IEEE 20th International Conference on Image Processing (ICIP 2013), Melbourne, Australia, September 15-18, pp. 4205-4209, 2013.

7. Aditya Nigam and Phalguni Gupta, "Multimodal Personal Authentication System Fusing Palmprint and Knuckleprint", 9th International Conference on Intelligent Computing (ICIC 2013), Nanning, China, July 28-31, Volume 375 of Springer, pp. 188-193, 2013.

6. Aditya Nigam, Anvesh T. and Phalguni Gupta, "Iris Classification Based on its Quality", 9th International Conference on Intelligent Computing (ICIC 2013), Nanning, China, July 28-31, Volume 7995 of LNCS, pp. 443-452, 2013.

5. Aditya Nigam and Phalguni Gupta, "Palmprint Recognition using Geometrical and Statistical Constraints", International Conference on Soft Computing for Problem Solving (SocProS 2012), Jaipur, India, December 28-30, Volume 236 of Springer, pp. 1303-1315, 2012.

4. Aditya Nigam and Phalguni Gupta, "Iris Recognition using Consistent Corner Optical Flow", 11th Asian Conference on Computer Vision (ACCV 2012), Daejeon, Korea, November 5-9, Volume 7724 of LNCS, pp. 358-369, 2012.

3. Amit Bendale, Aditya Nigam, Surya Prakash and Phalguni Gupta, "Iris Segmentation using an Improved Hough Transform", 8th International Conference on Intelligent Computing (ICIC 2012), Huangshan, China, July 25-29, Volume 304 of Springer, pp. 408-415, 2012.

2. Aditya Nigam and Phalguni Gupta, "Knuckleprint Recognition using Feature Tracking", 6th Chinese Conference on Biometric Recognition (CCBR 2011), Beijing, China, December 3-4, Volume 7098 of LNCS, pp. 125-132, 2011.

1. G.S. Badrinath, Aditya Nigam and Phalguni Gupta, "An Efficient Finger-knuckle-print based Recognition System Fusing SIFT and SURF Matching Scores", 13th International Conference on Information and Communications Security (ICICS 2011), Beijing, China, November 23-26, Volume 7043 of LNCS, pp. 374-387, 2011.