
Collecting Multimodal Biometric Data

Ross J. Micheals

Image Group, Charlie Wilson, Manager
Information Access Division, Martin Herman, Chief

National Institute of Standards and Technology

International Meeting of Biometrics Experts, 23 March 2004

The United States government has no multimodal database of face, fingerprint, and iris images suitable for evaluation.

Challenges

Multimodal Biometrics

Initial motivation: Collect an iris image database

Data collections have substantial fixed costs

Additional sensors add relatively little marginal cost

Extension of original goal: Collect a multimodal biometric database

Iris Recognition

Iris images are an ICAO (International Civil Aviation Organization) approved biometric

Large market expansion anticipated in early 2005 at expiration of iris recognition concept patent

Iris recognition systems have been deployed internationally and are in operation today

Multimodal Biometrics

There are inherent correlations among different biometric modalities

NIST Face Recognition Vendor Test
• Young females (face vs. fingerprints)
• Chinese (face vs. iris?)

More data is an opportunity to discover additional relationships

Multimodal data is being collected right now, every day (US-VISIT)

MBARK: Multimodal Biometric Accuracy Research Kiosk

MBARK is an externally deployable, multimodal biometric acquisition and information system

NIST as the maintainer, synchronizer, and gatekeeper

Two major purposes:
• To collect biometric data
• To obtain data about collecting biometrics

Multi-agency project
• Department of Homeland Security (S&T, TSA)
• Intelligence Technology Innovation Center (ITIC)
• Department of State

MBARK: Multimodal Biometric Accuracy Research Kiosk

Current goal for one MBARK session (encoded in the sketch below):
• Eighteen face images (two sets of nine each)
• Forty fingerprints (two sets on two sensors)
• Four iris images (two sets of two each)
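A minimal sketch of how one session's acquisition targets might be encoded. The data structure and names are illustrative, not the actual MBARK software, and the two-sets-of-ten-per-scanner fingerprint split is an assumption consistent with "forty fingerprints, two sets on two sensors."

```python
from dataclasses import dataclass

@dataclass
class AcquisitionTarget:
    modality: str        # "face", "fingerprint", or "iris"
    sensor: str          # sensor named on the slides
    sets: int            # number of repeated sets
    images_per_set: int

    @property
    def total(self) -> int:
        return self.sets * self.images_per_set

# One session: 18 face + 40 fingerprint + 4 iris images, per the goals above.
# The per-scanner fingerprint breakdown is an assumption, not from the slides.
SESSION_PLAN = [
    AcquisitionTarget("face", "Olympus C5050Z nine-camera array", sets=2, images_per_set=9),
    AcquisitionTarget("fingerprint", "Smiths-Heimann LS2", sets=2, images_per_set=10),
    AcquisitionTarget("fingerprint", "CrossMatch ID500", sets=2, images_per_set=10),
    AcquisitionTarget("iris", "Oki IrisPass-WG", sets=2, images_per_set=2),
]

assert sum(t.total for t in SESSION_PLAN) == 62  # 18 + 40 + 4
```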

Most of the data will be sequestered for use in future evaluations

Small portions of the data will be released for scientific and research purposes

Aside: Privacy Rule of Thumb

“Would we want to be in the database?”

Suppose we release face, fingerprint, and iris images of a subject in the database. It is critical to ensure that multiple modalities could not be synchronized outside of NIST and Privacy Act protection.

Conclusion: Release one and only one modality per subject externally (a minimal sketch of this rule follows).
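A minimal sketch of the one-modality-per-subject release rule, assuming a random per-subject assignment. The function names and the assignment scheme are illustrative, not NIST's actual procedure.

```python
# Sketch only: restrict each subject's externally released data to a single
# modality so face, fingerprint, and iris records cannot be re-linked
# outside NIST and Privacy Act protection.
import random

MODALITIES = ("face", "fingerprint", "iris")

def assign_release_modality(subject_ids, seed=None):
    """Map each subject ID to the one modality cleared for external release."""
    rng = random.Random(seed)
    return {sid: rng.choice(MODALITIES) for sid in subject_ids}

def releasable(record, assignment):
    """A record may leave NIST only if it matches its subject's assigned modality."""
    return assignment.get(record["subject_id"]) == record["modality"]
```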

Research & Operational Needs

Data collections should address a real operational need or a specific research question

Data collected to evaluate a deployed system would be an operational motivation

The design of MBARK reflects a mixture of operational and research needs

MBARK
• Face: Operational and research
• Fingerprint & Iris: Operational

MBARK: Face

• Nine color cameras
• Five megapixels per image
• Olympus C5050Z
• Some reliability problems

Operational
• Multiple images

Research
• Multiple images (FRVT 2002)
• Texture-based
• Image-based 3D

MBARK: Fingerprint

• Optical slap scanners
• Smiths-Heimann LS2
• CrossMatch ID500

Operational
• Ohio WebCheck
• Sensor comparisons

MBARK: Iris

• Oki IrisPass-WG
• Near-infrared illumination
• Grayscale iris images
• Two irises in one sitting
• User does not need to manipulate the camera
• Primarily an operationally driven component

MBARK: Registration

• Identification of subjects returning later
• Using a well-studied model (US-VISIT) as an aid to identify subjects on return visits
• Single fingerprint scanner: CrossMatch Verifier 300

Open Systems

NIST evaluations typically emphasize open systems

• Ensures interoperability among components
• Prevents deployments from being locked into any particular vendor
• Requires component evaluations
• Example: the Face Recognition Vendor Test (FRVT) and the Fingerprint Vendor Technology Evaluation (FpVTE) compared algorithms over a set of common images (a sketch of this pattern follows)
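A sketch of the common-image-set pattern behind FRVT and FpVTE: every vendor algorithm implements one shared interface and is scored over identical trials. The Matcher interface, the fixed threshold, and the simple accuracy metric are illustrative assumptions, not the actual test harness.

```python
# Sketch of a component evaluation over a common image set.
from typing import Protocol

class Matcher(Protocol):
    def compare(self, probe_image: bytes, gallery_image: bytes) -> float:
        """Return a similarity score (higher means more similar)."""
        ...

def evaluate(matchers: dict[str, Matcher], trials) -> dict[str, float]:
    """Score each algorithm over the same (probe, gallery, same_subject) trials."""
    results = {}
    for name, matcher in matchers.items():
        correct = 0
        for probe, gallery, same_subject in trials:
            score = matcher.compare(probe, gallery)
            # A fixed threshold stands in for a full DET-curve analysis.
            if (score >= 0.5) == same_subject:
                correct += 1
        results[name] = correct / len(trials)
    return results
```

Because every algorithm sees the same images, differences in the scores reflect the algorithms themselves rather than differences in capture hardware.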

System vs. Component Evaluation

Iris-recognition market is system oriented

I.e., what you buy is meant to be used in an end-to-end system, rather than an interoperable component

How does this affect image-based evaluations?

Hypothetical example:
• “MH Electrics,” iris camera manufacturer
• “EyeRidian,” iris recognition software

[Diagram: an end-to-end iris recognition system composed of the Iris Camera (Model MH 5000), control software, and the Iris Recognition Algorithm (EyeRidian v1.2)]

Recognition and Image Quality

For “high” quality images, the recognition rate is 99.99%.

But suppose only 70% of all data is high quality (see the arithmetic sketch below).
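Back-of-envelope arithmetic for this scenario. The 40% recognition rate assumed for the lower-quality images is illustrative, not from the slides; the point is that the headline figure only describes the filtered subset.

```python
# Weighted accuracy over all captures, not just the "high" quality ones.
p_high = 0.70       # fraction of captures that are "high" quality
acc_high = 0.9999   # recognition rate reported on high-quality images
acc_low = 0.40      # assumed (illustrative) rate on the remaining 30%

overall = p_high * acc_high + (1 - p_high) * acc_low
print(f"Overall recognition rate: {overall:.2%}")  # ~82%, far below 99.99%
```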

[Diagram: the same iris recognition system, now with image-quality screening performed inside the MH control software, between the Model MH 5000 camera and the EyeRidian v1.2 algorithm]

What about the 30% of images that are not “high” quality?

How might other algorithms do on these images?

If there are no images of sufficient quality, the sensor reports a failure to acquire (FTA).

FTA data is usually not available for image-based evaluations.
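One way to make the FTA problem concrete is to fold acquisition failures back into the reported error rate, in the style of a "generalized" error rate. The sketch below treats each FTA as a rejection of a genuine user; the function name and example numbers are illustrative.

```python
# A sensor that silently drops its hardest captures as FTA looks far better
# in an image-based evaluation than it performs end to end.
def generalized_false_reject_rate(fta: float, frr_on_acquired: float) -> float:
    """Count every failure to acquire as a rejection of a genuine user."""
    return fta + (1.0 - fta) * frr_on_acquired

# Image-based evaluation, where FTA is invisible: FRR looks like 0.01%.
print(generalized_false_reject_rate(fta=0.0, frr_on_acquired=0.0001))   # 0.0001
# Same matcher when 30% of captures never produce an image at all:
print(generalized_false_reject_rate(fta=0.30, frr_on_acquired=0.0001))  # ~0.30
```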


Conclusion

In component testing, be aware of the internals of each component and how evaluations might be affected

For some modalities, we can reduce bias by using a mix of sensors
• Example: Many fingerprint scanners, all with different control logic

For other modalities, testing components requires more sensitivity

The degree to which this bias can be reduced depends on the state of the market and vendor support

Questions?
