
1154 IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS: SYSTEMS, VOL. 43, NO. 5, SEPTEMBER 2013

Handheld System Design for Dual-Eye Multispectral Iris Capture With One Camera

Yazhuo Gong, David Zhang, Fellow, IEEE, Pengfei Shi, Senior Member, IEEE, and Jingqi Yan

Abstract—This paper describes the design and implementation of a dual-eye multispectral iris capture handheld device with one camera, which consists of the following four parts: 1) capture unit; 2) illumination unit; 3) interaction unit; and 4) control unit. A multispectral iris image database is created with the proposed capture device, and we then use multispectral score-level fusion, based on the 1-D Log-Gabor wavelet filter approach, to further investigate the effectiveness of the device. Experimental results are also presented in this paper.

Index Terms—Iris capture device, iris recognition, multispectrum.

I. INTRODUCTION

BIOMETRICS has become more important with the increasing demand for security. Iris recognition is one of the most reliable and accurate biometric technologies in terms of identification and verification performance. It mainly uses iris patterns to recognize and distinguish individuals, since the pattern variability among different persons is enormous. In addition, as an internal organ of the eye, the iris is well protected from the environment and is stable over time. The amount of iris texture information greatly affects the performance of the recognition algorithm. Iris recognition has already been used at large scale in several countries, such as India, Mexico, the United Arab Emirates, and Afghanistan.

A critical step in an iris recognition system is designing an iris capture device that can capture iris images in a short time. Several research groups [1]–[5], such as Oki Electric Industry Co., Ltd., LG Electric Co., Ltd., Panasonic, and Crossmatch, have explored the requirements of iris image acquisition systems, and some implementations have already been put into commercial practice [6]–[10]. The Institute of Image Processing and Pattern Recognition at Shanghai Jiao Tong University also developed a contactless auto-feedback iris capture system [11]. These techniques each have their limitations. All of these capture devices operate predominantly in a single

Manuscript received March 15, 2012; revised July 23, 2012 and September 28, 2012; accepted October 9, 2012. Date of publication March 7, 2013; date of current version August 14, 2013. This work was supported in part by the Natural Science Foundation of China Overseas Fund under Grant 61020106004, by the China National 973 Project under Grant 2011CB302203, by the General Research Fund from the Hong Kong Special Administrative Region Government, and by the central fund from Hong Kong Polytechnic University. This paper was recommended by Associate Editor J. Wu.

Y. Gong, P. Shi, and J. Yan are with the Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200240, China (e-mail: [email protected]; [email protected]; [email protected]).

D. Zhang is with the Biometrics Research Centre, Department of Computing, Hong Kong Polytechnic University, Kowloon, Hong Kong (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available onlineat http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCA.2012.2227958

band of the near-infrared (NIR) range of the electromagnetic spectrum, using wavelengths that peak around 850 nm with a narrow bandpass. Operating at 850 nm has several strengths, such as alleviating physical discomfort from the illumination, reducing specular reflections, and increasing the amount of texture captured for some iris colors. Commercial iris recognition systems operate in a single NIR band for another primary reason: it simplifies system design and reduces production costs.

However, the textural information of the iris has complex components, mainly comprising two kinds of texture: texture from structures and texture from pigments. Light at 850 nm can penetrate the pigments, revealing structural texture that cannot easily be observed in the visible spectrum. However, the effect of the pigments, the major color-inducing compounds, is negligible at 850 nm. Therefore, the iris images captured by previous devices have insufficient textural information, lacking the texture generated by the pigments.

These findings have led researchers to consider that iris textures generated outside 850 nm may carry more information than those generated only in the NIR spectrum, because pigments respond at shorter wavelengths and become another major source of iris texture.

Previous research has shown that matching performance is not invariant to iris color and can be improved by imaging outside the NIR spectrum, and that the physiological properties of the iris (e.g., the amount and distribution of pigments) affect the transmission, absorbance, and reflectance of different portions of the electromagnetic spectrum, and hence the ability to image well-defined iris textures. Performing multispectral fusion at the score level has proven feasible, and the multispectral information can be used to determine the authenticity of the imaged iris [12].

Some research groups [13]–[17] have explored single-eye multispectral iris image capture devices, which were designed for experimental data collection only and fall far short of the requirements of real usage scenarios. All of the previous multispectral devices capture the iris image from one eye at a time. They switch the light source or filter manually, adjust the lens focal length manually, and use a chinrest, forcing an uncomfortable fixed head position. To some extent, most multispectral devices demand full cooperation from a subject who must be trained in advance, which ultimately increases acquisition time and lowers user acceptability. Subjects tire easily and may blink, rotate their eyes, or subconsciously dilate and constrict the pupil, all of which interfere with iris image quality. Taking these interference factors in iris image acquisition into account, acquiring only one iris cannot

2168-2216 © 2013 IEEE


TABLE I. COMPARISON OF PREVIOUS HANDHELD DUAL-EYE CAPTURE DEVICES

ensure the quality of the iris image, and the accuracy of iris recognition suffers, particularly in application scenarios with higher security requirements that combine left- and right-iris recognition for personal authentication. A practical system should therefore capture iris images from both eyes jointly, to guarantee that at least one side yields a high-quality iris image.

Although single-eye systems benefit from simplicity and lower cost, owing to reduced optics, sensors, and computational needs, scanning one eye at a time implies a sequential activity that increases enrollment time and risks left–right misassignment. Dual-eye systems are more complex because a larger space must be imaged, processed, and segmented to isolate each iris, but the advantages are also obvious: enrollment throughput can be faster, with less risk of left–right iris swapping, and there is more flexibility in obtaining the best-quality iris image from either of the two eyes.

In a dual-eye system, the two iris codes from the left and right eyes are not fused: the code from the left eye is compared exhaustively against all entries in the left-eye database, and the code from the right eye against all entries in the right-eye database. Two recognition decisions, d_left and d_right, are therefore made, one per eye, and the final decision-making policy depends on the level of security that users choose: under Policy 1, the identity of the individual is determined by either eye, d_final = d_left OR d_right; under Policy 2, the identity is determined by both eyes, d_final = d_left AND d_right. The probability of misidentification under Policy 2 is reduced by many orders of magnitude.
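The two decision policies can be written down directly. The sketch below (ours, not the authors' code) also shows why Policy 2 reduces misidentification so sharply: for independent eyes with per-eye false-match rate p, requiring both eyes to match gives a combined rate of p squared.

```python
# Sketch of the two dual-eye decision policies described above.
# d_left / d_right are the independent per-eye match decisions;
# the policy names follow the paper, everything else is illustrative.

def final_decision(d_left, d_right, policy):
    """Combine the left- and right-eye decisions.

    Policy 1 (convenience): accept if EITHER eye matches.
    Policy 2 (security):    accept only if BOTH eyes match.
    """
    if policy == 1:
        return d_left or d_right
    if policy == 2:
        return d_left and d_right
    raise ValueError("policy must be 1 or 2")

# Why Policy 2 cuts misidentification by orders of magnitude:
# if each eye has false-match rate p, independently, then requiring
# both eyes to match gives a combined false-match rate of p * p.
p = 1e-6          # illustrative per-eye false-match rate
p_policy2 = p * p # about 1e-12: many orders of magnitude smaller
```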

Some commercial groups [18]–[20] have explored handheld dual-eye iris image capture devices (see Table I). All of the previous dual-eye devices are based on two cameras and designed to capture the iris image under only one wavelength band; lacking multispectral image capture, they cannot be used for multispectral iris recognition.

A simple dual-eye capture device can be built from two single-eye cameras connected together with a bracket of some kind (see Fig. 1). Two slide bars mounted on a tripod keep the

Fig. 1. Two cameras with slide bar.

lens axes parallel and avoid rotation as the cameras are adjusted between the first and second iris images. For the best image quality, shutter controls should be added so that both cameras can be triggered at the same time. Even so, because of slight variations in camera shutter speeds and exposures, there is no guarantee that the image quality (such as brightness, contrast, and gray-level distribution) of the two iris images is exactly the same. The downside is that the slide bar and tripod must be carried along, and the process of system calibration (including the magnification and focus accuracy of the two sets of lenses and the parallelism of the main optical axes) is cumbersome enough to reduce the efficiency of image acquisition. The "two-camera" structure is too bulky to be used directly in handheld iris capture devices. The previous dual-eye devices in Table I are therefore all based on a similarly optimized but more compact design.

A more advanced dual-eye capture device can be built with only one camera. The iris recognition standards (International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC), 2004) recommend 200 pixels across the iris as the minimum acceptable resolution for level 2 iris features (intermediate iris structure sufficient to identify prominent instances of radial furrows, concentric furrows, pigment spots, crypts of Fuchs, and degenerations due to medical conditions or damage). At this writing (2012), the highest resolution commercial video cameras are of the order of 4096 × 4096 pixels (16 million pixels), so they can support a field of view of approximately 20 × 20 cm, only just able to cover the width of the two eyes with a very small margin on the left and


right sides. The "one-camera" design has so far been used only in large iris access-control systems in train stations, libraries, and airports, such as Sarnoff's Iris on the Move, and it is too expensive to be acceptable on a very large scale in the emerging handheld iris capture device market.

To avoid the design problems noted earlier, we abandoned the "two-camera" design and improved the "one-camera" design. Using a set of refraction lenses, we enlarged the effective resolution of a 2-megapixel camera so that it covers the width of the two eyes within 1600 × 1200 pixels, captures two iris images with high resolution and high image quality, and meets the ISO/IEC 2004 iris recognition standard (200 pixels across the iris, the minimum acceptable resolution for level 2 iris features). In this paper, we present a dual-eye multispectral iris capture handheld system that collects two iris images simultaneously with a single 2-megapixel camera, to explore the feasibility of a dual-eye multispectral handheld capture system with high efficiency and low cost. Using this system, a complete capture cycle (including six images, from the left and right irises under three different wavelength bands) can be completed within 2 or 3 s, much faster than previous devices. In addition, the system is not an isolated acquisition device: connected to the server running the recognition algorithm, it can complete a full process of multispectral online iris identification, which is the first attempt at a practical application of multispectral dual-iris recognition.

The capture system consists of the following four parts: 1) capture unit; 2) illumination unit; 3) interaction unit; and 4) control unit. It uses a Micron 1/3.2-in CMOS camera, an automatic focusing lens, and a set of refraction lenses as the capture unit, with a working distance of about 160–200 mm. Two groups of matrix-arrayed light-emitting diodes (LEDs) spanning three different wavelengths (including visible and NIR light) serve as the multispectral illumination unit. We design an interaction unit, including an IR distance-measuring sensor and a speaker, that guides the subject into the exact focusing range of the lens through real-time voice prompts.

Compared with the traditional "two-camera" design, the proposed design has three advantages: 1) No synchronization control between two cameras is needed, so the circuit board is simpler, smaller, and more reliable. 2) One camera is used rather than two, so the device is more energy efficient, has longer battery life, and is more easily made into a handheld portable device; some previous "two-camera" capture devices, such as Iritech's DB300 and CMITech's MDX-10 in Table I, consume much more power than one universal serial bus (USB) port can supply, need to be powered by two USB ports or an external dc supply, and are completely unusable without an external fixed power source. 3) No cumbersome two-camera calibration is needed (including the magnification and focus accuracy of two sets of lenses, the parallelism of the main optical axes, brightness, contrast, shutter speeds, and exposure levels), and the imaging parameters for the left and right eyes are exactly the same.

Compared with the traditional "one-camera" design, the proposed design has three advantages: 1) A low-resolution camera is used, so the system cost is significantly lower. 2) Under conditions of limited transmission bandwidth, low-resolution video streams have higher frame rates, so the system is better at capturing iris images from moving eyes and more resistant to eye-movement interference such as blinking, eye rotation, and subconscious dilation and constriction of the pupil. 3) In Daugman's method, detecting and localizing the eye, the iris, and the eyelids consumes the largest share of processing time, almost 40% of the total. Using the same iris detection and localization algorithm on the same computing platform, low-resolution iris images take significantly less computation time (the complexity of detection and localization is directly related to the size of the iris image), so the real-time performance of the iris recognition system is greatly improved.

In summary, compared with the two traditional designs, "two-camera" and "one-camera", the proposed design has clear advantages. The proposed dual-eye capture device benefits from simplicity and lower cost because of reduced optics, sensors, and computational needs. The system is designed to deliver good performance at a reasonable price, making it suitable for civilian personal identification applications. This research represents the first attempt in the literature to design a dual-eye multispectral iris capture handheld system based on a single low-resolution camera.

The remainder of this paper is organized as follows. Section II describes the design of the dual-iris capture device. Section III reports experiments and results. Section IV presents the performance analysis. Section V concludes this paper.

II. PROPOSED IRIS CAPTURE DEVICE

One of the major challenges of a multispectral dual-iris capture system is capturing two iris images, of the left and right irises, while switching the wavelength band of the illumination. Realizing the capture device is quite complicated, for it integrates optics, mechanics, and electronics on one platform and involves multiple design and manufacturing processes.

A multispectral iris image capture system is proposed. The design of the capture device includes the following four subcomponents: 1) capture unit; 2) illumination unit; 3) interaction unit; and 4) control unit. The capture, illumination, and interaction units constitute the main body of the capture device, which is installed on the three-way pan-tilt head of a tripod so that subjects can manually adjust the pitch angle to fit their height during capture. The control unit operates within the capture system, in charge of synchronizing the other three units and exchanging data with the iris recognition server.

We chose the following configuration for our system: a single camera with multiple narrow-band illuminators. The illuminators are controlled by the Advanced RISC Machine (ARM) main board and can switch automatically, synchronized with the lens focus and the CMOS camera's shutter. This approach enables dual-eye collection of multispectral iris images.


Fig. 2. Optical path. (a) Side view. (b) Front view.

Fig. 3. Composition of capture unit.

The optical path is as follows: first, the subject watches the capture window, and the multispectral light from the matrix-arrayed illuminators on both sides is delivered to the eyes. The light reflected from the subject's eyes is collected through a set of refraction lenses, passes through a narrow-band filter, and is imaged by the Micron CMOS camera through the autofocus (AF) lens. In addition, the light beam from the IR distance sensor is concentrated on the focal plane, where the subject's eyes are located. The optical path of the proposed capture device is shown in Fig. 2(a) and (b).

A. Capture Unit

The capture unit that we propose is composed of five parts: the Micron CMOS sensor, AF lens, refraction lens, narrow-band filter, and protection glass, as shown in Fig. 3.

The camera, built around the Micron MT9D131 CMOS sensor, uses a USB interface and has exceptional features, including high resolution (working at 1600 × 1200), high sensitivity (responsivity = 1.0 V/lx·s), and a high frame rate (15 fps at full resolution), all of which are important for multispectral imaging. The CMOS sensor's spectral response over the 400–1000-nm


Fig. 4. Relative sensor response for the Micron MT9D131.

wavelength range, covering visible and IR light, is not perfectly uniform (see Fig. 4), but we verified that, after optimization of the multispectral system, the CMOS response does not introduce significant errors into the experimental values.

In an iris capture system, the acquisition of iris images almost always begins in poor focus. It is therefore desirable to compute focus scores for image frames very rapidly, to control a moving lens element for autofocusing. The AF lens is based on the New Scale M3-F Focus Module [21] and is controlled by the computer via the I2C interface; the computer runs a 2-D fast focus-assessment algorithm to estimate the focus quality of a multispectral image and to indicate the direction in which the lens focus should move. The focus range of the AF lens is set from 160 to 200 mm, so we can capture iris images in acceptably sharp focus within this distance range, without strict requirements on the subject's position and cooperation.

The set of refraction lenses is designed to capture two iris images, from the left and right eyes, simultaneously, and ensures that each iris image has enough effective resolution. The standards (ISO/IEC 19794-6: Information technology—Biometric data interchange formats—Iris image data) recommend a resolution of more than 200 pixels across the iris and demand at least 100 pixels. Many experiments have demonstrated that reducing the effective resolution of an iris image increases the Hamming distance between the reduced-resolution image and a full-resolution image of the same eye [22]. Effective resolution is determined by the resolution of the camera and the focal length of the lens. The maximum resolution of the Micron CMOS camera is 1600 × 1200. Following the standard of 200 pixels across the iris, one image can cover a width of only 80 mm. The interpupillary distance is generally 65–70 mm and the iris diameter is 12 mm, so, taking the necessary margin into account, 80 mm is not enough to cover the area of both eyes.
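The coverage argument above can be checked with simple arithmetic; the figures below are taken from the text, and the variable names are ours.

```python
# Back-of-the-envelope check of the resolution argument above.
# All numeric figures come from the text; the names are illustrative.

sensor_width_px = 1600      # Micron MT9D131 maximum image width
px_across_iris = 200        # ISO/IEC-recommended minimum across the iris
iris_diameter_mm = 12.0
interpupillary_mm = 70.0    # upper end of the typical 65-70 mm range

# Width (mm) one frame can cover at 200 px across the iris,
# as stated in the text:
coverage_mm = 80.0

# Span needed to image both irises edge to edge, with no margin at all:
needed_mm = interpupillary_mm + iris_diameter_mm  # 82 mm

# 82 mm > 80 mm: a single plain frame cannot cover both eyes,
# which is why the refraction lenses are introduced.
assert needed_mm > coverage_mm
```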

We use a set of refraction lenses to enlarge the effective resolution of the iris area in the image (see Fig. 5). The refraction lens is made up of four triangular prisms (two central prisms and two lateral prisms) and the prism housing. The central prisms must remain in contact with each other, so as to collect the two optical paths (each corresponding to one iris), combine them into a light bundle a few millimeters in

Fig. 5. Refraction lens. (a) Optical scheme of the refraction lens. (b) Wire-mesh view of the prism housing.

diameter, and deliver it to the surface of the CMOS sensor via the AF lens [see Fig. 5(a)]. The lateral prisms, in turn, must follow the average interpupillary distance, preset to 65 mm in this work, to aim accurately at the left and right eyes. The area between the two eyes, such as the nose, is not acquired into the image, to save the limited resolution. The image width of 1600 pixels is divided equally into two parts, and in each part the iris is located at the center. Each prism is mounted inside a short square tube that slides inside the main tube of the prism housing. Fig. 5(b) shows the prism housing structure and its contents; the screws, support plates, and linkage tube are omitted for clarity.

The narrow-band filter is a customized coated filter matched to the wavelengths of the illuminators. It transmits light at the three specified wavelengths and reflects light at all other wavelengths. The three high-transmittance wavelengths are 700, 780, and 850 nm, under which the iris is imaged by the CMOS sensor.

The output of the capture unit is the sequence of multispectral iris images received by the CMOS sensor. These images are transmitted from the ARM main board to the server (the image-processing host computer) via the USB 2.0 interface for iris recognition.


Fig. 6. Composition of illumination unit.

B. Illumination Unit

The illumination system is composed of two groups, located at the bottom side of the lens; each group includes nine matrix-arrayed (3 × 3) LEDs corresponding to the three wavelengths 700, 780, and 850 nm (see Fig. 6). The wavelengths of the illuminators can be switched automatically, illuminating the captured iris at a certain angle. The light of the left LED group is delivered to the right iris, and the light of the right LED group to the left iris, which increases the angle of incidence in order to reduce interference from obvious reflections in glasses or sunglasses. In real usage environments, users often attempt iris recognition while wearing glasses or sunglasses, so improving the capture device's resistance to reflection interference is essential. To further avoid glinting, we optimized the illuminator design: the arrangement and angle of each LED are specially chosen. As a result, the glint is confined to the pupil area in most cases and does not affect localization or recognition.

The selection of the aforementioned three wavelengths is based mainly on two reasons. First, we found that three clusters are enough to represent all wavelengths of interest across the visible and IR spectrum; the choice of 700, 780, and 850 nm is optimized to ensure adequate coverage of the full spectrum and diversity of the iris texture. Second, there are practical considerations. The proposed capture device will be developed into a practical multispectral dual-iris recognition system, not just an experimental device, so all selected wavelengths must be easily accessible in the system. Selecting more wavelengths might yield slightly more accurate recognition, but it would also mean much longer image acquisition and much lower user acceptability. In summary, the selected three wavelengths are the best choice in terms of both experimental data integrity and the feasibility of a practical system. In addition, the luminance levels of the experimental setup described here meet the exposure limits in American National Standards Institute/Illuminating Engineering Society of North America RP-27.1-05 under the condition of a weak aversion stimulus.

C. Interaction Unit

The interaction unit is included in our iris capture device to make capturing iris images easy. It is very

Fig. 7. Composition of interaction unit.

convenient for subjects to adjust their poses according to the feedback information. The interaction unit that we propose is composed of four parts: a checkerboard-stimulus organic light-emitting diode (OLED) display, an IR distance-measuring sensor, a distance guide indicator, and a speaker (see Fig. 7).

Pupil dilation is the most common quality defect in iris images and a serious interference factor, particularly for multispectral iris fusion and recognition. Hollingsworth et al. [23] studied the effect of texture deformations caused by pupil dilation on the accuracy of iris biometrics and found that, when matching two iris images (enrollment and recognition) of the same person, larger differences in pupil dilation yield higher template dissimilarities and, therefore, a greater chance of a false nonmatch. That research demonstrated that the nonlinear texture deformations caused by pupil dilation are sometimes serious enough to make iris recognition algorithms reach the wrong decision.

In the proposed device, we design a checkerboard stimulus to address the problem of pupil dilation. Researchers [24] have found that human pupillary constriction can be evoked by visual spatial patterns, such as a checkerboard. In the proposed device, the checkerboard is an OLED screen on which the reversal pattern is a 2 × 2 matrix that flickers between black-on-white and white-on-black (inverted contrast) at a certain frequency, without changing the illumination intensity on the eyes. When the subject watches the checkerboard stimulus, pupillary constriction is evoked. We capture the multispectral iris images with a similar degree of pupil dilation to minimize the interference caused by dilation. Iris images captured under pupillary constriction are more suitable for analysis and recognition and can be filtered automatically by the iris segmentation algorithm.
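A minimal sketch of the reversal-pattern stimulus described above; the flicker frequency and the pixel encoding are illustrative assumptions. Because each phase has exactly half its cells white, inverting the contrast leaves the mean luminance on the eyes unchanged.

```python
# Sketch of the 2 x 2 reversal-pattern checkerboard described above.
# 1 = white cell, 0 = black cell; flicker rate is an assumed example.

def checkerboard_2x2(inverted):
    """Return the 2 x 2 pattern or its contrast inverse."""
    base = [[1, 0],
            [0, 1]]
    if not inverted:
        return base
    return [[1 - v for v in row] for row in base]

def frame_for_time(t_s, flicker_hz=2.0):
    """Pick the pattern phase at time t_s (seconds). Mean luminance is
    constant because exactly half the cells are white in either phase."""
    phase = int(t_s * flicker_hz) % 2
    return checkerboard_2x2(inverted=bool(phase))
```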

As shown in Fig. 8, when subjects watch the capture window, the checkerboard stimulus can be seen through the lateral prism because a specified metal coating layer on the lateral prism's hypotenuse surface divides the light into two parts: NIR light (wavelengths longer than 650 nm) is reflected, and visible light (wavelengths shorter than 650 nm) is transmitted. The NIR light is reflected by the central prism and received by the camera to generate the iris images, while the visible light passes through the lateral prism between the eyes and the checkerboard stimulus.

The IR distance-measuring sensor is a Sharp GP2D12, with integrated signal processing and an analog voltage output. Its measuring range is from 100 to 800 mm. When the IR distance-measuring sensor obtains the distance between the subject and the capture unit, both the distance indicator and


Fig. 8. Optical path of lateral triangular prism.

the speaker provide guidance to the subject, so that the correct capture position is reached. In accordance with the focus range of the AF lens (from 160 to 200 mm), if the distance is outside this range, the distance indicator blinks with red or blue LEDs, and the voice instruction is "Please move back" or "Please move closer."

In effect, subjects can quickly adjust their poses according to the guidance information, both light and voice. When capture is completed, the distance indicator automatically blinks with green LEDs to tell the subject that iris images of good quality have been captured successfully.
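The guidance logic described above can be sketched as follows. The focus-range thresholds, prompt strings, and the green "capture done" state come from the text; the function itself and the exact mapping of red vs. blue to "too close" vs. "too far" are our assumptions.

```python
# Sketch of the distance-guidance logic described above.
# Thresholds come from the AF lens focus range (160-200 mm); the
# red/blue assignment below is an illustrative assumption.

FOCUS_NEAR_MM = 160
FOCUS_FAR_MM = 200

def guidance(distance_mm, capture_done=False):
    """Map a measured distance to (indicator color, voice prompt)."""
    if capture_done:
        return ("green", "")                  # good-quality images captured
    if distance_mm < FOCUS_NEAR_MM:
        return ("red", "Please move back")    # too close (assumed color)
    if distance_mm > FOCUS_FAR_MM:
        return ("blue", "Please move closer") # too far (assumed color)
    return ("off", "")                        # inside focus range: capture
```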

D. Control Unit

The control unit is a Texas Instruments ARM Cortex-M3 Stellaris LM3S9B92 main board running the Micro C operating system. The control unit communicates with the iris recognition server via the USB 2.0 interface, transmits the iris image data, drives the AF lens, and synchronizes the other three units: capture, illumination, and interaction.

It is desirable to capture a sequence of iris images with the same or similar occlusion, pupil dilation, and focusing accuracy in a short time, to introduce as little interference as possible into multispectral fusion and recognition. We therefore use the AF mode of the lens and the maximum frame rate of the CMOS sensor and continuously capture a sequence of iris images under the different wavelength illuminations, which are switched automatically at high speed.

Three factors influence the capture speed: the wavelength switching speed, the CMOS frame rate, and the lens focusing speed. The wavelength switching time is small enough to be negligible, and the frame rate is limited by the CMOS bandwidth, with almost no room for improvement. The focusing speed is therefore the most important factor to improve.

The refractive index of a lens changes with the spectral wavelength, so the lens focus must be readjusted each time a different wavelength illumination is switched on. Among the three wavelengths, 700 nm is the shortest, with the largest refractive index and minimum object distance; 850 nm is the longest, with the smallest refractive index and maximum object distance; and 780 nm lies between them. In traditional capture devices, the lens focus is adjusted manually by an operator, which cannot meet the requirement of high-speed capture, so the AF lens is necessary in this design.

Fig. 9. Block diagram of multispectral iris acquisition process.

The focusing speed is limited by two factors: the efficiency of the focusing algorithm and the mechanical movement of the lens. The latter can hardly be changed, while the former has much room for improvement. We use Daugman's convolution matrix method [25] as the focusing algorithm. The recognition server runs the focusing algorithm to estimate the focus quality of a multispectral image and to indicate the direction of lens movement toward a higher focus score; when the focus score reaches a preset threshold, the lens is correctly focused. The lens driving signal is transferred from the server to the ARM board via the USB interface and then to the New Scale M3-F Focus Module via the I2C interface. The computational time of the focusing algorithm on each image is less than 10 ms on a computer with an Intel Pentium G620 processor at 2.60 GHz. Including the mechanical movement of the lens, one focusing cycle takes no more than 300 ms.
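As an illustration of this style of fast focus assessment, the sketch below convolves the image with a small zero-mean band-pass kernel and compresses the total squared response onto a 0-100 score, in the spirit of Daugman's method [25]. The specific kernel, stride, and scaling constant c are illustrative assumptions, not the values used in the paper.

```python
# Minimal sketch of a Daugman-style fast focus score: sum the squared
# responses of a small zero-mean band-pass kernel over the image, then
# map the total onto a 0-100 scale. Kernel, stride, and c are assumed.
import numpy as np

# Simple 8x8 band-pass kernel: +3 core, -1 surround (zero mean:
# 16 inner cells * 3 - 48 outer cells * 1 = 0).
KERNEL = -np.ones((8, 8))
KERNEL[2:6, 2:6] = 3.0

def focus_score(image, c=1e8):
    """Higher = sharper. Coarse-strided valid convolution for speed."""
    h, w = image.shape
    total = 0.0
    for y in range(0, h - 8 + 1, 4):        # stride 4 keeps it fast
        for x in range(0, w - 8 + 1, 4):
            resp = float(np.sum(image[y:y+8, x:x+8] * KERNEL))
            total += resp * resp
    # Compressive mapping of the power onto [0, 100).
    return 100.0 * (total * total) / (total * total + c * c)
```

A perfectly flat (defocused-to-uniform) image has zero response under the zero-mean kernel, while high-frequency content drives the score up, which is what lets the autofocus loop pick a movement direction.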

The control unit manages the working process of one multispectral data collection cycle and synchronizes the other three units: capture, illumination, and interaction (see Fig. 9). Once the subject moves closer to the capture device, the distance measuring sensor outputs a distance voltage signal. The work cycle of the distance measuring sensor is 40 ms. When the ARM main board receives the voltage signal and serializes the distance value, a drive signal is sent to the distance guide indicator, instructing the subject to move closer or farther back. At the same time, the checkerboard-stimulus OLED starts to flicker between two reversal patterns. If the subject moves into the


Fig. 10. Multispectral capture device. (a) Internal structure. (b) External form.

focus range of the AF lens, the illumination unit is turned on in the order of the wavelengths: 700, 780, and 850 nm. Each time one wavelength illuminator is turned on, the lens begins autofocusing and the CMOS sensor begins acquisition. Taking into account that it takes some time for the subject to move into the focus range, a complete capture cycle can generally be completed within 2 or 3 s; the images of the two irises are then transferred to the server via the USB 2.0 interface.
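The sequencing just described can be summarized as a simple control loop. The following sketch uses hypothetical stubs in place of the real illuminator, lens, and camera interfaces; only the ordering mirrors the text.

```python
# Sketch of one multispectral collection cycle. Hardware calls are
# hypothetical placeholders; only the sequencing follows the text:
# guide the subject into range, then illuminate / focus / capture
# under each wavelength in the preset order.
WAVELENGTHS_NM = (700, 780, 850)

def capture_cycle(distance_mm, focus_range_mm=(160, 200)):
    """Return a list of (wavelength, eye, frame) for one cycle."""
    lo, hi = focus_range_mm
    if not lo <= distance_mm <= hi:
        return []  # interaction unit keeps guiding the subject
    frames = []
    for nm in WAVELENGTHS_NM:       # switch illuminators in preset order
        # (autofocus runs here before acquisition under each band)
        for eye in ("left", "right"):
            frames.append((nm, eye, f"frame@{nm}nm/{eye}"))  # placeholder
    return frames

frames = capture_cycle(180)  # subject inside the 160-200 mm working range
```

One full pass yields the six frames (two eyes under three bands) that are handed to the server.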

III. EXPERIMENTAL RESULTS

In this section, a series of experiments is performed to evaluate the performance of the proposed multispectral iris capture device. First, a multispectral iris image database is created with the proposed capture device. Then, we use iris score-level fusion to further investigate the effectiveness of the proposed capture device with the 1-D Log-Gabor wavelet filter approach proposed by Masek and Kovesi [26].

A. Proposed Iris Image Capture Device

According to the proposed design, we developed a multispectral iris image capture device, shown in Fig. 10(a) and (b). The dimensions of the device are 162 mm (width) × 51 mm (height) × 61 mm (thickness), and its working distance is about 160–200 mm. Through a USB 2.0 cable, the device connects to the server computer running the recognition algorithm and completes a full process of multispectral online iris identification.

Before image acquisition, the Micron CMOS camera is switched to autoexposure mode, and the shutter time is preset to 1/30 s, in order to minimize the difference in the brightness of images and to match the maximum frame rate of 30 fps of the CMOS sensor. For each subject, the capture device automatically starts a multispectral data collection cycle once it detects that the subject has moved into the focus range. During acquisition, the subjects need only watch the capture window. Each wavelength illuminator is switched on automatically in the preset order; two images corresponding to the left and the right eye are captured and transferred to the server; the current wavelength illuminator is then turned off, and the next wavelength illuminator repeats the process. The scene of the multispectral data collection is shown in Fig. 11. The operator holds the handle of the proposed handheld capture device, aiming it at the user's eyes, and moves it forward or back according to the voice prompts. A complete multispectral dual-iris recognition cycle (capture, fusion, and recognition of six images from the left and the right iris under three different wavelength bands) can be completed within 2 or 3 s. Using

Fig. 11. (a) Device with handle. (b) Scene of multispectral iris capture.

Fig. 12. Multispectral sample iris images: (a) captured under 700 nm, (b) captured under 780 nm, and (c) captured under 850 nm, from one multispectral data collection cycle for the same iris.

an Intel Atom N270 CPU (single core) running at a 1.6-GHz clock rate, which is a low-end product with the lowest computing performance among Intel desktop CPUs, the iris template extraction time for one 1600 × 1200 pixel iris image is 0.13–0.15 s. Using Neurotechnology's VeriEye Software Development Kit on an Intel Core 2 Q9400 CPU (four cores) running at a 2.67-GHz clock rate, which has nearly the highest computing performance among Intel desktop CPUs, the iris template extraction time for one 640 × 480 pixel iris image is 0.11–0.13 s [27], almost equal to our time consumption on the Intel N270. Compared with other major commercial iris recognition algorithms, our recognition system has an obvious speed advantage. An operation demonstration video is available at http://www.youtube.com/watch?v=3xfLb_NLEmM&feature=plcp.

The multispectral example iris images captured by the proposed device are shown in Fig. 12. As shown in Fig. 12(a)–(c), the captured iris image quality is very good: the focus is accurate, and the pupil radius has a relatively consistent and small size. The specular glitter is confined to the pupil area in most cases, so it does not affect iris segmentation and recognition.

B. Iris Database

A data set containing samples from 100 irises was used in the following study. Our database, collected with the proposed multispectral device, comprises 50 subjects, 35 of whom are male. The age distribution is as follows: subjects younger than 30 years old comprise 84%, and subjects between 30 and 40 years old comprise about 10% (see Table II).

The resolution of one pair of iris images (one pair comprising the iris images captured from the left and the right eye) is 1600 × 1200, and the distance between the device and the subject is about 180 mm. Ten pairs of images taken from the same subject were selected under each of the three spectral bands corresponding to 700, 780, and 850 nm, respectively. In total, we collected 1500 pairs of iris images (3000 images) for this database, which are used as a single session in the experiments.


TABLE II
COMPOSITION OF THE MULTISPECTRAL IRIS IMAGE DATA SET

C. Score Fusion and Recognition

We locate and segment the pupil in the iris image based on the gray-level threshold method [26]. After segmentation, we use the homogeneous rubber sheet model devised by Daugman [28] to normalize the iris image, remapping each pixel within the iris region to a pair of polar coordinates (r, θ), where r is on the interval [0, 1] and θ is the angle in [0, 2π]. Therefore, we obtain six normalized patterns from one data collection cycle.
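A minimal sketch of this remapping, assuming concentric pupil and iris boundaries (the full rubber-sheet model of [28] interpolates between non-concentric circles):

```python
import math

# Minimal rubber-sheet normalization sketch: sample the iris ring on a
# (r, theta) grid, with r in [0, 1] running from the pupil boundary to
# the iris boundary and theta in [0, 2*pi), using nearest-neighbour
# lookup. Concentric boundaries are assumed for simplicity.
def normalize_iris(img, cx, cy, r_pupil, r_iris, radial_res=8, angular_res=64):
    """Return a radial_res x angular_res normalized pattern."""
    pattern = []
    for i in range(radial_res):
        r = i / (radial_res - 1)                   # r in [0, 1]
        ring = []
        for j in range(angular_res):
            theta = 2 * math.pi * j / angular_res  # theta in [0, 2*pi)
            radius = r_pupil + r * (r_iris - r_pupil)
            x = int(round(cx + radius * math.cos(theta)))
            y = int(round(cy + radius * math.sin(theta)))
            ring.append(img[y][x])
        pattern.append(ring)
    return pattern
```

Each row of the result is one circular ring of the iris; six such patterns are produced per collection cycle (two eyes × three bands).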

The 1-D Log-Gabor wavelet recognition method proposed by Masek and Kovesi [26] is used for encoding in our experiments. The 1-D Log-Gabor bandpass filter is efficient for angular feature extraction, which captures the most distinctive and stable texture information, while ignoring radial feature extraction, which is easily disturbed by pupil dilation. The 1-D Log-Gabor wavelet method is the most popular comparison method in the literature due to the accessibility of its source code. In this paper, we use the performance of the 1-D Log-Gabor wavelet method as the benchmark to assess the improvement in recognition performance after multispectral iris image fusion.

The 1-D Log-Gabor wavelet feature encoding method proposed by Masek and Kovesi is implemented by convolving the normalized pattern with 1-D Log-Gabor wavelets. The rows of the 2-D normalized pattern (the angular sampling lines) are taken as the 1-D signal; each row corresponds to a circular ring on the iris region. The angular direction is taken rather than the radial one (the columns of the normalized pattern), since the iris texture's maximum independence occurs in the angular direction, which carries the most distinctive and stable information, while the radial direction is easily disturbed by pupil dilation. We revise the default parameter values used by Masek and Kovesi to obtain higher recognition performance [29]. The parameters revised in this work are as follows: the angular and radial resolution of the normalized image, the center wavelength and filter bandwidth of the 1-D Log-Gabor filter, and the fragile bit percentage (see Table III). After we apply the 1-D Log-Gabor wavelet filter to the normalized iris image, we quantize the result to create the iris code bit vector by determining the quadrant of the response in the complex plane. This gives an iris code bit vector with twice as many bits as the normalized iris image has pixels.
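The encoding step can be sketched as follows for a single row of the normalized pattern. The center wavelength and bandwidth below are illustrative placeholders, not the revised values of Table III, and a direct DFT replaces the FFT of Masek's implementation.

```python
import cmath
import math

# Sketch of 1-D Log-Gabor phase encoding for one angular row: filter
# the row in the frequency domain (positive frequencies only, giving a
# complex analytic response) and keep two bits per sample from the
# quadrant of the response. Parameter values are illustrative.
def log_gabor_encode(row, wavelength=18.0, sigma_on_f=0.5):
    n = len(row)
    # Forward DFT of the row signal.
    F = [sum(row[t] * cmath.exp(-2j * math.pi * f * t / n) for t in range(n))
         for f in range(n)]
    # Log-Gabor transfer function; zero at DC and at negative frequencies.
    f0 = 1.0 / wavelength
    G = [0.0] * n
    for f in range(1, n // 2 + 1):
        freq = f / n
        G[f] = math.exp(-(math.log(freq / f0) ** 2) /
                        (2 * math.log(sigma_on_f) ** 2))
    filt = [F[f] * G[f] for f in range(n)]
    # Inverse DFT yields a complex response per sample.
    resp = [sum(filt[f] * cmath.exp(2j * math.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]
    bits = []
    for z in resp:  # quadrant of the response -> two bits
        bits.append(1 if z.real >= 0 else 0)
        bits.append(1 if z.imag >= 0 else 0)
    return bits
```

Applied to every row, this doubles the bit count relative to the pixel count, as stated above.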

In our work, two single-eye iris code bit vectors code_left and code_right generated from one pair of iris images (including the left and the right eye) captured under the same wavelength are

TABLE III
LISTING OF PARAMETERS REVISED FOR MULTISPECTRAL IRIS IMAGES WITH THE INITIAL VALUES FROM MASEK AND KOVESI

combined into one dual-eye iris code bit vector code_pair as follows:

\[
\mathrm{code}_{\mathrm{pair}} = [\mathrm{code}_{\mathrm{left}};\ \mathrm{code}_{\mathrm{right}}]. \tag{1}
\]

Inspired by the matching scheme of Daugman [30], the binary Hamming distance is used, and the similarity between two iris images is calculated using the exclusive-OR operation. Under wavelength i, for two pairs whose iris code bit vectors are denoted {code_pairA,i, code_pairB,i} and whose mask bit vectors are denoted {mask_pairA,i, mask_pairB,i}, we compute the raw Hamming distance HD_raw as follows:

\[
\mathrm{HD}_{\mathrm{raw},i}(A,B) = \frac{\left\| \left( \mathrm{code}_{\mathrm{pair}A,i} \otimes \mathrm{code}_{\mathrm{pair}B,i} \right) \cap \mathrm{mask}_{\mathrm{pair}A,i} \cap \mathrm{mask}_{\mathrm{pair}B,i} \right\|}{\left\| \mathrm{mask}_{\mathrm{pair}A,i} \cap \mathrm{mask}_{\mathrm{pair}B,i} \right\|}. \tag{2}
\]
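Equation (2) reads directly as code; a minimal sketch with codes and masks represented as equal-length 0/1 lists:

```python
# Masked Hamming distance of (2): XOR the codes, keep only bits valid
# in both masks, and normalize by the number of jointly valid bits.
def hamming_distance(code_a, code_b, mask_a, mask_b):
    valid = [ma & mb for ma, mb in zip(mask_a, mask_b)]
    n_valid = sum(valid)
    if n_valid == 0:
        return 1.0  # no comparable bits; treat as maximally distant
    disagree = sum((a ^ b) & v for a, b, v in zip(code_a, code_b, valid))
    return disagree / n_valid
```

Identical codes under full masks give 0.0; fully complementary codes give 1.0.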

Suppose that there are k bands of wavelengths, corresponding to k iris code bit vectors (code_pairA,i, i = 1, 2, ..., k) and k mask bit vectors (mask_pairA,i, i = 1, 2, ..., k) for one pair of irises. For two pairs of irises from different subjects A and B, the distance under the simple sum rule is defined as

\[
\mathrm{HD}_{\mathrm{sum}}(A,B) = \sum_{i=1}^{k} \mathrm{HD}_{\mathrm{raw},i}(A,B). \tag{3}
\]

Generally, the more texture information used across different wavelengths, the better the recognition performance that can be achieved. However, since there may be some overlap of the discriminating information between different wavelength bands, a simple sum of the matching scores of all bands may not improve the final accuracy much. The overlapping part between two iris code bit vectors under two different wavelengths is counted twice by the sum rule [see (3)]. Such overcounting may make simple score-level fusion fail.

When a score-level fusion strategy can reduce this overlapping effect, better verification results can be expected. In combinatorics, the inclusion–exclusion principle [31] (attributed to Abraham de Moivre) is an identity relating the


sizes of finite sets and the size of their union. For finite sets A_1, ..., A_n, one has the identity

\[
\left| \bigcup_{i=1}^{n} A_i \right| = \sum_{i=1}^{n} |A_i| - \sum_{1 \le i < j \le n} |A_i \cap A_j| + \sum_{1 \le i < j < k \le n} |A_i \cap A_j \cap A_k| - \cdots + (-1)^{n-1} \left| A_1 \cap \cdots \cap A_n \right| \tag{4}
\]
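The identity can be checked numerically; a small sketch on three example sets:

```python
from functools import reduce
from itertools import combinations

# Numerical check of the inclusion-exclusion identity (4): the
# alternating sum over all non-empty intersections equals the size
# of the union.
def inclusion_exclusion(sets):
    total = 0
    for k in range(1, len(sets) + 1):
        for combo in combinations(sets, k):
            inter = reduce(lambda a, b: a & b, combo)
            total += (-1) ** (k - 1) * len(inter)
    return total

A1, A2, A3 = {1, 2, 3}, {2, 3, 4}, {3, 4, 5}
```

Here the alternating sum 9 − 5 + 1 equals |A1 ∪ A2 ∪ A3| = 5.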

where ∪ denotes the union and ∩ denotes the intersection. Therefore, based on (4), a score-level fusion rule is defined that tends to minimize the overlapping effect on the fused score:

\[
\mathrm{HD}_{\mathrm{sum}(1,2)}(A,B) = \mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},2}(A,B) - \frac{\mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},2}(A,B)}{2} \times P_{1,2}(A,B) \tag{5}
\]

where P_{1,2}(A,B) is the overlapping percentage between two iris code bit vectors extracted from the same irises but captured under two different wavelength bands, defined in (6).

Similarly, we can extend the multispectral score fusion scheme [32] to fuse more wavelength bands, e.g., three spectral bands as in

\[
\begin{aligned}
\mathrm{HD}_{\mathrm{sum}(1,2,3)}(A,B) ={}& \mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},2}(A,B) + \mathrm{HD}_{\mathrm{raw},3}(A,B) \\
& - \frac{\mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},2}(A,B)}{2} \times P_{1,2}(A,B) \\
& - \frac{\mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},3}(A,B)}{2} \times P_{1,3}(A,B) \\
& - \frac{\mathrm{HD}_{\mathrm{raw},2}(A,B) + \mathrm{HD}_{\mathrm{raw},3}(A,B)}{2} \times P_{2,3}(A,B) \\
& + \frac{\mathrm{HD}_{\mathrm{raw},1}(A,B) + \mathrm{HD}_{\mathrm{raw},2}(A,B) + \mathrm{HD}_{\mathrm{raw},3}(A,B)}{3} \times P_{1,2,3}(A,B). 
\end{aligned} \tag{7}
\]
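A sketch of the overlap-corrected fusion of (5) and (7); the sample distance and overlap values below are arbitrary illustrations:

```python
# Overlap-corrected score fusion of (5) and (7): sum the per-band raw
# distances, subtract each pairwise overlap weighted by the mean of
# the two distances it couples, and (for three bands) add back the
# triple overlap weighted by the mean of all three distances.
def fused_score(hd, p_pair, p_triple=0.0):
    """hd: {band: HD_raw}; p_pair: {(i, j): P_ij}; p_triple: P_123."""
    score = sum(hd.values())
    for (i, j), p in p_pair.items():
        score -= (hd[i] + hd[j]) / 2.0 * p
    if len(hd) == 3:
        score += sum(hd.values()) / 3.0 * p_triple
    return score

# Two-band case, as in (5), with illustrative values:
s2 = fused_score({1: 0.30, 2: 0.34}, {(1, 2): 0.25})
# Three-band case, as in (7):
s3 = fused_score({1: 0.30, 2: 0.34, 3: 0.32},
                 {(1, 2): 0.25, (1, 3): 0.20, (2, 3): 0.30},
                 p_triple=0.10)
```

With no overlap (all P terms zero), the rule reduces to the simple sum of (3).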

We usually evaluate recognition accuracy according to three indicators: the false acceptance rate (FAR, a measure of the likelihood that the access system will wrongly accept an access attempt), the false rejection rate (FRR, the percentage of identification instances in which false rejection occurs), and the equal error rate (EER, the value where FAR and FRR are equal). Therefore, we calculate the FAR, FRR, and EER based on the images of each single wavelength, 700, 780, and 850 nm, and

Fig. 13. EER and FRR (FAR = 0) based on the dual-eye images of three wavelengths and multispectral fusion: (a) 700 nm, (b) 780 nm, (c) 850 nm, and (d) multispectral score-level fusion.

the images of multispectral fusion. The intraspectral genuine and intraspectral impostor scores are used to compute the EER and FRR.
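These indicators can be computed from the genuine and impostor score lists by sweeping a decision threshold over the distance axis; a minimal sketch (the uniform threshold grid is an assumption):

```python
# Sweep a decision threshold t over [0, 1]: a comparison is accepted
# when its distance score <= t. FAR is the fraction of impostor scores
# accepted; FRR is the fraction of genuine scores rejected. The EER is
# read off where the two rates are (closest to) equal.
def equal_error_rate(genuine, impostor, steps=1000):
    best_gap, eer = 1.0, 0.0
    for s in range(steps + 1):
        t = s / steps
        far = sum(x <= t for x in impostor) / len(impostor)
        frr = sum(x > t for x in genuine) / len(genuine)
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2.0
    return eer

# Well-separated score distributions yield an EER of zero.
genuine = [0.10, 0.20, 0.25]
impostor = [0.40, 0.45, 0.50]
```

The FRR at FAR = 0 is obtained analogously, at the largest threshold that still accepts no impostor score.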

We captured 500 pairs of iris images (1000 iris images) under each wavelength; in total, there are 1500 pairs of iris images (3000 iris images) across all three wavelengths. Under each wavelength, 2250 intraspectral genuine scores and 122 500 intraspectral impostor scores were generated from dual-eye iris code (code_pair) matching, and 500 intraspectral genuine scores and 245 000 intraspectral impostor scores were generated from single-eye iris code (code_left or code_right) matching.
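The dual-eye counts follow from the data set composition (50 subjects with 10 dual-eye image pairs each, per wavelength); the arithmetic can be checked directly:

```python
from math import comb

# Per-wavelength score counts: genuine scores pair the 10 samples of
# the same subject; impostor scores pair samples across different
# subjects. The single-eye impostor count is twice the dual-eye one,
# one score per eye.
subjects, samples = 50, 10
genuine_dual = subjects * comb(samples, 2)              # 50 * C(10, 2)
impostor_dual = comb(subjects, 2) * samples * samples   # C(50, 2) * 10 * 10
single_eye_impostor = 2 * impostor_dual
```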

In both the dual-eye and the single-eye iris recognition, sum score-level fusion [32] is used as the fusion technique on the data set to generate the multispectral genuine and impostor scores based on all three wavelengths. The normalized histogram plots of HD_norm for the dual-eye images of the three wavelengths and the multispectral fusion are shown in Fig. 13. The normalized histogram plots of HD_norm for the single-eye images of the three wavelengths and the multispectral fusion are shown in Fig. 14.

\[
P_{1,2}(A,B) = 1 - \frac{1}{2} \left( \frac{\left\| \left( \mathrm{code}_{\mathrm{pair}A,1} \otimes \mathrm{code}_{\mathrm{pair}A,2} \right) \cap \mathrm{mask}_{\mathrm{pair}A,1} \cap \mathrm{mask}_{\mathrm{pair}A,2} \right\|}{\left\| \mathrm{mask}_{\mathrm{pair}A,1} \cap \mathrm{mask}_{\mathrm{pair}A,2} \right\|} + \frac{\left\| \left( \mathrm{code}_{\mathrm{pair}B,1} \otimes \mathrm{code}_{\mathrm{pair}B,2} \right) \cap \mathrm{mask}_{\mathrm{pair}B,1} \cap \mathrm{mask}_{\mathrm{pair}B,2} \right\|}{\left\| \mathrm{mask}_{\mathrm{pair}B,1} \cap \mathrm{mask}_{\mathrm{pair}B,2} \right\|} \right) \tag{6}
\]


Fig. 14. EER and FRR (FAR = 0) based on the single-eye images of three wavelengths and multispectral fusion: (a) 700 nm, (b) 780 nm, (c) 850 nm, and (d) multispectral score-level fusion.

IV. SYSTEM PERFORMANCE ANALYSIS

In Figs. 13 and 14, each of the four panels is composed of two parts: the lower is the matching score distribution, and the upper is the magnified distribution near the crossing point of the genuine curve (the left, blue one) and the impostor curve (the right, red one). The part of the blue curve to the right of the crossing point gives the FRR, while the part of the red curve to the left of the crossing point gives the FAR. Panels (a)–(d) show the same characteristics: the intraspectral genuine scores of the different wavelengths have similar median values and modes and are mostly spread around the corresponding median value. The intraspectral impostor scores are observed to be fairly well separated from the intraspectral genuine scores.

The EERs (where FRR = FAR) and FRRs (at the most stringent operating point, FAR = 0) based on the images of a single wavelength and the images of multispectral fusion can be clearly compared. As shown in Figs. 13 and 14, in both the dual-eye and the single-eye matching, the EER (where FRR = FAR) and the FRR (where FAR = 0) reach their lowest values with multispectral score-level fusion, while those based on the images of a single wavelength are relatively higher, which means that the best recognition performance is achieved by multispectral fusion (see Table IV).

In dual-eye matching, according to the FRR (where FAR = 0), the 1-D Log-Gabor wavelet recognition method with multispectral score-level fusion is 87% lower than that under 850 nm without multispectral image fusion, 77% lower than that under 780 nm, and 56% lower than that under 700 nm. According to the EER (where FRR = FAR), the

TABLE IV
COMPARISON OF EER AND FRR BETWEEN THE DUAL-EYE AND THE SINGLE-EYE MATCHING

1-D Log-Gabor wavelet recognition method with multispectral image fusion is 87% lower than that under 850 nm without multispectral image fusion, 78% lower than that under 780 nm, and 59% lower than that under 700 nm.

In single-eye matching, according to the FRR (where FAR = 0), the 1-D Log-Gabor wavelet recognition method with multispectral score-level fusion is 87% lower than that under 850 nm without multispectral image fusion, 78% lower than that under 780 nm, and 54% lower than that under 700 nm. According to the EER (where FRR = FAR), it is 86% lower than that under 850 nm, 72% lower than that under 780 nm, and 55% lower than that under 700 nm.

According to the FRR (where FAR = 0) and the EER (where FRR = FAR), the 1-D Log-Gabor wavelet recognition method in dual-eye matching is 31% lower than in single-eye matching.

To date, no other academic organizations or commercial companies have announced experimental results on multispectral iris recognition performance based on three-band wavelength image fusion. Therefore, we compared our multispectral iris recognition performance with the performance of Masek and Kovesi [26] on iris images of a single wavelength band. The two recognition performances are based on the same template extraction algorithm, the 1-D Log-Gabor wavelet filter approach; the difference is that the former is based on multispectral images and multispectral score-level fusion, while the latter is based on images of a single wavelength band.

According to the FRR (where FAR = 0), Masek obtained 0.0458 on the LEI-a iris data set and 0.05181 on the CASIA-a iris data set. Our proposed dual-eye multispectral recognition performance is about one tenth of these rates, showing


a very significant performance advantage over the traditional system.

From the aforementioned comparison, we can see that, in both dual-eye and single-eye matching, higher iris recognition accuracy can be achieved with the multispectral images captured by the proposed device and score-level fusion. The simultaneous acquisition and recognition of dual-eye multispectral iris images offers greater performance advantages than the traditional single-eye system in image acquisition efficiency, recognition accuracy, and adaptability to complex situations. Furthermore, two novel conclusions can be drawn. First, the dual-eye multispectral iris images captured by the proposed handheld device are good enough for score-level fusion and recognition, so the device meets the design requirements and can be used for dual-eye multispectral iris capture and recognition, a first in multispectral iris recognition research. Second, the integrated dual-eye multispectral iris recognition system, consisting of the image acquisition device and the recognition server, is feasible and can complete a full process of multispectral online iris identification, the first successful practical application of dual-eye iris recognition based on a single low-resolution camera.

V. CONCLUSION

In this paper, a handheld system for dual-eye multispectral iris capture based on the improved "one-camera" design has been proposed. Using a set of refraction lenses, we enlarged the effective resolution of a 2-megapixel camera, which can cover the width of the two eyes within 1600 × 1200 pixels, capture two iris images with high resolution and high image quality, and meet the iris recognition standard ISO/IEC 2004 (200 pixels across the iris, the minimum acceptable resolution for level-2 iris features). Using this system, a complete capture cycle (including six images from the left and the right iris under three different wavelength bands) can be completed within 2 or 3 s, which is much faster than previous devices.

The system consists of the following four parts: 1) capture unit; 2) illumination unit; 3) interaction unit; and 4) control unit. It uses a low-resolution (2-megapixel) Micron CMOS camera. Matrix-arrayed LEDs across three different wavelengths (including visible and NIR light) serve as the multispectral illumination unit. We designed an interaction unit whose checkerboard stimulus evokes pupillary constriction to eliminate the interference of pupil dilation. The control unit synchronizes the other three units and makes the system capture at high speed, easy to use, and nonintrusive for users.

A series of experiments was performed to evaluate the performance of the proposed multispectral iris capture device. A multispectral iris image database was created with the proposed capture device, and iris score-level fusion was then used to further investigate the effectiveness of the proposed capture device with the 1-D Log-Gabor wavelet filter approach. Experimental results have illustrated the encouraging performance of the improved "one-camera" design.

In summary, the proposed handheld dual-eye capture device benefits from simplicity and lower cost because of reduced optics, sensors, and computational needs. The system is designed to perform well at a reasonable price, making it suitable for civilian personal identification applications. For further improvement of the system, we will focus on the following issues: conducting multispectral fusion and recognition experiments on large iris databases in various environments, and presenting a more critical analysis of the accuracy and performance measurement of the proposed system.

REFERENCES

[1] R. P. Wildes, "Iris recognition: An emerging biometric technology," Proc. IEEE, vol. 85, no. 9, pp. 1348–1363, Sep. 1997.

[2] K. R. Park and J. Kim, "A real-time focusing algorithm for iris recognition camera," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 35, no. 3, pp. 441–444, Aug. 2005.

[3] T. Tan, Y. Zhu, and Y. Wang, "Iris Image Capture Device," Chinese Patent 2 392 219, Jul. 22, 1999.

[4] CASIA Iris Image Database. [Online]. Available: http://www.cbsr.ia.ac.cn/IrisDatabase.htm

[5] P. Shi, L. Xing, and Y. Gong, "A Quality Evaluation Method of Iris Recognition System," Chinese Patent 1 474 345, May 22, 2003.

[6] "Iris recognition in focus," Biom. Technol. Today, vol. 13, no. 2, pp. 9–11, Feb. 2005.

[7] [Online]. Available: http://www.iriscan.com/index.php

[8] [Online]. Available: http://www.oki.com/jp/FSC/iris/en/

[9] [Online]. Available: http://www.lgiris.com

[10] [Online]. Available: http://www.panasonic.com/cctv/products/biometrics.asp

[11] X. He, J. Yan, G. Chen, and P. Shi, "Contactless autofeedback iris capture design," IEEE Trans. Instrum. Meas., vol. 57, no. 7, pp. 1369–1375, Jul. 2008.

[12] C. L. Wilkerson, N. A. Syed, M. R. Fisher, N. L. Robinson, I. H. L. Wallow, and D. M. Albert, "Melanocytes and iris color: Light-microscopic findings," Arch. Ophthalmol., vol. 114, no. 4, pp. 437–442, Apr. 1996.

[13] A. Ross, R. Pasula, and L. Hornak, "Exploring multispectral iris recognition beyond 900 nm," in Proc. Conf. Comput. Vis. Pattern Recognit. Workshop, 2006, pp. 1–8.

[14] M. Vilaseca, R. Mercadal, J. Pujol, M. Arjona, M. de Lasarte, R. Huertas, M. Melgosa, and F. H. Imai, "Characterization of the human iris spectral reflectance with a multispectral imaging system," Appl. Opt., vol. 47, pp. 5622–5630, Oct. 2008.

[15] M. J. Burge and M. K. Monaco, "Multispectral iris fusion for enhancement, interoperability, and cross wavelength matching," in Proc. SPIE—Algorithms Technologies Multispectral, Hyperspectral, Ultraspectral Imagery XV Conf., Orlando, FL, Apr. 13, 2009, vol. 7334, pp. 73341D-1–73341D-8.

[16] H. T. Ngo, R. W. Ives, J. R. Matey, J. Dormo, M. Rhoads, and D. Choi, "Design and implementation of a multispectral iris capture system," in Proc. 43rd Asilomar Conf. Signals, Syst. Comput., 2009, pp. 380–384.

[17] Y. Gong, D. Zhang, P. Shi, and J. Yan, "High-speed multispectral iris capture system design," IEEE Trans. Instrum. Meas., vol. 61, no. 7, pp. 1966–1978, Jul. 2012.

[18] [Online]. Available: http://www.crossmatch.com/i-scan-2.php

[19] [Online]. Available: http://www.iritech.com/products/IriTerminal.html

[20] [Online]. Available: http://www.cogentsystems.com/cis202.asp

[21] [Online]. Available: http://www.newscaletech.com/M3-F.html

[22] J. R. Matey, O. Naroditsky, K. Hanna, R. Kolczynski, D. LoIacono, S. Mangru, M. Tinker, T. Zappia, and W. Y. Zhao, "Iris on the move: Acquisition of images for iris recognition in less constrained environments," Proc. IEEE, vol. 94, no. 11, pp. 1936–1947, Nov. 2006.

[23] K. Hollingsworth, K. Bowyer, and P. Flynn, "Pupil dilation degrades iris biometric performance," Comput. Vis. Image Understand., vol. 113, no. 1, pp. 150–157, Jan. 2009.

[24] F. Sun, L. Chen, and X. Zhao, "Pupillary responses evoked by spatial patterns," Acta Physiol. Sin., vol. 50, no. 1, pp. 67–74, Feb. 1998.

[25] J. Daugman, "How iris recognition works," IEEE Trans. Circuits Syst. Video Technol., vol. 14, no. 1, pp. 21–30, Jan. 2004.

[26] L. Masek, "Recognition of human iris patterns for biometric identification," M.S. thesis, Univ. Western Australia, Perth, Australia, 2003.

[27] [Online]. Available: http://download.neurotechnology.com/VeriEye_SDK_Brochure_2012-04-23.pdf


[28] J. Daugman, "High confidence visual recognition of persons by a test of statistical independence," IEEE Trans. Pattern Anal. Mach. Intell., vol. 15, no. 11, pp. 1148–1161, Nov. 1993.

[29] T. Peters, "Effects of segmentation routine and acquisition environment on iris recognition," M.S. thesis, Comput. Dept., Univ. Notre Dame, Notre Dame, IN, 2009.

[30] J. Daugman, "New methods in iris recognition," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 37, no. 5, pp. 1167–1175, Oct. 2007.

[31] L. Comtet, Advanced Combinatorics: The Art of Finite and Infinite Expansions. Dordrecht, The Netherlands: Reidel, 1974, pp. 176–177.

[32] Y. Gong, D. Zhang, P. Shi, and J. Yan, "Optimal wavelength band clustering for multispectral iris recognition," Appl. Opt., vol. 51, no. 19, pp. 4275–4284, Jul. 2012.

Yazhuo Gong received the B.S. degree from the Department of Computer Science and Engineering, Beijing Institute of Technology, Beijing, China, and the M.S. degree from the Department of Automation, Shanghai Jiao Tong University, Shanghai, China, where he is currently working toward the Ph.D. degree in the Pattern Recognition and Image Processing Laboratory.

His research interests mainly lie in the fields of pattern recognition, image processing, and artificial intelligence, with applications to biometrics and intelligent systems.

David Zhang (F'09) received the B.S. degree in computer science from Peking University, Shenzhen, China, and the M.Sc. degree in computer science and the Ph.D. degree from the Harbin Institute of Technology, Harbin, China, in 1982 and 1985, respectively. He received his second Ph.D. degree in electrical and computer engineering from the University of Waterloo, Waterloo, ON, Canada, in 1994.

From 1986 to 1988, he was a Postdoctoral Fellow at Tsinghua University, Beijing, China, and then an Associate Professor at the Academia Sinica, Beijing. He is currently with the Biometrics Research Centre, Department of Computing, Hong Kong Polytechnic University, Kowloon, Hong Kong. He is the Founder and Editor-in-Chief of the International Journal of Image and Graphics; a Book Editor of the Springer International Series on Biometrics (KISB); the Organizer of the first International Conference on Biometrics Authentication; the Technical Committee Chair of the IEEE Computational Intelligence Society; and the author of more than ten books and 200 journal papers.

Dr. Zhang is a Croucher Senior Research Fellow, a Distinguished Speaker of the IEEE Computer Society, and a Fellow of the International Association for Pattern Recognition. He is an Associate Editor of more than ten international journals, including the IEEE TRANSACTIONS and Pattern Recognition.

Pengfei Shi (SM'01) received the B.S. and M.S. degrees in electrical engineering from Shanghai Jiao Tong University (SJTU), Shanghai, China, in 1962 and 1965, respectively.

In 1980, he joined the Institute of Image Processing and Pattern Recognition (IPPR), SJTU. During the past 23 years, he has been working in the areas of image analysis, pattern recognition, and visualization. He is the author of more than 80 published papers. He is currently the Director of the IPPR and a Professor of pattern recognition and intelligent systems with the Faculty of Electronic and Information Engineering.

Jingqi Yan received the B.S. degree in automatic control and the M.S. and Ph.D. degrees in pattern recognition and intelligent systems from Shanghai Jiao Tong University (SJTU), Shanghai, China, in 1996, 1999, and 2002, respectively.

He is currently a Researcher with the Institute of Image Processing and Pattern Recognition, SJTU. His research interests include geometric modeling, computer graphics, biometrics, scientific visualization, and image processing.