
Upload: maruthi-jacs

Post on 17-Nov-2015


IMAGE QUALITY ASSESSMENT FOR FAKE BIOMETRIC DETECTION: APPLICATION TO IRIS, FINGERPRINT, AND FACE RECOGNITION

ABSTRACT

Ensuring the actual presence of a real legitimate trait, in contrast to a fake self-manufactured synthetic or reconstructed sample, is a significant problem in biometric authentication that requires the development of new and efficient protection measures. In this paper, we present a novel software-based fake detection method that can be used in multiple biometric systems to detect different types of fraudulent access attempts. The objective of the proposed system is to enhance the security of biometric recognition frameworks by adding liveness assessment in a fast, user-friendly and non-intrusive manner, through the use of image quality assessment. The proposed approach presents a very low degree of complexity, which makes it suitable for real-time applications, using 25 general image quality features extracted from one image (i.e., the same acquired for authentication purposes) to distinguish between legitimate and impostor samples. The experimental results, obtained on publicly available data sets of fingerprint, iris and 2D face, show that the proposed method is highly competitive compared with other state-of-the-art approaches and that the analysis of the general image quality of real biometric samples reveals highly valuable information that may be very efficiently used to discriminate them from fake traits.

TABLE OF CONTENTS

CHAPTER NO.    TITLE

ABSTRACT
LIST OF FIGURES
LIST OF ABBREVIATIONS

1. CHAPTER 1: INTRODUCTION
   1.1 GENERAL
       1.1.1 THE IMAGE PROCESSING SYSTEM
       1.1.2 IMAGE PROCESSING FUNDAMENTAL
   1.2 OBJECTIVE AND SCOPE OF THE PROJECT
   1.3 EXISTING SYSTEM
       1.3.1 DISADVANTAGES OF EXISTING SYSTEM
       1.3.2 LITERATURE SURVEY
   1.4 PROPOSED SYSTEM
       1.4.1 BLOCK DIAGRAM
       1.4.2 PROPOSED SYSTEM ADVANTAGES

2. CHAPTER 2: PROJECT DESCRIPTION
   2.1 INTRODUCTION
   2.2 IMAGE QUALITY ASSESSMENT
       2.2.1 FULL REFERENCE IMAGE QUALITY ASSESSMENT
       2.2.2 NO REFERENCE IMAGE QUALITY ASSESSMENT
   2.3 CLASSIFICATION TECHNIQUES
       2.3.1 LINEAR DISCRIMINANT ANALYSIS
       2.3.2 QUADRATIC DISCRIMINANT ANALYSIS
   2.4 APPLICATIONS
   2.5 GENERAL RESULTS AND CONCLUSIONS
   2.6 MATERIALS AND METHODS
       2.6.1 MODULE DESCRIPTION
       2.6.2 METHODOLOGIES - GIVEN INPUT AND EXPECTED OUTPUT

3. CHAPTER 3: SOFTWARE SPECIFICATION
   3.1 GENERAL
   3.2 FEATURES OF MATLAB
       3.2.1 INTERFACING WITH OTHER LANGUAGES
       3.2.2 ANALYZING AND ACCESSING DATA
       3.2.3 PERFORMING NUMERIC COMPUTATION

4. CHAPTER 4: IMPLEMENTATION
   4.1 GENERAL
   4.2 IMPLEMENTATION CODING
   4.3 SNAPSHOTS

5. CHAPTER 5: CONCLUSION & REFERENCES
   5.1 CONCLUSION
   5.2 REFERENCES

LIST OF FIGURES

FIGURE NO.    NAME OF THE FIGURE

1.1    BLOCK DIAGRAM FOR IMAGE PROCESSING SYSTEM

1.2    BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE PROCESSING SYSTEM

1.3 IMAGE PROCESSING TECHNIQUES

1.4    BLOCK DIAGRAM OF PROPOSED SYSTEM

2.3.1  CLASSIFICATION OF DATA USING A CLASSIFIER

2.3.2  EXAMPLE OF LINEAR DISCRIMINANT ANALYSIS

2.3.3  EXAMPLE OF QUADRATIC DISCRIMINANT ANALYSIS

LIST OF ABBREVIATIONS

IQA   - Image Quality Assessment
MSE   - Mean Square Error
PSNR  - Peak Signal to Noise Ratio
SNR   - Signal to Noise Ratio
SC    - Structural Content
MD    - Maximum Difference
AD    - Average Difference
NAE   - Normalized Absolute Error
RAMD  - R-Averaged Maximum Difference
LMSE  - Laplacian Mean Square Error
NXC   - Normalized Cross Correlation
MAS   - Mean Angle Similarity
MAMS  - Mean Angle Magnitude Similarity
TED   - Total Edge Difference
TCD   - Total Corner Difference
SME   - Spectral Magnitude Error
SPE   - Spectral Phase Error
GME   - Gradient Magnitude Error
GPE   - Gradient Phase Error
SSIM  - Structural Similarity Index Measure
VIF   - Visual Information Fidelity
RRED  - Reduced Reference Entropy Distortion
JQI   - JPEG Quality Index
HLFI  - High Low Frequency Index
BIQI  - Blind Image Quality Index
NIQE  - Naturalness Image Quality Evaluator
LDA   - Linear Discriminant Analysis
QDA   - Quadratic Discriminant Analysis
GUIDE - Graphical User Interface Development Environment
MEX-files - MATLAB Executable files

CHAPTER 1: INTRODUCTION

1.1 GENERAL

The term digital image processing refers to the processing of a two-dimensional picture by a digital computer. In a broader context, it implies digital processing of any two-dimensional data. A digital image is an array of real or complex numbers represented by a finite number of bits. An image given in the form of a transparency, slide, photograph or X-ray is first digitized and stored as a matrix of binary digits in computer memory. This digitized image can then be processed and/or displayed on a high-resolution television monitor. For display, the image is stored in a rapid-access buffer memory, which refreshes the monitor at a rate of 25 frames per second to produce a visually continuous display.

1.1.1 THE IMAGE PROCESSING SYSTEM

[Figure: digitizer, image processor, digital computer, mass storage, hard copy device, display and operator console]

FIG 1.1 BLOCK DIAGRAM FOR IMAGE PROCESSING SYSTEM

DIGITIZER: A digitizer converts an image into a numerical representation suitable for input into a digital computer. Some common digitizers are:
1. Microdensitometer
2. Flying spot scanner
3. Image dissector
4. Vidicon camera
5. Photosensitive solid-state arrays

IMAGE PROCESSOR: An image processor performs the functions of image acquisition, storage, preprocessing, segmentation, representation, recognition and interpretation, and finally displays or records the resulting image. The following block diagram gives the fundamental sequence involved in an image processing system.

[Figure: problem domain -> image acquisition -> preprocessing -> segmentation -> representation & description -> recognition & interpretation -> result, with all modules guided by the knowledge base]

FIG 1.2 BLOCK DIAGRAM OF FUNDAMENTAL SEQUENCE INVOLVED IN AN IMAGE PROCESSING SYSTEM

As detailed in the diagram, the first step in the process is image acquisition by an imaging sensor in conjunction with a digitizer to digitize the image. The next step is preprocessing, where the image is improved before being fed as an input to the other processes. Preprocessing typically deals with enhancing, removing noise, isolating regions, etc. Segmentation partitions an image into its constituent parts or objects. The output of segmentation is usually raw pixel data, which consists of either the boundary of the region or the pixels in the region themselves. Representation is the process of transforming the raw pixel data into a form useful for subsequent processing by the computer. Description deals with extracting features that are basic in differentiating one class of objects from another. Recognition assigns a label to an object based on the information provided by its descriptors. Interpretation involves assigning meaning to an ensemble of recognized objects. The knowledge about a problem domain is incorporated into the knowledge base, which guides the operation of each processing module and also controls the interaction between the modules. Not all modules need be present for a specific function; the composition of the image processing system depends on its application. The frame rate of the image processor is normally around 25 frames per second.
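The chain just described (acquisition, preprocessing, segmentation, description) can be illustrated with a toy sketch. This is not the project's MATLAB implementation; all names and values below are illustrative, and a simple mean filter and fixed threshold stand in for real preprocessing and segmentation stages.

```python
import numpy as np

def preprocess(image, k=3):
    """Simple k x k mean filter to suppress noise (preprocessing step)."""
    padded = np.pad(image, k // 2, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out / (k * k)

def segment(image, threshold=0.5):
    """Threshold the image into object (1) and background (0) pixels."""
    return (image > threshold).astype(int)

# "Acquisition": a synthetic 8x8 image with a bright square on a dark field.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

mask = segment(preprocess(img))
# "Description": one basic feature of the segmented region -- its area.
area = int(mask.sum())
```

The blur shrinks the thresholded square slightly at its corners, which is exactly the kind of interaction between preprocessing and segmentation that a real system must account for.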

DIGITAL COMPUTER: Mathematical processing of the digitized image, such as convolution, averaging, addition and subtraction, is done by the computer.

MASS STORAGE: The secondary storage devices normally used are floppy disks, CD-ROMs, etc.

HARD COPY DEVICE: The hard copy device is used to produce a permanent copy of the image and for the storage of the software involved.

OPERATOR CONSOLE: The operator console consists of equipment and arrangements for verification of intermediate results and for alterations in the software as and when required. The operator is also able to check for any resulting errors and to enter the requisite data.

1.1.2 IMAGE PROCESSING FUNDAMENTAL

Digital image processing refers to the processing of an image in digital form. Modern cameras may capture the image directly in digital form, but generally images originate in optical form: they are captured by video cameras and digitized. The digitization process includes sampling and quantization. These images are then processed by at least one, though not necessarily all, of the five fundamental processes described below.

IMAGE PROCESSING TECHNIQUES: This section gives various image processing techniques:

1. Image Enhancement
2. Image Restoration
3. Image Analysis
4. Image Compression
5. Image Synthesis

FIG 1.3: IMAGE PROCESSING TECHNIQUES

IMAGE ENHANCEMENT: Image enhancement operations improve the qualities of an image, for example by improving its contrast and brightness characteristics, reducing its noise content, or sharpening its details. Enhancement only presents the same information in a more understandable form; it does not add any information to the image.

IMAGE RESTORATION: Image restoration, like enhancement, improves the qualities of an image, but all of its operations are based on known or measured degradations of the original image. Image restoration is used to restore images with problems such as geometric distortion, improper focus, repetitive noise and camera motion, and to correct images for known degradations.

IMAGE ANALYSIS: Image analysis operations produce numerical or graphical information based on characteristics of the original image. They break the image into objects and then classify them, relying on image statistics. Common operations are extraction and description of scene and image features, automated measurements, and object classification. Image analysis is mainly used in machine vision applications.

IMAGE COMPRESSION: Image compression and decompression reduce the amount of data needed to describe the image. Most images contain a lot of redundant information, and compression removes these redundancies. Because the size is reduced, the image can be stored and transported more efficiently; the compressed image is decompressed when displayed. Lossless compression preserves the exact data of the original image, whereas lossy compression does not exactly reproduce the original image but provides excellent compression ratios.

IMAGE SYNTHESIS: Image synthesis operations create images from other images or from non-image data. They generally create images that are either physically impossible or impractical to acquire.

APPLICATIONS OF DIGITAL IMAGE PROCESSING: Digital image processing has a broad spectrum of applications, such as remote sensing via satellites and other spacecraft, image transmission and storage for business applications, medical processing, radar, sonar and acoustic image processing, robotics, and automated inspection of industrial parts.

MEDICAL APPLICATIONS: In medical applications, one is concerned with the processing of chest X-rays, cineangiograms, projection images of transaxial tomography and other medical images that occur in radiology, nuclear magnetic resonance (NMR) and ultrasonic scanning. These images may be used for patient screening and monitoring or for detection of tumors or other diseases in patients.

SATELLITE IMAGING: Images acquired by satellites are useful in tracking of earth resources; geographical mapping; prediction of agricultural crops, urban growth and weather; flood and fire control; and many other environmental applications. Space image applications include recognition and analysis of objects contained in images obtained from deep space-probe missions.

COMMUNICATION: Image transmission and storage applications occur in broadcast television, teleconferencing, transmission of facsimile images for office automation, communication over computer networks, closed-circuit television-based security monitoring systems, and military communications.

RADAR IMAGING SYSTEMS: Radar and sonar images are used for detection and recognition of various types of targets and in guidance and maneuvering of aircraft or missile systems.

DOCUMENT PROCESSING: Image processing is used in scanning and transmission for converting paper documents to digital image form, compressing the image, and storing it on magnetic tape. It is also used in document reading for automatically detecting and recognizing printed characters.

DEFENSE/INTELLIGENCE: It is used in reconnaissance photo-interpretation for automatic interpretation of earth satellite imagery to look for sensitive targets or military threats, and in target acquisition and guidance for recognizing and tracking targets in real-time smart-bomb and missile-guidance systems.

1.2 OBJECTIVE AND SCOPE OF THE PROJECT

In the present work we propose a novel software-based multi-biometric and multi-attack protection method which aims to overcome part of these limitations through the use of image quality assessment (IQA). It is not only capable of operating with very good performance under different biometric systems (multi-biometric) and for diverse spoofing scenarios, but it also provides a very good level of protection against certain non-spoofing attacks (multi-attack).

1.3 EXISTING SYSTEM

In the existing system, each image was processed using wavelets to create features based on the available spectral and textural information. A variant of Fisher's linear discriminant was used to create eight features for classification. For testing, the difference between the squared Euclidean distances to the spoof and person class means was used to calculate the error trade-off between correctly classifying a subject and misclassifying a spoof. The results are shown in Fig. 6, an ROC curve similar to those used to describe biometric matching performance over a range of operating points. In this case, the TAR is the rate at which a measurement taken on a genuine person is properly classified as a genuine sample; as such, it is a metric for the convenience of the spoof detection method as seen by an authorized user.
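The decision rule described above can be sketched as follows. This is a hedged illustration, not the existing system's code: the wavelet/Fisher feature extraction is assumed and not shown, and the class means and test vector are made up.

```python
import numpy as np

def decide(x, mean_person, mean_spoof):
    """Classify a feature vector by the difference of its squared
    Euclidean distances to the spoof-class and person-class means."""
    d_person = np.sum((x - mean_person) ** 2)
    d_spoof = np.sum((x - mean_spoof) ** 2)
    # Score > 0 means the spoof mean is farther away than the person mean.
    score = d_spoof - d_person
    return ("person" if score > 0 else "spoof"), score

# Illustrative class means in a 2-D feature space.
mean_person = np.array([1.0, 1.0])
mean_spoof = np.array([-1.0, -1.0])
label, score = decide(np.array([0.8, 1.2]), mean_person, mean_spoof)
```

Sweeping a threshold over the score (instead of comparing to zero) is what produces the ROC curve of TAR against spoof misclassification rate mentioned in the text.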

1.3.1 DISADVANTAGES OF EXISTING SYSTEM

Attackers can fraudulently access the biometric system, since the usual digital protection mechanisms (e.g., encryption, digital signature or watermarking) are not effective against spoofing.

The method is very slow and highly complex, which makes it unsuited to operate in real scenarios.

The computational load is very high.

1.3.2 LITERATURE SURVEY

1. BIOMETRIC TEMPLATE SECURITY

Biometric recognition offers a reliable solution to the problem of user authentication in identity management systems. With the widespread deployment of biometric systems in various applications, there are increasing concerns about the security and privacy of biometric technology. Public acceptance of biometric technology will depend on the ability of system designers to demonstrate that these systems are robust, have low error rates, and are tamper-proof. We present a high-level categorization of the various vulnerabilities of a biometric system and discuss countermeasures that have been proposed to address these vulnerabilities. In particular, we focus on biometric template security, which is an important issue because, unlike passwords and tokens, compromised biometric templates cannot be revoked and reissued. Protecting the template is a challenging task due to intra-user variability in the acquired biometric traits. We present an overview of various biometric template protection schemes and discuss their advantages and limitations in terms of security, revocability, and impact on matching accuracy. A template protection scheme with provable security and acceptable recognition performance has thus far remained elusive. Development of such a scheme is crucial as biometric systems are beginning to proliferate into the core physical and information infrastructure of our society.

2. FIRST INTERNATIONAL FINGERPRINT LIVENESS DETECTION COMPETITION - LIVDET 2009

Fingerprint recognition systems are vulnerable to artificial spoof fingerprint attacks, such as molds made of silicone, gelatin or Play-Doh. Liveness detection, which detects vitality information from the biometric signature itself, has been proposed to defeat these kinds of spoof attacks. The goal of the LivDet 2009 competition is to compare different methodologies for software-based fingerprint liveness detection with a common experimental protocol and a large dataset of spoof and live images. The competition is open to all academic and industrial institutions which have a solution for the software-based fingerprint vitality detection problem. Four submissions resulted in successful completion: Dermalog, ATVS, and two anonymous participants (one industrial and one academic). Each participant submitted an algorithm as a Win32 console application. The performance was evaluated on three datasets, from three different optical scanners, each with over 1500 images of fake and over 1500 images of live fingerprints. The best results were from the algorithm submitted by Dermalog, with a performance of 2.7% FRR and 2.8% FAR on the Identix (L-1) dataset. The competition aims to become a reference event for academic and industrial research in software-based fingerprint liveness detection and to raise the visibility of this important research area in order to decrease the risk of fingerprint systems to spoof attacks.

3. EVALUATION OF DIRECT ATTACKS TO FINGERPRINT VERIFICATION SYSTEMS

The vulnerabilities of fingerprint-based recognition systems to direct attacks with and without the cooperation of the user are studied. Two different systems, one minutiae-based and one ridge-feature-based, are evaluated on a database of real and fake fingerprints. Based on the quality of the fingerprint images and on the results achieved in different operational scenarios, we obtain a number of statistically significant observations regarding the robustness of the systems.

4. COUNTER-MEASURES TO PHOTO ATTACKS IN FACE RECOGNITION: A PUBLIC DATABASE AND A BASELINE

A common technique to bypass 2-D face recognition systems is to use photographs of spoofed identities. Unfortunately, research in counter-measures to this type of attack has not kept up: even though such threats have been known for nearly a decade, there seems to exist no consensus on best practices, techniques or protocols for developing and testing spoofing detectors for face recognition. We attribute the reason for this delay, partly, to the unavailability of public databases and protocols to study solutions and compare results. To this purpose we introduce the publicly available PRINT-ATTACK database and exemplify how to use its companion protocol with a motion-based algorithm that detects correlations between the person's head movements and the scene context. The results are to be used as a basis for comparison with other counter-measure techniques. The PRINT-ATTACK database contains 200 videos of real accesses and 200 videos of spoof attempts using printed photographs of 50 different identities.

5. IMAGE MANIPULATION DETECTION WITH BINARY SIMILARITY MEASURES

Since extremely powerful technologies are now available to generate and process digital images, there is a concomitant need for techniques to distinguish original images from altered ones, and genuine images from doctored ones. The proposed method is based on the neighboring bit planes of the image. The basic idea is that the correlation between the bit planes, as well as the binary texture characteristics within the bit planes, will differ between an original and a doctored image. This change in the intrinsic characteristics of the image can be monitored via the quantal-spatial moments of the bit planes. These so-called binary similarity measures are used as features in classifier design.

1.4 PROPOSED SYSTEM

We propose a novel software-based multi-biometric and multi-attack protection method which aims to overcome part of these limitations through the use of image quality assessment (IQA). It is not only capable of operating with very good performance under different biometric systems (multi-biometric) and for diverse spoofing scenarios, but it also provides a very good level of protection against certain non-spoofing attacks (multi-attack). Moreover, being software-based, it presents the usual advantages of this type of approach: it is fast, as it only needs one image (i.e., the same sample acquired for biometric recognition) to decide whether it is real or fake; non-intrusive; user-friendly (transparent to the user); and cheap and easy to embed in already functional systems, as no new piece of hardware is required. An added advantage of the proposed technique is its speed and very low complexity, which make it very well suited to operate in real scenarios (one of the desired characteristics of this type of method). As it does not rely on any trait-specific property (e.g., minutiae points, iris position or face detection), the computational load needed for image processing is very low, using only general image quality measures that are fast to compute, combined with very simple classifiers. It has been tested on publicly available attack databases of iris, fingerprint and 2D face, where it has reached results fully comparable to those obtained on the same databases, following the same experimental protocols, by more complex trait-specific top-ranked approaches from the state of the art.

1.4.1 BLOCK DIAGRAM

[Figure: 2D image -> Gaussian filtering -> feature extraction (FR-IQA and NR-IQA) -> final parameterization -> real/fake classification (using training data) -> result]

Fig 1.4 Block diagram of proposed system
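A minimal sketch of the block diagram's first stages, assuming a grey-scale input: the acquired 2D image is Gaussian-filtered to obtain a smoothed "reference" copy, and full-reference measures are computed between the two to form part of the feature vector. This is not the project's MATLAB code; `gaussian_blur` and `fr_features` are illustrative names, and only two of the 25 measures are shown.

```python
import numpy as np

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filter implemented with plain numpy."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    k /= k.sum()  # normalize so a flat image is left unchanged
    pad = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, rows)

def fr_features(img):
    """Two full-reference features of img against its blurred version."""
    ref = gaussian_blur(img)
    mse = np.mean((img - ref) ** 2)   # mean squared error
    md = np.max(np.abs(img - ref))    # maximum difference
    return np.array([mse, md])

# A perfectly flat image is unchanged by blurring, so both features are ~0.
flat = np.ones((16, 16))
features = fr_features(flat)
```

The resulting feature vector would then be concatenated with the no-reference measures and handed to the classifier, per the diagram.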

1.4.2 PROPOSED SYSTEM ADVANTAGES

Speed and very low complexity, which make it very well suited to operate in real scenarios.

The computational load needed for image processing is much reduced, combined with very simple classifiers.

Fraudulent access attempts will be denied by the biometric system.

The digital protection mechanism (image quality assessment) is very effective and sensitive enough to detect more fakes.

CHAPTER 2: PROJECT DESCRIPTION

2.1 INTRODUCTION

In recent years, the increasing interest in the evaluation of biometric systems security has led to the creation of numerous and very diverse initiatives focused on this major field of research: the publication of many research works disclosing and evaluating different biometric vulnerabilities, the proposal of new protection methods, sessions and workshops in biometric-specific and general signal processing conferences, the organization of competitions focused on vulnerability assessment, the acquisition of specific datasets, the creation of groups and laboratories specialized in the evaluation of biometric security, and the existence of several European projects with biometric security as their main research interest. Among the different threats analyzed, the so-called direct or spoofing attacks have motivated the biometric community to study the vulnerabilities against this type of fraudulent action in modalities such as the iris, the fingerprint, the face, the signature, and even the gait and multimodal approaches. In these attacks, the intruder uses some type of synthetically produced artifact (e.g., a gummy finger, a printed iris image or a face mask), or tries to mimic the behavior of the genuine user (e.g., gait, signature), to fraudulently access the biometric system. As this type of attack is performed in the analog domain and the interaction with the device follows the regular protocol, the usual digital protection mechanisms (e.g., encryption, digital signature or watermarking) are not effective.

The information flow of a biometric access system is simple. First, the biometric is presented to the sensor by the person requesting access. A camera may capture a face or iris, a sensor may capture a fingerprint, a microphone may capture a voice; in each case, the raw biometric information is acquired and sent to the biometric feature extractor. The extractor is generally software that extracts the features important for determining identity from the raw information. For a fingerprint, this might be the minutiae points; for a face, it could be the distance between the eyes. This extracted feature information is called a template. The template is then sent to the matcher, which compares the newly presented biometric information to previously submitted template information to make a decision. Presented along with a PIN number or access card, the template may be matched against that of a single enrolled user for verification. Alternatively, it may be compared to all enrolled users for identification.

2.2 IMAGE QUALITY ASSESSMENT (IQA) TECHNIQUE:

Expected quality differences between real and fake samples may include: degree of sharpness, color and luminance levels, local artifacts, amount of information found in both types of images (entropy), structural distortions, or natural appearance. For example, iris images captured from a printed paper are more likely to be blurred or out of focus due to trembling; face images captured from a mobile device will probably be over- or under-exposed; and it is not rare that fingerprint images captured from a gummy finger present local acquisition artifacts such as spots and patches. Furthermore, in an eventual attack in which a synthetically produced image is directly injected into the communication channel before the feature extractor, this fake sample will most likely lack some of the properties found in natural images. Following this quality-difference hypothesis, in the present research work we explore the potential of general image quality assessment as a protection method against different biometric attacks (with special attention to spoofing). As the implemented features do not evaluate any specific property of a given biometric modality or of a specific attack, they may be computed on any image. This gives the proposed method a new multi-biometric dimension which is not found in previously described protection schemes.

In the current state of the art, the rationale behind the use of IQA features for liveness detection is supported by three factors. First, image quality has been successfully used in previous works for image manipulation detection and steganalysis in the forensic field. To a certain extent, many spoofing attacks, especially those which involve taking a picture of a facial image displayed in a 2D device (e.g., spoofing attacks with printed iris or face images), may be regarded as a type of image manipulation which can be effectively detected, as shown in the present research work, by the use of different quality features.

Second, in addition to the previous studies in the forensic area, different features measuring trait-specific quality properties have already been used for liveness detection purposes in fingerprint and iris applications. However, even though these two works give a solid basis to the use of image quality as a protection method in biometric systems, neither of them is general. For instance, measuring the ridge and valley frequency may be a good parameter to detect certain fingerprint spoofs, but it cannot be used in iris liveness detection. On the other hand, the amount of occlusion of the eye is valid as an iris anti-spoofing mechanism, but will have little use in fake fingerprint detection. This same reasoning can be applied to the vast majority of the liveness detection methods found in the state of the art. Although all of them represent very valuable works which bring insight into the difficult problem of spoofing detection, they fail to generalize to different problems, as they are usually designed to work on one specific modality and, in many cases, also to detect one specific type of spoofing attack.

Third, human observers very often refer to the different appearance of real and fake samples to distinguish between them. As stated above, the different metrics and methods designed for IQA intend to estimate, in an objective and reliable way, the appearance of images as perceived by humans. Different quality measures present different sensitivities to image artifacts and distortions. For instance, measures like the mean squared error respond more to additive noise, whereas others, such as the spectral phase error, are more sensitive to blur, while gradient-related features react to distortions concentrated around edges and textures. Therefore, using a wide range of IQMs exploiting complementary image quality properties should make it possible to detect the aforementioned quality differences between real and fake samples expected to be found in many attack attempts (i.e., providing the method with multi-attack protection capabilities). All these observations lead us to believe that there is sound support for the quality-difference hypothesis and that image quality measures have the potential to achieve success in biometric protection tasks.

2.2.1 FULL REFERENCE IQA:

1. MEAN SQUARE ERROR: The mean squared error (MSE) of an estimator measures the average of the squares of the "errors", that is, the difference between the estimator and what is estimated. MSE is a risk function, corresponding to the expected value of the squared error loss or quadratic loss. The difference occurs because of randomness or because the estimator does not account for information that could produce a more accurate estimate. The MSE is the second moment (about the origin) of the error, and thus incorporates both the variance of the estimator and its bias. For an unbiased estimator, the MSE is the variance of the estimator. Like the variance, MSE has the same units of measurement as the square of the quantity being estimated. In an analogy to standard deviation, taking the square root of MSE yields the root-mean-square error or root-mean-square deviation (RMSE or RMSD), which has the same units as the quantity being estimated; for an unbiased estimator, the RMSE is the square root of the variance, known as the standard deviation. For two N x M images I (original) and K (reference), the standard form of the measure is MSE = (1/NM) ΣΣ (I(i,j) − K(i,j))².

2. PEAK SIGNAL TO NOISE RATIO: Peak signal-to-noise ratio, often abbreviated PSNR, is an engineering term for the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Because many signals have a very wide dynamic range, PSNR is usually expressed on the logarithmic decibel scale: PSNR = 10 log10(max(I)² / MSE). PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression). The signal in this case is the original data, and the noise is the error introduced by compression. When comparing compression codecs, PSNR is an approximation to human perception of reconstruction quality. Although a higher PSNR generally indicates that the reconstruction is of higher quality, in some cases it may not. One has to be extremely careful with the range of validity of this metric; it is only conclusively valid when used to compare results from the same codec (or codec type) and the same content.

3. SIGNAL TO NOISE RATIO: Signal-to-noise ratio (often abbreviated SNR or S/N) is a measure used in science and engineering that compares the level of a desired signal to the level of background noise. It is defined as the ratio of signal power to noise power, often expressed in decibels: SNR = 10 log10(ΣΣ I(i,j)² / ΣΣ (I(i,j) − K(i,j))²). A ratio higher than 1:1 (greater than 0 dB) indicates more signal than noise. While SNR is commonly quoted for electrical signals, it can be applied to any form of signal (such as isotope levels in an ice core or biochemical signaling between cells). The signal-to-noise ratio, the bandwidth, and the channel capacity of a communication channel are connected by the Shannon-Hartley theorem. Signal-to-noise ratio is sometimes used informally to refer to the ratio of useful information to false or irrelevant data in a conversation or exchange. For example, in online discussion forums and other online communities, off-topic posts and spam are regarded as "noise" that interferes with the "signal" of appropriate discussion.

4. STRUCTURAL CONTENT: Structural content is the ratio between the sum of squares of the original image and the sum of squares of the reference image: SC = ΣΣ I(i,j)² / ΣΣ K(i,j)².
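The four measures above can be written out directly for two equally sized grey-scale arrays. This is an illustrative numpy sketch (the project itself is implemented in MATLAB); `I` is the original image and `K` the reference copy.

```python
import numpy as np

def mse(I, K):
    """Mean squared error between the two images."""
    return np.mean((I - K) ** 2)

def psnr(I, K, peak=255.0):
    """Peak signal-to-noise ratio in decibels, for 8-bit grey levels."""
    return 10 * np.log10(peak ** 2 / mse(I, K))

def snr(I, K):
    """Signal-to-noise ratio in decibels: signal power over error power."""
    return 10 * np.log10(np.sum(I ** 2) / np.sum((I - K) ** 2))

def structural_content(I, K):
    """Ratio of the sums of squares of original and reference images."""
    return np.sum(I ** 2) / np.sum(K ** 2)

# Tiny worked example: a uniform distortion of 5 grey levels.
I = np.array([[100.0, 200.0], [50.0, 150.0]])
K = I + 5.0
```

For this example the MSE is exactly 25, so the PSNR is 10·log10(255²/25) ≈ 34.15 dB.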

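As a concrete illustration, the four measures above can be written in a few lines of NumPy. The report's actual implementation is in MATLAB; this Python version is only a sketch, with I the original image and K its reference (all function names here are illustrative, not from the paper):

```python
import numpy as np

# Illustrative sketch (not the report's MATLAB code): I is the original
# image, K the reference; both float arrays of the same size.
def mse(I, K):
    return np.mean((I - K) ** 2)

def psnr(I, K, peak=255.0):
    # ratio of maximum possible power to error power, on a log (dB) scale
    return 10.0 * np.log10(peak ** 2 / mse(I, K))

def snr(I, K):
    # signal power over noise (error) power, in decibels
    return 10.0 * np.log10(np.sum(I ** 2) / np.sum((I - K) ** 2))

def structural_content(I, K):
    # ratio of the sums of squares of the two images
    return np.sum(I ** 2) / np.sum(K ** 2)

I = np.array([[10.0, 20.0], [30.0, 40.0]])
K = np.array([[11.0, 19.0], [29.0, 41.0]])
```

For this toy pair the per-pixel errors are ±1, so the MSE is exactly 1.0.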
5. MAXIMUM DIFFERENCE: The maximum value of the absolute difference image (original image minus reference image): MD = max_{i,j} |I_{i,j} - Î_{i,j}|.

6. AVERAGE DIFFERENCE: The average value per pixel of the difference image (original image minus reference image): AD = (1/(NM)) Σ_{i,j} (I_{i,j} - Î_{i,j}).

7. NORMALIZED ABSOLUTE ERROR: The ratio between the sum of the absolute difference image and the sum of the absolute original image: NAE = Σ_{i,j} |I_{i,j} - Î_{i,j}| / Σ_{i,j} |I_{i,j}|.

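These three difference-based measures are one-liners in NumPy. Again this is only an illustrative Python sketch of the definitions above (the report's code is MATLAB, and the helper names are hypothetical):

```python
import numpy as np

def max_difference(I, K):
    # MD: largest absolute pixel difference
    return np.max(np.abs(I - K))

def average_difference(I, K):
    # AD: per-pixel average of the (signed) difference image
    return np.mean(I - K)

def normalized_abs_error(I, K):
    # NAE: total absolute error normalized by the original image's mass
    return np.sum(np.abs(I - K)) / np.sum(np.abs(I))

I = np.array([[10.0, 20.0], [30.0, 40.0]])
K = np.array([[12.0, 20.0], [30.0, 39.0]])
```

Here the differences I - K are (-2, 0, 0, 1), giving MD = 2, AD = -0.25, and NAE = 3/100 = 0.03.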
8. R-AVERAGED MD (RAMD): The R highest absolute pixel differences are summed and divided by R to calculate an averaged maximum difference: RAMD = (1/R) Σ_{r=1}^{R} max_r |I - Î|.

In the RAMD formula, max_r is defined as the r-th highest pixel difference between the two images. For the present implementation, R = 10.

9. LAPLACIAN MSE: This measure is based on the discrete Laplacian h(I_{i,j}) = I_{i+1,j} + I_{i-1,j} + I_{i,j+1} + I_{i,j-1} - 4 I_{i,j}. Both h(I_{i,j}) and h(Î_{i,j}) are computed, and the LMSE is the ratio between the sum of the squared differences of these two values and the sum of the squared h(I_{i,j}) values of the original image: LMSE = Σ_{i,j} (h(I_{i,j}) - h(Î_{i,j}))² / Σ_{i,j} h(I_{i,j})².

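RAMD and the Laplacian MSE can be sketched as follows in NumPy. This is an illustrative Python version of the definitions above (handling of the one-pixel border in the Laplacian, here set to zero, is an assumption of the sketch):

```python
import numpy as np

def ramd(I, K, R=10):
    # average of the R highest absolute pixel differences
    d = np.sort(np.abs(I - K).ravel())[::-1]
    return np.mean(d[:R])

def laplacian(I):
    # h(I)_{i,j} = I_{i+1,j} + I_{i-1,j} + I_{i,j+1} + I_{i,j-1} - 4*I_{i,j}
    # (interior pixels only; the border is left at zero in this sketch)
    h = np.zeros_like(I)
    h[1:-1, 1:-1] = (I[2:, 1:-1] + I[:-2, 1:-1] +
                     I[1:-1, 2:] + I[1:-1, :-2] - 4.0 * I[1:-1, 1:-1])
    return h

def lmse(I, K):
    hI, hK = laplacian(I), laplacian(K)
    return np.sum((hI - hK) ** 2) / np.sum(hI ** 2)

I = np.array([[0.0, 1.0, 4.0, 9.0]] * 4)  # quadratic ramp along each row
```

A constant brightness shift leaves the Laplacian unchanged, so LMSE between I and I + 5 is exactly zero, while RAMD of a uniform unit difference is 1.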
10. NORMALIZED CROSS-CORRELATION: For image-processing applications in which the brightness of the image and template can vary due to lighting and exposure conditions, the images can first be normalized. This is typically done at every step by subtracting the mean and dividing by the standard deviation. As a full-reference feature, the measure is computed as NXC = (Σ_{i,j} I_{i,j} Î_{i,j}) / (Σ_{i,j} I_{i,j}²).

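Both the plain normalized cross-correlation and the zero-mean template-matching variant described above can be sketched in NumPy (illustrative Python, not the report's MATLAB):

```python
import numpy as np

def nxc(I, K):
    # normalized cross-correlation as a full-reference quality feature
    return np.sum(I * K) / np.sum(I ** 2)

def zero_mean_ncc(I, K):
    # template-matching variant: subtract the mean, divide by the std
    In = (I - I.mean()) / I.std()
    Kn = (K - K.mean()) / K.std()
    return np.mean(In * Kn)

I = np.array([[1.0, 2.0], [3.0, 5.0]])
```

The zero-mean variant is invariant to affine brightness changes: correlating I with 2*I + 3 still yields 1.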
11. MEAN ANGLE SIMILARITY: The mean angle similarity measures the similarity between the original and reference images through the mean of the angles α_{i,j} between their corresponding pixel vectors: MAS = 1 - (1/(NM)) Σ_{i,j} α_{i,j}.

12. MEAN ANGLE MAGNITUDE SIMILARITY: The mean angle magnitude similarity measures the similarity between the original and reference images by combining the angle α_{i,j} with the magnitude of the difference between the corresponding pixel vectors.

In the MAS and MAMS entries, α_{i,j} denotes the angle between two vectors, defined as α_{i,j} = (2/π) arccos( ⟨I_{i,j}, Î_{i,j}⟩ / (||I_{i,j}|| ||Î_{i,j}||) ), where ⟨I_{i,j}, Î_{i,j}⟩ denotes the scalar product. As we are dealing with positive matrices I and Î, we are constrained to the first quadrant of the Cartesian space, so that the maximum difference attained will be π/2; therefore the coefficient 2/π is included for normalization.

13. TOTAL EDGE DIFFERENCE: The ratio between the difference in the total number of edge pixels of the two images and the total number of pixels: TED = |E - Ê| / (NM), where E and Ê are the numbers of edge pixels detected in the original and reference images.

14. TOTAL CORNER DIFFERENCE: The ratio between the difference in the total number of corners detected in the two images and the total number of pixels: TCD = |C - Ĉ| / (NM), where C and Ĉ are the numbers of corners detected in the original and reference images.

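A minimal sketch of TED in NumPy is shown below. The simple gradient-magnitude threshold is a crude stand-in for a real edge detector (the snapshots in Chapter 4 use a Harris corner detector for TCD, which would replace the edge counts with corner counts; that part is omitted here):

```python
import numpy as np

def edge_map(I, thr=10.0):
    # crude edge detector: threshold the gradient magnitude
    # (a stand-in for Sobel; the threshold value is an assumption)
    gy, gx = np.gradient(I)
    return (np.hypot(gx, gy) > thr).astype(float)

def ted(I, K, thr=10.0):
    # difference in total edge counts, normalized by the number of pixels
    return abs(edge_map(I, thr).sum() - edge_map(K, thr).sum()) / I.size

I = 20.0 * np.outer(np.arange(8), np.ones(8))  # steep ramp: every pixel "edgy"
K = np.ones((8, 8))                            # flat image: no edges at all
```

For these extremes TED is 0 for identical images and 1 when one image is all edges and the other has none.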
15. SPECTRAL MAGNITUDE ERROR: The squared difference between the magnitudes of the Fourier transforms of the original and reference images, averaged over the total number of pixels: SME = (1/(NM)) Σ_{i,j} (|F_{i,j}| - |F̂_{i,j}|)², where F and F̂ are the Fourier transforms of I and Î.

16. SPECTRAL PHASE ERROR: The squared difference between the phase angles of the Fourier transforms of the original and reference images, averaged over the total number of pixels: SPE = (1/(NM)) Σ_{i,j} |arg(F_{i,j}) - arg(F̂_{i,j})|².

17. GRADIENT MAGNITUDE ERROR: The squared difference between the magnitudes of the gradient maps of the original and reference images, averaged over the total number of pixels: GME = (1/(NM)) Σ_{i,j} (|G_{i,j}| - |Ĝ_{i,j}|)², where G and Ĝ are the gradient maps of I and Î.

18. GRADIENT PHASE ERROR: The squared difference between the phase angles of the gradient maps of the original and reference images, averaged over the total number of pixels: GPE = (1/(NM)) Σ_{i,j} |arg(G_{i,j}) - arg(Ĝ_{i,j})|².

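The four spectral/gradient errors share one pattern: transform both images, then average a magnitude or phase difference. A NumPy sketch (representing the gradient map as a complex field Gx + i*Gy is an assumption of this sketch):

```python
import numpy as np

def spectral_errors(I, K):
    # SME / SPE: magnitude and phase differences of the 2-D Fourier transforms
    F, Fk = np.fft.fft2(I), np.fft.fft2(K)
    n = I.size
    sme = np.sum((np.abs(F) - np.abs(Fk)) ** 2) / n
    spe = np.sum(np.abs(np.angle(F) - np.angle(Fk)) ** 2) / n
    return sme, spe

def gradient_errors(I, K):
    # GME / GPE: same idea on a complex gradient field Gx + i*Gy
    gy, gx = np.gradient(I)
    ky, kx = np.gradient(K)
    G, Gk = gx + 1j * gy, kx + 1j * ky
    n = I.size
    gme = np.sum((np.abs(G) - np.abs(Gk)) ** 2) / n
    gpe = np.sum(np.abs(np.angle(G) - np.angle(Gk)) ** 2) / n
    return gme, gpe

I = np.arange(16.0).reshape(4, 4)
```

All four errors vanish exactly when the two images coincide, which is a quick sanity check for an implementation.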
19. STRUCTURAL SIMILARITY INDEX MEASUREMENT: The Structural SIMilarity (SSIM) index is a method for measuring the similarity between two images. The SSIM index can be viewed as a quality measure of one of the images being compared, provided the other image is regarded as of perfect quality. It is an improved version of the universal image quality index proposed before.

20. VISUAL INFORMATION FIDELITY: The Visual Information Fidelity (VIF) metric is based on the assumption that images of the human visual environment are all natural scenes and thus share the same kind of statistical properties.

21. REDUCED REFERENCE ENTROPY DIFFERENCE: The RRED metric approaches the problem of QA from the perspective of measuring the amount of local information difference between the reference image and the projection of the distorted image onto the space of natural images, for a given subband of the wavelet domain. In essence, the RRED algorithm computes the average difference between scaled local entropies of wavelet coefficients of reference and projected distorted images in a distributed fashion. This way, contrary to the VIF feature, for the RRED it is not necessary to have access to the entire reference image but only to a reduced part of its information (i.e., quality is computed locally). This required information can even be reduced to a single scalar in case all the scaled entropy terms in the selected wavelet subband are considered in one single block.

2.2.2. NO-REFERENCE IMAGE QUALITY MEASUREMENT

22. JPEG QUALITY INDEX:

The JPEG Quality Index (JQI) evaluates the quality of images affected by the usual block artifacts found in many compression algorithms running at low bit rates, such as JPEG.

23. HIGH-LOW FREQUENCY INDEX:

The High-Low Frequency Index (HLFI) is formally defined in Table I. It was inspired by previous work which considered local gradients as a blind metric to detect blur and noise [41]. Similarly, the HLFI feature is sensitive to the sharpness of the image, which it captures by computing the difference between the power in the lower and upper frequencies of the Fourier spectrum.

In the HLFI entry, il , ih, jl , jh are respectively the indices corresponding to the lower and upper frequency thresholds considered by the method. In the current implementation, il = ih = 0.15N and jl = jh = 0.15M.

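A NumPy sketch of the idea behind HLFI follows. The exact band layout of the original feature is given in Table I of the paper; the rectangular low-frequency block used here, with the 0.15N and 0.15M thresholds from the text, is an assumption of this sketch:

```python
import numpy as np

def hlfi(I, frac=0.15):
    # |low-frequency power - high-frequency power| / total power of |F|
    # (rectangular low-frequency block is an assumption of this sketch)
    N, M = I.shape
    F = np.abs(np.fft.fft2(I))
    il, jl = max(1, int(frac * N)), max(1, int(frac * M))
    low = np.sum(F[:il, :jl])
    high = np.sum(F) - low
    return np.abs(low - high) / np.sum(F)

flat = np.ones((8, 8))   # all spectral energy at DC, i.e. purely low frequency
```

A constant image concentrates all its energy at the DC bin, so this sketch scores it as maximally low-frequency (value 1); a sharp image spreads power into the upper bands and the index drops.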
24. BLIND IMAGE QUALITY INDEX MEASUREMENT:

These blind IQA techniques use a priori knowledge taken from natural scene distortion-free images to train the initial model (i.e., no distorted images are used). The rationale behind this trend relies on the hypothesis that undistorted images of the natural world present certain regular properties which fall within a certain subspace of all possible images. If quantified appropriately, deviations from the regularity of natural statistics can help to evaluate the perceptual quality of an image.

25. NATURALNESS IMAGE QUALITY EVALUATOR:

The NIQE is a completely blind image quality analyzer based on the construction of a "quality aware" collection of statistical features (derived from a corpus of natural undistorted images) related to a multivariate Gaussian natural scene statistical model.

2.3 CLASSIFICATION

2.3.1. DISCRIMINANT ANALYSIS

Suppose we observe a sample drawn from a multivariate normal distribution N(μ, Σ) with mean vector μ and covariance matrix Σ. The data are D-dimensional, and vectors, unless otherwise noted, are column vectors. The multivariate density is then

f(x) = (2π)^(-D/2) |Σ|^(-1/2) exp( -(1/2) (x - μ)ᵀ Σ⁻¹ (x - μ) ),

where |Σ| is the determinant of Σ. Suppose we observe a sample of data drawn from two classes, each described by a multivariate normal density

Fig 2.3.1 CLASSIFICATION OF DATA USING CLASSIFIER

for classes k = 1, 2. Recall that Bayes' rule gives

P(k|x) = p(x|k) P(k) / p(x)

for the posterior probability P(k|x) of observing an instance of class k at point x.

LINEAR DISCRIMINANT ANALYSIS: Linear discriminant analysis (LDA) and the related Fisher's linear discriminant are methods used in statistics, pattern recognition and machine learning to find a linear combination of features which characterizes or separates two or more classes of objects or events. The resulting combination may be used as a linear classifier or, more commonly, for dimensionality reduction before later classification. LDA is closely related to ANOVA (analysis of variance) and regression analysis, which also attempt to express one dependent variable as a linear combination of other features or measurements. However, ANOVA uses categorical independent variables and a continuous dependent variable, whereas discriminant analysis has continuous independent variables and a categorical dependent variable (i.e. the class label). Logistic regression and probit regression are more similar to LDA, as they also explain a categorical variable by the values of continuous independent variables. These other methods are preferable in applications where it is not reasonable to assume that the independent variables are normally distributed, which is a fundamental assumption of the LDA method. LDA is also closely related to principal component analysis (PCA) and factor analysis in that they all look for linear combinations of variables which best explain the data. LDA explicitly attempts to model the difference between the classes of data; PCA, on the other hand, does not take into account any difference in class, and factor analysis builds the feature combinations based on differences rather than similarities. Discriminant analysis is also different from factor analysis in that it is not an interdependence technique: a distinction between independent variables and dependent variables (also called criterion variables) must be made. LDA works when the measurements made on independent variables for each observation are continuous quantities. When dealing with categorical independent variables, the equivalent technique is discriminant correspondence analysis.

Fig 2.3.2: Example of LDA classification

QUADRATIC DISCRIMINANT ANALYSIS: Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), in that both assume that the measurements from each class are normally distributed. Unlike LDA, however, in QDA there is no assumption that the covariance of each of the classes is identical. When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. Suppose there are only two groups, with class means μ_{y=0}, μ_{y=1} and covariances Σ_{y=0}, Σ_{y=1}. Then the classification rule thresholds the likelihood ratio of the two class-conditional normal densities,

p(x | μ_{y=1}, Σ_{y=1}) / p(x | μ_{y=0}, Σ_{y=0}) < t,

for some threshold t. After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic. The sample estimates of the mean vectors and variance-covariance matrices substitute for the population quantities in this formula.
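A two-class quadratic discriminant can be sketched directly from these formulas: fit one Gaussian per class and assign a point to the class with the larger log-likelihood. The synthetic "real"/"fake" feature clouds and all names below are illustrative assumptions; the report itself uses MATLAB's classify(...,'quadratic'):

```python
import numpy as np

# Sketch of a two-class quadratic discriminant (not the report's MATLAB code).
def fit_gaussian(X):
    mu = X.mean(axis=0)
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])  # regularized
    return mu, C

def log_lik(x, mu, C):
    # log N(x; mu, C) up to an additive constant
    d = x - mu
    return -0.5 * (np.log(np.linalg.det(C)) + d @ np.linalg.solve(C, d))

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(200, 2))   # stand-in "real" IQ features
fake = rng.normal(3.0, 1.0, size=(200, 2))   # stand-in "fake" IQ features
classes = [fit_gaussian(real), fit_gaussian(fake)]

def predict(x):
    # 0 = real, 1 = fake; picks the class with the larger log-likelihood
    return int(np.argmax([log_lik(x, mu, C) for mu, C in classes]))
```

Comparing per-class log-likelihoods is exactly the thresholded likelihood ratio above (with t absorbed into equal priors), and with unequal covariances the decision boundary is quadratic.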

Fig 2.3.3: Example of quadratic discriminant analysis

2.4 APPLICATIONS

BIOMETRIC TIME CLOCKS: Biometric time clock systems are the same as biometric time attendance systems; they are two different names for the same thing. Biometric identification clocks are usually referred to by people looking for time punch systems. Biometric time systems usually utilize fingerprint recognition, or at times hand recognition, for employees to "mark in" or "mark out". Based on the records collected, the wages are worked out. You can find more details on biometric time clock systems in our biometric time and attendance section, where we discuss the technology in more detail. If you are interested in implementing a biometric time clock system, you should check out our biometric devices section for the various available systems.

BIOMETRIC DOOR LOCKS:

Biometric door locks mark the advent of biometrics in people's everyday life. With biometric door handles widely available, biometrics is now a household thing. Biometric doorknobs are now being increasingly used in villas, condos, offices and even server rooms. With biometric door locks, your fingerprint is the key. They replace keyed locking mechanisms with a fingerprint sensor that recognizes who is and who is not authorized to enter. Biometric door locks give you the power to secure your home with the latest in technology while at the same time eliminating the headache of shared or lost keys. With fingerprint biometric door locks you can have both security and peace of mind: they eliminate the need for managing keys, save families the effort of making duplicate keys, and above all remove the worry of a key being lost by their kids or of someone breaking in using a duplicate key. Fingerprint door locks are very easy to install and can be fitted like any other lock. Each lock has a biometric scanner which scans individual fingerprints. Once registered, all users of the lock can access the premises whenever they want without any trouble. Fingerprint records can be added and deleted on the fly, so in a shared residence you can easily add new fingerprints and delete obsolete ones.

BIOMETRIC FLASH DRIVES: Biometric flash drives (also known as "biometric thumb drives") are gaining popularity because of the added level of security they provide to the now common flash drives. As the costs of flash drives keep declining, users are more likely to copy more data, be it personal or corporate, to these convenient devices. Though convenient, flash drives are also easily lost, making the data contained in them vulnerable to prying eyes. A biometric flash drive closes this loophole by providing secure storage of critical information. Biometric flash drives provide an interesting combination of biometric authentication technology and electronic mass storage. They typically have a public and a private partition. The public partition can be accessed by all users like a normal USB flash drive; it also often contains the software required to access the private partition of the disk. With the software it is easy to add and remove users and to adjust the balance between the two partitions. The applications of biometric flash drives are manifold once we understand the benefits they offer: imagine how easy it now is to carry sensitive information with you without worrying about it getting lost.

2.5 MATERIALS AND METHODS

The work presented in this study consists of three major modules:

1. FULL-REFERENCE IQ MEASURES 2. NO-REFERENCE IQ MEASURES 3. CLASSIFICATION

2.5.1 MODULE DESCRIPTION:

MODULE 1: FULL-REFERENCE IQ MEASURES: Full-reference (FR) IQA methods rely on the availability of a clean undistorted reference image to estimate the quality of the test sample. In the problem of fake detection addressed in this work such a reference image is unknown, as the detection system only has access to the input sample. In order to circumvent this limitation, the same strategy already successfully used for image manipulation detection and for steganalysis is implemented: the input grey-scale image I (of size N x M) is filtered with a low-pass Gaussian kernel (σ = 0.5 and size 3 x 3) in order to generate a smoothed version Î. Then, the quality between both images (I and Î) is computed according to the corresponding full-reference IQA metric. This approach assumes that the loss of quality produced by Gaussian filtering differs between real and fake biometric samples.

MODULE 2: NO-REFERENCE IQ MEASURES: Unlike the objective reference IQA methods, in general the human visual system does not require a reference sample to determine the quality level of an image. Following the same principle, automatic no-reference image quality assessment (NR-IQA) algorithms try to handle the very complex and challenging problem of assessing the visual quality of an image in the absence of a reference.
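The full-reference trick of Module 1 can be sketched in NumPy: smooth the input with a 3x3 Gaussian, σ = 0.5 (the same parameters as MATLAB's fspecial('gaussian',[3,3],0.5)), and use the smoothed image as the "reference". Border handling by edge replication is an assumption of this sketch:

```python
import numpy as np

def gaussian_kernel3(sigma=0.5):
    # 3x3 Gaussian kernel, normalized to sum to 1
    ax = np.array([-1.0, 0.0, 1.0])
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def smooth(I):
    # direct 3x3 convolution with replicated borders
    k = gaussian_kernel3()
    P = np.pad(I, 1, mode="edge")
    out = np.zeros_like(I, dtype=float)
    for di in range(3):
        for dj in range(3):
            out += k[di, dj] * P[di:di + I.shape[0], dj:dj + I.shape[1]]
    return out

I = np.array([[0.0, 0.0, 0.0], [0.0, 9.0, 0.0], [0.0, 0.0, 0.0]])
I_hat = smooth(I)   # smoothed "reference"; FR measures then compare I with I_hat
```

The smoothing attenuates the isolated peak and spreads energy into its neighbors, which is precisely the quality loss the FR measures quantify.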

MODULE 3: CLASSIFICATION:

Iris: For the iris modality the protection method is tested under two different attack scenarios, namely: i) spoofing attacks and ii) attacks with synthetic samples. For each of the scenarios a specific pair of real-fake databases is used. The databases are divided into totally independent (in terms of users) sets: a train set, used to train the classifier, and a test set, used to evaluate the performance of the proposed protection method.

Fingerprints: As in the iris experiments, the databases are divided into a train set, used to train the classifier, and a test set, used to evaluate the performance of the protection method. In order to generate totally unbiased results, there is no overlap between the two sets (i.e., the samples corresponding to each user are included in either the train set or the test set, but not both).

2.5.2 METHODOLOGIES - GIVEN INPUT AND EXPECTED OUTPUT:

MODULE-1: The input image (face, iris, or fingerprint) is split into the direct input and a Gaussian-filtered version, and the output is the set of 21 full-reference IQM parameters.

MODULE-2: The input is a face, iris, or fingerprint image, and the output is the set of 4 no-reference IQM parameters.

MODULE-3: The input is the set of 25 IQM parameters, combining the full-reference and no-reference measures, and the output is a text display of real or fake authentication.

CHAPTER 3: SOFTWARE SPECIFICATION

3.1 GENERAL

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran. Although MATLAB is intended primarily for numerical computing, an optional toolbox uses the MuPAD symbolic engine, allowing access to symbolic computing capabilities. An additional package, Simulink, adds graphical multi-domain simulation and Model-Based Design for dynamic and embedded systems. In 2004, MATLAB had around one million users across industry and academia. MATLAB users come from various backgrounds of engineering, science, and economics. MATLAB is widely used in academic and research institutions as well as industrial enterprises. MATLAB was first adopted by researchers and practitioners in control engineering, Little's specialty, but quickly spread to many other domains. It is now also used in education, in particular the teaching of linear algebra and numerical analysis, and is popular amongst scientists involved in image processing. The MATLAB application is built around the MATLAB language. The simplest way to execute MATLAB code is to type it in the Command Window, one of the elements of the MATLAB Desktop. When code is entered in the Command Window, MATLAB can be used as an interactive mathematical shell. Sequences of commands can be saved in a text file, typically using the MATLAB Editor, as a script or encapsulated into a function, extending the commands available. MATLAB provides a number of features for documenting and sharing your work. You can integrate your MATLAB code with other languages and applications, and distribute your MATLAB algorithms and applications.

3.2 FEATURES OF MATLAB

- High-level language for technical computing.
- Development environment for managing code, files, and data.
- Interactive tools for iterative exploration, design, and problem solving.
- Mathematical functions for linear algebra, statistics, Fourier analysis, filtering, optimization, and numerical integration.
- 2-D and 3-D graphics functions for visualizing data.
- Tools for building custom graphical user interfaces.
- Functions for integrating MATLAB based algorithms with external applications and languages, such as C, C++, Fortran, Java, COM, and Microsoft Excel.

MATLAB is used in a vast range of areas, including signal and image processing, communications, control design, test and measurement, financial modeling and analysis, and computational biology. Add-on toolboxes (collections of special-purpose MATLAB functions) extend the MATLAB environment to solve particular classes of problems in these application areas. MATLAB can be used on personal computers and powerful server systems, including the Cheaha compute cluster. With the addition of the Parallel Computing Toolbox, the language can be extended with parallel implementations for common computational functions, including for-loop unrolling. Additionally this toolbox supports offloading computationally intensive workloads to Cheaha, the campus compute cluster. MATLAB is one of a few languages in which each variable is a matrix (broadly construed) and "knows" how big it is. Moreover, the fundamental operators (e.g. addition, multiplication) are programmed to deal with matrices when required. And the MATLAB environment handles much of the bothersome housekeeping that makes all this possible. Since so many of the procedures required for Macro-Investment Analysis involve matrices, MATLAB proves to be an extremely efficient language for both communication and implementation.

3.2.1 INTERFACING WITH OTHER LANGUAGES

MATLAB can call functions and subroutines written in the C programming language or Fortran. A wrapper function is created allowing MATLAB data types to be passed and returned. The dynamically loadable object files created by compiling such functions are termed "MEX-files" (for MATLAB executable). Libraries written in Java, ActiveX or .NET can be directly called from MATLAB, and many MATLAB libraries (for example XML or SQL support) are implemented as wrappers around Java or ActiveX libraries. Calling MATLAB from Java is more complicated, but can be done with a MATLAB extension, which is sold separately by MathWorks, or using an undocumented mechanism called JMI (Java-to-MATLAB Interface), which should not be confused with the unrelated Java Metadata Interface that is also called JMI. As alternatives to the MuPAD-based Symbolic Math Toolbox available from MathWorks, MATLAB can be connected to Maple or Mathematica. Libraries also exist to import and export MathML.

Development Environment:
- Startup Accelerator for faster MATLAB startup on Windows, especially on Windows XP, and for network installations.
- Spreadsheet Import Tool that provides more options for selecting and loading mixed textual and numeric data.
- Readability and navigation improvements to warning and error messages in the MATLAB command window.
- Automatic variable and function renaming in the MATLAB Editor.

Developing Algorithms and Applications: MATLAB provides a high-level language and development tools that let you quickly develop and analyze your algorithms and applications.

The MATLAB Language: The MATLAB language supports the vector and matrix operations that are fundamental to engineering and scientific problems. It enables fast development and execution. With the MATLAB language, you can program and develop algorithms faster than with traditional languages because you do not need to perform low-level administrative tasks, such as declaring variables, specifying data types, and allocating memory. In many cases, MATLAB eliminates the need for 'for' loops. As a result, one line of MATLAB code can often replace several lines of C or C++ code. At the same time, MATLAB provides all the features of a traditional programming language, including arithmetic operators, flow control, data structures, data types, object-oriented programming (OOP), and debugging features. MATLAB lets you execute commands or groups of commands one at a time, without compiling and linking, enabling you to quickly iterate to the optimal solution. For fast execution of heavy matrix and vector computations, MATLAB uses processor-optimized libraries. For general-purpose scalar computations, MATLAB generates machine-code instructions using its JIT (Just-In-Time) compilation technology. This technology, which is available on most platforms, provides execution speeds that rival those of traditional programming languages.

Development Tools: MATLAB includes development tools that help you implement your algorithm efficiently. These include the following:

- MATLAB Editor: provides standard editing and debugging features, such as setting breakpoints and single stepping.

- Code Analyzer: checks your code for problems and recommends modifications to maximize performance and maintainability.

- MATLAB Profiler: records the time spent executing each line of code.

- Directory Reports: scan all the files in a directory and report on code efficiency, file differences, file dependencies, and code coverage.

Designing Graphical User Interfaces: Using the interactive tool GUIDE (Graphical User Interface Development Environment), you can lay out, design, and edit user interfaces. GUIDE lets you include list boxes, pull-down menus, push buttons, radio buttons, and sliders, as well as MATLAB plots and Microsoft ActiveX controls. Alternatively, you can create GUIs programmatically using MATLAB functions.

3.2.2 ANALYZING AND ACCESSING DATA

MATLAB supports the entire data analysis process, from acquiring data from external devices and databases, through preprocessing, visualization, and numerical analysis, to producing presentation-quality output.

Data Analysis: MATLAB provides interactive tools and command-line functions for data analysis operations, including:
- Interpolating and decimating
- Extracting sections of data, scaling, and averaging
- Thresholding and smoothing
- Correlation, Fourier analysis, and filtering
- 1-D peak, valley, and zero finding
- Basic statistics and curve fitting
- Matrix analysis

Data Access: MATLAB is an efficient platform for accessing data from files, other applications, databases, and external devices. You can read data from popular file formats, such as Microsoft Excel; ASCII text or binary files; image, sound, and video files; and scientific files, such as HDF and HDF5. Low-level binary file I/O functions let you work with data files in any format. Additional functions let you read data from Web pages and XML.

Visualizing Data: All the graphics features that are required to visualize engineering and scientific data are available in MATLAB. These include 2-D and 3-D plotting functions, 3-D volume visualization functions, tools for interactively creating plots, and the ability to export results to all popular graphics formats. You can customize plots by adding multiple axes; changing line colors and markers; adding annotation, LaTeX equations, and legends; and drawing shapes.

2-D Plotting: Visualizing vectors of data with 2-D plotting functions that create:
- Line, area, bar, and pie charts
- Direction and velocity plots
- Histograms
- Polygons and surfaces
- Scatter/bubble plots
- Animations

3-D Plotting and Volume Visualization: MATLAB provides functions for visualizing 2-D matrices, 3-D scalar data, and 3-D vector data. You can use these functions to visualize and understand large, often complex, multidimensional data, specifying plot characteristics such as camera viewing angle, perspective, lighting effect, light source locations, and transparency. 3-D plotting functions include:
- Surface, contour, and mesh
- Image plots
- Cone, slice, stream, and isosurface

3.2.3 PERFORMING NUMERIC COMPUTATION

MATLAB contains mathematical, statistical, and engineering functions to support all common engineering and science operations. These functions, developed by experts in mathematics, are the foundation of the MATLAB language. The core math functions use the LAPACK and BLAS linear algebra subroutine libraries and the FFTW Discrete Fourier Transform library. Because these processor-dependent libraries are optimized to the different platforms that MATLAB supports, they execute faster than the equivalent C or C++ code. MATLAB provides the following types of functions for performing mathematical operations and analyzing data:
- Matrix manipulation and linear algebra
- Polynomials and interpolation
- Fourier analysis and filtering
- Data analysis and statistics
- Optimization and numerical integration
- Ordinary differential equations (ODEs)
- Partial differential equations (PDEs)
- Sparse matrix operations

MATLAB can perform arithmetic on a wide range of data types, including doubles, singles, and integers.

CHAPTER 4: IMPLEMENTATION

4.1 GENERAL

Matlab is a program that was originally designed to simplify the implementation of numerical linear algebra routines. It has since grown into something much bigger, and it is used to implement numerical algorithms for a wide range of applications. The basic language used is very similar to standard linear algebra notation, but there are a few extensions that will likely cause you some problems at first.

4.2 CODE IMPLEMENTATION

clc;
clear all;
close all;
warning off;
load final_training.mat
load final_trainF.mat

a = input('enter the type of image "1.iris 2.fingerprint 3.face" ');
if (a == 1)
    training = final_train;
elseif (a == 2)
    training = final_trainF;
% else
%     training = final_trainFa;
end
format shortE
%%% CLASSIFICATION OF TRAINING DATASET (TRAIN THE DATASETS)
% for ind=1:3
m = 2;
% label=[1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2]';

for i = 1:80
    label(i,1) = 1;
end
for j = 81:160
    label(j,1) = 2;
end
indices = crossvalind('Kfold', label, m);
cp = classperf(label);
for i = 1:m
    test = (indices == i);
    train = ~test;
    class = classify(training(test,:), training(train,:), label(train,:), 'quadratic');
    classperf(cp, class, test);
    accuracy(i) = cp.CorrectRate;
end
MeanAccuracy = mean(accuracy);   % mean accuracy over the m folds
StdAccuracy = std(accuracy);     % standard deviation of the m results
%%% FINDING WHETHER THE GIVEN IMAGE IS REAL OR FAKE
[filename, pathname] = uigetfile('*.bmp;*.jpg;*.png');
Irf = imread([pathname, filename]);
IQA = abs(fakeorreal(Irf));

for ii = 1:160
    if (IQA(1,:) == training(ii,:))
        result = label(ii,1);
    end
end
if (result == 1)
    display('the given image is a real person''s image');
elseif (result == 2)
    display('the given image is not a real person''s image');
end
%%% IQA %%%
I = rgb2gray(Irf);
I = im2double(I);
[N, M] = size(I);
figure, imshow(I);

%%% GAUSSIAN FILTERING OF INPUT IMAGE

h = fspecial('gaussian', [3,3], 0.5);
I_cap = imfilter(I, h);
figure, imshow(I_cap);

4.3 SNAPSHOTS

SNAPSHOT

ORIGINAL IMAGE

FILTERED (REFERENCE) IMAGE

INPUT ORIGINAL IMAGE FOR HARRIS CORNER DETECTOR:

HARRIS CORNERS DETECTED FOR ORIGINAL IMAGE:

INPUT REFERENCE IMAGE FOR HARRIS CORNER DETECTOR:

HARRIS CORNERS DETECTED FOR REFERENCE IMAGE:

TRAINED CLASSIFIER RESULT:

FINAL OUTPUT

CHAPTER 5: CONCLUSION AND REFERENCES

5.1 CONCLUSION

The study of the vulnerabilities of biometric systems against different types of attacks has been a very active field of research in recent years. This interest has led to big advances in the field of security-enhancing technologies for biometric-based applications. However, in spite of this noticeable improvement, the development of efficient protection methods against known threats has proven to be a challenging task. Simple visual inspection of an image of a real biometric trait and a fake sample of the same trait shows that the two images can be very similar, and even the human eye may find it difficult to make a distinction between them after a short inspection. Yet, some disparities between the real and fake images may become evident once the images are translated into a proper feature space. These differences come from the fact that biometric traits, as 3D objects, have their own optical qualities (absorption, reflection, scattering, refraction), which other materials (paper, gelatin, electronic display) or synthetically produced samples do not possess. Furthermore, biometric sensors are designed to provide good quality samples when they interact, in a normal operation environment, with a real 3D trait. If this scenario is changed, or if the trait presented to the scanner is an unexpected fake artifact (2D, different material, etc.), the characteristics of the captured image may significantly vary.

5.2 REFERENCES:

[1] S. Prabhakar, S. Pankanti, and A. K. Jain, "Biometric recognition: Security and privacy concerns," IEEE Security Privacy, vol. 1, no. 2, pp. 33-42, Mar./Apr. 2003.

[2] T. Matsumoto, "Artificial irises: Importance of vulnerability analysis," in Proc. AWB, 2004.

[3] J. Galbally, C. McCool, J. Fierrez, S. Marcel, and J. Ortega-Garcia, "On the vulnerability of face verification systems to hill-climbing attacks," Pattern Recognit., vol. 43, no. 3, pp. 1027-1038, 2010.

[4] A. K. Jain, K. Nandakumar, and A. Nagar, "Biometric template security," EURASIP J. Adv. Signal Process., vol. 2008, pp. 113-129, Jan. 2008.

[5] J. Galbally, F. Alonso-Fernandez, J. Fierrez, and J. Ortega-Garcia, "A high performance fingerprint liveness detection method based on quality related features," Future Generat. Comput. Syst., vol. 28, no. 1, pp. 311-321, 2012.

[6] K. A. Nixon, V. Aimale, and R. K. Rowe, Spoof detection schemes, Handbook of Biometrics. New York, NY, USA: Springer-Verlag, 2008, pp. 403423.

[7] ISO/IEC 19792:2009, Information TechnologySecurity Techniques Security Evaluation of Biometrics, ISO/IEC Standard 19792, 2009.

[8] Biometric Evaluation Methodology. v1.0, Common Criteria, 2002.

[9] K. Bowyer, T. Boult, A. Kumar, and P. Flynn, Proceedings of the IEEE Int. Joint Conf. on Biometrics. Piscataway, NJ, USA: IEEE Press, 2011.[10] G. L. Marcialis, A. Lewicke, B. Tan, P. Coli, D. Grimberg, A. Congiu, et al., First international fingerprint liveness detection competition LivDet 2009, in Proc. IAPR ICIAP, Springer LNCS-5716. 2009, pp. 1223.

[11] M. M. Chakka, A. Anjos, S. Marcel, R. Tronci, B. Muntoni, G. Fadda, et al., Competition on countermeasures to 2D facial spoofing attacks, in Proc. IEEE IJCB, Oct. 2011, pp. 16.

[12] J. Galbally, J. Fierrez, F. Alonso-Fernandez, and M. Martinez-Diaz, Evaluation of direct attacks to fingerprint verification systems, J. Telecommun. Syst., vol. 47, nos. 34, pp. 243254, 2011.

[13] A. Anjos and S. Marcel, Counter-measures to photo attacks in face recognition: A public database and a baseline, in Proc. IEEE IJCB, Oct. 2011, pp. 17.

[14] Biometrics Institute, London, U.K. (2011). Biometric Vulnerability Assessment Expert Group [Online]. Available: http://www. biometricsinstitute.org/pages/biometric-vulnerability-assessment-expertgroup- bvaeg.html

[15] (2012). BEAT: Biometrics Evaluation and Testing [Online]. Available: http://www.beat-eu.org/

[16] (2010). Trusted Biometrics Under Spoofing Attacks (TABULA RASA) [Online]. Available: http://www.tabularasa-euproject.org/

[17] J. Galbally, R. Cappelli, A. Lumini, G. G. de Rivera, D. Maltoni, J. Fierrez, et al., "An evaluation of direct and indirect attacks using fake fingers generated from ISO templates," Pattern Recognit. Lett., vol. 31, no. 8, pp. 725–732, 2010.

[18] J. Hennebert, R. Loeffel, A. Humm, and R. Ingold, "A new forgery scenario based on regaining dynamics of signature," in Proc. IAPR ICB, Springer LNCS-4642, 2007, pp. 366–375.

[19] A. Hadid, M. Ghahramani, V. Kellokumpu, M. Pietikainen, J. Bustard, and M. Nixon, "Can gait biometrics be spoofed?" in Proc. IAPR ICPR, 2012, pp. 3280–3283.

[20] Z. Akhtar, G. Fumera, G. L. Marcialis, and F. Roli, "Evaluation of serial and parallel multibiometric systems under spoofing attacks," in Proc. IEEE 5th Int. Conf. BTAS, Sep. 2012, pp. 283–288.

[21] D. Maltoni, D. Maio, A. Jain, and S. Prabhakar, Handbook of Fingerprint Recognition. New York, NY, USA: Springer-Verlag, 2009.

[22] R. Cappelli, D. Maio, A. Lumini, and D. Maltoni, "Fingerprint image reconstruction from standard templates," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 9, pp. 1489–1503, Sep. 2007.

[23] S. Shah and A. Ross, "Generating synthetic irises by feature agglomeration," in Proc. IEEE ICIP, Oct. 2006, pp. 317–320.

[24] S. Bayram, I. Avcibas, B. Sankur, and N. Memon, "Image manipulation detection," J. Electron. Imag., vol. 15, no. 4, pp. 041102-1–041102-17, 2006.

[25] M. C. Stamm and K. J. R. Liu, "Forensic detection of image manipulation using statistical intrinsic fingerprints," IEEE Trans. Inf. Forensics Security, vol. 5, no. 3, pp. 492–496, Sep. 2010.

[26] I. Avcibas, N. Memon, and B. Sankur, "Steganalysis using image quality metrics," IEEE Trans. Image Process., vol. 12, no. 2, pp. 221–229, Feb. 2003.

[27] S. Lyu and H. Farid, "Steganalysis using higher-order image statistics," IEEE Trans. Inf. Forensics Security, vol. 1, no. 1, pp. 111–119, Mar. 2006.

[28] J. Galbally, J. Ortiz-Lopez, J. Fierrez, and J. Ortega-Garcia, "Iris liveness detection based on quality related features," in Proc. 5th IAPR ICB, Mar./Apr. 2012, pp. 271–276.

[29] I. Avcibas, B. Sankur, and K. Sayood, "Statistical evaluation of image quality measures," J. Electron. Imag., vol. 11, no. 2, pp. 206–223, 2002.

[30] Q. Huynh-Thu and M. Ghanbari, "Scope of validity of PSNR in image/video quality assessment," Electron. Lett., vol. 44, no. 13, pp. 800–801, 2008.

[31] S. Yao, W. Lin, E. Ong, and Z. Lu, "Contrast signal-to-noise ratio for image quality assessment," in Proc. IEEE ICIP, Sep. 2005, pp. 397–400.

[32] A. M. Eskicioglu and P. S. Fisher, "Image quality measures and their performance," IEEE Trans. Commun., vol. 43, no. 12, pp. 2959–2965, Dec. 1995.

[33] M. G. Martini, C. T. Hewage, and B. Villarini, "Image quality assessment based on edge preservation," Signal Process., Image Commun., vol. 27, no. 8, pp. 875–882, 2012.

[34] N. B. Nill and B. Bouzas, "Objective image quality measure derived from digital image power spectra," Opt. Eng., vol. 31, no. 4, pp. 813–825, 1992.

[35] A. Liu, W. Lin, and M. Narwaria, "Image quality assessment based on gradient similarity," IEEE Trans. Image Process., vol. 21, no. 4, pp. 1500–1511, Apr. 2012.

[36] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: From error visibility to structural similarity," IEEE Trans. Image Process., vol. 13, no. 4, pp. 600–612, Apr. 2004.

[37] (2012). LIVE [Online]. Available: http://live.ece.utexas.edu/research/Quality/index.htm

[38] H. R. Sheikh and A. C. Bovik, "Image information and visual quality," IEEE Trans. Image Process., vol. 15, no. 2, pp. 430–444, Feb. 2006.

[39] R. Soundararajan and A. C. Bovik, "RRED indices: Reduced reference entropic differencing for image quality assessment," IEEE Trans. Image Process., vol. 21, no. 2, pp. 517–526, Feb. 2012.

[40] Z. Wang, H. R. Sheikh, and A. C. Bovik, "No-reference perceptual quality assessment of JPEG compressed images," in Proc. IEEE ICIP, Sep. 2002, pp. 477–480.

[41] X. Zhu and P. Milanfar, "A no-reference sharpness metric sensitive to blur and noise," in Proc. Int. Workshop Qual. Multimedia Exper., 2009, pp. 64–69.

[42] A. K. Moorthy and A. C. Bovik, "A two-step framework for constructing blind image quality indices," IEEE Signal Process. Lett., vol. 17, no. 5, pp. 513–516, May 2010.

[43] A. Mittal, R. Soundararajan, and A. C. Bovik, "Making a completely blind image quality analyzer," IEEE Signal Process. Lett., vol. 20, no. 3, pp. 209–212, Mar. 2013.

[44] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning. New York, NY, USA: Springer-Verlag, 2001.

[45] Z. Wang and A. C. Bovik, "Mean squared error: Love it or leave it? A new look at signal fidelity measures," IEEE Signal Process. Mag., vol. 26, no. 1, pp. 98–117, Jan. 2009.

[46] B. Girod, "What's wrong with mean-squared error?" in Digital Images and Human Vision. Cambridge, MA, USA: MIT Press, 1993, pp. 207–220.

[47] A. M. Pons, J. Malo, J. M. Artigas, and P. Capilla, "Image quality metric based on multidimensional contrast perception models," Displays J., vol. 20, no. 2, pp. 93–110, 1999.

[48] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. AVC, 1988, pp. 147–151.

[49] J. Zhu and N. Wang, "Image quality assessment by visual gradient similarity," IEEE Trans. Image Process., vol. 21, no. 3, pp. 919–933, Mar. 2012.

[50] D. Brunet, E. R. Vrscay, and Z. Wang, "On the mathematical properties of the structural similarity index," IEEE Trans. Image Process., vol. 21, no. 4, pp. 1488–1499, Apr. 2012.

[51] M. A. Saad, A. C. Bovik, and C. Charrier, "Blind image quality assessment: A natural scene statistics approach in the DCT domain," IEEE Trans. Image Process., vol. 21, no. 8, pp. 3339–3352, Aug. 2012.

[52] J. Fierrez, J. Ortega-Garcia, D. Torre-Toledano, and J. Gonzalez-Rodriguez, "BioSec baseline corpus: A multimodal biometric database," Pattern Recognit., vol. 40, no. 4, pp. 1389–1392, 2007.

[53] E. Marasco and C. Sansone, "Combining perspiration- and morphology-based static features for fingerprint liveness detection," Pattern Recognit. Lett., vol. 33, no. 9, pp. 1148–1156, 2012.

[54] Y. S. Moon, J. S. Chen, K. C. Chan, K. So, and K. C. Woo, "Wavelet based fingerprint liveness detection," Electron. Lett., vol. 41, no. 20, pp. 1112–1113, 2005.

[55] S. Nikam and S. Agarwal, "Curvelet-based fingerprint anti-spoofing," Signal, Image Video Process., vol. 4, no. 1, pp. 75–87, 2010.

[56] A. Abhyankar and S. Schuckers, "Fingerprint liveness detection using local ridge frequencies and multiresolution texture analysis techniques," in Proc. IEEE ICIP, Oct. 2006, pp. 321–324.

[57] I. Chingovska, A. Anjos, and S. Marcel, "On the effectiveness of local binary patterns in face anti-spoofing," in Proc. IEEE Int. Conf. Biometr. Special Interest Group, Sep. 2012, pp. 1–7.

[58] J. Maatta, A. Hadid, and M. Pietikainen, "Face spoofing detection from single images using micro-texture analysis," in Proc. IEEE IJCB, Oct. 2011, pp. 1–7.

[59] P. Pudil, J. Novovicova, and J. Kittler, "Floating search methods in feature selection," Pattern Recognit. Lett., vol. 15, no. 11, pp. 1119–1125, 1994.

[60] A. K. Jain and D. Zongker, "Feature selection: Evaluation, application, and small sample performance," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 2, pp. 153–158, Feb. 1997.