
Friction Ridge Analysis: Towards Lights-out Latent Recognition

Elham Tabassi, Image Group – NIST

August 31, 2015, SAMSI Forensics Opening Workshop

Outline

- We, at NIST
- NIST Biometric Evaluations
- 1:N Latent Fingerprint Matching
- 1:N Metrics
- Current latent matching process
- Current research at NIST
- Closing

Image Group History… Who are we?
An old group that traces its origins to NBS, based on work started by Jack Wegstein in the 1950s, primarily to solve numerical problems on the IBM 704 and SEAC (Standards Electronic Automatic Computer). Some of these early problems included mesh processing in 2D space, with applications in physics, but also contemporary image processing.

Then something happened in 1966…

[photo: Ethel Marden, Mathematician & Computer Programmer, using the NBS Standards Electronic Automatic Computer (SEAC), ~1950]

Image Group History… Who are we?

NBS accepted the challenge…


Image Group History…

This research opened several other doors…


Image Group History… Expanded Research in Friction Ridge

As systems matured in the 1970s, the need for interoperability emerged…


Image Group History… Expanded to Standards

Just how big did this standard get?


Another key landmark in our work occurred in 1986 with the introduction of the ANSI/NIST Standard for the exchange of biometric information between systems.

Image Group: Our Research and the World… International Data Standard
The ANSI/NIST standard has a direct impact on virtually all the biometric data being operated on in the world. This includes the capture and interchange of at least 2 million images daily in the United States alone.

Moving to the present, what are our core functions…


Image Group Snapshot of Active Projects…


- Standards: ANSI/NIST, PIV, ISO, Contactless, OSAC SCs, OASIS
- Pattern Forensics: Latent, Matcher Testing, Compression, Segmentation, Quality, MINEX, FpVTE, USG Matcher, PFT
- Evaluations: Iris Recognition (IREX); Face Recognition, still images (FRVT) and video images (FIVE); Fingerprint (Compression Study, CODEC Certification); SlapSeg; SMT
- Emerging: Friction Ridge Analysis, Face Black Box, Scientific Underpinnings, Challenge Problems

Technical Approach :: provide quantitative support

- Identify gaps / outreach (NWIP, AMD)
- Research + (large-scale) evaluation
- Submit comments + technical contributions
- Active participation; advocate for NIST/USG positions
- Test performance and interoperability of the standard (e.g., Liveness, MINEX 04, IREX)
- Development of clear, robust, tested, and implementable content through extensive study and experiments (e.g., IREX I + IQCE), aimed at strengthening the science behind the claims or preventing overly prescriptive requirements
- Serve as editor; host workshops

NIST BIOMETRIC EVALUATIONS
http://www.nist.gov/itl/iad/ig/biometric_evaluations.cfm


Role of Technology Test
Evaluation of the core technical capability of biometric matching technologies.

Why:
- Advance the science of metrology
- Facilitate innovation through competition
- Help US industry (developers often do not have enough data for testing, particularly operational data)
- Close the knowledge gap (likewise the standards gap)

Impact:
- Advance the current state of measurement science and technology
- Improve accuracy (failure analysis)
- Improve implementations' adherence to standards and protocols
- Procurement-ready requirements

Presenter
Presentation Notes
To quantify the state of the art and also do failure analysis. As Ralph pointed out, biometric recognition fails; we want to measure how often and why it fails. All algorithm, no human.

Fingerprint Research and Evaluations

- NFIQ (NIST Finger Image Quality): measures the utility of fingerprint images. NFIQ 1.0: NIST IR 7151, published 2004. NFIQ 2.0: summer 2015.
- FpVTE 2012: large-scale one-to-many evaluation of fingerprint recognition algorithms. NIST IR 8034, published January 2015.
- ELFT (Evaluation of Latent Fingerprint Technologies): accuracy test of latent fingerprint searches using features marked by examiners plus automated feature extraction and matching technologies. NIST IR 7775, published March 2011.
- MINEX (Minutia Exchange): evaluation of performance and interoperability of core minutia template encoding and matching capabilities. Ongoing test. NIST IR 7296, published March 2006.

Do these two impressions come from the same finger?

Forensics Friction Ridge Analysis


Presenter
Presentation Notes
This is the first of the three talks on this newly established project here at NIST. The other two talks will be given by Soweon and Hari on Thursday. I will provide background and an overview of why we are doing what we are doing, which basically is to quantify the weight of evidence and uncertainty in friction ridge forensic determination.

Fingerprint Recognition: Exemplar-to-Exemplar vs. Latent-to-Exemplar

- 63.4% rank-1 accuracy in lights-out mode
- 68.2% rank-1 accuracy with full markup features (ELFT-EFS 2012; M. Indovina et al., "Evaluation of Latent Fingerprint Technologies: Extended Feature Sets", NISTIR 7859)
- One-finger accuracy FNIR = 0.0198 @ FPIR = 0.001 (Tabassi et al., "Performance Evaluation of Fingerprint Open-set Identification Algorithms", IJCB 2014)

Presenter
Presentation Notes
Our purpose is not to demonstrate the individuality of a complete and well-reproduced fingerprint, but to assess the evidential contribution of fingermarks that can be partial, distorted, and with a poor signal/noise ratio. Uniqueness does not guarantee that prints from two different people are always sufficiently different that they cannot be confused, or that two impressions made by the same finger will also be sufficiently similar to be discerned as coming from the same source. The impression left by a given finger will differ every time, because of inevitable variations in pressure, which change the degree of contact between each part of the ridge structure and the impression medium. None of these variabilities—of features across a population of fingers or of repeated impressions left by the same finger—has been characterized, quantified, or compared.

Latent Fingerprints
[figure: example latents – smudged latent; complex background; overlapped latents]

Presenter
Presentation Notes
Latents recovered from crime scenes are often limited in size, of poor quality, distorted and affected by interference from the substrate.

1:N Fingerprint Identification

[diagram: a latent image is converted to features and then a search template, which is searched against an enrollment database of N enrolled templates, producing a candidate list, e.g.:
1. Alice 0.02
2. Bob 0.34
3. Christophe 0.38
4. David 0.39
5. Ernie 0.45
The associated error rates are FNIR, aka "miss rate", and FPIR, aka "false alarm rate".]
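Conceptually, the 1:N search stage reduces to scoring the search template against every enrolled template and returning the top candidates. A minimal sketch in Python, assuming a hypothetical compare scoring function and a dictionary-based enrollment database (neither is from the talk):

```python
# Minimal sketch of a 1:N search (hypothetical interfaces, not NIST code).
# compare(a, b) is assumed to return a similarity score, higher = stronger
# match; an operational matcher may instead return a distance, lower = better.
def one_to_many_search(search_template, enrollment_db, compare, L=5):
    """enrollment_db: dict mapping identity -> enrolled template.
    Returns the candidate list: the L best-scoring (identity, score) pairs."""
    scored = [(identity, compare(search_template, template))
              for identity, template in enrollment_db.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # rank 1 first
    return scored[:L]
```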

Candidate Lists, Rank, Thresholds

Given L candidates, the analyst can inspect:
- All L candidates
- Only candidates down to rank R < L
- Only candidates with score ≥ T
- Or some combination of R and T

Example candidate list for a search image (L = 8), with rank criterion R = 5 and threshold criterion T = 2.0:

Score   Rank
3.142   1
2.998   2
1.626   3
0.707   4
0.330   5
0.198   6
0.074   7
0.016   8
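The rank and threshold criteria compose naturally. A minimal sketch, assuming the candidate list is sorted by descending score (the function name and layout are illustrative, not from the talk):

```python
# Minimal sketch: apply the analyst's rank criterion R and/or threshold
# criterion T to a candidate list sorted by descending score.
def candidates_to_inspect(scores, R=None, T=None):
    """Returns the (rank, score) pairs the analyst would inspect."""
    inspected = []
    for rank, score in enumerate(scores, start=1):
        if R is not None and rank > R:
            break                      # past the rank criterion
        if T is not None and score < T:
            break                      # below the threshold criterion
        inspected.append((rank, score))
    return inspected

scores = [3.142, 2.998, 1.626, 0.707, 0.330, 0.198, 0.074, 0.016]
print(candidates_to_inspect(scores, R=5, T=2.0))  # [(1, 3.142), (2, 2.998)]
```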

1:N – Two Universes

Closed-set identification:
- The search is known, a priori, to have a mate
  - Operationally infrequent, e.g., 1:N on a cruise ship, or a transport disaster
- A very common metric in academic tests
  - Unfortunately, since it has an explicit dependence on N (i.e., the number of students!)
- Performance metric: rank-1 recognition rate, or more generally the Cumulative Match Characteristic (CMC); see the sketch after this list

Open-set identification:
- Any given search may have a mate (e.g., in criminal justice, a recidivist; in visa issuance, a "shopper") or may not have a mate (e.g., in criminal justice, a first-time offender; in visa issuance, honest applicants)
- Applies to almost all applications
- Is rarely mentioned in the academic algorithm-development literature
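For the closed-set case, the Cumulative Match Characteristic can be computed directly from candidate lists. A minimal sketch under an assumed data layout (one candidate list and one known mate identity per search; not from the talk):

```python
# Minimal sketch of the Cumulative Match Characteristic (closed-set).
# curve[r-1] = fraction of mated searches whose true mate appears at rank r
# or better; curve[0] is the rank-1 recognition rate.
def cmc(candidate_lists, mate_ids, max_rank):
    """candidate_lists: per search, a list of (identity, score) pairs sorted
    by descending score. mate_ids: the true mate identity per search."""
    hits_at_rank = [0] * max_rank
    for candidates, mate in zip(candidate_lists, mate_ids):
        ranked_ids = [identity for identity, _ in candidates[:max_rank]]
        if mate in ranked_ids:
            hits_at_rank[ranked_ids.index(mate)] += 1
    curve, total = [], 0
    for hits in hits_at_rank:
        total += hits
        curve.append(total / len(candidate_lists))
    return curve
```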

1:N ACCURACY METRICS


Recognition Error Rates

False Positive Identification Rate (FPIR), or Type I error rate:
- "false alarm rate"
- Reporting that an individual is the source of an impression when in fact she is not.
- Cf. Blackstone's maxim in criminal law: it is better to let ten guilty people go free than to falsely convict one innocent person.

False Negative Identification Rate (FNIR), or Type II error rate:
- "miss rate"
- Reporting that an individual is not the source of an impression when in fact she is.
- Cf. airport screening for terrorists: failing to identify a terrorist who boards an airplane may be of greater concern than false positives.

Presenter
Presentation Notes
False alarm rate = 1 − specificity. Hit rate = sensitivity. Diagnosticity. Uncertainty in ROC.

Metrics :: Miss Rates

- False Negative Identification Rate (FNIR), aka "Miss Rate"
  - Its complement is the "hit rate", properly known as the true positive identification rate, which is 1 − FNIR
- Measured by executing "mated" searches into an enrolled database of N identities

FNIR(N, R, T, L) = (number of mates outside the top R ranks or below threshold T on a candidate list of length L) / (number of mated searches conducted)
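Translated directly into code, under an assumed data layout (candidate lists of (identity, score) pairs plus the known mate per search; illustrative, not NIST's test harness):

```python
# Minimal sketch of FNIR(N, R, T, L): the fraction of mated searches whose
# mate falls outside the top R ranks or below threshold T.
def fnir(mated_searches, mate_ids, R, T):
    """mated_searches: per mated search, a candidate list of length L,
    i.e., (identity, score) pairs sorted by descending score.
    mate_ids: the true mate identity per search."""
    misses = 0
    for candidates, mate in zip(mated_searches, mate_ids):
        hit = any(identity == mate and score >= T
                  for identity, score in candidates[:R])
        if not hit:
            misses += 1          # mate missed by rank and/or threshold
    return misses / len(mated_searches)
```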

Miss Rates :: FNIR Definition

[figure: the example candidate list again (scores 3.142, 2.998, 1.626, 0.707, 0.330, 0.198, 0.074, 0.016 at ranks 1–8). The mate is missed because it is below the rank criterion R = 5.]

FNIR(N, R, T, L), where:
1. N = enrolled population size
2. R = rank criterion (applied by the analyst)
3. T = threshold criterion (applied by the analyst)
4. L = number of candidates requested from the algorithm

Metrics :: False Alarms

- False Positive Identification Rate (FPIR), aka "False Alarm Rate" or "False Alert Rate"
- Measured by executing "non-mated" searches into an enrolled database of N identities

FPIR(N, T, L) = (number of searches with any non-mate returned above threshold T on a candidate list of length L) / (number of non-mated searches conducted)
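And the companion sketch for FPIR, under the same assumed layout (every candidate returned by a non-mated search is a non-mate by construction):

```python
# Minimal sketch of FPIR(N, T, L): the fraction of non-mated searches that
# return any candidate at or above threshold T.
def fpir(nonmated_searches, T):
    """nonmated_searches: per non-mated search, a candidate list of length L
    of (identity, score) pairs; every candidate is a non-mate."""
    alarms = sum(1 for candidates in nonmated_searches
                 if any(score >= T for _, score in candidates))
    return alarms / len(nonmated_searches)
```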

False Alarms :: FPIR Definition

[figure: the example candidate list again (scores 3.142 down to 0.016). With threshold T = 0.5, there are non-mates above the threshold, so this search counts toward FPIR.]

FPIR(N, T, L), where:
1. N = enrolled population size
2. T = threshold criterion (applied by the analyst)
3. L = number of candidates requested from the algorithm

DET Properties and Interpretation 1 :: Error Rate Tradeoff

[figure: DET curves for a latent Algorithm X and for multi-finger algorithms. Vertical axis: 1:N FNIR, the "miss rate" (Type II error rate); horizontal axis: 1:N FPIR, the "false alarm rate" (Type I error rate). Moving along a curve from a high threshold to a low threshold trades misses for false alarms.]

A log scale is typical, to show small numbers; it is often required because low FPIR values are operationally relevant.
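A DET curve is traced by sweeping the threshold and recording the (FPIR, FNIR) pair at each setting. A minimal sketch reusing the fnir and fpir sketches above (assumed to be in scope):

```python
# Minimal sketch: trace DET points by sweeping the threshold T over the
# observed score range; plot the result on log-scaled axes.
import numpy as np

def det_points(mated_searches, mate_ids, nonmated_searches, R, n_points=50):
    all_scores = [score for candidates in mated_searches + nonmated_searches
                  for _, score in candidates]
    thresholds = np.linspace(min(all_scores), max(all_scores), n_points)
    return [(fpir(nonmated_searches, t),
             fnir(mated_searches, mate_ids, R, t)) for t in thresholds]
```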

DET Properties and Interpretation 1 :: Latent Recognition with / without Human Examiners

[figure: DET curve for Algorithm X; FPIR axis from 0.0001 to 1.]
- High threshold: false positives are rare; the system is configured so that it is almost a "lights-out" system.
- Low threshold: false positives are common and candidate lists are long; the system is configured assuming and requiring human adjudication of false alarms.

DET Properties and Interpretation 1 :: Example Applications

[figure: DET curve; FPIR axis from 0.0001 to 1. High threshold: false positives rare (low labor availability + cost); low threshold: false positives common (high labor availability + cost). Example operating points: A: watchlist, surveillance; B: DMV license deduplication; C: criminal investigation; D: high-profile investigation.]

Latent Fingerprint Matching Process

- Source or reference print: large area, better defined.
- Mark or latent impression: less area, less well defined.
- Analysis: value for individualization, e.g., 38 minutiae + core + delta.
- Comparison + evaluation: individualization, e.g., 35 corresponding minutiae + core + delta.

Presenter
Presentation Notes
Explain ROI, color, minutiae, core and delta. The core is the center of a loop or whorl, where the ridges curve together. A delta is a triangular ridge pattern, where ridges go in different directions. Examiners can also mark Level 3 features. After marking the features, an examiner assesses the value of the latent: if there is sufficient quality and quantity of features, the latent is assessed as of value for individualization; otherwise, of value for exclusion only or of no value. If of value for ID, they proceed with comparing the latent against exemplars that either come from an IAFIS search or are provided as fingerprints of the person of interest. They check for correspondence of features between the two: if they find sufficient features in agreement, they call it an individualization; if they find sufficient disagreement, they call it an exclusion; otherwise, inconclusive. But currently there is no objective method to quantify sufficiency, i.e., sufficiency of the quantity and quality of features for a value determination or for an individualization or exclusion. The NAS report says that this process (ACE-V) does not guard against bias; is too broad to ensure repeatability and transparency; and does not guarantee that two analysts following it will obtain the same results. Ulery's white box study showed that examiners' markup and value determinations based on these protocols are not repeatable or reproducible. That is the gap we are addressing: a white box study to develop criteria for sufficiency of the information, i.e., when to make an individualization or exclusion decision, and to estimate the uncertainty of the individualization decision considering the size and quality of the latent and the exemplar. We aim to perform a white box study to quantify the concept of sufficiency at value determination: how many features, and of what quality and composition, are needed to make a latent suitable for identification; or how many features in agreement, and of what quality and composition, are needed to conclude that the two come from the same source.

ACE-V :: Lack of Reproducibility and Repeatability

- A growing body of literature questions the scientific foundation and transparency of the evaluation of the weight of evidence associated with any particular fingerprint comparison: Zabell (2005); Office of the US Inspector General (2006); Saks and Koehler (2005, 2008); National Research Council of the National Academies (2009).
- There is an increased need for scientific research in the evaluation of methods used in forensic science, such as bias quantification, validation, and estimates of accuracy and precision in different contexts. [NAS, Strengthening Forensic Science in the United States: A Path Forward, 2009]
- Recent related work:
  - Variability and subjectivity of decisions: Noblis black box and white box studies; Neumann NIJ report
  - Advancing likelihood ratios: Neumann 2008, 2012, 2013; Egli 2008; Abraham 2013
  - On latent fingerprint quality: Yoon, Liu, Jain, 2012

All while acknowledging the overall reliability of the conclusions of the majority of fingerprint comparisons performed over the past century, and their contribution to the criminal justice system.

Presenter
Presentation Notes
And have argued that these decisions should be supported by a probabilistic framework, and possibly by the use of a statistical model enabling the quantification of fingerprint evidence, in a similar manner as is done for DNA. Note why and how DNA and fingerprints differ: there is no easily definable and quantifiable set of features to characterize friction ridge skin. Distortion, substrate, etc. affect the reproducibility of friction ridge characteristics and add to the complexity of their modeling. "The NAS report eroded criminal justice confidence."

Analysis :: Subjectivity in Value Determination

- "The assessment is made based on the quality of features (clarity of the observed features), the quantity of features (amount of features and area), the specificity of features, and their relationships." [SWGFAST, Standards for Examining Friction Ridge Impressions and Resulting Conclusions, ver. 1.0, 2011]
- Lacks repeatability and reproducibility:
  - Substantial inter- and intra-examiner variation in minutia counts [Evett and Williams, Journal of Forensic Identification, 1996; Champod]
  - VID decisions were unanimous on 48% of mated pairs and 33% of non-mated pairs [Ulery et al., "Accuracy and Reliability of Forensic Latent Fingerprint Decisions", PNAS, 2011]
  - Extensive variability in annotations [Ulery BT et al. (2014), "Measuring What Latent Fingerprint Examiners Consider Sufficient Information for Individualization Determinations", PLoS ONE]
- The three most accurate matchers in the NIST ELFT-EFS evaluation successfully matched 8–20% of NV (no value) latent prints at rank 1, and 28–35% of VEO (value for exclusion only) latent prints at rank 1.

Presenter
Presentation Notes
If an inappropriate NV determination is made, then the opportunity to make an individualization or exclusion conclusion is lost (a missed conclusion); an inappropriate determination that an impression is "of value" wastes examiner time on fruitless comparisons. Talk about the minimum minutia point. Extensive variation also means that we must treat any individual examiner's minutia counts as interpretations of the (unknowable) information content of the prints: saying "the prints had N corresponding minutiae marked" is not the same as "the prints had N corresponding minutiae." For 356 latents, unanimous value determination was achieved on 43%. 85% of NV decisions and 93% of VID decisions were repeated by the same examiner after a time gap, while only 55% of VEO decisions were repeated.

Evaluation :: Subjectivity of Individualization Determination

- Accuracy and reliability of latent examiners' decisions.
- "Sufficiency is the examiner's determination that adequate unique details of the friction skin source area are revealed in the impression." [SWGFAST, Methodology]
- [Ulery et al., "Accuracy and Reliability of Forensic Latent Fingerprint Decisions", PNAS, 2011]
- Repeatability and reproducibility: inter- and intra-examiner variability. [Ulery et al., "Repeatability and Reproducibility of Decisions by Latent Fingerprint Examiners", PLoS ONE, 2012]
- FNIR, FPIR

Presenter
Presentation Notes
Source determination is made when the examiner concludes, based on his or her experience, that sufficient quantity and quality of friction ridge detail is in agreement between the latent print and the known print. Source exclusion is made when the process indicates sufficient disagreement between the latent print and known print. If neither an identification nor an exclusion can be reached, the result of the comparison is inconclusive.

NIST STATISTICAL FRICTION RIDGE ANALYSIS


Objectives

- To quantify the weight of evidence and uncertainty in friction ridge forensic determination:
  - Develop a probabilistic framework to assess the strength of the comparison between a latent and the suspected print in a robust, reliable manner.
  - Provide the fingerprint community with a body of research, empirically validated methods, and data to support the development of better standards for expressing and supporting the conclusions of fingerprint examinations.
- Develop population statistics:
  - to limit the possible reference population of a particular print in a statistical approach, and
  - to provide examiners with a more robust understanding of the prevalence of different ridge flows and crease patterns.

Presenter
Presentation Notes
The objective of this research is to quantify the weight of evidence and uncertainty in friction ridge forensic determination, and ultimately to help reduce measurement errors and increase confidence in the results achieved with improved statistical tools and methods. We will provide the fingerprint community with a body of research, tools, and data to support a better understanding of the concept of sufficiency, in order to define better standards for expressing and supporting the conclusions of fingerprint examinations. Here is how we're going to do it…

Research Areas

- Measuring the discriminating value of the various ridge formations and clusters of ridge formations: minutia quality, ridge quality, rarity of features.
- Measuring information content: sufficiency for individualization; sufficiency for exclusion; uncertainty of individualization or exclusion; likelihood ratios, pros and cons.

Presenter
Presentation Notes
First we need to measure the specificity of features. Different features (minutiae) on a latent print supply different amounts of information to an examiner or an automated algorithm. Therefore we need to develop a quality metric for each feature that measures its discriminating value. Another goal is to identify and measure variation (in feature extraction or markup, and in conclusions/determinations) due to composition, size, position, and clarity. Using these as building blocks, we move on to measure the information content of a print as a function of its completeness, the quantity and quality (or clarity) of features, the region of the fingerprint, and so on. Having an objective measure of information content, we can then quantify sufficiency for individualization or exclusion. Currently, there is no objective measure of sufficiency for latent prints. SWGFAST defines Suitable (Sufficient) as [the examiner's] determination that there is adequate quality and quantity of detail in an impression for further analysis, comparison, or to reach a conclusion. But this definition is so vague and broad that, as pointed out by the NAS report and quantified by Ulery, examiners' markup and value determinations based on these protocols are not repeatable or reproducible. Performing studies for different demographics (sex, age, race, etc.) allows us to develop population statistics, which are useful to limit the possible reference population of a particular print in a statistical approach, and to provide the fingerprint community with a more robust understanding of the prevalence of different ridge flows and crease patterns, as it is called in the 2009 NAS report. Our purpose is not to demonstrate the uniqueness of a complete and well-produced fingerprint, but to assess the evidential contribution of latents that can be partial, distorted, and of poor clarity or signal-to-noise ratio.

Black Box vs. White Box Study

- Black box: latent print + known print (exemplar) → matcher → comparison score → FPIR (false alarm rate) vs. FNIR (miss rate).
- White box study: investigate how the quantity and quality of features relate to latent print individualization or exclusion determinations; measure the level of certainty of the determination given the quantity, spatial relationship, position, and clarity of features.

Presenter
Presentation Notes
Almost all our evaluations are black box studies, where we assess the recognition accuracy and throughput of algorithms with no knowledge of what is inside. Here we want to investigate how specific image characteristics relate to decisions made at various stages. We aim to perform a white box study to quantify the concept of sufficiency: how many features, and of what quality and composition, are needed to make a latent suitable for identification; or how many features in agreement, and of what quality and composition, are needed to conclude that the two come from the same source. Mistakes and misidentifications are not made because someone has an identical fingerprint to someone else in the world. They are made because of guesswork, poor performance, lack of standards, bias, and observer error. Quantification of the information contained in a fingerprint to make an identification decision. Analyze the occurrence of indistinguishable minutia configurations between two impressions of different fingers in terms of the quantity and quality of minutiae, and the size and spatial location of the configurations relative to the singular point(s) in a print.

Estimating Variability of Features (inter-finger)

The separation between non-mated and mated comparison scores is modeled as a function of patch and feature properties:

D(s_nonmated, s_mated) ≈ f(Num_minutiae, Quality_minutiae, Clarity_ridge, Size_patch, Region, Quality_exemplar)

[diagram: latent patches are fed to a matcher, which produces a comparison score per patch.]

Presenter
Presentation Notes
Start with estimating the probability density function in the denominator of the likelihood ratio, which is an estimation of the variability of the features when they come from different fingers (between-finger, or inter-finger, variability). And we do it for three populations: unrelated, related (fingers of the same person), and twins. We need non-mated scores of latents. To introduce one variable at a time, we start with various sizes. Aloud: genuine (mated) and impostor (non-mated). Care in what images to use.
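To make the density-estimation step concrete: a minimal sketch with synthetic scores (the gamma/normal shapes and all numbers are invented for illustration; the actual study would use matcher scores from the datasets listed on the Data slide below):

```python
# Minimal sketch: kernel density estimates of the non-mated (denominator)
# and mated (numerator) score distributions, combined into a score-based
# likelihood ratio. Synthetic data only; not NIST's matcher output.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
nonmated_scores = rng.gamma(shape=2.0, scale=50.0, size=5000)  # synthetic
mated_scores = rng.normal(loc=400.0, scale=120.0, size=5000)   # synthetic

f_nonmated = gaussian_kde(nonmated_scores)  # inter-finger variability
f_mated = gaussian_kde(mated_scores)

def likelihood_ratio(score):
    # LR > 1 favors the same-source (mated) hypothesis at this score.
    return (f_mated(score) / f_nonmated(score))[0]

print(likelihood_ratio(450.0))
```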

… due to quantity

[figure: SD4 patch comparison examples at increasing patch size W:
- f0001_01 – s0892_06, W = 64, score = 316, number of matched minutiae = 5
- f0001_01 – s0864_05, W = 128, score = 45, number of matched minutiae = 9
- f0001_01 – s0178_02, W = 192, score = 257, number of matched minutiae = 14
- f0001_01 – s0236_04, W = 256, score = 156, number of matched minutiae = 19]

… and location

[figure: rank 1, rank 2, and rank 3 examples]

Data

- Non-related impostor: SD 4, 2,000 pairs, 400 of each fingerprint type.
- Related impostor: SD 14, 2,700 pairs of 10-print cards.
- Twins: WVU + IAI.
- Latent: SD 27, 258 latent crime-scene prints and their matching rolled 10-prints; minutiae features validated by a team of professional latent examiners.
- Population study: sequestered fingerprint images, > 100K.

Distribution of Score vs. Patch Size

[figure: density of comparison scores (x-axis: score, 0–6000; y-axis: density, 0–0.004) for non-mate vs. true-mate comparisons, at patch sizes 128, 192, and 256.]

Distribution of Score vs. Patch Size and Number of Corresponding Minutiae

[figure: non-mate vs. true-mate scores as a function of patch size (pixels) and number of corresponding minutiae, at sizes 128, 192, and 256.]

Rank-1 ID vs. Number of Corresponding Minutiae + Size + Region

[figure: false positives and true positives plotted against distance to the closest singularity point and number of corresponding minutiae points, at patch sizes 128×128, 192×192, and 256×256.]

Distribution of Score Difference vs. Patch Size, Number of Corresponding Minutiae, and Spatial Arrangement

[figure: difference (true-mate score, non-mated score) vs. distance to the closest singularity point, patch size (pixels), and number of corresponding minutiae, at sizes 128, 192, and 256.]

(Preliminary) Observations

- Larger size and more features result in higher evidential value.
  - But some configurations with a high number of minutiae can result in false positives (and false negatives).
  - Some small patches with a low number of minutiae were identified.
- Spatial location is important.
- Minutiae close to singularity points have more discriminative value.
- Fingerprints of the same person give higher non-mate scores than fingerprints of unrelated individuals.

Underway + Future Work

- Include the quality of the exemplar in the model.
  - Explore/investigate the existence of a "biometric zoo".
- Include the quality of features in the model.
- Develop algorithms to measure the discriminating value of the various ridge formations and clusters of ridge formations: minutia quality, ridge quality, rarity of features.
- Use larger datasets and other matching algorithms.
- …

Outcome and Impact

Outcome:
- Empirically validated statistical models and data that allow for an objective assessment of the sufficiency of information content in latent prints.
- A probabilistic framework to support the procedure and decision making in latent fingerprint examination: understand, analyze, and quantify errors and uncertainty in friction ridge forensic determination.
- Quantitative support to standards being developed by the OSAC Friction Ridge subcommittee.

Impact:
- Help reduce measurement errors and increase confidence in the results achieved with improved statistical tools and methods.

Presenter
Presentation Notes
Strengthening the scientific basis of fingerprint examination; improving the statistical foundation of fingerprint analysis. This will allow forensic scientists to quantify the level of confidence they have in statistical computations made with these methods and in the conclusions reached from those analyses. Support the procedure and decision making in latent fingerprint examination: a framework to understand, analyze, and quantify errors and uncertainty in friction ridge forensic determination.

THANK YOU.

Elham [email protected]


Madrid bombing of March 11, 2004: misidentification

Presenter
Presentation Notes
The determination was verified by two other examiners.

Mayfield / Daoud

A case of close non-match?

Non-zero error rate: 22 reported cases of misattribution, 1920–2004.

[Simon A. Cole, "More than Zero: Accounting for Error in Latent Fingerprint Identification", The Journal of Criminal Law & Criminology, Vol. 95, No. 3, 2004.]

Presenter
Presentation Notes
Indeed, more than half (12/22) of the known misattributions were attested to by more than one examiner. This supports the argument posited by Haber and Haber that, if "verification" is not conducted blind, the "verifier" is more likely to ratify misattributions than to detect them. The data also show that a high point standard is insufficient to protect against misattribution. Of the twelve cases in the data set for which the number of supposed matching ridge characteristics is known, in fully half of those cases the misattribution was made with at least sixteen points.
