

arXiv:1101.2569v1 [cs.CR] 13 Jan 2011

Analysis of Biometric Authentication Protocols in the Blackbox Model

Koen Simoens, Julien Bringer, Hervé Chabanne, and Stefaan Seys

January 3, 2011

Abstract—In this paper we analyze different biometric authentication protocols considering an internal adversary. Our contribution takes place at two levels. On the one hand, we introduce a new comprehensive framework that encompasses the various schemes we want to look at. On the other hand, we exhibit actual attacks on recent schemes such as those introduced at ACISP 2007, ACISP 2008, and SPIE 2010, and some others. We follow a blackbox approach in which we consider components that perform operations on the biometric data they contain and where only the input/output behavior of these components is analyzed.

Keywords—Biometrics, template protection, authentication, protocols, blackbox security model, malicious adversaries

I. INTRODUCTION

ALTHOUGH biometric template protection is a relatively young discipline, already over a decade of research has brought many proposals. Methods to secure biometric data can be separated into three levels. The first is to have biometric data come in a self-protected form. Many algorithms have been proposed: quantization schemes [1], [2] for continuous biometrics; fuzzy extractors [3] and other fuzzy schemes [4]–[6] for discrete biometrics; and cancellable biometrics [7]–[9]. The security of such template-level protection has been intensively analyzed, e.g., in [10]–[13]. On a second level one can use hardware to obtain secure systems, e.g., [14], [15]. Finally, at a third level, advanced protocols can be developed to achieve biometric authentication relying on advanced cryptographic techniques such as Secure Multiparty Computation, homomorphic encryption or Private Information Retrieval protocols [16, Ch. 9], [17]–[24].

The focus of our work is on this third level. In this work, we analyze and attack some existing biometric authentication protocols. We follow [25], where an attack against a hardware-assisted secure architecture [15] is described. The work of [25] introduces a blackbox model that is adopted and extended here. In this blackbox model, internal adversaries are considered. These adversaries can interact with the system by using the available inputs/outputs of the different functionalities. Moreover, the adversaries are malicious in the sense that they can deviate from the classical honest-but-curious behaviour, which is most often assumed.

Our contributions are the following. We extend the blackbox framework initiated in [25] with the distributed system model of [19] in such a way that it can handle different existing proposals for biometric authentication. We show how this blackbox approach can lead to attacks against these proposals. We describe in detail our analysis of three existing protocols [19], [20], [22] and give arguments on some others [23], [24]. In the framework we propose, we study how the previous attacks can be formalized. We list all the possible attack points and the different internal entities that can mount the attacks, and we reveal the potential consequences.

The rest of the paper is organized as follows. The framework is developed in Section II and introduces the system and attack model. This is then applied to existing protocols in Section III, where detailed attacks are described. Section IV formalizes these attacks and Section V concludes the paper.

II. FRAMEWORK

In this section we present a framework that forms a basis for the security analysis of biometric authentication protocols. The framework models a generic distributed biometric system and the (internal) adversaries against such a system. We define the roles of the different entities that are involved and their potential attack goals. From these roles and attack goals we derive the requirements that are imposed on the data that are exchanged between the entities.

Biometric Notation: Two measurements of the same biometric characteristic are never exactly the same. Because of this behavior, a biometric characteristic is modeled as a random variable B, with distribution pB over some range B. A sample is denoted as b. Two samples or templates are related if they originate from the same characteristic. In practice, we will say they are related if their mutual distance is less than some threshold. Therefore, a distance function d is defined over B, and for each value in the range of d that is used as the threshold when comparing two samples, a false match rate (FMR) and a false non-match rate (FNMR) can be derived.
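As a toy illustration of the comparison just described (the bitstrings and the threshold below are made up for the example), two samples can be declared related by thresholding their Hamming distance:

```python
# Toy illustration: deciding whether two binary templates are
# "related" by thresholding their Hamming distance with threshold t.
def hamming(a: str, b: str) -> int:
    """Number of positions where the bitstrings differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def related(a: str, b: str, t: int) -> bool:
    """Samples are declared related if their distance is at most t."""
    return hamming(a, b) <= t

b_enrolled = "1011001110100101"
b_fresh    = "1011101110100001"   # same characteristic, noisy re-measurement
b_other    = "0100110001011010"   # a different characteristic

print(related(b_enrolled, b_fresh, t=3))   # small distance -> match
print(related(b_enrolled, b_other, t=3))   # large distance -> no match
```

Sweeping t over the range of d and counting wrong decisions on labeled pairs is exactly how the FMR/FNMR trade-off mentioned above is derived.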

Biometric variables can be continuous or discrete, but in the remainder of the paper we will assume that they are discrete. Note that the variables may consist of multiple components. For example, a sample may consist of a bitstring, which is the quantized version of a feature vector, and another bitstring that indicates erasures or unreliable components in the first and thus acts as a mask.

A. System Model

Our system model follows to a large extent the model defined by Bringer et al. [19], which was also used to define new schemes in [20] and [26]. This model is motivated by a separation-of-duties principle: the different roles for data processing or data storage on a server are separated into three distinct entities. Using distributed entities is a baseline to prevent any single entity from controlling all information, and it is a realistic representation of how current biometric systems work in practice (cf. [27]).

System Entities: The different entities involved in the system are a user Ui, a sensor S, an authentication server AS, a database DB and a matcher M. User Ui wishes to authenticate to a particular service and has, therefore, registered his biometric data bi during the enrollment procedure. In the context of the service the user has been assigned an identifier IDi, which only has meaning within this context. The biometric reference data bi are stored by DB, who links the data to identifier i. The mapping from IDi to i is only known by AS, if relevant. Note that in some applications it is possible that the same user is registered for the same service or in the same database with different samples, bi and bj, and different identities, i.e., IDi ≠ IDj in the service context or i ≠ j in the database context. The property of not being able to relate queries under these different identities is the identity privacy requirement as defined in [19].

During the authentication procedure the sensor S captures a fresh biometric sample b′i from user Ui and forwards the sample to AS. The authentication server AS manages authorizations and controls access to the service. To make the authorization decision, AS will rely on the result of the biometric verification or identification procedure that is carried out by the matcher M. It is assumed that there is no direct link between M and DB. As such, AS requests from DB the reference data that are needed by M and forwards them to M. It is further assumed that the system accepts only biometric credentials. This means that the user provides his biometric data and possibly his identity, but no user-specific key, password or token. Fig. 1 shows the participating entities.

Functional Requirements: Enrollment often involves off-line procedures, like identity checks, and is typically carried out under supervision of a security officer. Therefore, we assume that users are enrolled properly and only authentication procedures are analyzed in our framework. A distinction has to be made between verification and identification. Verification introduces a selection step, which implies that DB returns only one of its references, namely the bi that corresponds to the identifier i that is used in the context of the database. The entity that does the mapping between IDi and i, when applicable, is generally AS. In identification mode, DB returns the entire set of references, in some protected form, to AS. The database can then be combined with b′i and forwarded to M. The matcher M has to verify that b′i matches one or a limited number of bi in the received set of references, or that one of the matching references has index i.

We define the minimal logical functionality to be provided by our system entities in terms of generic information flows, which are included in our model in Fig. 1. In this functional model, we represent the result of the biometric comparison as a function of the distance d(b′i, bi). This is a generic representation of the actual comparison method: M can evaluate simple distances but also run more complex comparisons, and will output either similarity measures or decisions that are based on some threshold t. The information flows are as follows.

User Ui presents a biometric characteristic Bi that will be sampled by the sensor S to produce a sample b′i. When operating in verification mode Ui will claim an identity IDi:

  Ui --(b′i ← Bi)--> S   or   Ui --(b′i ← Bi, IDi)--> S .   (1)

The sensor S forwards b′i and IDi in some form to AS:

  S --(f1(b′i))--> AS   or   S --(f1(b′i), g1(IDi))--> AS .   (2)

In general g1(IDi) = IDi, but it can also be a mapping to an encrypted value to hide IDi from AS. If applicable, AS resolves the mapping g1(IDi) to the identifier i and requests reference data for one or more users from DB by sending at least one request g2(b′i, i):

  AS --(g2(b′i, i))--> DB .   (3)

Note that the function g2 does not necessarily use all the information in its arguments, e.g., the fresh sample b′i may be ignored.

Database DB provides AS with reference data for one or more users in some form. It is possible that DB returns the entire database, e.g., in case of identification:

  AS <--(f2({bi}))-- DB .   (4)

The authentication server AS forwards the fresh sample b′i and the reference data bi in some combined form to M:

  AS --(f3(b′i, {bi}))--> M .   (5)

Note that AS has only f1(b′i) and f2({bi}) at his disposal to compute f3(b′i, {bi}).

The matcher M performs a biometric comparison procedure on the received b′i and {bi} and returns the result to AS. The result may contain decisions, scores or different identities, but should at least be based on one distance calculation between the fresh sample b′i and a reference bi:

  AS <--(f4(d(b′i, {bi})))-- M .   (6)
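As an illustration, the generic flows (1)–(6) can be sketched in code. The stand-in functions below (identity mappings for f1, f2 and g1, and a plain Hamming-distance matcher, along with the class and variable names) are our own simplifications so the message pattern itself is visible; they are not part of any concrete protocol:

```python
# Minimal sketch of the generic information flows (1)-(6).
# f1, f2, g1, g2, f3, f4 are protocol-specific; here they are
# trivial stand-ins so only the message pattern is shown.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

T = 2  # system threshold t

class Sensor:
    def capture(self, b_fresh, identity):
        # flows (1)/(2): S -> AS : f1(b'_i), g1(ID_i)
        return {"f1": b_fresh, "g1": identity}       # f1 = g1 = identity here

class Database:
    def __init__(self, refs):
        self.refs = refs                             # {i: b_i}
    def fetch(self, i):
        # flows (3)/(4): AS -> DB : g2(b'_i, i); DB -> AS : f2({b_i})
        return self.refs[i]                          # f2 = identity here

class Matcher:
    def compare(self, b_fresh, b_ref):
        # flows (5)/(6): M returns f4(d(b'_i, b_i)) -- a decision here
        return hamming(b_fresh, b_ref) <= T

class AuthServer:
    def __init__(self, db, matcher, id_map):
        self.db, self.matcher, self.id_map = db, matcher, id_map
    def authenticate(self, msg):
        i = self.id_map[msg["g1"]]                   # resolve ID_i -> i
        b_ref = self.db.fetch(i)
        return self.matcher.compare(msg["f1"], b_ref)

db = Database({7: "101100"})
asrv = AuthServer(db, Matcher(), {"alice": 7})
print(asrv.authenticate(Sensor().capture("101110", "alice")))  # True
```

The protocols analyzed in Section III instantiate f1–f4 with encryptions and homomorphic combinations rather than the identity mappings used here.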

Different data are stored by the different entities. The database stores references {bi}. The authentication service stores the information needed to map g1(IDi) to i, if applicable. The matchers can store non-biometric verification data, e.g., hashes of keys extracted from biometrics, or decryption keys that are used to recover the result of combining sample and reference. Also, the sensor can store key material to encrypt the fresh sample.

B. Adversary Model

Attacker Classification: Based on the physical entry point of an attack, a distinction is made between two types of attackers: internal attackers are corrupted components in the system and external attackers are entities that only have access to a communication channel. We will consider here only the issue of an insider attacker. As a baseline, we make the following assumption.


Fig. 1. System model with indication of generic information flows and attack points Ai. User Ui's biometric is sampled by sensor S. The sample b′i and Ui's identity are forwarded to the authentication server AS, who requests the corresponding reference bi from database DB. AS combines the sample and the reference and forwards the result to matcher M, who performs the actual comparison and returns the result to AS. The solid arrows represent the messages exchanged between the system entities. The dashed arrow represents the implicit feedback on the authentication result to the user Ui, i.e., access to the requested service is granted if the sample matches the reference.

Assumption 1: The protocol ensures the security of the scheme against any external attacker. As this can be achieved by classical secure-channel techniques, i.e., by an external security layer independent of the core protocol specification, we study further only the internal layer. Note that the security of the scheme needs to be expressed in terms of specific attack goals, which will be defined in the next section.

A second distinction is made based on an attacker's capabilities. Passive or honest-but-curious attackers are attackers that only eavesdrop on the communications in which they are involved and that can only observe the data that pass through them. They always follow the protocol specifications, never change messages and never generate additional communication. Active or malicious attackers are internal components that can also modify existing or real transactions passing through them and that can generate additional messages. We mainly focus on malicious internal attackers and we formulate the following additional assumption.

Assumption 2: The protocol ensures the security of the scheme against honest-but-curious entities, i.e., internal system components that always follow the protocol specifications but eavesdrop on internal communication. We will explain in Section II-C how this has a direct impact on the properties of the different functionalities of our model.

Finally, we put aside the threats on the user or client side by concentrating the analysis on the remote server's side, i.e., AS, DB or M. The information leakage for the user and the client is generally only the authentication or identification result. They can, however, try to gain knowledge of the reference data bi by running queries with different b′i, e.g., in some kind of hill-climbing attack. The difficulty can vary highly depending on the modalities, the threshold and the scenario. A basic line of defense is to limit the number of requests, to ensure the aliveness of the biometric inputs provided by the user, and to hide the result when applicable. Although it is important to implement such defense mechanisms, the threats are inherent to any biometric authentication or identification system. So we do not take the user or the sensor into account as an attacker in this model, and the primary attack points are AS, DB and M. Nonetheless, there may be inside attackers that also control the biometric inputs to some extent. We model this with a secondary attack point at the sensor.
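The hill-climbing threat mentioned above can be sketched in a few lines. The sketch assumes the attacker receives a distance score from the system, which is a stronger leak than a bare accept/reject decision; the template length and seed are arbitrary:

```python
import random

# Toy hill-climbing attack against a matcher that leaks a distance
# score (a deliberately strong leakage assumption, for illustration).
random.seed(1)
SECRET = [random.randint(0, 1) for _ in range(32)]  # reference b_i

def oracle(guess):
    """Score leaked to the attacker: Hamming distance to the reference."""
    return sum(g != s for g, s in zip(guess, SECRET))

guess = [0] * 32
for pos in range(32):
    flipped = guess[:]
    flipped[pos] ^= 1
    if oracle(flipped) < oracle(guess):   # keep the flip if it helps
        guess = flipped

print(oracle(guess))  # 0: the reference is fully recovered
```

With decision-only output the same idea still applies but requires far more queries, which is why limiting the number of requests is listed as a basic line of defense.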

Assumption 3: The user Ui or the sensor S cannot be attackers on their own, but they can act as a secondary attack point in combination with a primary attack point at AS, DB or M. If this is the case, an attacker can choose the input sample b′i through S and observe through Ui whether the authentication request was successful.

Of course, the baseline assumptions have to be checked before proceeding with a full analysis of the security of a scheme, but as such, they clarify what the big issues are that may remain in state-of-the-art schemes. They also underline what the hardest challenges are when designing a secure biometric authentication protocol. Fig. 1 sums up the different attack points we consider in our attack model.

Attack Goals: As noted above, the security of a scheme is expressed in terms of specific attack goals or adversary objectives. Therefore, we define the following global attack goals.

• Learn reference bi. In accordance with the security definitions in [25] we define different gradations in the information that an attacker may want to learn from bi. Minimum leakage refers to the minimum information that allows, e.g., linking of references with high probability. Authorization leakage is the information that is needed to construct a sample that is within distance t, the system threshold, of the reference bi. Full leakage gives full knowledge of bi. When a scheme is resistant to this attack in all three gradations we say that it provides biometric reference privacy.

• Learn sample b′i. The same gradations apply as in the previous attack goal. We call the security property associated with this attack goal biometric sample privacy.

• Trace users with different identities. This attack can be achieved when different references from the same user, possibly coming from different applications, can be linked. A system that is resistant to such an attack is said to provide identity privacy [26].

TABLE I
RELEVANCE OF ATTACK GOALS FOR DIFFERENT (MALICIOUS) ENTITIES IN THE SYSTEM MODEL (? = only relevant if the scheme under consideration was designed to hide references from DB; * = only relevant if the protocol operates in identification mode or if IDi and i are hidden from AS in verification mode).

Attack goal                           AS   DB   M
Learn bi                              V    ?    V
Learn b′i                             V    V    V
Trace Ui with different identities    V    ?    V
Trace Ui over different queries       V*   V    V

• Trace users over different queries. This attack refers to linking queries, whether anonymized or not, based on i, bi or b′i. The property of a system that prevents such an attack is called transaction anonymity [26]. Note that an attacker that is able to learn b′i can automatically trace users based on the learned sample.

The formulated attack goals may apply to the different internal attackers as indicated by the different attack points. The relevance of the attack goals is shown in TABLE I. Attack goals can be generalized for combinations of inside attackers, e.g., AS and M. They are relevant for the combination if they are relevant for each attacker individually. As a counterexample, learning bi is not always relevant for the combination AS-DB. In some schemes it is assumed that DB stores references in the clear, so the attack "learn bi" becomes trivial. It is important, however, that such schemes explicitly mention the assumption that DB is fully trusted. It will become clear in the further sections that the main focus of our work is on AS, who is a powerful attacker. This way of thinking is rather new and many protocols are not designed to be resistant to such an attacker.

For each attacker or combination of attackers, and for each relevant attack goal, a security requirement can be defined, namely that the average success probability of the given attacker that mounts the given attack on the scheme should be negligible in terms of some security parameter defined by the application. When analyzing the security of biometric authentication protocols that include distributed entities, each of these requirements should be checked individually.

C. Requirements on Data Flows

Coming back to the functionalities in our system model (cf. Section II-A), we use the attack goals defined in TABLE I to impose requirements on the data that are being exchanged.

• AS should not be able to learn b′i, hence f1 is at least one-way, meaning that b′i should be unrecoverable from f1(b′i) with overwhelming probability. To prevent tracing Ui over different queries, e.g., in identification mode, it could also be required that f1 is semantically secure. We note that semantic security is a security notion that might be too strong, but it ensures that the function prevents the minimum leakage as described under attack goal learn bi (Section II-B).

• AS should not learn bi, hence f2 is at least one-way. To prevent tracing users with different identities it may be required that f2 is also semantically secure.

• If applicable, AS should not be able to trace Ui by linking queries on IDi or i, and thus g1 should be semantically secure.

• If applicable, DB may not learn bi, hence the bi would need to be stored in protected form using some semantically secure function.

• DB may not learn b′i, hence g2 is one-way on its first input. It should also be semantically secure to prevent tracing Ui.

• DB may not be able to link the queries at all, hence g2 should also be semantically secure on its second input.

• M may not learn the individual bi or b′i and must not be able to link references or queries from the same Ui, hence f3 should be semantically secure on tuples ⟨b′i, bj⟩.

Now, as we demand that M returns a result to AS that is a function (f4) of the distance d(b′i, bi) while maintaining the confidentiality and the privacy of the data, this means that some operations must be malleable. Malleability refers to the property of some cryptosystems that an attacker can modify a ciphertext into another valid ciphertext that is the encryption of some function of the original message, but without the attacker knowing this message. Depending on the exact step in which the combination of bi and b′i is realized, either g2, f2 or f3 would be malleable. In the following section, we will show the impact of this fundamental limitation and how it can be exploited to attack existing protocols.
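Malleability itself is easy to demonstrate with XOR-based encryption, a deliberately simple stand-in of our own choosing (not one of the cryptosystems used by the protocols analyzed below): flipping ciphertext bits flips the same plaintext bits without any knowledge of the key:

```python
# Malleability illustration: with XOR encryption an attacker can turn
# Enc(m) into Enc(m XOR delta) without knowing the key or the message.
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"\x13\x37\xca\xfe"          # secret key, unknown to the attacker
m   = b"\x00\x01\x02\x03"          # secret message
c   = xor_bytes(m, key)            # Enc(m)

delta = b"\xff\x00\x00\x00"        # attacker's chosen modification
c2 = xor_bytes(c, delta)           # manipulated ciphertext

# Decrypting c2 yields m XOR delta: the scheme is malleable.
print(xor_bytes(c2, key) == xor_bytes(m, delta))  # True
```

The homomorphic cryptosystems in Section III are malleable in exactly this controlled sense, which is what both enables the protocols and opens the attacks.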

III. APPLICATION TO EXISTING CONSTRUCTIONS

In this section, we begin by extending attacks that were introduced by Bringer et al. in [25] in the context of hardware security to more complex cryptographic protocols that use homomorphic encryption, in Section III-A for a scheme by Bringer et al. [19] and in Section III-B for a scheme by Barbosa et al. [20]. We then describe another kind of attack by looking at a scheme by Stoianov [22] in Section III-C. Finally, we briefly discuss attacks on two other schemes [23], [24] in Section III-D. All schemes are described with the goal of fitting them directly into our model.

A. Bringer et al. ACISP 2007

1) Description: In [19], Bringer et al. presented a new security model for biometric authentication protocols that separates the tasks of comparing, storing and authorizing an authentication request amongst different entities: a fully trusted sensor S, an authentication server AS, a database DB and a matching service M. The goal was to prevent any of the latter three from learning the relation between some identity and the biometric features that relate to it. Their model forms the basis of our current framework, and in this model they presented a scheme that applies the Goldwasser-Micali cryptosystem [28]. Let EGM and DGM denote encryption and decryption, respectively, and note that for any m, m′ ∈ {0, 1} we have the homomorphic property DGM(EGM(m, pk) × EGM(m′, pk), sk) = m ⊕ m′. The scheme in [19] goes as follows.
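This XOR-homomorphic property can be checked with a toy Goldwasser-Micali implementation. The primes below are tiny and insecure, chosen only so the arithmetic is visible; real deployments use an RSA-sized modulus:

```python
import random
from math import gcd

# Toy Goldwasser-Micali with tiny, insecure parameters (illustration only).
p, q = 499, 547
n = p * q

def legendre(a, pr):                 # a^((pr-1)/2) mod pr
    return pow(a, (pr - 1) // 2, pr)

# y: quadratic non-residue mod both p and q (so its Jacobi symbol is +1)
y = next(a for a in range(2, n)
         if legendre(a, p) == p - 1 and legendre(a, q) == q - 1)

def enc(m):                          # encrypt one bit m in {0, 1}
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(y, m, n) * pow(r, 2, n)) % n

def dec(c):                          # bit is 0 iff c is a square mod p
    return 0 if legendre(c, p) == 1 else 1

# Multiplying ciphertexts XORs the plaintext bits.
for m1 in (0, 1):
    for m2 in (0, 1):
        assert dec((enc(m1) * enc(m2)) % n) == m1 ^ m2
print("E(m1) * E(m2) decrypts to m1 XOR m2")
```

Note also that multiplying a ciphertext by enc(0) re-randomizes it, a fact the attack below relies on to make duplicated ciphertexts unrecognizable.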


During the enrollment phase, the user Ui registers at the authentication server AS. He then gets an index i and a pseudonym IDi. Let N denote the total number of records in the system. Database DB receives and stores (bi, i), where bi stands for Ui's biometric template, a binary vector of dimension M, i.e., bi = (bi,1, bi,2, . . . , bi,M). In the following, we suppose that i is also the index of the record bi in the database DB.

A key pair is generated for the system. Matcher M possesses the secret key sk. The public key pk is known by S, AS and DB. The authentication server AS stores a table of relations (IDi, i) for i ∈ {1, . . . , N}. Database DB contains the enrolled biometric data b1, . . . , bN.

When user Ui wants to authenticate himself, S will send an encrypted sample EGM(b′i, pk) and IDi to AS. The authentication server AS will request the encrypted reference EGM(bi, pk) from DB and combine it with the encrypted sample. Because of the homomorphic property, AS is able to obtain EGM(b′i ⊕ bi, pk). Note that the encryption is bitwise, so AS will permute the M encryptions and forward these to M. Because M has the secret key sk, M can decrypt the permuted XOR-ed bits and compute the Hamming distance between the sample and the reference.

The security of this protocol is proved in [19] under the assumption that all the entities in the system will not collude and are honest-but-curious. It is this assumption that we challenge in our framework, which leads to the following attack.

2) Authentication Server Adversary (A = AS): The following attack shows how a malicious authentication server AS can learn the enrolled biometric template bi corresponding to some identity IDi. To do so, the authentication server AS requests the template bi without revealing IDi and receives from DB the encrypted template that was stored during enrolment, i.e., EGM(bi, pk) = ⟨EGM(bi,1, pk), . . . , EGM(bi,M, pk)⟩.

The attack consists of a bitwise search performed by AS in the encrypted domain. First, AS computes the encryption of a zero bit, EGM(0, pk). If the public key is not known by AS, he can take an encrypted bit of the template retrieved from DB and compute EGM(bi,k, pk)^0 = EGM(0, pk). Let the maximum allowed Hamming distance be t.

Now AS will take the first encrypted bit EGM(bi,1, pk), repeat it t + 1 times and add M − t − 1 encryptions of a zero bit. Note that the ciphertext EGM(bi,1, pk) can be re-randomized so that it is impossible to detect that the duplicate ciphertexts are "copies". If bi,1 is one, the total Hamming distance as computed by M will be t + 1 and M will return NOK (not ok). If bi,1 is zero, then M will return OK. This process can be repeated for all bits of bi; hence, AS can learn bi bit by bit in M queries. To further disguise the attack, AS can apply permutations and add up to t encryptions of one-bits to make the query look genuine.
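The bitwise search can be simulated by abstracting the encryptions away and modelling only the matcher's thresholded weight computation; the template, its length and the threshold below are made up for the example:

```python
# Simulation of the AS attack on the ACISP 2007 scheme.  Encryptions
# are abstracted away: AS holds each reference bit as an opaque value
# it can duplicate, and M accepts iff the Hamming weight of the
# (permuted) XOR-ed bits it decrypts is at most t.
M_BITS = 12
t = 3
b_i = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # secret reference in DB

def matcher_accepts(bits):
    """M decrypts the bits and thresholds their weight."""
    return sum(bits) <= t

recovered = []
for k in range(M_BITS):
    # AS submits bit k repeated t+1 times, padded with M-t-1 zero bits.
    query = [b_i[k]] * (t + 1) + [0] * (M_BITS - t - 1)
    # weight is t+1 (NOK) iff b_i[k] = 1, and 0 (OK) iff b_i[k] = 0
    recovered.append(0 if matcher_accepts(query) else 1)

print(recovered == b_i)   # True: b_i learned in M queries
```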

3) Matcher and Sensor Adversary (A = M + S): A bitwise search attack similar to the previous attack can also be considered in the case of an adversary made of the matcher assisted by the sensor. The attack consists of the following steps:

• S sends the encryption of 0 = ⟨0, . . . , 0⟩;
• M receives bi ⊕ 0 bitwise but permuted and records the weight of bi ⊕ 0;
• S toggles a bit of the 0 vector in position x and sends it to AS;
• M observes the changed weight (+1 or −1) and learns the bit at position x in bi.

The adversary learns bi in M queries.
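The steps above can also be simulated with the encryptions abstracted away; M is modelled as observing only the weight of the permuted XOR-ed bits (template and length are made up):

```python
# Simulation of the M+S attack: S submits the all-zero vector, then
# toggles one position at a time; M observes only the Hamming weight
# of the decrypted (permuted) XOR-ed bits.
M_BITS = 12
b_i = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # secret reference

def matcher_weight(probe):
    """Weight of b_i XOR probe, as observed by M after decryption."""
    return sum(x ^ y for x, y in zip(b_i, probe))

base = matcher_weight([0] * M_BITS)           # weight of b_i itself
recovered = []
for x in range(M_BITS):
    probe = [0] * M_BITS
    probe[x] = 1                              # S toggles position x
    delta = matcher_weight(probe) - base      # -1 if b_i[x]=1, +1 if 0
    recovered.append(1 if delta == -1 else 0)

print(recovered == b_i)   # True
```

Note that the permutation applied by AS does not help here: M only needs the total weight, which is permutation-invariant.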

4) Discussion: What makes the first attack (A = AS) feasible is that all bits are encrypted separately and that the cryptosystem is homomorphic, and thus f1(b′i) and f2(bi) are malleable (needed to create the encryption of a zero bit if the public key is not known). Moreover, it is not enforced that AS combines the input from the sensor and from the database.

To counteract this threat, one could require S to sign the input and force DB to merge the input with references; in this way DB combines the sample and the reference, and AS does not receive the reference EGM(bi, pk) but the combination with the sample, EGM(b′i ⊕ bi, pk). Using the previous attack, however, AS can still learn b′i and the b′i ⊕ bi. Additional measures have to be taken to prevent this, e.g., DB could be required to sign EGM(b′i ⊕ bi, pk), which will be verified by M. Note that in the case where AS and DB collude, these countermeasures are not sufficient anymore.

B. Barbosa et al. ACISP 2008

1) Description: In [20], Barbosa et al. presented a new protocol for biometric authentication, following [19] (see the previous Section III-A). A notable difference between these two comes from the fact that [19] compares two biometric templates by their Hamming distance, enabling biometric authentication, whereas [20] classifies one biometric template into different classes thanks to an SVM classifier (support vector machine, see [29] for details), leading to biometric identification. Biometric templates are represented as feature vectors where each feature is an integer, i.e., bi = ⟨bi,1, . . . , bi,k⟩ ∈ N^k. Barbosa et al. encrypt this vector, feature by feature, with the Paillier cryptosystem [30]. In particular, they exploit its homomorphic property to compute the SVM classifier (think of a sum of scalar products) in the encrypted domain.

However, as we explain further below in this section, asthe features are encrypted one by one, an adversary can dosomething similar as the attack described in the previoussection (SectionIII-A ).

Let E_Paillier (resp. D_Paillier) denote encryption (resp. decryption) with Paillier's cryptosystem. This cryptosystem enjoys a homomorphic property which ensures that the product of two ciphertexts corresponds to the encryption of the sum of the plaintexts: for m_1, m_2 ∈ Z_n we have that D_Paillier(E_Paillier(m_1) × E_Paillier(m_2)) = m_1 + m_2 mod n. Note that Z_n is the plaintext space of the Paillier cryptosystem.
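This multiplicative-to-additive property is easy to check with a toy implementation. The sketch below uses deliberately small primes and is only meant to illustrate D_Paillier(E_Paillier(m_1) × E_Paillier(m_2)) = m_1 + m_2 mod n; it is in no way a secure parameter choice.

```python
import random
from math import gcd

# Toy Paillier with small primes -- illustration only, not secure.
p, q = 199, 211
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
g = n + 1
# With g = n + 1 we have L(g^lam mod n^2) = lam mod n, so:
mu = pow(lam % n, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:                      # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

m1, m2 = 1234, 5678
# Multiplying ciphertexts adds the plaintexts modulo n.
assert dec(enc(m1) * enc(m2) % n2) == (m1 + m2) % n
# Raising a ciphertext to a constant multiplies the plaintext (malleability).
assert dec(pow(enc(m1), 3, n2)) == (3 * m1) % n
```

The second assertion is the scalar-multiplication form of the homomorphism, which is exactly what DB needs to evaluate the weighted sums of the SVM classifier below.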

The SVM classifier takes as input U classes (or users) and S samples per class, and determines support vectors SV_{i,j} and weights α_{i,j} for 1 ≤ i ≤ S and 1 ≤ j ≤ U. Following the notation in [20], let v = (v_1, . . . , v_k) = b_i denote a freshly captured biometric sample. For this sample


the classifier computes

  cl_SVM^(j)(v) = Σ_{i=1}^{S} α_{i,j} Σ_{l=1}^{k} v_l (SV_{i,j})_l   for j = 1, . . . , U.   (7)

With this vector cl_SVM(v), it is possible to determine which class is the most likely for v or to reject it. The support vectors SV_{i,j} and the weight coefficients α_{i,j} are the references that are stored by DB.

Briefly, the scheme of Barbosa et al. works as follows:

1) The sensor S captures a fresh biometric sample, encrypts each of the features of its template v = (v_1, . . . , v_k) with Paillier's cryptosystem and sends it to the authentication server AS. Let auth = (E_Paillier(v_1), . . . , E_Paillier(v_k)).

2) The database DB computes an encrypted version of the SVM classifier for this biometric data: c_j = Π_{i=1}^{S} (Π_{l=1}^{k} [auth]_l^{[SV_{i,j}]_l})^{α_{i,j}}, where [.]_l denotes the l-th component of a tuple. This c_j corresponds to the encryption of cl_SVM^(j) with Paillier's cryptosystem as defined above. The database returns the values c_j to AS.

3) The authentication server AS scrambles the values c_j and forwards them to M.¹

4) The matcher M, using the private key of the system, decrypts the components of the SVM classifier and performs the classification of v. The classification returns the class for which the value cl_SVM^(j) is maximal.

5) Based on the output of M, AS determines the real identity of U_i (in case of non-rejection).
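Steps 1, 2 and 4 can be sketched end to end. The toy Paillier parameters and the tiny SVM instance below (U = 2 classes, S = 2 support vectors, k = 3 features, and the concrete SV and α values) are ours, chosen only to show that DB can evaluate (7) on ciphertexts alone.

```python
import random
from math import gcd

# Toy Paillier (small primes, illustration only); g = n + 1.
p, q = 199, 211
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
mu = pow(lam % n, -1, n)

def enc(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# Hypothetical tiny instance: U = 2, S = 2, k = 3.
SV = {1: [[1, 0, 2], [0, 1, 1]], 2: [[2, 2, 0], [1, 1, 1]]}
alpha = {1: [3, 1], 2: [2, 2]}
v = [1, 2, 1]                     # fresh sample, known only to the sensor

# Step 1: the sensor encrypts the sample feature by feature.
auth = [enc(vl) for vl in v]

# Step 2: DB evaluates the classifier on ciphertexts only.
def encrypted_score(j):
    c = 1
    for i in range(2):
        inner = 1
        for l in range(3):
            inner = inner * pow(auth[l], SV[j][i][l], n2) % n2
        c = c * pow(inner, alpha[j][i], n2) % n2
    return c

# Step 4: the matcher decrypts and picks the maximal class.
scores = {j: dec(encrypted_score(j)) for j in (1, 2)}
expected = {j: sum(a * sum(vl * s for vl, s in zip(v, sv))
                   for a, sv in zip(alpha[j], SV[j])) for j in (1, 2)}
assert scores == expected
```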

2) Authentication Server Adversary (A = AS): The following attack shows how a malicious AS can recover the biometric references. In this scheme, the biometric reference data that are stored by DB, i.e., the support vectors SV_{i,j} and the weight coefficients α_{i,j}, represent hyperplanes that are used for classification. These k-dimensional hyperplanes are expressed as linear combinations of enrolment samples (the support vectors). We will show how these hyperplanes can be recovered dimension by dimension.

Let us rewrite (7) as

  cl_SVM^(j)(v) = v_1 Σ_{i=1}^{S} α_{i,j} (SV_{i,j})_1 + · · · + v_k Σ_{i=1}^{S} α_{i,j} (SV_{i,j})_k
                = v_1 β_{j,1} + · · · + v_k β_{j,k} .

By sending a vector v = ⟨1, 0, . . . , 0⟩ to DB, AS will retrieve the encryption of β_{j,1} = Σ_{i=1}^{S} α_{i,j} (SV_{i,j})_1 for each user, indexed by j, in the database.

Instead of sending all c_j = E_Paillier(β_{j,1}) to M, only one value will be kept by AS, e.g., c_1 = E_Paillier(β_{1,1}). The authentication server will set c_2 = E_Paillier(x) for some value x ∈ Z_n and all other c_j = E_Paillier(0). The matcher M will return the index of the class with the greatest value, which is 1 if β_{1,1} > x and 2 if β_{1,1} < x.

The initial value of x is n/2. If β_{1,1} > x then AS will adjust x to n/2 + n/4, otherwise x = n/2 − n/4. By repeating this process and adjusting the value x, the exact value β_{1,1} can be learned after log_2 n queries. Hence, the reference data of a single user can be learned in k log_2 n queries to the matcher.

¹In [20], the entity that makes the decision is referred to as the verification server. To be consistent with our model we continue to use the term matcher.
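The binary search can be sketched as follows, with the matcher reduced to the comparison oracle it effectively implements; the concrete values of n and β_{1,1} are illustrative, and the tie β = x is resolved toward class 2 in this sketch.

```python
# Binary search for beta_{1,1} using the matcher as a comparison oracle.
n = 2 ** 16                     # plaintext-space size (illustrative)
beta = 48611                    # hidden value AS wants to learn

def matcher(x):
    # M returns the index of the largest decrypted score:
    # class 1 if beta > x, class 2 otherwise.
    return 1 if beta > x else 2

lo, hi = -1, n - 1              # invariant: lo < beta <= hi
queries = 0
while hi - lo > 1:
    mid = (lo + hi) // 2
    queries += 1
    if matcher(mid) == 1:       # beta > mid
        lo = mid
    else:                       # beta <= mid
        hi = mid
assert hi == beta               # beta recovered exactly
assert queries == 16            # log2(n) queries suffice
```

One such search per feature dimension yields the k log_2 n total query count stated above.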

Note also that M can trace users: although the identifiers are permuted, the matcher can detect whether related inputs are used and, with a known input, trace a user over the whole database. This is quite logical, as the matcher is determining a list of candidates.

3) Discussion: As in Section III-A this attack succeeds because features are encrypted separately and there is no check to see whether the sample and the reference were really merged. The same attack can in principle be used to learn any information about the input sample.

C. Stoianov SPIE 2010

1) Description: In [22], Stoianov introduces several authentication schemes relying on the Blum-Blum-Shub pseudo-random generator. We focus on the database setting from the paper (cf. Section 7 of [22]). In this setting there is a service provider SP that performs the verification. Consistent with our model, we will call this entity the matcher M. Sample and reference are combined before being sent to M and, although this is not explicitly mentioned in [22], we assign this functionality to the authentication server AS in our model.

In the schemes of [22], the biometric data b are binarized and combined with a random codeword c coming from an error-correcting code to form a secure sketch or code offset b ⊕ c, where ⊕ stands for the exclusive-or (XOR). When a new capture b' is made, whenever b' is close to b (in Hamming distance) it is possible to recover c from b ⊕ b' ⊕ c using error correction. This technique is known as the fuzzy commitment scheme of Juels and Wattenberg [5]. An additional layer of protection is added by encrypting the secure sketch using Blum-Goldwasser.
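A minimal sketch of the code-offset construction, with a 3-fold repetition code standing in for the error-correcting code (the paper does not fix a specific code; all names here are ours):

```python
import random

K = 8                            # message bits; codewords have 3*K bits

def encode(msg):                 # repetition code: repeat each bit 3 times
    return [bit for bit in msg for _ in range(3)]

def decode(word):                # majority vote corrects 1 error per group
    return [int(sum(word[3 * i:3 * i + 3]) >= 2) for i in range(K)]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

random.seed(7)
b = [random.randint(0, 1) for _ in range(3 * K)]     # enrolled template
c = encode([random.randint(0, 1) for _ in range(K)]) # random codeword
sketch = xor(b, c)               # the stored code offset b XOR c

b2 = b.copy()
b2[5] ^= 1                       # fresh capture with a single bit error
# b XOR b' XOR c has only that one error relative to c, so decoding recovers c.
assert encode(decode(xor(sketch, b2))) == c
```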

The Blum-Blum-Shub pseudo-random generator [31] is a tool used in the Blum-Goldwasser asymmetric cryptosystem [32]. From a seed x_0 and a public key, a pseudo-random sequence S is generated. In the following, S is XOR-ed with the biometric data to be encrypted. By doing so, the state of the pseudo-random generator is updated to x_{t+1}. From x_{t+1} and the private key, the sequence S can be recomputed.

In this system of Stoianov, M generates the keys and sends the public key to S. On enrolment:

1) Sensor S computes (S ⊕ b ⊕ c, x_{t+1}) where:
   • sample b is the freshly captured biometric data,
   • string S is a pseudo-random sequence and x_{t+1} is the state of the Blum-Blum-Shub pseudo-random generator as described above, and
   • c is a random codeword, which makes the secure sketch c ⊕ b;

2) Sensor S sends S ⊕ b ⊕ c to DB;

3) Sensor S sends x_{t+1} and H(c) to M, where H is a cryptographic hash function.

Using the private key, M computes S from x_{t+1} and stores it along with H(c). Periodically, M (resp. DB) updates S (resp. S ⊕ b ⊕ c) to S̄ (resp. S̄ ⊕ c ⊕ b) with an independent stream cipher.

During authentication sensor S receives a new sample b' and forwards (S' ⊕ b', x'_{t+1}) to AS, where S' is a new pseudo-random sequence. It is assumed that there is some sort of authentication server AS that retrieves S ⊕ c ⊕ b from DB and merges it with S' ⊕ b'. Finally, S' ⊕ b' ⊕ S ⊕ b ⊕ c and x'_{t+1} are sent to M. Using the private key, M recovers S'. From S' and S, M computes c ⊕ b ⊕ b', tries to decode it and verifies the consistency of the result with H(c).
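The full data flow can be simulated by replacing the Blum-Blum-Shub output streams with random bit strings known to M; this is a simplification (we drop the Blum-Goldwasser key handling and reuse a 3-fold repetition code as the error-correcting code, and all names are ours).

```python
import hashlib
import random

K = 8
def encode(m): return [bit for bit in m for _ in range(3)]
def decode(w): return [int(sum(w[3 * i:3 * i + 3]) >= 2) for i in range(K)]
def xor(a, b): return [x ^ y for x, y in zip(a, b)]
def H(bits):   return hashlib.sha256(bytes(bits)).hexdigest()

random.seed(5)
m = 3 * K
b  = [random.randint(0, 1) for _ in range(m)]          # enrolled template
c  = encode([random.randint(0, 1) for _ in range(K)])  # random codeword
S1 = [random.randint(0, 1) for _ in range(m)]          # stream at enrolment

# Enrolment: DB stores S1 + b + c, M stores S1 and H(c).
db_record, m_hash = xor(S1, xor(b, c)), H(c)

# Authentication: a close sample b2 and a fresh stream S2 known to M.
b2 = b.copy()
b2[4] ^= 1                                             # one bit error
S2 = [random.randint(0, 1) for _ in range(m)]
to_matcher = xor(xor(S2, b2), db_record)               # merged by AS

# M strips S2 and S1, decodes c + b + b2 and checks the hash of c.
cbb = xor(xor(to_matcher, S2), S1)
c_rec = encode(decode(cbb))
assert H(c_rec) == m_hash                              # accept
```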

2) Matcher Adversary (A = M): Let M be the primary attacker. It is inherent to the scheme that M can always trace a valid user over different queries by looking at the codeword c, which is revealed after a successful authentication. Depending on the entity that colludes with M, additional attacks can be devised.

If M and DB collude (A = M + DB) they learn the sketch c ⊕ b. This implies that they can immediately trace users with different identities following the linkability attack based on the decoding of the sum of two sketches as described in [11]. From a genuine match, M learns c and thus also b.

If M and S collude (A = M + S) they control and always learn the input sample b'. By setting b' = 0 they learn c ⊕ b from a single query. If a successful authentication occurred, the adversary learns everything.

If M and AS collude (A = M + AS) they always learn the input sample b'. They can learn the sketch c ⊕ b for any reference and thus trace users with different identities as in the case A = M + DB. They learn the reference b after successful authentication.

Moreover, in case of an accept, an exhaustive search block by block allows the adversary to reconstruct b ⊕ b'.

3) Authentication Server Adversary (A = AS): In the current scheme, bits are not encrypted bit per bit independently. Moreover, they are masked with streams generated via Blum-Blum-Shub and a codeword, so attacks as in Sections III-A and III-B are no longer possible. Nevertheless, there is still a binary structure that AS may exploit.

Assume that AS knows S' ⊕ b' that leads to a positive decision, i.e., M accepts b' because d(b, b') ≤ t. Then AS can start from S' ⊕ b' and progressively add errors until he reaches a negative result. Then, he backtracks one step by decreasing the error weight by one to come back to the last positive result. This gives AS an encrypted template S' ⊕ b''. Consider now the vector S' ⊕ b'' ⊕ S ⊕ c ⊕ b and replace the first bits (say of small length l) by an l-bit vector x.

• For all possible values of x, AS sends the resulting vector (the first block changed to the value x) to M, who acts as a decision oracle.

• If several values give a positive result, then AS increases the errors on all but the first block.

• This is repeated until only one value of x gives a positive result.

• When this step is reached, AS has found the value x with no errors, i.e., he learns the first block of S' ⊕ S ⊕ c.

• AS proceeds to the next block.

Following this strategy, it is feasible to recover all the bits of b ⊕ b'. If AS colludes with S, he can retrieve the full reference template b as soon as S knows one sample that is close to b. We call this attack a center search (cf. Section IV below).
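The core of the block search can be simulated by reducing M to a distance-t decision oracle on the hidden vector S' ⊕ S ⊕ c. For brevity, the sketch starts at the final stage, with the t boundary errors already placed at attacker-chosen positions outside the block under recovery (our simplification); with that error budget exhausted, exactly one candidate per block is accepted.

```python
import random

random.seed(11)
n, l, t = 24, 4, 3
hidden = [random.randint(0, 1) for _ in range(n)]   # S' + S + c (unknown to AS)

def accepts(q):
    # M accepts iff the query decodes, i.e., lies within distance t of hidden.
    return sum(a ^ b for a, b in zip(q, hidden)) <= t

recovered = []
for block in range(0, n, l):
    # Boundary query: exactly t errors, all outside the current block,
    # so only the error-free block value can still be accepted.
    base = hidden.copy()
    outside = [i for i in range(n) if not block <= i < block + l]
    for pos in random.sample(outside, t):
        base[pos] ^= 1
    hits = [x for x in range(2 ** l)
            if accepts(base[:block]
                       + [(x >> j) & 1 for j in range(l)]   # LSB-first block
                       + base[block + l:])]
    assert len(hits) == 1                  # a unique candidate survives
    recovered += [(hits[0] >> j) & 1 for j in range(l)]
assert recovered == hidden                 # AS learns all of S' + S + c
```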

4) Discussion: In a way similar to the inherent traceability of users by M, there are no mechanisms described that protect against the database tracing users over different queries, i.e., by tracking S ⊕ c ⊕ b lookups.

We note that the matcher M is very powerful because he knows the secret key, which allows computing S' and S. As soon as M colludes with one of the other entities he is able to learn everything from a genuine match or a false accept.

D. Other Schemes

Due to the generic design of our model, several other schemes in the literature fit our model. Nevertheless, as they are not always designed with the same entities, an adaptation might be required. Some others are not compatible at all, for instance those for which the security relies on a user-secret key stored on the user side. We now present a brief overview of the schemes [23], [24] when analyzed in our model.

ACM MMSec 2010 eSketch: This scheme of Failla et al. is described in [23] following a client-server model. The client corresponds to AS and the server can be logically separated into DB and M. The goal of the scheme is to provide anonymous identification. The DB stores data derived from the biometric reference, in particular secure sketches, and part of the data is encrypted via the Paillier cryptosystem with the corresponding secret key owned by AS. The identification query is implemented through different exchanges between the entities and at one step the same randomness is used to mask all the different reference templates; the masked values are sent to AS. Consequently, an authentication server adversary (A = AS) learns the whole database after one successful authentication, because the client (AS in our model) knows the Paillier secret keys. If the adversary consists of the database and the matcher (A = DB + M), it is also possible to learn the reference template, which is supposed to be hidden from the server.

ACM MMSec 2010 Secure Multiparty Computation: This scheme of Raimondo et al. [24] is also described following a client-server model with secure multiparty computation between them to achieve an identification scenario (an authentication scenario as well, cf. [24, Fig. 3]). The scheme is not made to be resistant against malicious adversaries. Fitting it in our model, we have AS, which obtains the result, and DB, which stores all the references in the clear; AS sends an encrypted (via Paillier) query to DB; DB sends back to AS all the entries combined with the query (this gives in fact a database containing all the Euclidean distances) and then AS and M interact (secure multiparty protocol) to output the list of identifiers for which the distance is below a threshold. Here again encryption of the query is done block by block, so a similar strategy as in Section III-B is possible when A = AS. An adversary A = DB + M can also tamper with the inputs to the last part of the protocol to learn information about the query.

IV. FORMALIZATION OF ATTACK SCENARIOS

The goal of this section is to explore some generic attack scenarios that can be used for analyzing actual protocol specifications. These attacks are presented in the framework as described in Section II and generalize the attacks of the previous Section III. As explained in Section II we only


consider malicious internal attackers, i.e., AS, DB, M and combinations of these entities. User U_i and the sensor S have been excluded as individual attackers.

A. Blackbox Attack Model

The different attacks that can be carried out by the attackers are modeled as blackbox attacks, following recent results from [25]. This allows us to clearly specify the focus of the attack. Our blackbox-attack model consists of two logical entities:

1) The attacker, i.e., one or more system entities. These entities are fully under the control of the attacker: internal data are known, messages can be modified and additional transactions can be generated.

2) The target or the blackbox, i.e., the combination of all other system entities. The attack is focused on the data that are protected by the system components within the blackbox.

The target is modeled as a blackbox because the attacker can only observe the input-output behavior of the box. This adequately reflects remote protocols where only the communication can be seen by the attacker. No details are known about the internal state of the remote components. During the attack, the attacker will "tweak" inputs to the blackbox. However, all communication must comply with the protocol specification. Any messages that are malformed or that are sent in the wrong order are rejected by the blackbox.
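As an illustration, the blackbox can be thought of as a stateful oracle that enforces message order and only leaks decisions; the two-message toy protocol and all names below are ours, not part of any analyzed scheme.

```python
class Blackbox:
    """Toy blackbox: enforces protocol order, leaks only match decisions."""

    def __init__(self, secret_reference, threshold):
        self._ref, self._t = secret_reference, threshold
        self._expect = "sample"            # protocol state: next message type
        self._pending = None

    def submit(self, msg_type, payload):
        if msg_type != self._expect:       # out-of-order messages are rejected
            return "reject"
        if msg_type == "sample":
            self._pending = payload
            self._expect = "decision_request"
            return "ack"
        # decision_request: compare the pending sample to the hidden reference
        self._expect = "sample"
        dist = sum(a ^ b for a, b in zip(self._pending, self._ref))
        return "match" if dist <= self._t else "no-match"

box = Blackbox([1, 0, 1, 1], 1)
assert box.submit("decision_request", None) == "reject"   # wrong order
assert box.submit("sample", [1, 0, 1, 0]) == "ack"
assert box.submit("decision_request", None) == "match"    # distance 1 <= t
```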

It should be noted that there are cases in which the attacker cannot generate additional transactions because he has to follow the protocol specifications. E.g., if DB is attacking, he has to wait until a request is received from AS. When analyzing protocols it should be assumed that this will occur with a reasonable frequency. If relevant, attack complexity can be expressed as a function of this frequency. Similarly, if the attacker is AS, he receives inputs from S and communicates with DB and M. In this case we exclude U_i and S from the blackbox. It should be assumed, though, that a number of inputs from S is available to AS. This does not necessarily imply that S is under the control of AS. The analysis of the attack can take into account the amount of data that is available.

We will now consider a number of possible adversaries and blackbox attacks in our framework.

B. Attacker = AS

Decomposed Reference Attack: Let us assume that only one reference b_i is returned by DB. The goal of this attack is to learn b_i. Biometric samples or references are often represented as a "string", i.e., a concatenation (let ‖ denote concatenation) of (binary) symbols. Let us assume that f_2(b_i) is the concatenation of a subfunction f_2 that is applied on each of the n components b_{i,j} of b_i individually. If AS has to combine f_2(b_i) and f_1(b'_i) without knowing either the sample or the reference, it is likely that f_1 and f_3 will also be the concatenation of component-wise applied subfunctions, i.e., f_3(b_i, b'_i) = f_3(b_{i,1}, b'_{i,1}) ‖ . . . ‖ f_3(b_{i,n}, b'_{i,n}). Note that in our model AS can generate the value f_3(b_{i,j}, b'_{i,j}) but this value should not reveal to AS whether the inputs are the same or not. This decomposition of references is used in the scheme analyzed in Section III-A and the following attack applies to it.

Suppose that AS is able to generate a value that is a valid output of f_3 when the two component inputs b_{i,j} and b'_{i,j} are the same, and similarly when they are not the same, e.g., the output is the encryption of one or zero. If AS can also compute f_1, then AS can fully reconstruct b_i. To do so, AS chooses the first component of b'_i at random, combines it with the first component of b_i and sends the result to M. The other components that are sent to M are such that t of them are outputs of f_3 that reflect different inputs and the n − t − 1 remaining components are outputs that reflect equal inputs. Note that t is the comparison threshold. If the guess of AS for the first component is correct then M will return a positive match. Otherwise the guess is wrong and AS can try again. This process can be repeated until all components of b_i are recovered. For binary samples, this requires n queries to M and 1 query to DB.
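A simulation of this attack, with M abstracted to an oracle that accepts when at most t components differ; the forged equal/different outputs of f_3 are represented directly as 0/1 difference indicators (our abstraction, not the scheme's encoding):

```python
import random

random.seed(2)
n, t = 16, 4
b = [random.randint(0, 1) for _ in range(n)]      # hidden reference in DB

def matcher(diffs):
    # M accepts when at most t components of sample and reference differ.
    return sum(diffs) <= t

recovered = []
for j in range(n):
    guess = 0
    # One real comparison for position j, padded with t forged "different"
    # outputs and n - t - 1 forged "equal" outputs of f_3.
    trial = [guess ^ b[j]] + [1] * t + [0] * (n - t - 1)
    # Accept means exactly t differences, so the guess was correct.
    recovered.append(guess if matcher(trial) else 1 - guess)
assert recovered == b                             # n queries to M suffice
```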

As shown in Section III-B a similar attack can be executed if the biometric data are represented as real-valued or integer-valued feature vectors. However, more queries might be required to get an accurate result.

Center Search Attack Using S: In this attack, S is also compromised and under the control of the attacker. The attack goal is to learn the full reference b_i from a close sample. The input sample is obviously always known to AS and S. Thus at some point in time U_i will present a sample b'_i that matches reference b_i. This sample will lie at some distance from the reference. In the case where biometrics are represented as binary strings and the system implements a Hamming-distance matcher, the attacker can recover the exact b_i as follows.

The sensor flips the first bit of b'_i and sends the new sample to AS, who performs the whole authentication procedure. If the authentication succeeds, S flips the second bit, leaving the first bit also flipped, and sends the sample to AS, who follows the procedure again. This continues until the sample no longer matches b_i. Then the sensor starts again by restoring the first bit of the sample that is no longer accepted and forwards it to AS. If it gets accepted this means that the first bit of the original sample b'_i was the same as the first bit of b_i. If not, then the first bits were different. One by one, the bits in b'_i that differ from those in b_i can be corrected. This technique was demonstrated in Section III-C.

We call this the center search attack because we start from a sample that lies in a sphere with radius t, the matching threshold, and the reference as center point. The goal of this attack is to move the sample to the center of the sphere. The worst-case complexity of this attack for bitstrings of length n is the greatest of 2t + n and 4t. The complexity is 2t + n if there are t − 1 bit errors in the beginning and one at the end of the string. The first t − 1 errors get corrected by flipping them and t additional bits need to be flipped to invalidate the sample. Locating the bit errors requires searching till the end of the string where the last error is. The complexity is 4t if there are t − 1 correct bits followed by t wrong bits: 2t flips are needed before the queries no longer match and then 2t positions need to be searched. In practice, t ≤ n/2 and thus the worst-case complexity is 2t + n.
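The attack can be simulated against a Hamming-distance decision oracle (our stand-in for the full AS, DB, M run). The sketch uses a compact variant: once the flipping phase has produced a boundary query at distance exactly t + 1, toggling any single bit is accepted precisely when that bit disagreed with b_i, so every bit can be read off with one query.

```python
import random

random.seed(9)
n, t = 32, 5
b = [random.randint(0, 1) for _ in range(n)]      # hidden reference

def matches(q):
    # Decision oracle: accept iff the query is within distance t of b.
    return sum(x ^ y for x, y in zip(q, b)) <= t

# The genuine user presents a close sample b1 (distance t - 1 here).
b1 = b.copy()
for pos in random.sample(range(n), t - 1):
    b1[pos] ^= 1

# Phase 1: flip bits left to right until the sample is rejected,
# leaving a boundary query q at distance exactly t + 1 from b.
q = b1.copy()
i = 0
while matches(q):
    q[i] ^= 1
    i += 1

# Phase 2: toggling a single bit of q is accepted iff that bit disagreed
# with b, so each bit of the reference is learned with one probe.
recovered = []
for j in range(n):
    q[j] ^= 1
    recovered.append(q[j] if matches(q) else 1 - q[j])
    q[j] ^= 1                                     # restore the boundary query
assert recovered == b
```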

False-Acceptance-Rate Attack: A false acceptance occurs if a sample, not coming from U_i, is close enough to b_i to be recognized by the system as a sample coming from U_i. The name comes from the fact that an attacker can take a large existing database of samples and feed it to a biometric authentication system. Due to the inherent false acceptance rate, there will be a sample in the attacker's database that matches the reference in the system with high probability.

The goal of this attack is to learn b_i from a matching sample that is unknown to the attacker. This attack combines ideas from the previous attacks. The attacker is AS, not including S, and AS does not know how to compute an output of f_3 that reflects equal (or different) inputs. It is assumed, however, that the attacker can replace the components of b'_i in the value he received from S, i.e., f_1(b'_i). This is definitely the case if f_1 is a concatenation of subfunctions and if AS can compute such a subfunction f_1.

The actual attack then proceeds as follows. The attacker AS waits until a genuine user presents a valid sample. The attack is similar to the center-search attack, only now AS will not flip bits but simply replace them with a known value, e.g., one. He will do this until the sample no longer matches. Then AS already knows that the last bit he replaced was not one, and he will restore that bit. Then he continues to substitute the bits one by one, carefully observing whether the sample matches or not and learning all the bits. The first bits that were replaced to invalidate the sample can be learned by simply restoring them.

C. Attacker = DB or M

The attacker is the database DB or the matcher M, who communicates with the authentication server AS only. These attackers cannot achieve any of the attack goals individually because their blackboxes give output, which cannot be influenced by the attacker, before receiving input. If these entities do not collude with other entities they are simply passive attackers and by Assumption 2 they cannot mount any attacks.

Including the sensor S: If the sensor colludes with the database or the matcher, some attacks are trivial: the provided sample is known and thus it is also easy to trace a user based on the provided sample or identity.

A powerful attacker is the combination of M and S, as was shown in Section III-C. Because the sensor can send any input and any identity, the attacker does not have to wait for a matching sample. The same center-search attack can be performed as in the case where AS and S are the attacker.

D. Attacker = AS and DB

The attackers (AS and DB) receive fresh input from S and communicate with the matcher. They can search the entire database and turn to identification, although the protocol could be designed to operate in verification mode.

The input sample b'_i can be learned in the same way as b_i was learned in the attacks of AS, if the same conditions hold. Then, depending on the implementation, the attacker can learn the entire database because DB will return any b_i and AS will manipulate it until all bits are known.

E. Attacker = AS and M

The attack goal with the highest impact is to learn the reference b_i from the database. Depending on how M implements its functionality this can be a very powerful attacker, e.g., if M possesses decryption keys for encrypted samples/templates, as was the case in the schemes analyzed in Sections III-A, III-B and III-C.

F. Attacker = DB and M

In this combination of attackers, DB will manipulate its output so that it can be of use to M. The relevant attack goals are to learn b'_i and to trace U_i.

G. Attacker = AS and DB and M

In this particular case, the attacker is a combination of AS, DB and M, and the goal is to learn b'_i. If the reference b_i is not stored in the clear by the database, the attacker may want to learn b_i as well. Tracing U_i is almost trivial because the attacker can perform a search (identification) on the database DB. The attack goals are easily reached if the data can be decomposed as explained in the attacks of AS.

V. CONCLUSION

Biometric authentication protocols that are found in the literature are usually designed in the honest-but-curious model, assuming that there are no malicious insider adversaries. In this paper, we have challenged that assumption and shown how some existing protocols are not secure against such adversaries. Such analysis is extremely relevant in the context of independent database providers. Much attention was given to an authentication server attacker, which is a central and powerful entity in our model. To prevent the attacks that were presented, stronger enforcement of the protocol design is needed: many attacks succeed because transactions can be duplicated or manipulated.

REFERENCES

[1] J.-P. M. G. Linnartz and P. Tuyls, "New shielding functions to enhance privacy and prevent misuse of biometric templates," in AVBPA, ser. LNCS, J. Kittler and M. S. Nixon, Eds., vol. 2688. Springer, 2003, pp. 393–402.

[2] I. Buhan, J. Doumen, P. H. Hartel, and R. N. J. Veldhuis, "Fuzzy extractors for continuous distributions," in ASIACCS, F. Bao and S. Miller, Eds. ACM, 2007, pp. 353–355.

[3] Y. Dodis, L. Reyzin, and A. Smith, "Fuzzy extractors: How to generate strong keys from biometrics and other noisy data," in Advances in Cryptology – EUROCRYPT 2004, ser. LNCS, C. Cachin and J. Camenisch, Eds., vol. 3027. Springer, 2004, pp. 523–540.

[4] G. Davida, Y. Frankel, and B. Matt, "On enabling secure applications through off-line biometric identification," Proc. of the IEEE Symp. on Security and Privacy – S&P '98, pp. 148–157, May 1998.

[5] A. Juels and M. Wattenberg, "A fuzzy commitment scheme," in CCS '99: Proc. of the 6th ACM Conf. on Computer and Communications Security. New York, NY, USA: ACM Press, 1999, pp. 28–36.

[6] A. Juels and M. Sudan, "A fuzzy vault scheme," in Proc. of IEEE Int. Symp. on Information Theory, Lausanne, Switzerland, A. Lapidoth and E. Teletar, Eds. IEEE Press, 2002, p. 408.

[7] N. K. Ratha, J. H. Connell, and R. M. Bolle, "Enhancing security and privacy in biometrics-based authentication systems," IBM Systems J., vol. 40, no. 3, pp. 614–634, 2001.


[8] N. Ratha, J. Connell, R. Bolle, and S. Chikkerur, "Cancelable biometrics: A case study in fingerprints," in Pattern Recognition, 2006. ICPR 2006. 18th Int. Conf. on, vol. 4, 2006, pp. 370–373.

[9] N. K. Ratha, S. Chikkerur, J. H. Connell, and R. M. Bolle, "Generating cancelable fingerprint templates," IEEE Trans. Pattern Anal. Mach. Intell., vol. 29, no. 4, pp. 561–572, 2007.

[10] X. Boyen, "Reusable cryptographic fuzzy extractors," in CCS '04: Proc. of the 11th ACM Conf. on Computer and Communications Security. New York, NY, USA: ACM, 2004, pp. 82–91.

[11] K. Simoens, P. Tuyls, and B. Preneel, "Privacy weaknesses in biometric sketches," in 2009 30th IEEE Symp. on Security and Privacy, May 2009, pp. 188–203.

[12] I. Buhan, J. Breebaart, J. Guajardo, K. de Groot, E. Kelkboom, and T. Akkermans, "A quantitative analysis of crossmatching resilience for a continuous-domain biometric encryption technique," in First Int. Workshop on Signal Processing in the EncryptEd Domain, SPEED 2009, 2009.

[13] A. Nagar and A. Jain, "On the security of non-invertible fingerprint template transforms," in Information Forensics and Security, 2009. WIFS 2009. First IEEE Int. Workshop on, 2009, pp. 81–85.

[14] L. Rila and C. J. Mitchell, "Security protocols for biometrics-based cardholder authentication in smartcards," in ACNS, ser. LNCS, J. Zhou, M. Yung, and Y. Han, Eds., vol. 2846. Springer, 2003, pp. 254–264.

[15] J. Bringer, H. Chabanne, T. A. M. Kevenaar, and B. Kindarji, "Extending match-on-card to local biometric identification," in Biometric ID Management and Multimodal Communication, BioID-Multicomm 2009, ser. LNCS, J. Fierrez, J. Ortega-Garcia, A. Esposito, A. Drygajlo, and M. Faundez-Zanuy, Eds., vol. 5707. Springer, 2009, pp. 178–186.

[16] P. Tuyls, B. Škorić, and T. Kevenaar, Eds., Security with Noisy Data: Private Biometrics, Secure Key Storage and Anti-Counterfeiting. Springer-Verlag London, 2007.

[17] B. Schoenmakers and P. Tuyls, Private Profile Matching, ser. Philips Research Book Series. Springer-Verlag, New York, 2006, vol. 7, pp. 259–272.

[18] J. Bringer, H. Chabanne, D. Pointcheval, and Q. Tang, "Extended private information retrieval and its application in biometrics authentications," in CANS, ser. LNCS, F. Bao, S. Ling, T. Okamoto, H. Wang, and C. Xing, Eds., vol. 4856. Springer, 2007, pp. 175–193.

[19] J. Bringer, H. Chabanne, M. Izabachene, D. Pointcheval, Q. Tang, and S. Zimmer, "An application of the Goldwasser-Micali cryptosystem to biometric authentication," in ACISP, ser. LNCS, J. Pieprzyk, H. Ghodosi, and E. Dawson, Eds., vol. 4586. Springer, 2007, pp. 96–106.

[20] M. Barbosa, T. Brouard, S. Cauchie, and S. M. de Sousa, "Secure biometric authentication with improved accuracy," in ACISP, ser. LNCS, Y. Mu, W. Susilo, and J. Seberry, Eds., vol. 5107. Springer, 2008, pp. 21–36.

[21] J. Bringer and H. Chabanne, "An authentication protocol with encrypted biometric data," in AFRICACRYPT, ser. LNCS, S. Vaudenay, Ed., vol. 5023. Springer, 2008, pp. 109–124.

[22] A. Stoianov, "Cryptographically secure biometric," in SPIE Biometric Technology for Human Identification VII, vol. 7667, 2010.

[23] P. Failla, Y. Sutcu, and M. Barni, "eSketch: a privacy-preserving fuzzy commitment scheme for authentication using encrypted biometrics," in Proc. of the 12th ACM Workshop on Multimedia and Security (MMSec'10). ACM, 2010, pp. 241–246.

[24] M. D. Raimondo, M. Barni, D. Catalano, R. D. Labati, P. Failla, T. Bianchi, D. Fiore, R. Lazzeretti, V. Piuri, F. Scotti, and A. Piva, "Privacy-preserving fingercode authentication," in Proc. of the 12th ACM Workshop on Multimedia and Security (MMSec'10). ACM, 2010, pp. 231–240.

[25] J. Bringer, H. Chabanne, and K. Simoens, "Blackbox security of biometrics (invited paper)," in Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), 2010 6th Int. Conf. on, 2010, pp. 337–340.

[26] Q. Tang, J. Bringer, H. Chabanne, and D. Pointcheval, "A formal study of the privacy concerns in biometric-based remote authentication schemes," in ISPEC, ser. LNCS, L. Chen, Y. Mu, and W. Susilo, Eds., vol. 4991. Springer, 2008, pp. 56–70.

[27] A. K. Jain, P. Flynn, and A. A. Ross, Eds., Handbook of Biometrics. Springer, 2008.

[28] S. Goldwasser and S. Micali, "Probabilistic encryption and how to play mental poker keeping secret all partial information," in STOC. ACM, 1982, pp. 365–377.

[29] K. Crammer and Y. Singer, "On the algorithmic implementation of multiclass kernel-based vector machines," J. of Machine Learning Research, vol. 2, pp. 265–292, 2001.

[30] P. Paillier, "Public-key cryptosystems based on composite degree residuosity classes," in EUROCRYPT, ser. LNCS, J. Stern, Ed., vol. 1592. Springer, 1999, pp. 223–238.

[31] L. Blum, M. Blum, and M. Shub, "A simple unpredictable pseudo-random number generator," SIAM J. Comput., vol. 15, no. 2, pp. 364–383, 1986.

[32] M. Blum and S. Goldwasser, "An efficient probabilistic public-key encryption scheme which hides all partial information," in CRYPTO, ser. LNCS, G. Blakley and D. Chaum, Eds., vol. 196. Springer, 1984, pp. 289–302.