Facial Image and Voice Data based One-Time Password (FV-OTP) Mechanism for e-Financial Authentication System on a Smart Phone

You Joung Ham, Won-Bin Choi, Hyung-Woo Lee
School of Computer Engineering, Hanshin Univ., 411, Yangsan-dong, Osan, Gyeonggi Province, 447-791, Rep. of Korea.
e-mail: [email protected], [email protected], [email protected]
Abstract— The number of Internet banking transactions and e-financial services based on publicly authenticated certificates on smart phones has increased rapidly. On an iOS-based iPhone, we can use services such as wireless Internet access and Web surfing for information retrieval, and we can take advantage of electronic banking services without the constraints of time and location. However, personal and financial information stored in a smart phone can also be disclosed to outside malicious attackers. Therefore, it is necessary to correctly authenticate the smart phone user's real identity when using wireless e-financial services. In this study, we propose a new one-time password mechanism (FV-OTP) that uses user-related multimedia, such as image and voice data, captured by the multimedia modules embedded in a smart phone device. The proposed one-time password value is created securely from both a facial image obtained through the camera module and voice information input through the microphone mounted on the smart phone, providing secure authentication services. Using the proposed FV-OTP mechanism, we can construct an enhanced authentication system that provides secure e-financial transactions on a smart phone.
Keywords- e-Financial service, Security, Authentication, One-
time password, Facial image, Voice, Smart Phone
I. INTRODUCTION
There are ongoing studies that attempt to incorporate voice information in a mobile environment such as a smart phone [1,2,3]. However, when a smart phone is used for electronic banking services, the vulnerability of such banking services is revealed by the possible leakage of personal information and malicious attacks on user authentication [4]. For these reasons, existing e-banking services related to internet banking recommend the use of multimedia based one-time passwords (OTP) and promote service standardization as a way to strengthen user authentication [5,6]. In fact, since e-banking services can be exploited through illegally authorized terminal devices, a stricter user authentication process must be provided and applied.
1 Corresponding Author: Hyung-Woo Lee is with the School of
Computer Engineering, Hanshin Univ., 411, Yangsan-dong, Osan,
Gyyunggi Province, 447-791, Rep. of Korea. (e-mail:
The pre-existing OTP approach used in smart phone based e-banking services does not provide any procedure for verifying that an OTP token belongs to the actual smart phone owner, and it is subject to man-in-the-middle (MITM) attacks on OTP information. A technological approach is therefore needed to compensate for this vulnerability [6].
In an attempt to provide a stronger authentication process to smart phone users, the present study proposes a method by which a final OTP value is generated from the user's facial image and voice data in an OTP-based authentication process and used for smart phone user authentication. The proposed approach can help prevent illegal users' bypass attacks on smart phone-based services and provides a multi-factor authentication feature on smart phones.
II. OTP AND MULTIMEDIA BASED AUTHENTICATION
A. Vulnerability of User Authentication on Smart Phone
Recently, the frequent occurrence of smart phone-related security incidents has aroused widespread social concern, especially over the leakage of personal information stored in terminal devices (handsets). Smart phone users may encounter problems caused by malicious code, such as remote control, operational disturbance, and billing inducement related to e-banking services. Moreover, when software installed on a smart phone runs in a multitasking fashion, there is a potential for personal information to leak out, and technical countermeasures against this problem are therefore required.
Security vulnerabilities of smart phone handsets include malicious-code infection, data leakage, phone misuse/abuse, and phone loss/theft; as potential security vulnerabilities in public or transport networks, data forgery/falsification and leakage can be caused by attacks on the AP during Wi-Fi based communications. In addition, while using the internet on a smart phone, the user is exposed to the risk of DoS/DDoS attacks by malicious code, authentication bypassing, or account takeover [13].
Therefore, solutions to these security problems should involve strengthening smart phone user authentication. Stronger authentication mechanisms should be used, with a focus on access to the internal information in smart phones.
B. Security of One-Time Password Mechanism
Existing one-time password (OTP) technologies [5,6] generate passwords that are valid for a single use. OTP generation technologies are divided into two main approaches: synchronous and asynchronous methods. The asynchronous approach to OTP generation works using a challenge-response mechanism and implements an authentication process based on the response to a challenge value. This approach needs no synchronization with a server, but it requires user input and sometimes causes network overload. The synchronous OTP approach works by time synchronization, event synchronization, or event-time synchronization and requires an accurate synchronization process between an OTP token and the authentication server. The approach is designed to generate a password at specific time intervals based on time information synchronized between the server and the OTP device [5,6]. However, this synchronization approach is vulnerable to MITM attacks and restricted by cool-down time [13].
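As a concrete illustration of the time-synchronization approach described above (not the paper's FV-OTP), the following sketch implements a generic time-stepped OTP in the style of RFC 4226/6238; the secret, step length, and digit count are illustrative choices:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, now=None) -> str:
    """Time-synchronized OTP in the style of RFC 4226/6238 (HMAC-SHA1)."""
    # counter = number of elapsed time steps, shared implicitly by token and server
    counter = int((time.time() if now is None else now) // timestep)
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # dynamic truncation: the last nibble selects a 4-byte window in the digest
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# both sides derive the same value as long as their clocks agree on the step
print(totp(b"12345678901234567890", now=59))   # RFC 6238 test time -> 287082
```

Because the counter is derived purely from the clock, no challenge message is needed, but an attacker who relays the value within the same time window can replay it, which is the MITM weakness noted above.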
C. Facial Image and Voice Data based Authentication
Recently there have been ongoing studies that attempt to incorporate voice information into the one-time password generation process [7]. This yields a multi-factor authentication approach, which requires the presentation of "two or more" of the three authentication factors [8]. Recently, an image-based two-factor authentication mechanism was also proposed to provide confidence in mobile applications [9].
In order to strengthen security and remedy user authentication vulnerabilities in smart phone-based e-banking services, which are becoming a worldwide issue, this study proposes an FV-OTP mechanism that grafts a person's unique facial image and voice information onto the OTP technologies used in pre-existing e-banking services. The proposed architecture is designed to make up for the vulnerabilities of smart phone user authentication and to strengthen security: a one-time password is generated with a user's facial image and voice information added to an existing authentication system which relies only on a user ID and password. As a result, this mechanism is designed and implemented to strengthen smart phone user authentication into a multi-factor authentication system.
III. PROPOSED FV-OTP BASED AUTHENTICATION
A. Architecture of FV-OTP based User Authentication
To complete the user authentication process, e-banking service users read a challenge value received from the server in their own voice using the microphone in their smart phone and have an OTP generated from the captured voice information. Fig. 1 shows the detailed procedures for generating an OTP from both a user's facial image and voice information for FV-OTP authentication.
As a preliminary step, smart phone users register their own ID and password in advance with the server, in the same way as when using the existing e-banking services. On the assumption that the UAC's ID and password information is stored in the server, the approach designed and implemented in this study does not transmit such pre-registered password information via a network in the actual user authentication process, but performs user authentication securely through the FV-OTP mechanism by using the information only in the client terminal and server.
The server generates a random challenge value and transmits it to the user device (UAC). In Step 1, the UAC captures a facial image based on the challenge value via the camera module mounted on the smart phone and sends the information to the server, which then performs an authentication process for the client terminal based on the information received from the UAC and transmits the result to the UAC using a random value generated in the server. The user then also inputs his/her own voice data based on the challenge value received from the server.
Lastly, after validating the information received from the server as a mutual authentication procedure, the UAC generates an FV-OTP value using the input voice information and the challenge value received from the server, and finally returns this FV-OTP value to the server for verification. The server performs the final authentication process for the UAC by verifying this OTP value.
Figure 1. The Entire Architecture of Proposed FV-OTP based
Authentication Mechanism
B. FV-OTP Generation Mechanism
Detailed steps for FV-OTP-based user authentication are as follows:

Step 1: The user captures his/her own facial image (Fu) using the camera module on the smart phone. The UAC transmits the user's ID (IDu), the facial image (Fu), and a randomly generated value (Rc) to the server. The Rc value is generated by XORing the user's uniquely generated hash value Hs with H( Fu | IDu ). The user sends the message given in equation (1) below to the server. The server then checks its database to determine whether the user is registered. If the user is not registered, the FV-OTP generation process is terminated. If the IDu received from the UAC corresponds to a registered user ID, the server generates a random number (Rs) and then a challenge value (Chal) using the Rc received from the UAC, after comparing the facial image against the one previously stored in the server-side DB for image-based sender authentication, as in equation (2) below, and transmits the challenge value to the UAC.
IDu == IDu*, PWu == PWu*, Fu == Fu*
Hs = H( IDu | PWu )
Rc = H( Fu | IDu ) ⊕ Hs
Message = { Rc | IDu | Fu} (1)
Rc* = H( Fu* | IDu* ) ⊕ Hs*, Rc == Rc*, Rs == Rs*
Ls = H( Rc ⊕ Hs* ⊕ Rs ) and Chal = ( Hs* ⊕ Ls) (2)
The Chal value is given as a combination of numbers or letters and displayed on the client's smart phone terminal. The random numbers Rc and Rs are selected by the client and server respectively, and both values change whenever FV-OTP-based user authentication is performed, as shown in Fig. 2.
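The Step 1 exchange can be sketched as follows, under several assumptions not fixed by the paper: SHA-256 stands in for the hash H, '|' concatenation is modeled by joining byte strings, ⊕ is bytewise XOR, and the raw ID, password, and facial image are first reduced to 32-byte digests so the XOR operands have equal length. All names and values below are illustrative:

```python
import hashlib
import secrets

def H(*parts: bytes) -> bytes:
    # SHA-256 over '|'-joined inputs stands in for the paper's hash H( a | b )
    return hashlib.sha256(b"|".join(parts)).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# client (UAC): raw inputs pre-hashed to 32 bytes (our assumption)
IDu = H(b"user-id")
PWu = H(b"password")
Fu  = H(b"captured-facial-image")

Hs = H(IDu, PWu)             # Hs = H( IDu | PWu )
Rc = xor(H(Fu, IDu), Hs)     # Rc = H( Fu | IDu ) ^ Hs
message1 = (Rc, IDu, Fu)     # Message = { Rc | IDu | Fu }          ... (1)

# server: with the registered IDu*, Fu*, Hs* it can check the sender
assert H(Fu, IDu) == xor(Rc, Hs)    # Rc == H( Fu* | IDu* ) ^ Hs*

Rs   = secrets.token_bytes(32)      # fresh server-side random number
Ls   = H(xor(xor(Rc, Hs), Rs))      # Ls = H( Rc ^ Hs* ^ Rs )
Chal = xor(Hs, Ls)                  # Chal = Hs* ^ Ls                ... (2)
```

Note that Chal masks Ls with Hs, so only a client that knows Hs = H( IDu | PWu ) can recover Ls in Step 2.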
Figure 2. FV-OTP Step 1 for Image based User Authentication
Step 2: The next step is to authenticate the user using voice data. The UAC reads the Chal value received from the server in the user's voice via a microphone in the smart phone handset. First, the client obtains the Ls* value by XORing Chal with the Hs value, since Ls equals Chal ⊕ Hs. In this study, the user is allowed to enter voice data on the Chal value for five seconds, so that an Ai value is generated. As shown in equation (3), a hash value, Au = H(( Ai ⊕ Fu ) | ( IDu ⊕ Hs )), is generated using the voice information (Ai) input by the user, the previously captured facial image (Fu), the UAC user's ID (IDu), and the hashed value Hs = H( IDu | PWu ) generated by the client previously. The Ls* value recalculated on the client side from the Chal value received from the server is then used to generate a response value Cu = H( Ai | Ls* | Au ), as given in equation (4). As a response, Cu is sent to the server together with the user-input voice data value (Ai) as a concatenated Message = { Cu | Ai }, as shown in Fig. 3.
Ai = voice data
Ls* == Chal ⊕ Hs
Au = H(( Ai ⊕ Fu ) | ( IDu ⊕ Hs )) (3)
Cu = H(Ai | Ls* | Au ) (4)
Message = { Cu | Ai }
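Under the same illustrative assumptions as before (SHA-256 for H, byte-string concatenation for '|', all raw inputs pre-hashed to 32 bytes), the client side of Step 2 might look like the following sketch; the Step 1 state is recomputed inline to keep the example self-contained:

```python
import hashlib
import secrets

def H(*parts): return hashlib.sha256(b"|".join(parts)).digest()
def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

# illustrative state carried over from Step 1 (all inputs pre-hashed to 32 bytes)
IDu, PWu, Fu = H(b"user-id"), H(b"password"), H(b"captured-facial-image")
Hs = H(IDu, PWu)
Rc = xor(H(Fu, IDu), Hs)
Rs = secrets.token_bytes(32)
Ls = H(xor(xor(Rc, Hs), Rs))
Chal = xor(Hs, Ls)                # challenge received from the server

# client: recover Ls* from the challenge, then bind voice + face into Au
Ls_star = xor(Chal, Hs)           # Ls* = Chal ^ Hs
assert Ls_star == Ls

Ai = H(b"five-second voice sample of Chal")   # voice digest (assumption)
Au = H(xor(Ai, Fu), xor(IDu, Hs))   # Au = H(( Ai ^ Fu ) | ( IDu ^ Hs ))  ... (3)
Cu = H(Ai, Ls_star, Au)             # Cu = H( Ai | Ls* | Au )             ... (4)
message2 = (Cu, Ai)                 # Message = { Cu | Ai }
```

Because Au mixes the voice digest with the facial image and the secret Hs, a transcript of { Cu | Ai } alone does not reveal the password material.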
Figure 3. FV-OTP Step 2 for Audio based User Authentication
The server performs a verification process and an integrity check for the Cu and Ai values received from the UAC. As shown in equation (5), a value Au* = H(( Ai* ⊕ Fu* ) | ( IDu* ⊕ Hs* )) is generated by the server using the previously stored user ID (IDu*), the hashed identity value Hs* stored in the database, the facial image value (Fu*) stored in the server DB, and the received voice data value (Ai*). As given in equation (6), a value Cu* = H( Ai* | Ls | Au* ) is then generated from the server-produced Ls value and compared with the Cu value received from the UAC to provide mutual authentication between client and server. If the results of mutual authentication are consistent, the FV-OTP generation process is conducted. As can be seen from equation (7), Ts = Cu* ⊕ Rs ⊕ H( Rc | Au* ), the server generates a one-time token Ts for the FV-OTP procedure using the server-side random number Rs and sends the token Ts to the UAC. Expressions for this step are as follows:
Ai* = Ai
Au* = H(( Ai* ⊕ Fu* ) | ( IDu* ⊕ Hs* )) (5)
Cu* = H( Ai* | Ls | Au* ) (6)
Cu* == Cu
Ts = Cu* ⊕ Rs ⊕ H( Rc | Au* ) (7)
Therefore, we can perform both a mutual authentication
procedure and a one-time token generation for which voice
information is used.
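The server-side check of equations (5)-(7) can be sketched as below, again with SHA-256 standing in for H and every raw input pre-hashed to 32 bytes (illustrative assumptions); the client state is recomputed inline so the example runs on its own:

```python
import hashlib
import secrets

def H(*parts): return hashlib.sha256(b"|".join(parts)).digest()
def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

# illustrative protocol state up to the end of Step 2 (inputs pre-hashed, 32 bytes)
IDu, PWu, Fu = H(b"user-id"), H(b"password"), H(b"captured-facial-image")
Hs = H(IDu, PWu)
Rc = xor(H(Fu, IDu), Hs)
Rs = secrets.token_bytes(32)
Ls = H(xor(xor(Rc, Hs), Rs))
Ai = H(b"five-second voice sample")
Au = H(xor(Ai, Fu), xor(IDu, Hs))
Cu = H(Ai, Ls, Au)                  # response {Cu | Ai} received from the UAC

# server: recompute Au* and Cu* from stored values, verify, then build the token
Au_star = H(xor(Ai, Fu), xor(IDu, Hs))      # Au* = H(( Ai* ^ Fu* ) | ( IDu* ^ Hs* ))  ... (5)
Cu_star = H(Ai, Ls, Au_star)                # Cu* = H( Ai* | Ls | Au* )                ... (6)
assert Cu_star == Cu                        # mutual-authentication check
Ts = xor(xor(Cu_star, Rs), H(Rc, Au_star))  # Ts = Cu* ^ Rs ^ H( Rc | Au* )            ... (7)
```

The token Ts carries the fresh Rs blinded by values only the legitimate client can reproduce, which is what lets the client extract Rs* in Step 3.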
Step 3: As shown in equation (8), the UAC recovers the server-selected random value Rs* = Ts ⊕ Cu ⊕ H( Rc | Au ) from the Ts value received from the server. In this step, we can prove equation (8), since Rs* = Cu* ⊕ Rs ⊕ H( Rc | Au* ) ⊕ Cu ⊕ H( Rc | Au ) = Rs, because Cu = Cu* and Au = Au* when authentication succeeds. In terms of equation (9), a server re-authentication process is conducted to determine whether Hs* ⊕ H( Rc ⊕ Hs ⊕ Rs* ) = Hs* ⊕ Ls produces the same value as the Chal received from the server at Steps 1 and 2. Further, as given in equation (10), it is possible to generate the one-time password value (FVOTPu), which is then sent to the server, as shown in Fig. 4.
Rs* = Ts ⊕ Cu ⊕ H( Rc | Au ) (8)
Chal* = Hs*⊕ H (Rc ⊕ Hs ⊕ Rs*) = Hs* ⊕ Ls (9)
FVOTPu = H( Au ⊕ H( Ts | Rs* | Chal )) (10)
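The Step 3 client-side computations (8)-(10) can be sketched in the same illustrative style (SHA-256 for H, pre-hashed 32-byte inputs); the earlier protocol state is rebuilt inline so the cancellation in equation (8) can be checked directly:

```python
import hashlib
import secrets

def H(*parts): return hashlib.sha256(b"|".join(parts)).digest()
def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

# illustrative protocol state after the server has issued the token Ts
IDu, PWu, Fu = H(b"user-id"), H(b"password"), H(b"captured-facial-image")
Hs = H(IDu, PWu)
Rc = xor(H(Fu, IDu), Hs)
Rs = secrets.token_bytes(32)
Ls = H(xor(xor(Rc, Hs), Rs))
Chal = xor(Hs, Ls)
Ai = H(b"five-second voice sample")
Au = H(xor(Ai, Fu), xor(IDu, Hs))
Cu = H(Ai, Ls, Au)
Ts = xor(xor(Cu, Rs), H(Rc, Au))            # token received from the server

# client: recover Rs*, re-authenticate the server, derive the final FV-OTP
Rs_star = xor(xor(Ts, Cu), H(Rc, Au))       # Rs* = Ts ^ Cu ^ H( Rc | Au )          ... (8)
assert Rs_star == Rs                        # the XOR terms cancel
Chal_star = xor(Hs, H(xor(xor(Rc, Hs), Rs_star)))  # Hs* ^ H( Rc ^ Hs ^ Rs* )       ... (9)
assert Chal_star == Chal                    # server re-authentication succeeds
FVOTPu = H(xor(Au, H(Ts, Rs_star, Chal)))   # H( Au ^ H( Ts | Rs* | Chal ))        ... (10)
```

Only a client that holds Au (and hence the face, voice, ID, and password material behind it) can strip the blinding from Ts and pass the Chal* re-check.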
Figure 4. FV-OTP Step 3 for OTP based User Authentication
Step 4: As given in equation (11), the server verifies the facial image and voice data based one-time password (FVOTPu) received from the UAC and completes the smart phone user authentication process. The FVOTPu value delivered from the UAC is one-time password information generated securely in the mutual authentication process from the user's voice-derived value (Au) based on the audio input Ai, the Chal value sent from the server, the token Ts, and the Rs* value calculated by the client.
FVOTPu* = H( Au* ⊕ H( Ts | Rs | Chal )) (11)
FVOTPu* == FVOTPu
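Putting Steps 1-4 together, a full exchange can be simulated end-to-end under the same illustrative assumptions (SHA-256 for H, '|' as byte concatenation, all raw inputs pre-hashed to 32 bytes; the function and value names are ours, not the paper's):

```python
import hashlib
import secrets

def H(*parts): return hashlib.sha256(b"|".join(parts)).digest()
def xor(a, b): return bytes(x ^ y for x, y in zip(a, b))

def fv_otp_round(id_raw, pw_raw, face_raw, voice_raw):
    """Simulate one full FV-OTP exchange; returns (client OTP, server OTP)."""
    IDu, PWu, Fu = H(id_raw), H(pw_raw), H(face_raw)   # pre-hashed inputs (assumption)
    Hs = H(IDu, PWu)
    # Step 1: client sends {Rc | IDu | Fu}; server answers with Chal
    Rc = xor(H(Fu, IDu), Hs)
    Rs = secrets.token_bytes(32)
    Ls = H(xor(xor(Rc, Hs), Rs))
    Chal = xor(Hs, Ls)
    # Step 2: client binds voice + face into Au and responds with {Cu | Ai}
    Ai = H(voice_raw)
    Au = H(xor(Ai, Fu), xor(IDu, Hs))
    Cu = H(Ai, xor(Chal, Hs), Au)              # Ls* = Chal ^ Hs
    # server verifies Cu* == Cu and issues the one-time token Ts
    Au_s = H(xor(Ai, Fu), xor(IDu, Hs))
    assert H(Ai, Ls, Au_s) == Cu
    Ts = xor(xor(Cu, Rs), H(Rc, Au_s))
    # Step 3: client recovers Rs*, re-checks Chal, derives FVOTPu
    Rs_c = xor(xor(Ts, Cu), H(Rc, Au))
    assert xor(Hs, H(xor(xor(Rc, Hs), Rs_c))) == Chal
    otp_client = H(xor(Au, H(Ts, Rs_c, Chal)))
    # Step 4: server recomputes FVOTPu* and compares
    otp_server = H(xor(Au_s, H(Ts, Rs, Chal)))
    return otp_client, otp_server

c, s = fv_otp_round(b"aaa", b"pw", b"face", b"voice")
assert c == s   # authentication succeeds only when both sides agree
```

Because a fresh Rs is drawn each round, two runs with identical inputs yield different OTP values, which is the replay-resistance property claimed below.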
In consequence, the proposed approach can be recommended as a solution to user authentication problems in the existing smart phone environment, and since the approach uses the user's voice data, it provides a secure way to prevent MITM and replay attacks. The approach allows the authentication process to be performed with only a cryptographically secure hash function, a random number generator, and an XOR function.
IV. IMPLEMENTATION RESULTS
A. Implementation Results
An iPhone running iOS 4.2 was used to test the implementation of the proposed mechanism. The Apple-provided Xcode 3.2.5 development environment was employed, and a MySQL-based OTP server was implemented. The client receives a Chal value generated by the server, and the user reads the numeric information of the value in his/her own voice for five seconds via the microphone controller module in the iOS-based smart phone. The proposed mechanism allows this numeric information to be captured by the smart phone terminal, and a mutual authentication process takes place between the client and server. Once a one-time token value generated from the voice information is sent back to the smart phone, the server works to generate the final one-time password from the voice information. Fig. 5 shows start-up display pages with the OTP mechanism implemented.
Figure 5. Implementation of Face &Voice Based OTP Authentication
The user is asked to enter his/her ID and password information registered with the server, as in the existing authentication process. The user's password information is used only in
his/her smart phone handset without being transmitted to the
server via a network. If the user ID input is sent to the server
with a random value (Rc) generated from the user’s facial
image in the client terminal, the server checks the input value
first against the ID list stored in its database. If the ID is found
unregistered, the subsequent process is not activated and an
alert message appears on the smart phone display as shown in
Fig. 5.
After the user identity is confirmed, the server generates a
Chal value and sends it to the smart phone terminal. If the
start-up screen appears, the user will see the challenge value of
‘205871198’ received from the server and can read its numeric
information in his/her own voice using a built-in microphone
device. In this study, users were given a 9-digit number and
asked to enter it as voice information within a time length of
five seconds. With information input done, the numeric
expressions presented above were used to generate a Cu value,
which was then transmitted to the server.
The random value (Rc) of 16807 is generated for the smart phone user's ID "aaa" and transmitted to the server, and the Chal value of 205871198 is received as generated and transmitted by the server. The client generates the Au and Cu values by reading the user's voice information input from the buffer and sends the values to the server.
Subsequently, the client module extracts an Rs value using the one-time token value (Ts) generated and sent by the server and obtains the same Chal value through the verification process above. In the end, an OTP value is generated using the voice information and transmitted to the server. On the server side, a user ID and random value are received from the client, and the server generates the random value of 879160508. The server then generates the PIN value of 205871198, corresponding to the Chal value, using this random value, and transmits the PIN value to the client terminal. The server performs a process of validating and verifying the Cu value sent from the client and generates and returns a Ts value to the smart phone. Finally, with the OTP value from the client validated and verified, OTP-based authentication in a smart phone environment is completed.
V. SECURITY AND PERFORMANCE ANALYSIS
A. Security Analysis
The one-time password (FV-OTP) mechanism proposed in this study captures a smart phone user's facial image and voice information using both the camera and the microphone in the handset. The proposed mechanism allows an Au value to be generated using an Rs* value which is randomly selected upon each authentication request, together with the ID information (IDu) and password information (PWu). The Au value contains not only the user's identity information, but also a randomly selected hash value (Rc), derived from the user's facial image, which varies with every authentication request. In the Au information transmitted via the network, the user's secret is not transmitted to the server, since both a hash function and an XOR function are used; the one-way property of the hash function makes it difficult to recover the original password. In addition, for the voice information (Ai) in Au, since only the information entered by the user for five seconds is made available, there is a lower possibility of password leakage due to dictionary attacks. Therefore, even if an MITM attack is launched on e-banking services, any password, once generated, cannot be used again later, because an Rc value randomly generated by the client is contained in the Au information transmitted to the server.
The voice information in the Au value is generated from the PIN (Chal) value, which the server produces as a hash using its Rs value and the Rc value sent from the client; therefore it is also available for authentication or non-repudiation of the client.
In order to strengthen the security of the OTP-generating
process, the proposed mechanism has allowed the client to
transmit the Chal value received from the server in an
abbreviated form with Au and voice information Ai by
computing a hash value, instead of sending the Au value
directly to the server. As a result, the server has been allowed
to validate message integrity and verify delivered messages.
We can imagine the case where the server spoofs the client
and performs an authentication process instead of the client
after producing information on its own like at steps 2 and 3
described above. In this case, however, the server should also
generate voice information on a Chal value, instead of the
client. In other words, it should generate as much voice
information as possible for a five-second time length by
spoofing the client.
Accordingly, the server could attempt to generate voice information itself, or otherwise exploit voice information (Ai_old) the client sent in a previously performed transaction. In this instance, however, the server would also have to forge or produce a client random value (Rc) in addition to the Au*_fake information that it must generate on its own. The self-generated fake value (Cu*_fake) is contained in the one-time token the server transmits to the client for self-authentication or FV-OTP generation. A Chal value contained in the fake value is assumed to contain a random value sent by the client in the form of H( Hs* ⊕ Ls ). This implies that the server is not able to launch spoofing attacks on Steps 2 and 3 while posing as the client.
The client extracts or computes an Rs* value using the one-
time token (Ts) transmitted from the server at steps 3 where a
facial image and voice based one-time password is generated.
In addition, the client is allowed to perform hashing with a
client-selected random value (Rc) and compare the random
value with the Chal value received from the server at step 2.
All transactions are organically related to each other, and a
process of mutual authentication is performed for security
enhancement: The client serves to authenticate the server and
vice versa.
Only a one-way hash function, an XOR function, and random values of cryptographically safe length are used for the information transmitted between the client and the server during the authentication process. This design reflects the computational performance of smart phones and effectively strengthens user authentication with a minimum of resources. In order to strengthen the security of OTP approaches used for user authentication in smart phone-based e-banking services, the proposed authentication mechanism asks individual users to enter a server-transmitted PIN value in their own voice via the microphone and performs smart phone user authentication using FV-OTP information through the procedures of mutual authentication and verification. The authentication process can be performed over the network without exposing the user's password information.
B. Performance Analysis
As shown in Table 1, the mechanism proposed in this study and the pre-existing ones were compared in terms of security and authentication functionality. The present study analyzed computational complexity against the existing studies in order to evaluate the performance of the proposed mechanism, based on the number of one-way hash function operations (Th) used at each step.
[Table 1] A Comparative Analysis of Authentication Complexity and Functionality

Process/Step           | Proposed Mechanism | Wang-Li [10] | Yoon-Yoo [11] | Khan et al. [12]
Registration           | 4Th                | 3Th          | 3Th           | 2Th
Login                  | 3Th                | 2Th          | 3Th           | 2Th
Authentication         | 6Th                | 5Th          | 4Th           | 2Th
Mutual Authentication  | O                  | O            | O             | O
Time Synchronization   | X                  | X            | O             | O
FV-OTP Generation      | O                  | X            | X             | X
Biometric Info.        | Face & Voice       | Fingerprint  | Fingerprint   | Fingerprint
The proposed mechanism has been found to have computational complexity similar to the existing mechanisms. In the proposed mechanism, human biometric data such as face and voice are used for user authentication and OTP generation on a smart phone. For biometric information in the authentication process, the existing mechanisms [10,11,12] use fingerprint information, while the proposed mechanism uses the user's facial image and voice information. The new mechanism has been designed with consideration for the convenience of OTP users and the environment in which smart phones are used. It is also applicable to fingerprint information like the existing ones.
VI. CONCLUSION
When smart phones are used for e-banking or internet-based
services, the smart phone environment needs to have more
strengthened user authentication features. One reason is that
such services can be wrongfully used by illegal mobile phones
issued as a result of phone loss or identity theft.
In this study, the proposed mechanism asks smart phone users to send their facial image and voice information in response to a PIN value transmitted from the server, using the camera and microphone in their handset. It enables the server to perform a verification process and then generate a one-time token based on a random value and the user's multimedia information. As a result, the mechanism achieves a significant improvement in both security and authentication performance. The study also presents a way to strengthen user authentication in e-banking services using an image and voice based OTP value generated by the client. When employed in smart phone-based e-banking or internet services, the proposed mechanism is expected to help step up the security of mobile wireless services.
ACKNOWLEDGMENT
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (Grant # 2012R1A1A2004573)
REFERENCES
[1] Voice Authentication: Making Access a Figure of Speech, http://www.computerworld.com/s/article/86897/Making_access_a_figure_of_speech.
[2] "Voice verification - for mobile banking security?", http://www.finextra.com/community/fullblog.aspx?id=3949
[3] Voice PIN 2.0, http://www.voiceverified.com/products.htm
[4] Jacek Lach, "Using Mobile Devices for User Authentication", CN2010, CCIS 79, pp.263-268, 2010
[5] Agnitio, "One-Time Password (OTP) Management Secured with Voice Biometrics", Voice Biometrics White Paper, 2009. http://www.banking-businessreview.com/suppliers/agnitio_voice_biometrics_for_homeland_security/whitepapers/one_time_password_otp_management_secured_with_voice_biometrics
[6] Agnitio, "One-Time Password (OTP) Management Secured with Voice Biometrics", Voice Biometrics White Paper, 2009.
[7] Helena Rif'a-Pous, "A Secure Mobile-Based Authentication System for e-Banking", OTM 2009, Part II. LNCS 5871, pp.848-860, 2009
[8] Two-factor authentication, Wikipedia, http://en.wikipedia.org/wiki/Two-factor_authentication
[9] Confident Multifactor Authentication, Two-Factor Authentication Using Images, http://www.confidenttechnologies.com/products/mobile-phone-authentication-factor.
[10] De-Song Wang, Jian-Ping Li, "A new fingerprint -based remote user authentication scheme using mobile devices", International Conference on Apperceiving Computing and Intelligence Analysis, ICACIA 2009, pp.65-68, 2009.
[11] Yoon E.J., and Yoo K.Y., "A secure chaotic hash-based biometric remote user authentication scheme using mobile devices", APWeb/WAIM 2007, Huang Shan, pp. 612-623, June 2007.
[12] Khan M.K., Zhang J.S., and Wang X.M., "Chaotic hash-based fingerprint biometric remote user authentication scheme on mobile devices", Chaos, Solitons & Fractals, Vol. 35, pp. 519-524, 2008.
[13] Sik-Wan Cho, Hyung-Woo Lee, “Design and Implementation of Voice One-Time Password(V-OTP) based User Authentication Mechanism on Smart Phone,” KIPS Journal, No.2, Vol.18-C, pp.79-88, 2011.