
Liveness Detection Framework Implementation Guide

Copyright © Amazon Web Services, Inc. and/or its affiliates. All rights reserved.

Amazon's trademarks and trade dress may not be used in connection with any product or service that is not Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may or may not be affiliated with, connected to, or sponsored by Amazon.


Table of Contents

Welcome
Cost
  Example cost estimate: 50,000 challenge attempts per month
Architecture overview
Solution components
  Create challenge API workflow
  Put challenge frame API workflow
  Verify challenge response API workflow
Security
  IAM roles
  Cross-origin resource sharing (CORS)
  Security HTTP headers
  Data retention
  File handling
  Tracing
  Amazon Cognito user pools
Design considerations
  Nose challenge
  Pose challenge
  Custom challenge
  Regional deployments
    Supported deployment Regions
AWS CloudFormation template
Automated deployment
  Deployment overview
  Step 1. Launch the stack
  Step 2. Sign in to the web interface
Additional resources
Create a custom challenge
API reference
  Create challenge API
  Put challenge frame API
  Verify challenge response API
Uninstall the solution
  Deleting the AWS CloudFormation stack
  Deleting the Amazon S3 buckets
  Deleting the Amazon DynamoDB table
Source code
Revisions
Contributors
Notices
AWS glossary


Incorporate liveness detection mechanisms into your applications to address spoofing attacks

Publication date: January 2022

Facial recognition has become a widely used mechanism for identity verification applications. It provides a low-friction user experience and a safer approach than password-based alternatives. Even though current technology is capable of identifying a person's face with high accuracy, counterfeiters still circumvent such systems by impersonating other users using static photos, video replays, and masks.

Such vulnerabilities against spoofing attacks can be overcome by augmenting a facial recognition system with some form of liveness detection. Liveness detection is any technique used to identify spoofing attempts by determining whether the source of a biometric sample is a live human being or a fake representation. This is accomplished through algorithms that analyze images captured through cameras (and sometimes other types of sensor data) in order to detect signs of reproduced samples.

The Liveness Detection Framework solution helps you implement liveness detection mechanisms into your applications by means of an extensible architecture. It comprises a set of APIs to process and verify liveness challenges, along with two different types of challenges provided as reference implementations. In addition to those, you can extend the framework and implement your own liveness detection algorithms. This solution also includes a sample web application fully integrated with the APIs. You can use it as a reference to create your own front end that fits your business needs.

This implementation guide describes architectural considerations and configuration steps for deploying Liveness Detection Framework in the Amazon Web Services (AWS) Cloud. It includes instructions to launch and configure the AWS services required to deploy this solution using AWS best practices for security and availability.

The guide is intended for IT architects and developers who have practical experience architecting in the AWS Cloud.


Cost

You are responsible for the cost of the AWS services used while running the solution, which can vary based on the following factors:

• Number of images processed by Amazon Rekognition per month: the solution uses the DetectFaces operation to extract face metadata from each image.
• Amount of data served by Amazon CloudFront: static assets such as HTML, JavaScript, and image files are served by Amazon CloudFront.
• Number of calls to AWS Secrets Manager: API tokens are signed using an AWS Secrets Manager secret.
• Number of images stored in the Amazon Simple Storage Service (Amazon S3) bucket per month: the solution stores all captured user images in Amazon S3.
• Number of Amazon DynamoDB write/read requests per month: the solution records all challenge attempts in DynamoDB.
• Number of Amazon API Gateway requests per month: all solution requests go through API Gateway.
• Number of AWS Lambda invocations per month: the backend logic runs on an AWS Lambda function.
• Number of Amazon Cognito monthly active users.

This solution is based entirely on serverless AWS services. Therefore, when the solution is not in use, you only pay for data stored in S3 and DynamoDB and for the AWS Secrets Manager secret.

We recommend creating a budget through AWS Cost Explorer to help manage costs. For full details, refer to the pricing webpage for each AWS service used in this solution.

Example cost estimate: 50,000 challenge attemptsper month

The following table provides a monthly cost breakdown example for deploying this solution with the default parameters in the US East (N. Virginia) Region, excluding free tier. This example assumes that:

• 50,000 challenge attempts are performed per month
• Only the two provided challenge types are activated
• Challenge attempts are equally divided between the two challenge types (25,000 attempts for each challenge)
• Size for processed images is 480 by 480 pixels, with an average size of 100 KB
• For each nose challenge attempt, 10 images are captured
• 50% of the traffic comes from the United States and 50% comes from Europe and Israel
• 2,000 monthly active users signing in

From the assumptions above, we derive the following:

• For each nose challenge attempt, 10 calls to Amazon Rekognition's DetectFaces API are performed (one per image).
• For each pose challenge attempt, one call to Amazon Rekognition's DetectFaces API is performed.
• For each nose challenge attempt, 12 API calls are performed (one to start the challenge, 10 to send each image, and one to verify the challenge). In total: 25,000 x 12 = 300,000.


• For each pose challenge attempt, three API calls are performed (one to start the challenge, one to send the frame, and one to verify the challenge). In total: 25,000 x 3 = 75,000.
• The total number of API calls is equal to the number of AWS Lambda requests and AWS Secrets Manager API calls, because each API call is backed by the AWS Lambda function and the function uses AWS Secrets Manager.
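The derivation can be checked with a few lines of arithmetic. This sketch simply recomputes the request counts used in the cost table from the assumptions above:

nose_attempts = 25_000
pose_attempts = 25_000

# DetectFaces calls: 10 per nose attempt (one per frame), 1 per pose attempt
detect_faces_calls = nose_attempts * 10 + pose_attempts      # 275,000

# API Gateway requests: 12 per nose attempt, 3 per pose attempt
api_calls = nose_attempts * 12 + pose_attempts * 3           # 300,000 + 75,000 = 375,000

# Each API call is backed by one Lambda invocation and one Secrets Manager call
assert detect_faces_calls == 275_000
assert api_calls == 375_000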

AWS service | Dimensions | Cost
Amazon Rekognition | 275,000 DetectFaces API calls | $275.00
Amazon Cognito | 2,000 monthly active users | $11.00
Amazon CloudFront | 38 GB served | $4.14
AWS Secrets Manager | 1 secret; 375,000 API calls | $2.28
Amazon S3 | 28 GB stored; 275,000 PUT requests | $2.03
Amazon DynamoDB | 0.2 GB data stored; 375,000 write request units; 50,000 read request units | $1.94
Amazon API Gateway | 375,000 REST API requests | $1.31
AWS Lambda | 375,000 requests; 75,000,000 ms compute duration (512 MB memory allocated) | $0.70
TOTAL | | $298.40 / month

Note: Average cost per challenge attempt: $0.005968.


Architecture overview

We leverage Amazon Rekognition to detect the facial details needed to verify the challenge. The solution's architecture is composed of a web application that serves as the user front end, and a serverless backend with APIs that are invoked by the front end.

The client device allows the user to access the sample web application. The sample web application captures user images (frames) using the device's embedded camera and invokes the solution APIs in the AWS Cloud.

Deploying this solution with the default parameters builds the following environment in the AWS Cloud.

Figure 1: Liveness Detection Framework architecture

The AWS CloudFormation template deploys the following infrastructure:

1. An Amazon CloudFront distribution to serve the web application to the client device.
2. An Amazon S3 source bucket to host the sample web application static files (HTML, JavaScript, and CSS).
3. Amazon API Gateway to expose the REST/HTTP API endpoints invoked by the client device.
4. An AWS Lambda function to process API requests. All liveness detection logic runs inside that function.
5. An Amazon DynamoDB table to store information about each user's challenge attempts, such as user ID, timestamp, and challenge-related parameters.
6. An Amazon S3 object storage bucket that holds user images captured by the client device and uploaded via the APIs.
7. Amazon Rekognition for identifying faces in an image along with their position and landmarks, such as eyes, nose, and mouth.
8. AWS Secrets Manager to store the secrets used to sign tokens.
9. An Amazon Cognito user pool to provide user access control to the API calls.


Note: Although the architecture is fully serverless and scalable, with many simultaneous users you can reach the maximum transactions per second (TPS) for Amazon Rekognition. Service quotas vary by AWS Region and can be increased through the AWS Support Center.


Solution components

The solution supports several workflows. These include the create challenge API, put challenge frame API, and verify challenge response API workflows.

Create challenge API workflow

When the user takes an action that requires a challenge, the client application initiates one. The client device invokes the API, passing the user's ID and the image dimensions from the device camera. The API then returns the challenge parameters so that the client device can prompt the user with instructions on how to perform the challenge.

Figure 2: Create challenge API workflow

1. The user opens your app on their client device.

2. The client device accesses the static files hosted on an Amazon S3 bucket, served through an Amazon CloudFront distribution.

3. The client device passes the username and password entered by the user to Amazon Cognito. After successful authentication, Amazon Cognito returns an access token that is used by the client device in all subsequent requests to the API. All endpoints are protected with Amazon Cognito and, therefore, require an access token.

4. The client device issues a POST HTTP request to the API Gateway /challenge endpoint, passing the user ID and the device camera image dimensions. API Gateway forwards the request to the Lambda Challenge function.


5. The Lambda Challenge function receives the request and selects which type of challenge will be presented to the user. Based on the challenge type, it generates the challenge parameter values. It also generates a challenge ID and a security token, signed with a secret stored in AWS Secrets Manager. The Lambda Challenge function stores the challenge parameters in the Amazon DynamoDB Challenges table and returns them to the client device.

6. The client device receives the challenge parameters and shows the user instructions for performing the challenge.

Put challenge frame API workflow

While the user interacts with the challenge, the client device uses the embedded camera to capture one or more images and uploads the frames, one by one, to the API.

Figure 3: Put challenge frame API workflow

1. The user interacts with the camera on their client device while it captures images.

2. The client device issues a PUT HTTP request to the /challenge/{id}/frame API endpoint, passing the image and the security token. Amazon API Gateway forwards the request to the Lambda Challenge function.

3. The Lambda Challenge function validates the security token. If it is valid, it stores the image in the Amazon S3 Frames bucket. It also updates the challenge record in the DynamoDB Challenges table with the image S3 URL.

These steps are repeated for as many images as required by the challenge type, until the user completes all challenge instructions.

Verify challenge response API workflow

After the user successfully completes the challenge instructions, the client device invokes the API for final verification.


Figure 4: Verify challenge response API workflow

1. The user follows the instructions and completes the challenge on the client device.
2. The client device issues a POST HTTP request to the /challenge/{id}/verify API endpoint, passing the security token, to start the challenge verification in the AWS Cloud. Amazon API Gateway forwards the request to the Lambda Challenge function.

3. The Lambda Challenge function validates the security token. If it is valid, it looks up the challenge data in the DynamoDB Challenges table. Then, it invokes Amazon Rekognition to analyze the image(s) stored in the Amazon S3 Frames bucket. The Lambda Challenge function then runs the verification logic specific to the challenge type. The final result (success or fail) is returned to the client device.

4. The client device displays the final result to the user.

During the final verification, the Lambda Challenge function invokes, for each frame image, the DetectFaces operation from Amazon Rekognition Image. For each detected face, the operation returns the facial details. From all details captured from the DetectFaces operation, the solution uses the bounding box coordinates of the face, facial landmark coordinates, pose, and other attributes, such as smile and eyes open or closed.
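For illustration, the following minimal sketch shows how a backend could call the DetectFaces operation on a stored frame using boto3 (the bucket and object key are hypothetical placeholders; the solution's actual implementation lives in the Lambda Challenge function):

import boto3

rekognition = boto3.client('rekognition')

# Analyze a frame previously uploaded to the S3 Frames bucket
response = rekognition.detect_faces(
    Image={'S3Object': {'Bucket': 'frames-bucket', 'Name': 'challenges/abc123/frame-0.jpg'}},
    Attributes=['ALL'],  # include landmarks, pose, smile, eyes open/closed, etc.
)

for face in response['FaceDetails']:
    print(face['BoundingBox'], face['Pose'], face['Smile'], face['EyesOpen'])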


Security

When you build systems on AWS infrastructure, security responsibilities are shared between you and AWS. This shared model reduces your operational burden because AWS operates, manages, and controls the components including the host operating system, the virtualization layer, and the physical security of the facilities in which the services operate. For more information about AWS security, visit AWS Cloud Security.

IAM roles

AWS Identity and Access Management (IAM) roles allow you to assign granular access policies and permissions to services and users on the AWS Cloud. This solution creates IAM roles that grant the solution's AWS Lambda functions access to create Regional resources.

Cross-origin resource sharing (CORS)

The AWS Lambda function that implements the APIs supports CORS HTTP headers, as configured in the Chalice microframework. As a sample implementation, the default configuration allows API calls from any origin. If deployed to production, we recommend that you apply a more fine-grained CORS configuration.
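For example, a stricter Chalice configuration could restrict the allowed origin to your own web application's domain. The following is a minimal sketch (the app name, route body, and origin are placeholders, not the solution's shipped configuration):

from chalice import Chalice, CORSConfig

app = Chalice(app_name='liveness-backend')  # hypothetical app name

cors_config = CORSConfig(
    allow_origin='https://app.example.com',  # restrict to your web app's origin
    allow_headers=['Authorization'],
    max_age=600,
)

@app.route('/challenge', methods=['POST'], cors=cors_config)
def create_challenge():
    # Placeholder handler body
    return {'id': '...', 'token': '...'}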

Security HTTP headers

The sample web application is provided as a reference implementation used for development purposes. If deployed to production, we recommend that the web hosting service support security HTTP headers to prevent attacks such as man-in-the-middle (MITM) and cross-site scripting (XSS). If using Amazon S3 and Amazon CloudFront to host the web application, consider using Lambda@Edge functions to generate the HTTP headers.
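As a minimal sketch, a Lambda@Edge function attached to the distribution's origin-response event could inject common security headers like this (the exact header set should be tuned to your application):

def handler(event, context):
    # Runs on CloudFront origin-response; adds security headers to every response
    response = event['Records'][0]['cf']['response']
    headers = response['headers']

    headers['strict-transport-security'] = [
        {'key': 'Strict-Transport-Security', 'value': 'max-age=63072000; includeSubDomains'}]
    headers['x-content-type-options'] = [
        {'key': 'X-Content-Type-Options', 'value': 'nosniff'}]
    headers['x-frame-options'] = [
        {'key': 'X-Frame-Options', 'value': 'DENY'}]
    headers['content-security-policy'] = [
        {'key': 'Content-Security-Policy', 'value': "default-src 'self'"}]

    return response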

Data retention

The Amazon S3 buckets used in this solution might store sensitive data, such as user images and related metadata. For security reasons, such sensitive data should be stored only long enough to satisfy the business requirements of the application. If the solution is deployed to production, we recommend that you delete user images after they are no longer needed. Consider using lifecycle policies or the Amazon S3 Intelligent-Tiering storage class in the Amazon S3 buckets for automatically expiring objects.
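As a sketch, a lifecycle rule that expires captured frames after a few days could be applied with boto3 (the bucket name, prefix, and retention period are placeholders to adapt to your business requirements):

import boto3

s3 = boto3.client('s3')

s3.put_bucket_lifecycle_configuration(
    Bucket='frames-bucket',  # hypothetical bucket name
    LifecycleConfiguration={
        'Rules': [{
            'ID': 'expire-challenge-frames',
            'Filter': {'Prefix': 'challenges/'},
            'Status': 'Enabled',
            'Expiration': {'Days': 7},  # delete frames once no longer needed
        }]
    },
)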

File handling

The put challenge frame API receives JPEG file content sent by the sample web client application. In a production environment, other untrusted sources could attempt to send malicious content to the API. Therefore, we recommend that you perform additional handling of the file content, such as format and size validation, malware detection, and Content Disarm and Reconstruction (CDR).


Tracing

This solution doesn't include tracing capabilities. Consider using AWS X-Ray. This service collects data about requests that your application serves, and provides tools that you can use to view, filter, and gain insights into that data to identify issues and opportunities for optimization.
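One minimal way to start collecting traces from the Python backend is the AWS X-Ray SDK's patch_all helper, which auto-instruments supported libraries such as boto3 (a sketch, assuming X-Ray is enabled for the Lambda function):

from aws_xray_sdk.core import patch_all, xray_recorder

patch_all()  # auto-instrument boto3/botocore, requests, and other supported libraries

# Optional custom subsegment around a hot code path
with xray_recorder.in_subsegment('verify-challenge'):
    pass  # placeholder for verification logic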

Amazon Cognito user pools

You can add multi-factor authentication (MFA) to a user pool to protect the identity of your users. MFA adds a second authentication method that doesn't rely solely on user name and password. You can choose to use SMS text messages or time-based one-time passwords (TOTP) as second factors in signing in your users. You can also use adaptive authentication with its risk-based model to predict when you might need another authentication factor. Adaptive authentication is part of the user pool advanced security features, which also include protections against compromised credentials. Learn more in Adding multi-factor authentication (MFA) to a user pool and Adding advanced security to a user pool in the Amazon Cognito Developer Guide.


Design considerations

This solution deploys a framework that supports different types of liveness challenges.

The framework backend is implemented in Python and built on top of the Chalice microframework. In the backend, the framework architecture provides all of the API implementations and extension points to integrate logic specifically for your application and custom challenges.

The framework’s front-end web application is implemented using React JavaScript library and TypeScriptsyntax language. The web application is a sample implementation that demonstrates how a clientapplication should interact with the backend APIs and provide a user experience for performing theliveness challenges. Use it as a reference to build a custom web or mobile application.

Important: The sample web application is intended for demonstration purposes only. We strongly recommend that you customize it to best meet your security, performance, and usage standards.

The framework considers the following assumptions about supported liveness challenges:

• To deliver challenge instructions to the user and run the challenge-specific workflow, the front end might require some parameter definitions provided by the backend when a challenge attempt is initiated.
• The challenge verification logic is based on one or more static images from the user, captured by a client device camera. Verification logic cannot rely on videos, only multiple individual frame images.
• The challenge verification logic is based on the following metadata extracted from each image: face bounding boxes, facial landmark coordinates (eyes, nose, mouth, etc.), face pose (pitch, roll, yaw), attributes (gender, age, beard, glasses, mouth open, eyes open, smile), and emotion (angry, calm, confused, disgusted, happy, surprised, sad). For more details about Amazon Rekognition API types, refer to Data types in the Amazon Rekognition Developer Guide.
• The challenge verification logic can be represented as a state machine with one or more states.
• When multiple types of challenge are used, the backend is responsible for defining the selected challenge type when a challenge attempt is initiated by the front end. The selection logic can use metadata provided by the front end.

Based on these assumptions, the framework exposes the following extension points in the form of Python function decorators:

• Challenge type selection logic: This is an application-wide extension point. It is used to define which challenge type a user should complete when the front end initiates a challenge. The challenge selection can be based on custom client metadata provided by the front end. Exposed as the @challenge_type_selector decorator.
• Challenge parameters definition logic: This challenge-specific extension point is used to define the parameter values for a certain challenge attempt. The logic runs when the front end initiates a challenge, immediately after the challenge type is selected. Exposed as the @challenge_params decorator.
• Challenge verification logic: This challenge-specific extension point is used to define how a challenge attempt is verified, based on the challenge parameters and the face metadata extracted from the images. If the challenge requires multiple images, such as video frames, the logic must be defined as a state machine that processes one image at a time. To define the state machine logic, the @challenge_state decorator is exposed.


Included in the framework are two types of liveness challenges (nose challenge and pose challenge), which can be used as-is, customized, or used as a reference for implementing new custom challenges.

Nose challenge

This challenge is an active liveness detection approach that prompts the user to position their face inside an oval area in the center of the image and then move their nose to a target point.

Figure 5: Nose challenge user experience

When a nose challenge is initiated, its challenge parameters definition logic expects to receive the image dimensions from the client device camera, specifically the imageWidth and imageHeight metadata attributes. Based on these dimensions, the logic determines the coordinates for the central oval area (areaTop, areaLeft, areaWidth, and areaHeight) and the random target nose position (noseTop, noseLeft, noseWidth, and noseHeight) and returns them as the challenge parameters.

Based on these parameters, the front end displays the device camera feed and instructs the user to perform the movements. As the user performs the challenge, the front end must also continually capture frames and upload them to the backend API. After the user has concluded the movement, the front end invokes the verification API.

Note: The face-api.js library is used in the front end to detect the user's face and landmarks to provide real-time feedback as the user performs the challenge. Liveness validation occurs in the backend only, in the verification API, using Amazon Rekognition. Results from the front-end library are not used for any means of user liveness validation.

The nose challenge verification logic is represented by a state machine that processes the frames uploaded for a certain challenge attempt. For each frame, the state machine checks the detected face metadata and either advances to the next step, fails, or succeeds in the challenge. The state machine is represented below:

Figure 6: Nose challenge verification states

12

Liveness Detection Framework Implementation GuidePose challenge

• Face state: Checks whether there is one, and only one, face detected in the frame image. If that is the case, the verification advances to the next state. Otherwise, the challenge fails.
• Area state: Checks whether the user's face is positioned inside the central area. If the face fits in the area before the specified timeout, the verification advances to the next state. Otherwise, the challenge fails.
• Nose state: Checks whether the user's nose is at the target position. If the nose reaches the target position before the specified timeout, the challenge succeeds. Otherwise, the challenge fails.
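To make the state machine concrete, the following is a simplified, hypothetical sketch of how such states could be expressed with the framework's @challenge_state extension point. The geometry and timeout helpers (timed_out, face_inside_area, nose_at_target) are invented for illustration; the shipped nose challenge implementation differs:

@challenge_state(challenge_type='NOSE', first=True, next_state='area_state')
def face_state(params, frame, context):
    faces = frame['rekMetadata']   # face metadata detected by Amazon Rekognition
    if len(faces) != 1:
        return CHALLENGE_FAIL      # require exactly one face
    return STATE_NEXT

@challenge_state(challenge_type='NOSE', next_state='nose_state')
def area_state(params, frame, context):
    if timed_out(params, frame, context):                    # hypothetical timeout helper
        return CHALLENGE_FAIL
    if face_inside_area(frame['rekMetadata'][0], params):    # hypothetical geometry check
        return STATE_NEXT
    return STATE_CONTINUE          # keep waiting on the next frame

@challenge_state(challenge_type='NOSE')
def nose_state(params, frame, context):
    if timed_out(params, frame, context):
        return CHALLENGE_FAIL
    if nose_at_target(frame['rekMetadata'][0], params):      # hypothetical landmark check
        return CHALLENGE_SUCCESS
    return STATE_CONTINUE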

Pose challenge

This challenge is an active liveness detection approach that prompts the user to reproduce a certain pose.

Figure 7: Pose challenge user experience

The pose is random and combines eye and mouth position variations. Eyes must be open (looking forward), closed, looking left, or looking right. The mouth must be closed or smiling.

When a pose challenge is initiated, the backend returns how the eyes and the mouth should look in the pose. The client device uses that information to generate an image with the corresponding pose and asks the user to reproduce it. The user then needs to take a selfie (self-portrait photo). After the user takes a selfie, they can compare the result with the pose and, if they don't think the two look the same, they can retake the photo. The user can retake the photo as many times as necessary. When ready, the photo is uploaded to the backend for verification.

The backend verifies the following using the photo sent by the client device:

1. There’s one, and only one, face in the photo.2. The confidence of the face detection is high (above a configurable threshold value).3. The face is not rotated.4. The eyes are positioned as required by the challenge (the user is looking in the correct direction, or the

eyes are closed).The mouth is positioned as required by the challenge (closed or smiling).

If all verifications pass, the challenge is considered successfully performed. Otherwise, the challenge fails.
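As an illustration, these checks map naturally onto fields of the DetectFaces response. The following hedged sketch shows the idea; the threshold values and the expected-pose encoding are hypothetical, and eye-direction checks are omitted:

def verify_pose(face_details, expected, confidence_threshold=90.0):
    # 1. Exactly one face in the photo
    if len(face_details) != 1:
        return False
    face = face_details[0]

    # 2. High detection confidence
    if face['Confidence'] < confidence_threshold:
        return False

    # 3. Face not rotated (small roll angle; threshold is illustrative)
    if abs(face['Pose']['Roll']) > 15:
        return False

    # 4. Eyes as required, e.g. expected['eyes'] == 'closed' means EyesOpen must be False
    if expected['eyes'] == 'closed' and face['EyesOpen']['Value']:
        return False

    # 5. Mouth as required: smiling or closed
    if expected['mouth'] == 'smiling' and not face['Smile']['Value']:
        return False

    return True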

Note: Simple challenges are generally easy for users; however, they are more susceptible to spoofing attacks. Keep this in mind when using this challenge as-is. You could present this challenge in low-risk scenarios, or you could extend it by adding more facial expressions or hand gestures into the mix.


Custom challenge

This solution allows you to implement custom challenges using the framework. For details, refer to Create a custom challenge.

Regional deployments

This solution uses the Amazon Rekognition service, which is not currently available in all AWS Regions. You must launch this solution in an AWS Region where Amazon Rekognition is available.

Supported deployment Regions

Liveness Detection Framework is supported in the following AWS Regions:

• US East (N. Virginia)
• US East (Ohio)
• US West (Northern California)
• US West (Oregon)
• Asia Pacific (Mumbai)
• Asia Pacific (Seoul)
• Asia Pacific (Singapore)
• Asia Pacific (Sydney)
• Asia Pacific (Tokyo)
• Canada (Central)
• Europe (Frankfurt)
• Europe (Ireland)
• Europe (London)


AWS CloudFormation template

To automate deployment, this solution uses the following AWS CloudFormation template, which you can download before deployment:

liveness-detection-framework.template: Use this template to launch the solution and all associated components. The default configuration deploys Amazon Rekognition, Amazon Cognito, Amazon CloudFront, AWS Secrets Manager, Amazon S3, Amazon DynamoDB, Amazon API Gateway, and AWS Lambda, but you can customize the template to meet your specific needs.


Automated deployment

Before you launch the automated deployment, review the architecture, components, and other considerations in this guide. Follow the step-by-step instructions in this section to configure and deploy the solution into your account.

Time to deploy: Approximately 10 minutes

Deployment overview

Use the following steps to deploy this solution on AWS. For detailed instructions, follow the links for each step.

Step 1. Launch the stack

• Launch the AWS CloudFormation template into your AWS account.
• Review the template's parameters and enter or adjust the default values as needed.

Step 2. Sign in to the web interface

• Retrieve the URL.

Step 1. Launch the stack

This automated AWS CloudFormation template deploys the Liveness Detection Framework solution in the AWS Cloud.

Note: You are responsible for the cost of the AWS services used while running this solution. For more details, visit the Cost section in this guide, and refer to the pricing webpage for each AWS service used in this solution.

1. Sign in to the AWS Management Console and select the button to launch the liveness-detection-framework.template AWS CloudFormation template.
   Alternatively, you can download the template as a starting point for your own implementation.
2. The template launches in the US East (N. Virginia) Region by default. To launch the solution in a different AWS Region, use the Region selector in the console navigation bar.

Note: This solution uses the Amazon Rekognition service, which is not currently available in all AWS Regions. You must launch this solution in an AWS Region where Amazon Rekognition is available. For the most current availability by Region, refer to the AWS Regional Services List.


3. On the Create stack page, verify that the correct template URL is in the Amazon S3 URL text box and choose Next.

4. On the Specify stack details page, assign a name to your solution stack. For information about naming character limitations, refer to IAM and STS Limits in the AWS Identity and Access Management User Guide.

5. Under Parameters, review the parameters for this solution template and modify them as necessary. This solution uses the following default values.

Parameter | Default | Description
AdminEmail | <Requires input> | The email of the system administrator. (You will receive your temporary password and username at this address.)
AdminName | <Requires input> | The name of the system administrator.

6. Choose Next.
7. On the Configure stack options page, leave all the values and configurations as default and choose Next.
8. On the Review page, review and confirm the settings. Check the boxes under Capabilities, acknowledging that the template creates AWS Identity and Access Management (IAM) resources and grants the CAPABILITY_AUTO_EXPAND option for the template.
9. Choose Create stack to deploy the stack. You can view the status of the stack in the AWS CloudFormation console in the Status column. You should receive a CREATE_COMPLETE status in approximately 10 minutes.

Note: In addition to the primary AWS Lambda function, this solution includes a website custom resource Lambda function that runs only during initial configuration or when updating or deleting resources. When you run this solution, you will notice the Lambda function in the AWS Management Console. Do not delete the website custom resource Lambda function, as it is needed to manage associated resources.

Step 2. Sign in to the web interface

After the AWS CloudFormation stack is created, you can sign in to the web interface. The solution sends an email containing your admin credentials and a temporary password. Use the following procedure to sign in to the web interface for the first time.

1. Sign in to the AWS CloudFormation console and select the solution's stack.
2. Choose the Outputs tab.
3. Under the Key column, locate URL, and select the link.
4. From the sign-in page, enter the username and temporary password provided in the invitation email.
5. From the Change password page, follow the prompts to create a new password. Password requirements: minimum of 6 characters, requiring at least one upper case character, one lower case character, one number, and one symbol.

6. After signing in, select the liveness detection challenge and follow the steps.


Additional resources

AWS services

• Amazon Cognito
• Amazon DynamoDB
• AWS CloudFormation
• Amazon Rekognition
• AWS Lambda
• AWS Secrets Manager
• Amazon Simple Storage Service
• Amazon CloudFront
• Amazon API Gateway

Related projects

• AWS Chalice


Create a custom challenge

To implement a custom challenge using the framework, you must edit the source code for the backend part of the solution. Refer to the GitHub repository for the source code.

First, create a new Python module inside the chalicelib directory. You can use the module custom.py as a template. Inside the new module, implement the challenge parameters definition logic and the challenge verification logic.

The framework requires you to define a string value to identify your custom challenge type. For example, for the nose challenge, the identifier is 'NOSE', and for the pose challenge, it is 'POSE'. Choose a different identifier for your custom challenge and use it consistently in all functions.

Challenge parameters definition

For the challenge parameters definition logic, modify the function decorated with the @challenge_params decorator. The following sample code is for a challenge parameters definition function, as provided in the custom.py module.

@challenge_params(challenge_type='CUSTOM')
def custom_challenge_params(client_metadata):
    # Start from the client-provided metadata and add your challenge-specific parameters
    params = dict()
    params.update(client_metadata)
    return params

Set the decorator attribute challenge_type with the value of your custom challenge identifier. The function receives the input parameter client_metadata, which is a dictionary that might contain custom attributes provided by the front end when it calls the create challenge API. You can use these client-provided attributes inside your logic to modify your parameter values. The function must return a dictionary containing attributes representing your custom challenge parameters. The returned dictionary should also include the input client metadata attributes.

Challenge verification

For the challenge verification logic, you must determine whether your challenge will be based on individual or multiple images. In the case of individual images, your verification state machine contains only one state. In the case of multiple images, it can contain one or more states. For each state, you must implement a function decorated with the @challenge_state decorator. When the verify challenge response API is called, the framework is responsible for invoking your custom state functions to process each frame's metadata. The following sample code is for a first state (or single state) function, as provided in the custom.py module.

@challenge_state(challenge_type='CUSTOM', first=True, next_state='second_state')
def first_state(params, frame, context):
    # Placeholder condition: replace with your own checks on the frame metadata
    if True:
        return STATE_NEXT
    return STATE_CONTINUE

Set the decorator attribute challenge_type with the value of your custom challenge identifier. For the first state, set the attribute first to True. In case your logic has more states after the first, indicate which one is the next by setting the attribute next_state with the name of the function that represents the next state.

The function receives the following input parameters:


• params: Dictionary containing the challenge parameters.
• frame: Dictionary containing information about the current frame image to be processed by the state. The face metadata detected by Amazon Rekognition can be found in the rekMetadata attribute.
• context: Dictionary containing context information that can be shared across states and frame iterations. You can use this dictionary's attributes to store variables to be accessed during the processing of the next frames by the current state or the next states.

As a result of processing frame metadata, the function must return one of the following values:

• STATE_CONTINUE: Signals the framework to stay in the current state for processing the next frame.
• STATE_NEXT: Signals the framework to advance to the next state for processing the next frame.
• CHALLENGE_FAIL: Signals the framework that the challenge is considered not valid and ends the state machine processing.
• CHALLENGE_SUCCESS: Signals the framework that the challenge was successfully validated and ends the state machine processing.

In case your challenge contains only one state, the return value must be either CHALLENGE_FAIL or CHALLENGE_SUCCESS.

The following sample code is for functions that implement other states after the first, as provided in the custom.py module.

@challenge_state(challenge_type='CUSTOM', next_state='last_state')
def second_state(params, frame, context):
    # Placeholder condition: replace with your own checks on the frame metadata
    if True:
        return STATE_NEXT
    return STATE_CONTINUE

@challenge_state(challenge_type='CUSTOM')
def last_state(params, frame, context):
    # Last state: must end the state machine with success or failure
    if True:
        return CHALLENGE_SUCCESS
    return CHALLENGE_FAIL

Set the decorator attribute challenge_type with the value of your custom challenge identifier. In case your state has more states afterward, indicate which one is the next by setting the attribute next_state with the name of the function that represents the next state. In case your state is the last one, do not set a value for the attribute next_state.

These other state functions receive the same input parameters and must return the same values as those described for the first state function.

For the last state function, the return value must be either CHALLENGE_FAIL or CHALLENGE_SUCCESS.

Challenge type selection logic

After you have implemented your custom challenge module, you must modify the application-wide challenge type selection logic to include your new challenge. To do this, you must edit the file app.py. The default logic randomly selects one of the default provided challenges: nose challenge or pose challenge. The following default code is for the challenge type selection function, decorated with the @challenge_type_selector decorator.

@challenge_type_selector
def random_challenge_selector(client_metadata):
    app.log.debug('random_challenge_selector')
    # Optionally honor a challenge type requested by the client
    if CLIENT_CHALLENGE_SELECTION and 'challengeType' in client_metadata:
        return client_metadata['challengeType']
    # Otherwise pick one of the provided challenge types at random
    return random.choice(['POSE', 'NOSE'])

The function receives the input parameter client_metadata, which is a dictionary that can contain custom attributes provided by the front end when it calls the create challenge API. You can use these client-provided attributes inside your logic to modify your challenge type selection. The default implementation allows the client side to specify a preferred challenge type via the custom attribute challengeType. If the environment variable CLIENT_CHALLENGE_SELECTION is set to True, it returns the preferred challenge type. For your customized challenge selection function, you can implement the logic that best fits your use case and include any other attributes in the client_metadata as required, making sure your front end provides the new attributes when invoking the API. The function must return a string value identifier for the selected challenge type.

Challenge configuration

Additionally, for the framework to run your custom module and invoke your decorated custom functions, you must include an import statement in the file app.py.

The following sample code is to import the provided custom.py module. If you want to create your own module file, modify the statement accordingly.

import_module('chalicelib.nose')
import_module('chalicelib.pose')
import_module('chalicelib.custom')  # <-- Importing the custom module


API reference

Create challenge API

POST /challenge

Request

{
  "string": "string",
  ...
}

The request body can send client metadata to the backend, as one or more pairs of attribute names and values. Each pair is in the form "name": "value". The default implementation of the framework uses the following attributes:

• imageWidth: Width of images captured by the client device.
• imageHeight: Height of images captured by the client device.
• challengeType: Preferred challenge type selected by the user.

Additional custom attributes can be defined as required by custom challenges and framework extensions.

Response

{
  "id": "string",
  "token": "string",
  "type": "string",
  "params": {
    "string": "string",
    ...
  }
}

The response body contains the following attributes:

• id: The generated ID for the challenge attempt.
• token: The security token generated for the challenge attempt, which must be sent in the subsequent API calls.
• type: The string identifier for the type of challenge selected by the API.
• params: The challenge parameters for the challenge type selected by the API. Parameters are specified as one or more name-value pairs, in the form "name": "value".
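For illustration, a client could call this endpoint as follows (the API base URL and the Cognito access token are placeholders; all endpoints require a valid token):

import requests

API_BASE = 'https://example.execute-api.us-east-1.amazonaws.com/api'  # hypothetical URL
ACCESS_TOKEN = '<Cognito access token>'

response = requests.post(
    f'{API_BASE}/challenge',
    headers={'Authorization': ACCESS_TOKEN},
    json={'imageWidth': '480', 'imageHeight': '480'},
)
challenge = response.json()
print(challenge['id'], challenge['type'], challenge['params'])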

Put challenge frame API

PUT /challenge/{id}/frame


The API path must contain the id parameter, which is the challenge ID returned by the create challenge API.

Request

{
  "token": "string",
  "timestamp": "string",
  "frameBase64": "string"
}

The request body must contain the following attributes:

• token: The security token generated by the create challenge API.
• timestamp: The timestamp when the frame was captured, as the number of milliseconds since January 1, 1970, 00:00:00 UTC.
• frameBase64: Captured frame image in JPEG format, encoded as a string in base64.

Response

{
  "message": "string"
}

The response body contains the following attribute:

• message: Success or error message.
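Continuing the sketch from the create challenge API, a client could upload one captured frame like this (the frame file is a stand-in for an image captured from the device camera):

import base64
import time

with open('frame-0.jpg', 'rb') as f:
    frame_b64 = base64.b64encode(f.read()).decode('ascii')

requests.put(
    f"{API_BASE}/challenge/{challenge['id']}/frame",
    headers={'Authorization': ACCESS_TOKEN},
    json={
        'token': challenge['token'],
        'timestamp': str(int(time.time() * 1000)),  # milliseconds since the Unix epoch
        'frameBase64': frame_b64,
    },
)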

Verify challenge response API

POST /challenge/{id}/verify

The API path must contain the id parameter, which is the challenge ID returned by the create challenge API.

Request

{
  "token": "string"
}

The request body must contain the following attribute:

• token: The security token generated by the create challenge API.

Response

{
  "success": boolean
}

The response body contains the following attribute:


• success: Boolean value indicating whether the challenge succeeded or failed.
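Completing the client-side sketch from the previous sections, verification is a single POST with the challenge token:

result = requests.post(
    f"{API_BASE}/challenge/{challenge['id']}/verify",
    headers={'Authorization': ACCESS_TOKEN},
    json={'token': challenge['token']},
).json()

print('challenge passed' if result['success'] else 'challenge failed')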


Uninstall the solution

You can uninstall the Liveness Detection Framework solution by deleting the AWS CloudFormation stacks. You must manually delete the Amazon S3 buckets and Amazon DynamoDB table created by this solution. AWS Solutions Implementations do not automatically delete buckets and tables in case you have stored data to retain.

Deleting the AWS CloudFormation stack

1. Sign in to the AWS CloudFormation console.
2. On the Stacks page, select the solution's stack.
3. Choose Delete.

Deleting the Amazon S3 buckets

This solution is configured to retain the solution-created Amazon S3 buckets (for deploying in an opt-in Region) if you decide to delete the stacks, to prevent accidental data loss. After uninstalling the solution, you can manually delete these S3 buckets if you do not need to retain the data. Follow these steps to delete the Amazon S3 buckets.

Note: Before attempting to delete all the Amazon S3 buckets, each S3 bucket must be empty. Do this by repeating steps 1-4 for each bucket.

1. Sign in to the Amazon S3 console.
2. Choose Buckets from the left navigation pane.
3. Locate the S3 bucket to empty.
4. Select the S3 bucket and choose Empty.

After all buckets are empty, proceed to delete the buckets:

5. Locate the <stack-name>-backend*-challengebucket-<id> S3 bucket, select it, and choose Delete.
6. Locate the <stack-name>-backend*-loggingbucket-<id> S3 bucket, select it, and choose Delete.
7. Locate the <stack-name>-backend*-trailbucket-<id> S3 bucket, select it, and choose Delete.
8. Locate the <stack-name>-client*-staticwebsitebucket-<id> S3 bucket, select it, and choose Delete.
9. Locate the <stack-name>-client*-loggingbucket-<id> S3 bucket, select it, and choose Delete.
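If you prefer to script the cleanup, the following is a minimal boto3 sketch for emptying and deleting a single bucket (the bucket name is a placeholder; double-check it before running, because deletion is irreversible):

import boto3

bucket = boto3.resource('s3').Bucket('<stack-name>-backend-challengebucket-<id>')

bucket.objects.all().delete()           # empty the bucket first
bucket.object_versions.all().delete()   # also remove old versions, if versioning is enabled
bucket.delete()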

Deleting the Amazon DynamoDB table

After uninstalling the solution, you can manually delete this Amazon DynamoDB table if you do not need to retain the data. Follow these steps to delete the Amazon DynamoDB table.

1. Sign in to the Amazon DynamoDB console.


2. Locate the <stack-name>-BackendStack-<id>-ChallengeTable-<id> table.
3. Select the table and choose Delete table.


Source code

Visit our GitHub repository to download the source files for this solution and to share your customizations with others.


Revisions

Date | Change
January 2022 | Initial release


Contributors

• David Laredo
• Henrique Fugita
• Rafael Werneck
• Rafael Ribeiro Martins
• Lucas Otsuka


Notices

Customers are responsible for making their own independent assessment of the information in this document. This document: (a) is for informational purposes only, (b) represents AWS current product offerings and practices, which are subject to change without notice, and (c) does not create any commitments or assurances from AWS and its affiliates, suppliers or licensors. AWS products or services are provided "as is" without warranties, representations, or conditions of any kind, whether express or implied. AWS responsibilities and liabilities to its customers are controlled by AWS agreements, and this document is not part of, nor does it modify, any agreement between AWS and its customers.

Liveness Detection Framework is licensed under the terms of the Apache License Version 2.0, available at The Apache Software Foundation.

Liveness Detection Framework uses the Amazon Rekognition service. Customers should review the Use cases that involve public safety and the general AWS Service Terms.


AWS glossary

For the latest AWS terminology, see the AWS glossary in the AWS General Reference.
