
A PROJECT REPORT

On

FACE RECOGNITION AND TRACKING SYSTEM

Bachelor of Technology

In

Electronics & Communication Engineering

Submitted By

ABHISHEK GUPTA (1002831004)

A. SANDEEP (1002831001)

ABHISHEK SOAM (1002831007)

ASHISH KUMAR (1002831029)

Under the Guidance of
Mr. Prashant Gupta
Assistant Professor

DEPARTMENT OF ELECTRONICS & COMMUNICATION ENGG.
IDEAL INSTITUTE OF TECHNOLOGY

GHAZIABAD (INDIA)


ACKNOWLEDGEMENT

I take this opportunity to express my profound gratitude and deep regards to our guide Mr. PRASHANT GUPTA, Assistant Professor, Department of Electronics & Communication Engineering, Ideal Institute of Technology, Ghaziabad, for his exemplary guidance, advice and constant encouragement throughout the course of this project. The blessing, help and guidance given by him from time to time shall carry me a long way in the journey of life on which I am about to embark.

I am also very thankful to Mr. NARBADA PRASAD GUPTA, H.O.D. ECE Department, Ideal Institute of Technology, Ghaziabad for approving this project as a final year major project.

I want to thank my teammates A. SANDEEP, ABHISHEK GUPTA, ASHISH KUMAR and ABHISHEK SOAM for their valuable roles in this project. A. SANDEEP and ABHISHEK SOAM did the motor-movement programming and hardware building, including tuning the delays and pauses needed for smooth motor movement. ASHISH KUMAR and ABHISHEK GUPTA worked on the face recognition program and the report. All the team members played a key role in the research and development of this project.

I am also thankful to my father and mother for motivating me and for helping with the testing of this project. I want to acknowledge all my friends who donated their faces for testing and training the algorithm.

A. SANDEEP (1002831001)
ABHISHEK GUPTA (1002831004)
ABHISHEK SOAM (1002831007)
ASHISH KUMAR (1002831029)


ABSTRACT

Image processing is the future; through this powerful tool we can control or operate almost anything, from security systems to computers and much more. In this project a camera is used as the sensor, or input device: it takes input in the form of photos or video, and that input drives the algorithms. MATLAB is one of the most powerful software packages with which we can build such a project according to our requirements. MATLAB contains many toolboxes, including the Image Processing Toolbox and the Image Acquisition Toolbox. These toolboxes contain various functions and libraries with which we can perform several tasks, turning complex operations into simple ones. MATLAB code can also be converted into C/C++ form and compiled into a HEX file for a microcontroller or processor.

The project, "Face Recognition and Tracking System", is built on the concept of image processing, using the Image Processing and Computer Vision toolboxes. In this project we use two different cameras: one for generating the database, and the other, which is movable, for recognition and tracking. When the face of a person matches a face from the generated database, the detection and tracking operation starts.


TABLE OF CONTENTS

1. WHAT IS IMAGE PROCESSING?
   1.1 Introduction
   1.2 Overview
2. PROJECT INTRODUCTION
   2.1 Introduction
   2.2 Objective
   2.3 Project overview
3. SOFTWARE AND HARDWARE USED
   3.1 Introduction
   3.2 Hardware Requirements
   3.3 Software Requirements
   3.4 Introduction to hardware: Arduino
   3.5 Introduction to software: MATLAB & Arduino IDE
4. BLOCK DIAGRAM
5. PRINCIPLE OF OPERATION
   5.1 Eigen Face Algorithm
   5.2 Viola-Jones Algorithm
6. RESULT
7. PROBLEMS FACED
8. APPLICATIONS OF THIS PROJECT
9. APPENDIX
10. REFERENCES


Chapter 1

WHAT IS IMAGE PROCESSING?

1.1 Introduction

Image processing, available in MATLAB as a software toolbox, is used for the following purposes:

Transforming digital information representing images
Improving pictorial information for human interpretation
Removing noise
Correcting for motion, camera position and distortion
Enhancing images by changing contrast and color
Processing pictorial information by machine
Segmentation - dividing an image up into constituent parts
Representation - representing an image by some more abstract models
Classification
Reducing the size of image information for efficient handling
Compression with loss of digital information that minimizes the loss of "perceptual" information (JPEG, GIF, MPEG, multi-resolution representations)

The human eye works much like a camera: the lens focuses an image on the retina, and the sensed pattern is affected by the distribution of light receptors (rods and cones). The (6-7 million) cones are in the center of the retina (the fovea), are sensitive to color, and each is connected to its own neuron. The (75-150 million) rods are distributed everywhere and are connected in clusters to a neuron. Unlike an ordinary camera, the eye is flexible: the range of intensity levels supported by the human visual system is about 10^10, and it uses brightness adaptation to set its sensitivity.

Color Vision

The color-responsive chemicals in the cones are called cone pigments and are very similar to the chemicals in the rods. The retinal portion of the chemical is the same; however, the scotopsin is replaced with photopsins. Therefore, the color-responsive pigments are made of retinal and photopsins. There are three kinds of color-sensitive pigments:

Red-sensitive pigment
Green-sensitive pigment
Blue-sensitive pigment

Image processing involves changing the nature of an image in order either to improve its pictorial information for human interpretation or to render it more suitable for autonomous processing by machine.


1.2 Overview

What Is An Image?

An image is a 2D rectilinear array of pixels

Any image from a scanner, or from a digital camera, or in a computer, is a digital image.

A digital image is formed by a collection of pixels of different colors.

Fig. 1.1 B&W image Fig. 1.2 Gray scale image Fig. 1.3 RGB image

Types of image: There are three different types of image in MATLAB:

Binary images or B&W images
Intensity images or Gray scale images
Indexed images or RGB images

Binary Image: They are also called B&W images, containing '1' for white and '0' for black.


Fig. 1.4 B&W image with region pixel value
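For instance, a grayscale image can be turned into such a binary image in MATLAB with a simple threshold. This is a small illustration only, not taken from the project code; coins.png is an example image shipped with the Image Processing Toolbox:

% Thresholding a grayscale image into a binary (B&W) image
grayImg = imread('coins.png');                   % example grayscale image from the toolbox
bwImg   = im2bw(grayImg, graythresh(grayImg));   % Otsu threshold; pixels become 0 or 1
imshow(bwImg);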

Intensity image: They are also called 'Gray Scale images', containing values in the range 0 to 1.

Fig. 1.5 Intensity image with region pixel value

Indexed image: These are the color images, also represented as 'RGB images'.

Fig. 1.6 Color spectrum Fig. 1.7 Indexed image with region pixel value


PROJECT INTRODUCTION

2.1 Introduction

The project, "Face Recognition and Tracking System", is built on the concept of image processing, using the Image Processing and Computer Vision toolboxes of MATLAB. In this project we use two different cameras: one for generating the database, and the other, which is movable, for recognition. When the face of a person matches a face from the generated database, the detection and tracking operation starts.

2.2 Objective

The prime objective of the project is to develop a standalone application, and the interlinking hardware, that can recognize live faces against a generated database of images. The application provides recognition of faces from a still image taken by the camera or loaded from the hard drive, tracking of faces (observed as movement of the camera in the direction of the face), and real-time recognition against the database.

2.3 Project overview

In our project we had to develop a standalone application and interlinking hardware that can do the following tasks:

1. Face detection from a live image.
2. Face detection from the drive.
3. Tracking any face.
4. Recognizing and tracking a face from a live image.
5. Recognizing and tracking a face from the drive.
6. Determining the population density.


SOFTWARE AND HARDWARE USED

3.1 Introduction

For making such a project we need software as well as hardware, because here an embedded product is controlled by our program.

Hardware: It is the physical part of our project that relates to the real world, e.g. the Arduino kit, microcontroller, etc.

Software: It is the set of programs and instructions that tells the hardware which task is to be performed, e.g. the set of instructions written in the MATLAB IDE.

3.2 Hardware Requirements

The following hardware components are required for this project:
A) 1 personal computer
B) 1 Arduino kit (with ATmega328)
C) 2 1.3 MP cameras
D) 1 motor driver shield (with L293D IC)
E) 1 seven-segment display shield
F) 1 program burner wire
G) 2 line wires
H) 20 single-strand wires
I) 2 batteries
J) 2 battery holders
K) 2 DC motors
L) 1 holding frame

3.3 Software Requirements

The software required to program the microcontroller (the hardware part of our project) is as follows:
A) MATLAB R2012b software
B) Arduino IDE software
C) Arduino-to-MATLAB interfacing files


3.4 Introduction to hardware: ARDUINO

Arduino is a single-board microcontroller platform intended to make using electronics in multidisciplinary projects more accessible. The hardware consists of a simple open-source board designed around an 8-bit Atmel AVR microcontroller, or a 32-bit Atmel ARM. The software consists of a standard programming language compiler and a boot loader that executes on the microcontroller.

Fig 3.1: Arduino board

Arduino boards can be purchased pre-assembled or as do-it-yourself kits. Hardware design information is available for those who would like to assemble an Arduino by hand. It was estimated in mid-2011 that over 300,000 official Arduinos had been commercially produced.

The Arduino board exposes most of the microcontroller's I/O pins for use by other circuits. The Diecimila, Duemilanove, and current Uno provide 14 digital I/O pins, six of which can produce pulse-width modulated signals, and six analog inputs. These pins are on the top of the board, via female 0.1-inch (2.5 mm) headers. Several plug-in application shields are also commercially available.

The Arduino Nano, and Arduino-compatible Bare Bones Board and Boarduino boards may provide male header pins on the underside of the board to be plugged into solderless breadboards.

There are a great many Arduino-compatible and Arduino-derived boards. Some are functionally equivalent to an Arduino and may be used interchangeably. Many are the basic Arduino with the addition of commonplace output drivers, often for use in school-level education to simplify the construction of buggies and small robots. Others are electrically equivalent but change the form factor, sometimes permitting the continued use of Shields, sometimes not. Some variants use completely different processors, with varying levels of compatibility.


OFFICIAL BOARDS

The original Arduino hardware is manufactured by the Italian company Smart Projects. Some Arduino-branded boards have been designed by the American company SparkFun Electronics. Sixteen versions of the Arduino hardware have been commercially produced to date.

Duemilanove (rev 2009b)
Arduino Uno
Arduino Leonardo
Arduino Mega
Arduino Nano
Arduino Due (ARM-based)
LilyPad (rev 2007)


SHIELDS

Arduino and Arduino-compatible boards make use of shields: printed circuit expansion boards that plug into the normally supplied Arduino pin headers. Shields can provide motor control, GPS, Ethernet, an LCD display, or breadboarding (prototyping). A number of shields can also be made DIY.

Fig 3.2: Arduino shields


3.5 Introduction to software : MATLAB & ARDUINO IDE

An introduction to MATLAB

MATLAB = Matrix Laboratory

"MATLAB is a high-level language and interactive environment that enables you to perform computationally intensive tasks faster than with traditional programming languages such as C, C++ and Fortran."

MATLAB is an interactive, interpreted language that is designed for fast numerical matrix calculations.

The MATLAB Environment

Fig. 3.3 MATLAB Environment

MATLAB window components

1. Workspace - displays all the defined variables
2. Command Window - to execute commands in the MATLAB environment
3. Command History - displays a record of the commands used
4. File Editor Window - to define your functions


MATLAB Help

Fig. 3.4: MATLAB Help

MATLAB Help is an extremely powerful aid to learning MATLAB. Help not only contains the theoretical background, but also shows demos for implementation. MATLAB Help can be opened by using the HELP pull-down menu. The purpose of this tutorial is to gain familiarity with MATLAB's Image Processing Toolbox. This tutorial does not cover all of the functions available in MATLAB. It is very useful to go to Help\MATLAB Help in the MATLAB window if you have any questions not answered by this tutorial. Many of the examples in this tutorial are modified versions of MATLAB's help examples. The help tool is especially useful in image processing applications, since there are numerous filter examples.


Fig. 3.5: M-file for Loading Images


Fig. 3.6: Bitmap Image Fig. 3.7: Grayscale Image

BASIC EXAMPLES


EXAMPLE 1

How to build a matrix (or image)?

r = 256;
c = 256;                 % c was missing in the original listing; zeros() needs both dimensions
img = zeros(r, c);       % all-black image
img(100:105, :) = 0.5;   % horizontal grey stripe
img(:, 100:105) = 1;     % vertical white stripe
figure;
imshow(img);

OUTPUT

Fig.3.8: Example 1 output

EXAMPLE 2


PROGRAM:

r = 256;
c = 256;
img = rand(r,c);      % uniformly distributed random values in [0,1]
img = round(img);     % rounding gives a random binary (black-and-white) image
figure;
imshow(img);

Fig. 3.9: Example 2 output

An introduction to Arduino IDE


The Arduino integrated development environment (IDE) is a cross-platform application written in Java, and is derived from the IDE for the Processing programming language and the Wiring projects. It is designed to introduce programming to artists and other newcomers unfamiliar with software development. It includes a code editor with features such as syntax highlighting, brace matching, and automatic indentation, and is also capable of compiling and uploading programs to the board with a single click. A program or code written for Arduino is called a "sketch".

Arduino programs are written in C or C++. The Arduino IDE comes with a software library called "Wiring" from the original Wiring project, which makes many common input/output operations much easier. Users need only define two functions to make a runnable cyclic executive program:

setup(): a function run once at the start of a program that can initialize settings

loop(): a function called repeatedly until the board powers off

#define LED_PIN 13

void setup () {
    pinMode (LED_PIN, OUTPUT);     // Enable pin 13 for digital output
}

void loop () {
    digitalWrite (LED_PIN, HIGH);  // Turn on the LED
    delay (1000);                  // Wait one second (1000 milliseconds)
    digitalWrite (LED_PIN, LOW);   // Turn off the LED
    delay (1000);                  // Wait one second
}

It is a feature of most Arduino boards that they have an LED and load resistor connected between pin 13 and ground, a convenient feature for many simple tests.[9] The previous code would not be seen by a standard C++ compiler as a valid program, so when the user clicks the "Upload to I/O board" button in the IDE, a copy of the code is written to a temporary file with an extra include header at the top and a very simple main() function at the bottom, to make it a valid C++ program.

The Arduino IDE uses the GNU toolchain and AVR Libc to compile programs, and uses avrdude to upload programs to the board.

BLOCK DIAGRAM


The complete hardware arrangement is shown by the following block diagram:-

Fig. 4.1 Block diagram

Schematic diagram

Fig 4.2: Schematic of actual circuit

Actual Hardware Design


Fig. 4.3: Actual Hardware Design

PRINCIPLE OF OPERATION


5.1 Menu

We have created a standalone application that is used for face detection and recognition. For this we created a graphical menu containing the available choices; simply clicking the desired choice with the mouse performs the corresponding action. The menu is built from simple if statements inside a loop, and the desired function calls are written inside each if branch. The figure given below shows the menu we have made.

Fig 5.1: Menu
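A minimal sketch of this menu/dispatch pattern (a simplified excerpt in the spirit of the full face_choice.m script reproduced in the appendix) looks like this:

% Minimal menu/dispatch sketch (simplified from face_choice.m in the appendix)
while (1==1)
    choice = menu('Face Recognition', ...
                  'Generate database', ...
                  'Recognize face from camera', ...
                  'Exit');
    if (choice==1)
        % call the database-generation routine here
    end
    if (choice==2)
        % call the recognition routine here
    end
    if (choice==3)
        close all;
        break;          % leave the menu loop
    end
end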

5.2 Database generation

The next step was to create the database. We created the database by taking face photos through the camera, renaming them, and saving the face images in .jpeg format in a particular folder; the face to be recognized is then compared with the database faces. The number of faces saved in the database equals M * times, where M is the number of persons entered by the user and times is used to increase the accuracy; e.g. if times is 5, then 5 face images are stored per person. The flow chart for database generation is given below.


Fig 5.2: flow diagram for generating database
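A condensed sketch of this capture-and-save step is shown below; the variable names and the folder path follow the appendix code, and the interactive Capture button of the real script is replaced here by a plain loop:

% Condensed database-capture sketch (paths and names follow the appendix code)
vid = videoinput('winvideo', 2, 'YUY2_320x240');    % database camera
preview(vid);
M = 4;  times = 5;                                  % persons x images per person
for i = 1:(M*times)
    g = getsnapshot(vid);                           % grab one frame (YCbCr)
    rgbImage = ycbcr2rgb(g);                        % convert to RGB
    str = strcat(int2str(i), '.jpg');               % numbered file name: 1.jpg, 2.jpg, ...
    imwrite(rgbImage, fullfile('E:\New Folder\', str));   % colour copy
    imwrite(rgb2gray(rgbImage), fullfile(pwd, str));       % grayscale copy used by the algorithm
end
closepreview(vid);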

5.3 Recognition

The recognition process is done by the Eigen face algorithm. The flow diagram for the recognition process is given below.

Fig 5.3: flow diagram for recognition process


Eigen Face Algorithm

The Eigen face algorithm used for recognizing a face is described as follows:

1. The first step is to obtain a set S with M face images (in our case M = 25). Each image is transformed into a vector of size N (the number of pixels) and placed into the set, S = {Γ1, Γ2, …, ΓM}.

2. After you have obtained your set, you obtain the mean image Ψ = (1/M) Σn Γn.

3. Then you find the difference Φ between each image and the mean image, Φi = Γi − Ψ.

4. Next we seek a set of M orthonormal vectors un which best describes the distribution of the data. The k-th vector uk is chosen such that

λk = (1/M) Σn (uk^T Φn)²

is a maximum, subject to the orthonormality constraint ul^T uk = δlk.

Note: uk and λk are the eigenvectors and eigenvalues of the covariance matrix C.

5. We obtain the covariance matrix C in the following manner:

C = (1/M) Σn Φn Φn^T = A A^T, where A = [Φ1 Φ2 … ΦM].


6. The matrix C is N×N, which is too large to handle directly, so we instead form the much smaller M×M matrix L = A^T A, whose entries are Lmn = Φm^T Φn, and find its eigenvectors vl.

7. Once we have found the eigenvectors vl of L, the eigenfaces are obtained as the linear combinations ul = Σk vlk Φk, for l = 1, …, M.

These are the Eigen faces of our set of original images.
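As a compact MATLAB sketch of steps 1-7 (a simplified version of the recognize_face_drive.m listing in the appendix; the per-image brightness normalization done there is omitted here):

% Compact eigenface sketch (simplified from recognize_face_drive.m in the appendix)
M = 25;                                    % number of training images
S = [];                                    % each column will hold one vectorized face
for i = 1:M
    img = imread([int2str(i) '.jpg']);     % grayscale training image i
    S = [S reshape(double(img), [], 1)];   % stack as an N-element column vector
end
Psi = mean(S, 2);                          % step 2: mean face
A   = S - repmat(Psi, 1, M);               % step 3: difference vectors, columns of A
L   = A' * A;                              % step 6: small M x M matrix instead of A*A'
[V, D] = eig(L);                           % eigenvectors v_l (eigenvalues on the diagonal of D)
U = A * V;                                 % step 7: eigenfaces u_l = A*v_l (one per column)
for k = 1:size(U, 2)
    U(:, k) = U(:, k) / norm(U(:, k));     % normalize each eigenface
end
% In practice, eigenvectors with near-zero eigenvalues are discarded, as in the appendix code.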

Recognition Procedure

1. A new face Γ is transformed into its eigenface components. First we subtract the mean image from the input image and project the difference onto each eigenface, ωk = uk^T (Γ − Ψ). Each value ωk represents a weight, and together they are saved in a vector Ω.

2. We now determine which face class provides the best description of the input image. This is done by minimizing the Euclidean distance εk = ||Ω − Ωk||, where Ωk is the weight vector describing the k-th face class.

3. The input face is considered to belong to a class if εk is below an established threshold θε; the face image is then considered to be a known face. If the distance is above this threshold but below a second threshold, the image can be classified as an unknown face. If the input image is above both thresholds, the image is determined NOT to be a face.

4. If the image is found to be an unknown face, you can decide whether or not to add it to your training set for future recognitions. You would have to repeat steps 1 through 7 to incorporate this new face image.
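Continuing the MATLAB sketch above, the recognition procedure projects the probe image onto the eigenfaces and picks the training face with the smallest Euclidean distance; the two thresholds are omitted for brevity, and the file name camshot.jpg and the images-per-person constant times follow the appendix code:

% Recognition sketch: nearest training face in eigenface space
Omega = U' * A;                                     % weight vectors of the training faces (one column each)
probe = reshape(double(rgb2gray(imread('camshot.jpg'))), [], 1);  % probe face, vectorized like the training set
w     = U' * (probe - Psi);                         % step 1: eigenface weights of the probe
dists = sqrt(sum((Omega - repmat(w, 1, M)).^2, 1)); % step 2: Euclidean distance to every training face
[~, Min_id] = min(dists);                           % best-matching training image
times = 5;                                          % images stored per person, as in the report
person_no = ceil(Min_id / times);                   % map image index to person number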

5.4 Tracking

Face tracking is done using the Viola-Jones algorithm. The motors are connected through an Arduino UNO board (with an ATmega328 microcontroller), which drives the DC motors in the direction of the face. The flow diagram for tracking is given below.

Fig 5.4: flow diagram for tracking process

The live image coming from the camera has a resolution of 320x240. The script file vj_track_face.m uses the Viola-Jones algorithm to return the coordinates (x and y) of the corner of the bounding box that surrounds the face. These x and y coordinates are passed to motor_motion.m, the function responsible for moving the motors in the direction of the face. We have used two DC motors: the lower one for left and right motion and the upper/front one for up and down motion. The motor action is determined by the coordinates, as shown below:

1. If x is between 120 and 200 and y is between 100 and 160, there is no movement.
2. If x is less than 120, there is left motion.
3. If x is greater than 200, there is right motion.
4. If y is greater than 160, there is up motion.
5. If y is less than 100, there is down motion.
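In code, these rules reduce to a few comparisons on the bounding-box corner returned by the detector. A small illustrative helper is sketched below; it is hypothetical and not part of the project code, where motor_motion.m in the appendix drives the Arduino pins directly instead of returning a command string:

% Hypothetical helper illustrating the dead-band rules above (not part of the project code)
function cmd = face_to_command(x, y)
    cmd = 'stop';                 % face inside the central 120-200 / 100-160 window
    if x < 120, cmd = 'left';  end
    if x > 200, cmd = 'right'; end
    if y > 160, cmd = 'up';    end
    if y < 100, cmd = 'down';  end
end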


Viola Jones algorithm

The Viola-Jones algorithm is given as follows:

– In the Viola-Jones algorithm, detection is done by feature extraction and feature evaluation. Rectangular features are used; with a new image representation (the integral image), their calculation is very fast.

Fig 5.5: Rectangular features

Fig 5.6: Rectangular feature matching with face

– They are easy to calculate.
– The white areas are subtracted from the black ones.
– A special representation of the sample, called the integral image, makes feature extraction faster.
– Features are extracted from sub-windows of a sample image.
– The base size for a sub-window is 24 by 24 pixels.
– Each of the four feature types is scaled and shifted across all possible combinations.
– In a 24 by 24 pixel sub-window there are ~160,000 possible features to be calculated.
– A real face may result in multiple nearby detections.
– Detected sub-windows are post-processed to combine overlapping detections into a single detection.
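The integral image mentioned above can be computed in MATLAB with two cumulative sums, after which the sum of any rectangle needs only four array look-ups. The sketch below is for illustration only; the project code relies on vision.CascadeObjectDetector rather than implementing this directly:

% Integral image sketch: rectangle sums in constant time
I   = double(imread('1.jpg'));           % any grayscale training image from the database
ii  = cumsum(cumsum(I, 1), 2);           % integral image: ii(r,c) = sum of I(1:r, 1:c)
iip = padarray(ii, [1 1], 0, 'pre');     % zero padding so the formula below needs no special cases

% Sum of pixels in the rectangle with corners (r1,c1) and (r2,c2), using only 4 look-ups:
r1 = 50; c1 = 60; r2 = 80; c2 = 100;
rectSum = iip(r2+1,c2+1) - iip(r1,c2+1) - iip(r2+1,c1) + iip(r1,c1);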


Fig 5.7: A Cascade of Classifiers

Fig 5.8: Detection at multiple scales


PROCESS FLOW DIAGRAM

Fig 5.9.: Process flow diagram


RESULT

The results of our project are summarized as follows:

1. When the code is run, the set of input images is displayed as follows:

2. The mean image for the respective set of images is as follows:


3. The Eigen faces for the given set of images, with respect to the evaluated mean image, are as follows:


Real time tracking

Fig.: Tracking one person in real time (seven-segment display showing 1)


PROBLEMS FACED

There were a number of challenges and problems faced by us; some of them were:

1. In generating the database, the main problems were renaming the images and saving them in a proper way so that they could be accessed easily during processing. We therefore used numbers for naming the faces, like 1.jpg, 2.jpg, and so on.

2. Coding the Eigen face algorithm to work with live images was complex.

3. Choosing the tolerance for real-time recognition.

4. Assembling the motors on the frame, as the motors did not fix onto the frame properly.

5. We first tried to move the frame using stepper motors, but the code we wrote did not work properly, and tracking with the stepper covers only 180 degrees. The DC motors work fine with the code and can also track through 360 degrees.

6. Tuning the pauses and delays so that the motors work smoothly.


APPLICATIONS OF THIS PROJECT

This project can have many applications. It can be used:

1. For accessing a secure area by face recognition.

Fig 8.1: A person entering an organization; the camera detects the face and marks attendance.

2. For attendance registration

Fig 8.2 Attendance is marked with count.

3. For anti-terror activities

Fig 8.3: A camera in the station detects a most-wanted criminal and an alarm is raised.


Fig 8.4: CCTV camera tracks the position of the criminal.

Fig 8.5: After all the efforts, security guards finally capture the criminal.

4. For automatic videography


5. For counting the number of persons in a room

Fig 8.6: Counting the number of persons


APPENDIX

Program code

%% face_choice.m - main script file

imaqreset;
clear all; close all; clc;
i=1;
global M;        %input no. of faces
global face_id
global times     %no of faces for one person
times=5;         %no. of pics captured for an individual
N=4;             %default no. of faces
config_arduino();   %function for configuring the arduino board
vid = videoinput('winvideo',2,'YUY2_320x240');
while (1==1)
    choice=menu('Face Recognition',...
        'Generate database',...
        'Recognize face from drive',...
        'Recognize face from camera',...
        'Track face from camera',...
        'Track the recognized face from camera',...
        'Exit');
    if (choice==1)
        choice1=menu('Face Recognition',...
            'Enter no. of faces',...
            'Exit');
        if (choice1==1)
            M=input('Enter : ');
            preview(vid);
            while(i<((M*times)+1))
                choice2=menu('Face Recognition',...
                    'Capture');
                if(choice2==1)
                    g=getsnapshot(vid);
                    %saving rgb image in specified folder
                    rgbImage=ycbcr2rgb(g);
                    str=strcat(int2str(i),'.jpg');
                    fullImageFileName = fullfile('E:\New Folder\',str);


                    imwrite(rgbImage,fullImageFileName);
                    %saving grayscale image in current directory
                    grayImage=rgb2gray(rgbImage);
                    Dir_name=fullfile(pwd,str);
                    imwrite(grayImage,Dir_name);
                    i=(i+1);
                end
            end
            closepreview(vid);
        end
        if (choice1==2)
            clear choice1;
        end
    end
    if(choice==2)
        if(isempty(M)==1)
            default=N*times;
            face_id=recognize_face_drive((default));
        else
            faces=M*times;
            face_id=recognize_face_drive(faces);
        end
    end
    if(choice==3)
        if(isempty(M)==1)
            default=N*times;
            face_id=recognize_face_cam(default);
        else
            faces=M*times;
            face_id=recognize_face_cam(faces);
        end
    end
    if (choice==4)
        vj_track_face();
    end
    if (choice==5)
        vj_faceD_live();
    end
    if (choice==6)
        close all;
        return;
    end


end
stop(vid);

%% config_arduino.m – for configuring Arduino board

function config_arduino()   %function for configuring the arduino board
global b   %global arduino class object
b=arduino('COM29');
b.pinMode(4,'OUTPUT');   % pin 4&5 for right & left and pin 6&7 for up & down
b.pinMode(5,'OUTPUT');
b.pinMode(6,'OUTPUT');
b.pinMode(7,'OUTPUT');
b.pinMode(2,'OUTPUT');
b.pinMode(3,'OUTPUT');
b.pinMode(8,'OUTPUT');
b.pinMode(9,'OUTPUT');
b.pinMode(10,'OUTPUT');
b.pinMode(11,'OUTPUT');
b.pinMode(12,'OUTPUT');
b.pinMode(13,'OUTPUT');  % pin 2,3,8,9,10,11,12,13 for seven segment display 8 leds
b.pinMode(14,'OUTPUT');
b.pinMode(15,'OUTPUT');
b.pinMode(16,'OUTPUT');
b.pinMode(17,'OUTPUT');  % pin 14,15,16,17 for multiplexing 4 seven segment displays
end

%% recognize_face_drive.m – function for matching face using Eigen face algorithm from hard drive

% Thanks to Santiago Serrano function Min_id = recognize_face_drive(M)close allclc% number of images on your training set. %Chosen std and mean.%It can be any number that it is close to the std and mean of most of the images.um=100;ustd=80;person_no=0;times=5; %read and show images(bmp);S=[]; %img matrixfigure(1);for i=1:M


str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image eval('img=imread(str);'); %eval('img=rgb2gray(image);'); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) if i==3 title('Training set','fontsize',18) end drawnow; [irow icol]=size(img); % get the number of rows (N1) and columns (N2) temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 matrix S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence %this is our Send %Here we change the mean and std of all images. We normalize all images.%This is done to reduce the error due to lighting conditions.for i=1:size(S,2) temp=double(S(:,i)); m=mean(temp); st=std(temp); S(:,i)=(temp-m)*ustd/st+um;end %show normalized imagesfigure(2);for i=1:M str=strcat(int2str(i),'.jpg'); img=reshape(S(:,i),icol,irow); img=img'; eval('imwrite(img,str)'); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) drawnow; if i==3 title('Normalized Training Set','fontsize',18) endend %mean image;m=mean(S,2); %obtains the mean of each row instead of each columntmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matriximg=img'; %creates a N1xN2 matrix by transposing the image.figure(3);imshow(img);title('Mean Image','fontsize',18)


% Change image for manipulationdbx=[]; % A matrixfor i=1:M temp=double(S(:,i)); dbx=[dbx temp];end %Covariance matrix C=A'A, L=AA'A=dbx';L=A*A';% vv are the eigenvector for L% dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';[vv dd]=eig(L);% Sort and eliminate those whose eigenvalue is zerov=[];d=[];for i=1:size(vv,2) if(dd(i,i)>1e-4) v=[v vv(:,i)]; d=[d dd(i,i)]; endend %sort, will return an ascending sequence[B index]=sort(d);ind=zeros(size(index));dtemp=zeros(size(index));vtemp=zeros(size(v));len=length(index);for i=1:len dtemp(i)=B(len+1-i); ind(i)=len+1-index(i); vtemp(:,ind(i))=v(:,i);endd=dtemp;v=vtemp; %Normalization of eigenvectorsfor i=1:size(v,2) %access each column kk=v(:,i); temp=sqrt(sum(kk.^2)); v(:,i)=v(:,i)./temp;end %Eigenvectors of C matrixu=[];for i=1:size(v,2) temp=sqrt(d(i)); u=[u (dbx*v(:,i))./temp];


end %Normalization of eigenvectorsfor i=1:size(u,2) kk=u(:,i); temp=sqrt(sum(kk.^2)); u(:,i)=u(:,i)./temp;end % show eigenfaces;figure(4);for i=1:size(u,2) img=reshape(u(:,i),icol,irow); img=img'; img=histeq(img,255); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) drawnow; if i==3 title('Eigenfaces','fontsize',18) endend % Find the weight of each face in the training set.omega = [];for h=1:size(dbx,2) WW=[]; for i=1:size(u,2) t = u(:,i)'; WeightOfImage = dot(t,dbx(:,h)'); WW = [WW; WeightOfImage]; end omega = [omega WW];end % Acquire new image% Note: the input image must have a bmp or jpg extension.% It should have the same size as the ones in your training set.% It should be placed on your desktopInputImage = input('Please enter the name of the image and its extension \n','s');InputImage = imread(strcat('E:\',InputImage));figure(5)subplot(1,2,1)imshow(InputImage); colormap('gray');title('Input image','fontsize',18)input_img=rgb2gray(InputImage);%imshow(input_img);InImage=reshape(double(input_img)',irow*icol,1);


temp=InImage;me=mean(temp);st=std(temp);temp=(temp-me)*ustd/st+um;NormImage = temp;Difference = temp-m; p = [];aa=size(u,2);for i = 1:aa pare = dot(NormImage,u(:,i)); p = [p; pare];endReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u is the eigenvectorReshapedImage = reshape(ReshapedImage,icol,irow);ReshapedImage = ReshapedImage';%show the reconstructed image.subplot(1,2,2)imagesc(ReshapedImage); colormap('gray');title('Reconstructed image','fontsize',18) InImWeight = [];for i=1:size(u,2) t = u(:,i)'; WeightOfInputImage = dot(t,Difference'); InImWeight = [InImWeight; WeightOfInputImage];end ll = 1:M;figure(68)subplot(1,2,1)stem(ll,InImWeight)title('Weight of Input Face','fontsize',14) % Find Euclidean distancee=[];for i=1:size(omega,2) q = omega(:,i); DiffWeight = InImWeight-q; mag = norm(DiffWeight); e = [e mag];end kk = 1:size(e,2);subplot(1,2,2)stem(kk,e)title('Eucledian distance of input image','fontsize',14) MaximumValue=max(e)MinimumValue=min(e)


Min_id=find(e==min(e));person_no=Min_id/times;p1=(round(person_no));if(person_no<p1) p1=(p1-1); display('Detected face number') display(p1) write_digit(14,15,16,17,p1);endif(person_no>p1) p1=(p1+1); display('Detected face number') display(p1) write_digit(14,15,16,17,p1);endif(person_no==p1) display('Detected face number') display(p1) write_digit(14,15,16,17,p1);end end

%% recognize_face_cam.m – function for matching face using Eigen face algorithm from camera

% Thanks to Santiago Serrano function Min_id = recognize_face_cam(M)imaqreset; close allclc% number of images on your training set. vid = videoinput('winvideo',1,'YUY2_320x240'); %Chosen std and mean.%It can be any number that it is close to the std and mean of most of the images.um=100;ustd=80;person_no=0;times=5; %read and show images(bmp);S=[]; %img matrixfigure(1);for i=1:M str=strcat(int2str(i),'.jpg'); %concatenates two strings that form the name of the image


eval('img=imread(str);'); %eval('img=rgb2gray(image);'); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) if i==3 title('Training set','fontsize',18) end drawnow; [irow icol]=size(img); % get the number of rows (N1) and columns (N2) temp=reshape(img',irow*icol,1); %creates a (N1*N2)x1 matrix S=[S temp]; %X is a N1*N2xM matrix after finishing the sequence %this is our Send %Here we change the mean and std of all images. We normalize all images.%This is done to reduce the error due to lighting conditions.for i=1:size(S,2) temp=double(S(:,i)); m=mean(temp); st=std(temp); S(:,i)=(temp-m)*ustd/st+um;end %show normalized imagesfigure(2);for i=1:M str=strcat(int2str(i),'.jpg'); img=reshape(S(:,i),icol,irow); img=img'; eval('imwrite(img,str)'); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) drawnow; if i==3 title('Normalized Training Set','fontsize',18) endend %mean image;m=mean(S,2); %obtains the mean of each row instead of each columntmimg=uint8(m); %converts to unsigned 8-bit integer. Values range from 0 to 255img=reshape(tmimg,icol,irow); %takes the N1*N2x1 vector and creates a N2xN1 matriximg=img'; %creates a N1xN2 matrix by transposing the image.figure(3);imshow(img);title('Mean Image','fontsize',18) % Change image for manipulation


dbx=[]; % A matrixfor i=1:M temp=double(S(:,i)); dbx=[dbx temp];end %Covariance matrix C=A'A, L=AA'A=dbx';L=A*A';% vv are the eigenvector for L% dd are the eigenvalue for both L=dbx'*dbx and C=dbx*dbx';[vv dd]=eig(L);% Sort and eliminate those whose eigenvalue is zerov=[];d=[];for i=1:size(vv,2) if(dd(i,i)>1e-4) v=[v vv(:,i)]; d=[d dd(i,i)]; endend %sort, will return an ascending sequence[B index]=sort(d);ind=zeros(size(index));dtemp=zeros(size(index));vtemp=zeros(size(v));len=length(index);for i=1:len dtemp(i)=B(len+1-i); ind(i)=len+1-index(i); vtemp(:,ind(i))=v(:,i);endd=dtemp;v=vtemp; %Normalization of eigenvectorsfor i=1:size(v,2) %access each column kk=v(:,i); temp=sqrt(sum(kk.^2)); v(:,i)=v(:,i)./temp;end %Eigenvectors of C matrixu=[];for i=1:size(v,2) temp=sqrt(d(i)); u=[u (dbx*v(:,i))./temp];end


%Normalization of eigenvectorsfor i=1:size(u,2) kk=u(:,i); temp=sqrt(sum(kk.^2)); u(:,i)=u(:,i)./temp;end % show eigenfaces;figure(4);for i=1:size(u,2) img=reshape(u(:,i),icol,irow); img=img'; img=histeq(img,255); subplot(ceil(sqrt(M)),ceil(sqrt(M)),i) imshow(img) drawnow; if i==3 title('Eigenfaces','fontsize',18) endend % Find the weight of each face in the training set.omega = [];for h=1:size(dbx,2) WW=[]; for i=1:size(u,2) t = u(:,i)'; WeightOfImage = dot(t,dbx(:,h)'); WW = [WW; WeightOfImage]; end omega = [omega WW];end % Acquire new image from camerapreview(vid);choice=menu('Push CAM button for taking pic',... 'CAM');if(choice==1) g=getsnapshot(vid);endrgbImage=ycbcr2rgb(g);imwrite(rgbImage,'camshot.jpg');closepreview(vid);InputImage = imread('camshot.jpg');figure(5)subplot(1,2,1)


imshow(InputImage); colormap('gray');title('Input image','fontsize',18)input_img=rgb2gray(InputImage);%imshow(input_img);InImage=reshape(double(input_img)',irow*icol,1);temp=InImage;me=mean(temp);st=std(temp);temp=(temp-me)*ustd/st+um;NormImage = temp;Difference = temp-m; p = [];aa=size(u,2);for i = 1:aa pare = dot(NormImage,u(:,i)); p = [p; pare];endReshapedImage = m + u(:,1:aa)*p; %m is the mean image, u is the eigenvectorReshapedImage = reshape(ReshapedImage,icol,irow);ReshapedImage = ReshapedImage';%show the reconstructed image.subplot(1,2,2)imagesc(ReshapedImage); colormap('gray');title('Reconstructed image','fontsize',18) InImWeight = [];for i=1:size(u,2) t = u(:,i)'; WeightOfInputImage = dot(t,Difference'); InImWeight = [InImWeight; WeightOfInputImage];end ll = 1:M;figure(68)subplot(1,2,1)stem(ll,InImWeight)title('Weight of Input Face','fontsize',14) % Find Euclidean distancee=[];for i=1:size(omega,2) q = omega(:,i); DiffWeight = InImWeight-q; mag = norm(DiffWeight); e = [e mag];end kk = 1:size(e,2);subplot(1,2,2)stem(kk,e)


title('Eucledian distance of input image','fontsize',14) MaximumValue=max(e)MinimumValue=min(e) Min_id=find(e==min(e));person_no=Min_id/times;p1=(round(person_no));if(person_no<p1) p1=(p1-1); display('Detected face number') display(p1); write_digit(14,15,16,17,p1);endif(person_no>p1) p1=(p1+1); display('Detected face number') display(p1); write_digit(14,15,16,17,p1);endif(person_no==p1) display('Detected face number') display(p1); write_digit(14,15,16,17,p1);endstop(vid);end

%% vj_track_face.m – function for tracking face

% created and coded by Abhishek Gupta ([email protected])
function vj_track_face()
imaqreset;
close all;
clc
no_face=0;
%Detect objects using Viola-Jones Algorithm
vid = videoinput('winvideo',1,'YUY2_320x240');
set(vid,'ReturnedColorSpace','rgb');
set(vid,'TriggerRepeat',Inf);
vid.FrameGrabInterval = 1;
vid.FramesPerTrigger=20;
figure;
% Ensure smooth display
set(gcf,'doublebuffer','on');
start(vid);
while(vid.FramesAcquired<=1000)


    FDetect = vision.CascadeObjectDetector;   %To detect Face
    I = getsnapshot(vid);                     %Read the input image
    BB = step(FDetect,I);      %Returns Bounding Box values based on number of objects
    hold on
    figure(1),imshow(I);
    title('Face Detection');
    for i = 1:size(BB,1)
        no_face=size(BB,1);
        write_digit(14,15,16,17,no_face);
        rectangle('Position',BB(i,:),'LineWidth',2,'LineStyle','-','EdgeColor','y');
        display(BB(1));
        display(BB(2));
        motor_motion(BB(1),BB(2));
        hold off;
        flushdata(vid);
    end
end
stop(vid);
end

%% vj_faceD_live.m – function for tracking recognized face

function vj_faceD_live(std,mean)imaqreset;close allclcdetect=0;std_2=0;mean_2=0;tlrnce=7;while (1==1) choice=menu('Face Recognition',... 'Real time recognition',... 'Track last recognised face',... 'Exit'); if (choice==1) [std,mean]= face_stdmean(); %Detect objects using Viola-Jones Algorithm vid = videoinput('winvideo',1,'YUY2_320x240'); set(vid,'ReturnedColorSpace','rgb'); set(vid,'TriggerRepeat',Inf);


vid.FrameGrabInterval = 1; vid.FramesPerTrigger=20; figure; % Ensure smooth display set(gcf,'doublebuffer','on'); start(vid); while(vid.FramesAcquired<=1500); FDetect = vision.CascadeObjectDetector; %To detect Face I = getsnapshot(vid); %Read the input image BB = step(FDetect,I); %Returns Bounding Box values based on number of objects hold on if(size(BB,1) == 1) I2=imcrop(I,BB); gray_face=rgb2gray(I2); std_2 = std2(gray_face); mean_2 = mean2(gray_face); %figure(1),imshow(gray_face); end figure(1),imshow(I); title('Face Recognition'); display(std); display(mean); display(std_2); display(mean_2); for i = 1:size(BB,1) if((((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce))))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce)))) rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g'); display('DETECTED'); detect=(detect+1) if(detect==2) display('tracking....'); detect=0; motor_motion(BB(1),BB(2));%for motion of motors in direction of faces end else rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r'); display('NOT DETECTED'); end hold off; flushdata(vid);


end end stop(vid); end if(choice==2) [std,mean]= face_stdmean_recgnz(); %Detect objects using Viola-Jones Algorithm vid = videoinput('winvideo',1,'YUY2_320x240'); set(vid,'ReturnedColorSpace','rgb'); set(vid,'TriggerRepeat',Inf); vid.FrameGrabInterval = 1; vid.FramesPerTrigger=20; figure; % Ensure smooth display set(gcf,'doublebuffer','on'); start(vid); while(vid.FramesAcquired<=600); FDetect = vision.CascadeObjectDetector; %To detect Face I = getsnapshot(vid); %Read the input image BB = step(FDetect,I); %Returns Bounding Box values based on number of objects hold on if(size(BB,1) == 1) I2=imcrop(I,BB); gray_face=rgb2gray(I2); std_2 = std2(gray_face); mean_2 = mean2(gray_face); %figure(1),imshow(gray_face); end figure(1),imshow(I); title('Face Recognition'); display(std); display(mean); display(std_2); display(mean_2); for i = 1:size(BB,1) if((((std_2<=(std+tlrnce))&&(std_2>=(std-tlrnce))))&&((mean_2<=(mean+tlrnce))&&(mean_2>=(mean-tlrnce)))) rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','g'); display('DETECTED'); detect=(detect+1) if(detect==2) display('tracking....'); detect=0;


motor_motion(BB(1),BB(2));%for motion of motors in direction of faces end else rectangle('Position',BB(i,:),'LineWidth',4,'LineStyle','-','EdgeColor','r'); display('NOT DETECTED'); end hold off; flushdata(vid); end end stop(vid); end if(choice==3) return endend

%% face_stdmean.m – function returns standard deviation and mean for real time recognition

function [std_f,mean_f] = face_stdmean()imaqreset; close all;clc;i=1;global stdglobal meanglobal timesstd=0;mean=0;vid = videoinput('winvideo',1,'YUY2_320x240');while (1==1) choice=menu('Face Recognition',... 'Taking photos for recognition',... 'Exit'); if (choice==1) FDetect = vision.CascadeObjectDetector; preview(vid); while(i<(times+1)) choice2=menu('Face Recognition',... 'Capture'); if(choice2==1) g=getsnapshot(vid); %saving rgb image in specified folder rgbImage=ycbcr2rgb(g); str=strcat(int2str(i),'f.jpg');


fullImageFileName = fullfile('E:\New Folder\',str); imwrite(rgbImage,fullImageFileName); BB = step(FDetect,rgbImage); I2=imcrop(rgbImage,BB); %saving grayscale image in current directory grayImage=rgb2gray(I2); Dir_name=fullfile(pwd,str); imwrite(grayImage,Dir_name); std = (std+std2(grayImage)); mean =(mean+mean2(grayImage)); i=(i+1); end std_f=(std/times) mean_f=(mean/times) end end closepreview(vid); if (choice==2) std_f=(std/times) mean_f=(mean/times) return; endendend

%% face_stdmean_recgnz.m– function returns standard deviation and mean of last recognized face

function [std_f,mean_f] = face_stdmean_recgnz()close all;clc;global face_idglobal stdglobal meanglobal timesstd=0;mean=0;%i=face_id;i=input('Enter face id for live recognition: ');FDetect = vision.CascadeObjectDetector;j=(i*times);k=(j-times);while(j>=(k+1)) str=strcat(int2str(j),'.jpg'); fullImageFileName = fullfile('E:\New Folder\',str); I=imread(fullImageFileName); BB = step(FDetect,I); I2=imcrop(I,BB);


grayImage=rgb2gray(I2); str2=strcat(int2str(j),'f.jpg'); Dir_name=fullfile(pwd,str2); imwrite(grayImage,Dir_name); std = (std+std2(grayImage)); mean =(mean+mean2(grayImage)); j=(j-1);end std_f=(std/times) mean_f=(mean/times)end

%% motor_motion.m – function for movement of motors

function motor_motion(x,y)   %for motion of motors in direction of faces
global b   %global arduino class object
if ((x<200 && x>120)&&(y<160 && y>100))
    disp('Stop');
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);
end
if (x<120)
    disp('right');
    b.digitalWrite(4,0);
    b.digitalWrite(5,1);
    pause(0.05);
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    pause(0.1);
end
if (x>200)
    disp('left');
    b.digitalWrite(4,1);
    b.digitalWrite(5,0);
    pause(0.05);
    b.digitalWrite(4,0);
    b.digitalWrite(5,0);
    pause(0.1);
end
if (y>160)
    disp('up');
    b.digitalWrite(6,1);
    b.digitalWrite(7,0);
    pause(0.05);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);


    pause(0.1);
end
if (y<100)
    disp('down');
    b.digitalWrite(6,0);
    b.digitalWrite(7,1);
    pause(0.05);
    b.digitalWrite(6,0);
    b.digitalWrite(7,0);
    pause(0.1);
end
end

%% write_digit.m – function for switching digits on seven segment display

function write_digit(w,x,y,z,a)global bswitch a case 1 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); one(); pause(.001); case 2 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); two(); pause(.001); case 3 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); three(); pause(.001); case 4 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); four(); pause(.001); case 5 b.digitalWrite(w,1);


b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); five(); pause(.001); case 6 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); six(); pause(.001); case 7 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); seven(); pause(.001); case 8 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); eight(); pause(.001); case 9 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); nine(); pause(.001); case 0 b.digitalWrite(w,1); b.digitalWrite(x,0); b.digitalWrite(y,0); b.digitalWrite(z,0); zero(); pause(.001);endend

%% zero.m - function for write zero

function zero()global bb.digitalWrite(2,1);b.digitalWrite(3,1);


b.digitalWrite(8,0);b.digitalWrite(9,0);b.digitalWrite(10,0);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,0);end

%% one.m - function for write one

function one()global bb.digitalWrite(2,1);b.digitalWrite(3,1);b.digitalWrite(8,1);b.digitalWrite(9,1);b.digitalWrite(10,1);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,1);end

%% two.m - function for write two

function two()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,1);b.digitalWrite(9,0);b.digitalWrite(10,0);b.digitalWrite(11,1);b.digitalWrite(12,0);b.digitalWrite(13,0);end

%% three.m - function for write three

function three()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,1);b.digitalWrite(9,1);b.digitalWrite(10,0);


b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,0);end

%% four.m - function for write four

function four()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,0);b.digitalWrite(9,1);b.digitalWrite(10,1);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,1);end

%% five.m - function for write five

function five()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,0);b.digitalWrite(9,1);b.digitalWrite(10,0);b.digitalWrite(11,0);b.digitalWrite(12,1);b.digitalWrite(13,0);end

%% six.m - function for write six

function six()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,0);b.digitalWrite(9,0);b.digitalWrite(10,0);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,1);end


%% seven.m - function for write seven

function seven()global bb.digitalWrite(2,1);b.digitalWrite(3,1);b.digitalWrite(8,1);b.digitalWrite(9,1);b.digitalWrite(10,1);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,0);end

%% eight.m - function for write eight

function eight()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,0);b.digitalWrite(9,0);b.digitalWrite(10,0);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,0);end

%% nine.m - function for write nine

function nine()global bb.digitalWrite(2,1);b.digitalWrite(3,0);b.digitalWrite(8,0);b.digitalWrite(9,1);b.digitalWrite(10,1);b.digitalWrite(11,0);b.digitalWrite(12,0);b.digitalWrite(13,0);end


Arduino Uno board Schematics

Fig: Arduino Uno Schematics

Driver for Arduino UNO

The driver used for the Arduino UNO is the Prolific PL2303. This driver is available from the site given below: http://www.prolific.com/

MATLAB interfacing files for Arduino

MATLAB provides interfacing files for almost all Arduino boards. These files should be included in the working directory. In the folder 'ardiosrv' there is a file named 'ardiosrv.pde'; this file needs to be burned onto the Arduino UNO board. The files are available at the link given below: http://www.mathworks.com/academia/arduino-software/arduino-matlab.html
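Once ardiosrv.pde has been burned onto the board and the interfacing files are on the MATLAB path, the board is driven from MATLAB exactly as in the appendix code. A minimal sketch of such a check (the COM port is whatever Windows assigned; COM29 in our setup) is:

% Minimal MATLAB-to-Arduino check (COM port as assigned by Windows; COM29 in our setup)
b = arduino('COM29');        % connect to the board running ardiosrv.pde
b.pinMode(13,'OUTPUT');      % on-board LED pin
b.digitalWrite(13,1);        % LED on
pause(1);
b.digitalWrite(13,0);        % LED off
delete(b);                   % release the serial connection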


REFERENCES

The useful topics for this project were taken from these references:

[1] Santiago Serrano, Eigenfaces Tutorial, Drexel University.

[2] Padhraic Smyth, Face Detection using the Viola-Jones Method, Department of Computer Science, University of California, Irvine.

[3] Abboud, F. Davoine, and M. Dang. Facial expression recognition and synthesis based on an appearance model. Signal Processing: Image Communication, 2004.

[4] J. Ahlberg. Candide-3 – an updated parametrised face. Technical report, Linköping University, 2001.

[5] J. Ahlberg and R. Forchheimer. Face tracking for model-based coding and face animation. International Journal of Imaging Systems and Technology, 2003.

[6] A. Azarbayejani and A. Pentland. Recursive estimation of motion, structure, and focal length. IEEE PAMI, 1995.

[7] V. Belle, T. Deselaers, and S. Schiffer. Randomized trees for real-time one-step face detection and recognition. 2008.

[8] M. J. Black and Y. Yacoob. Recognizing facial expressions in image sequences using local parameterized models of image motion. IJCV, 1997.

[9] S. Basu, I. Essa, and A. Pentland. Motion regularization for model-based head tracking. In CVPR 1996, 1996.

[10] M. La Cascia, S. Sclaroff, and V. Athitsos. Fast, reliable head tracking under varying illumination: an approach based on registration of texture-mapped 3D models. IEEE PAMI, 2000.
