Journal of Computer Science and Engineering, ISSN 2043-9091, Volume 18, Issue 1, April 2013, www.journalcse.co.uk
MACE Correlation Filter Algorithm for Face Verification in Surveillance Scenario
Omidiora E. O., Olabiyisi S. O., Ojo J. A., Abayomi-Alli A., Akingboye A.Y.,
Izilein F. and Ezomo P. I.
Abstract—Facial recognition is a vividly researched area of computer vision, pattern recognition and, more precisely, biometrics, and has become an important part of everyday life due to the increasing demand for security and law enforcement applications in real-life surveillance scenarios. This research work set out to evaluate the performance of the MACE correlation filter algorithm, which has been utilized successfully in automatic target recognition and has exhibited excellent face recognition performance on many databases. The Scface database mimics a genuinely complex imaging environment, which makes it a difficult performance benchmark for any face recognition algorithm; face matching experiments were carried out using the MACE correlation filter algorithm, implemented in Matlab for the purpose of the study. Results obtained showed an overall accuracy of 0.8678, a Matthew’s correlation coefficient of 0.2880, and positive and negative predictive values of 0.3592 and 0.9270 respectively. The algorithm performed best with frontal infrared images taken at a distance of 1 m from the camera, with a recognition rate of 60%. Thus the algorithm is robust for identification in low illumination conditions, and future research is expected to improve its robustness to compression artefacts and noisy images, which may be important in forensic and law enforcement applications.

Index Terms—Correlation Algorithms, Face recognition, MACE filter, Scface Database, Surveillance.
————————————————————
1 INTRODUCTION
Facial recognition is a very important aspect of biometric authentication and/or identification [1], [2], [3]. Even though facial recognition is very useful in diverse application domains [4], it is not without its problems and challenges [3], due to the variability of face images of the same subject as they change with expression, pose angle, illumination, occlusion, and low resolution. Combinations of some of these factors affect the performance of a facial recognition system in various ways [1], [4], [5], [6], [7]. Even a big smile can render a face recognition system less effective [7].
When performing recognition under surveillance
scenario, one or more combinations of these factors come
into play, thereby making recognition more difficult. It is
important to put in place an evaluation framework that
can enable evaluation of the performance of a face recog-
nition system implementing a particular algorithm before
deployment so as to have a clue regarding what its even-
tual performance will be like and also to determine if it
will be acceptable for its intended purpose.
One of the advantages that face recognition has
over other forms of biometric identification and authenti-
cation is that it does not require active subject participation
[1], which makes it suitable for scenarios where user
cooperation is difficult, if not impossible, to obtain.
Thus there is a departure from the easy scenario,
which makes the face recognition system to experience
severe problems [3] such as pose variation, illumination
conditions, scale variability, aging, glasses, moustaches,
beards, low quality image acquisition, partially occluded
faces etc.
Currently, many security applications use human
observers to recognize the face of an individual. In some
applications, face recognition systems are used in con-
junction with limited human intervention. For autono-
mous operation, it is highly desirable that the face recog-
nition systems be able to provide high reliability and ac-
curacy under multifarious scenarios [8]. This is a difficult
feat to achieve, partly due to the fact that most face rec-
ognition systems are not usually designed and tested
based on surveillance conditions in which the systems are
usually deployed [1], [3], [9]. To overcome this problem it
is important to be able to use manually generated images
that will mimic those that can be obtained from real life
surveillance conditions so as to obtain system evaluation
metrics that will be close to real life scenarios [2]. This has
been achieved by Grgic, Delac and Grgic in [2].
Most approaches to face recognition are in the
image domain, whereas it is believed that there are more
advantages to working directly in the spatial frequency do-
main [10]. By going to the spatial frequency domain, im-
————————————————
E.O. Omidiora is with the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria.
S.O. Olabiyisi is with the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria.
J.A. Ojo is with the Department of Electronic and Electrical Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria.
A. Abayomi-Alli is with the Department of Computer Science, Federal University of Agriculture, Abeokuta, Nigeria.
A.Y. Akingboye is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria.
F. Izilein is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria.
P.I. Ezomo is with the Department of Electrical and Computer Engineering, Igbinedion University Okada, Benin City, Nigeria.
age information gets distributed across frequencies pro-
viding tolerance to reasonable deviations and also provid-
ing graceful degradation against distortions to images
(e.g., occlusions) in the spatial domain. Correlation filter
technology [10] is a basic tool for frequency domain im-
age processing.
In correlation filter methods, normal variations in
authentic training images can be accommodated by de-
signing a frequency domain array (called a correlation
filter) that captures the consistent part of training images
while deemphasizing the inconsistent parts (or frequen-
cies). Object recognition is performed by cross-correlating
an input image with a designed correlation filter using
fast Fourier transforms (FFTs) [10].
This research therefore, set out to evaluate the
performance of the correlation filter algorithm specifically
the MACE filter on the Scface surveillance database. The
choice of the correlation filter algorithm is mainly because
research has shown it to exhibit better tolerance to noise
and illumination variations than many space domain me-
thods [11], [12].
The subsequent sections of the paper present an
overview of face recognition and its challenges and of
the correlation filter family of algorithms, and discuss the
Minimum Average Correlation Energy (MACE) filter in
more detail. Finally, the results of the verification experi-
ments with the MACE filter on the Scface database are
shown and analyzed.
2 FACE RECOGNITION
The subject of face recognition is as old as computer vi-
sion [13]; dating as far back as the 1960s, the first semi-
automated system for face recognition required the ad-
ministrator to locate features (such as eyes, ears, nose and
mouth) on the photograph before it calculated distances
and ratios to a common reference point [14]. Despite the
fact that other methods of identification (such as finger-
prints, or iris scans) can be more accurate, face recogni-
tion has always remained a major focus of research be-
cause of its non-invasive nature and because it is people's
primary method of person identification. Face recognition
is one of the most successful applications of image analy-
sis and understanding, and it has received significant at-
tention in recent years. There are at least two reasons for
this trend; the first is the wide range of commercial and
law enforcement applications and the second is the avail-
ability of feasible technologies after 30 years of research
[15].
Perhaps the most famous early example of a face
recognition system is due to Kohonen [16], who
demonstrated that a simple neural net could perform face
recognition for aligned and normalized face images. The
type of network he employed computed a face descrip-
tion by approximating the eigenvectors of the face im-
age's autocorrelation matrix; these eigenvectors are now
known as ‘eigenfaces’. Kohonen's system was not a prac-
tical success, however, because of the need for precise
alignment and normalization. In following years many
researchers tried face recognition schemes based on
edges, inter-feature distances, and other neural net ap-
proaches. While several were successful on small data-
bases of aligned images, none successfully addressed the
more realistic problem of large databases where the loca-
tion and scale of the face is unknown.
2.1 Face Recognition in 2D
Face recognition has been well studied using 2D still im-
ages for over a decade [17], [18], [19]. In 2D still image
based face recognition systems, a snapshot of a user is
acquired and compared with a gallery of snapshots to
establish a person's identity. In this procedure, the user is
expected to be cooperative and provide a frontal face im-
age under uniform lighting conditions with a simple
background to enable the capture and segmentation of a
high quality face image. However, it is now well known
that small variations in pose and lighting can drastically
degrade the performance of the single-shot 2D image
based face recognition systems [20].
2D face recognition is usually categorized ac-
cording to the number of images used in matching as
shown in Table 1.
Some of the well-known algorithms for 2D face
recognition are based on Principal Component Analysis
(PCA) [17], [18], Linear Discriminant Analysis (LDA) [21],
Elastic Bunch Graph Matching (EBGM) [22], and correlation
based matching [23].
Table 1. Face Recognition in 2D Domain [19].

                        Gallery
Probe                   Single still image   Many still images
Single still image      one-to-one           one-to-many
Many still images       many-to-one          many-to-many
More efforts have been devoted to 2D face recognition
because of the availability of commodity 2D cameras and
deployment opportunities in many security scenarios.
However, 2D face recognition is susceptible to a variety of
factors encountered in practice, such as pose and lighting
variations, expression variations, age variations, and fa-
cial occlusions. Local feature based recognition has been
proposed to overcome the global variations from pose
and lighting changes [24], [25]. The use of multiple frames
with temporal coherence in a video [26], [27] and 3D face
models [28], [29] have also been proposed to improve the
recognition rate.
2.2 Face Recognition in 3D
3D face recognition methods use the surface geometry of
the face [29]. Unlike 2D face recognition, 3D face recogni-
tion is robust against pose and lighting variations due to
the invariance of the 3D shape against these variations
[30]. 3D face recognition is a modality of facial recogni-
tion methods in which the three-dimensional geometry of
the human face is used. It has been shown that 3D face
recognition methods can achieve significantly higher ac-
curacy than their 2D counterparts can. 3D face recogni-
tion achieves better accuracy by measuring the geometry
of rigid features on the face. This avoids such pitfalls of
2D face recognition algorithms as change in lighting, dif-
ferent facial expressions, occlusion and head orientation.
Another approach is to use the 3D model to improve ac-
curacy of traditional image based recognition by trans-
forming the head into a known view.
Table 2. Face Recognition in 3D Domain [19].

                 Gallery
Probe            2D image      2.5D images
2D image         2D-to-2D      2D-to-3D
2.5D images      2D-to-3D      3D-to-3D
The main technological limitation of 3D face rec-
ognition methods is the acquisition of 3D images, which
usually requires a range camera. The drawbacks in 3D
face recognition are the large size of the 3D model, which
requires a high computational cost in matching, and the
high price of 3D imaging sensors [30]. 3D face recogni-
tion is still an active research field, though several ven-
dors offer commercial solutions.
2.3 Face Recognition Problems
In biometrics, the distortion of biometric template data
comes from two main sources: intra-user variability and
the changes in acquisition conditions. Face recognition
therefore faces some issues inherent to the problem defi-
nition, environmental conditions, acquisition conditions,
and hardware constraints.
2.3.1 Intra-user Variability
In face recognition, human face appearance has potential-
ly very large intra-subject variation due to 3D head pose,
facial expression, occlusion due to other objects or acces-
sories (e.g. sunglasses, scarf, etc.), facial hair, aging, etc.
Although there is also the problem of inter-user variabili-
ty, those variations are small due to the similarity of in-
dividual appearances. Zhao [31] showed that the
difference between face images of the same person due to
severe lighting variation can be more significant than the
difference between face images of different persons.
A study by Li and Jain showed that pose variation
is one of the major sources of performance degradation in
face recognition [20]. The face is a 3D object that appears
different depending on the direction from which it is
imaged. Thus, images of the same subject taken from two
different viewpoints (intra-user variation) may appear
more different than images of two different subjects taken
from the same viewpoint (inter-user variation) [31].
2.3.2 Inter-user Variability
The accuracy of facial recognition systems drops signifi-
cantly under certain enrolment, verification, and identifi-
cation conditions. For example, to enrol successfully, users
must face the acquisition camera and cannot be ac-
quired from sharp horizontal or vertical angles. The user’s
face must be lit evenly, preferably from the front. These
are not problems for applications such as ID cards, in
which lighting, angle, and distance from the camera can
be controlled, but they are significant for verification and
identification applications, where environmental factors
vary considerably for a facial recognition system (FRS).
An FRS is especially ineffective when the environmental
conditions of the database image differ from those of the
test image, or when users are enrolled in one location and
verified in another.
Factors such as direct and ambient lighting, sha-
dows and glare, camera quality, distance from camera,
angle of acquisition (camera angle), and background
composition can dramatically reduce performance accu-
racy. Reduced accuracy is most strongly reflected when
terrorists or fraudsters beat identification systems due
to different acquisition conditions of images, or when
authorized users are incorrectly rejected by the au-
thentication systems [30].
However, results reported in several studies
[32], [33], [34] showed that the minimum average correla-
tion energy (MACE) filter, in conjunction with the peak-
to-sidelobe ratio (PSR) metric, provides a recognition al-
gorithm which is tolerant even to unseen illumination
variations [12].
3 CORRELATION FILTERS FOR FACE
VERIFICATION
Correlation filters have been utilized successfully in
automatic target recognition, where they were first used
for object detection [35]. More recently they have been
applied to face verification and recognition [36], [37], [38].
The MACE filter proposed by Mahalanobis [35] was
derived from the Synthetic Discri-
minant Function (SDF) proposed by Hester and Casasent
[39], which in turn was built to resolve the challenges of
the Matched Spatial Filter (MSF). Research has shown
that, among the correlation filter family, the MACE filter
gives the best results when tested. Thus, this study im-
plemented the MACE filter and tested it on the Scface
surveillance database [2]. Essentially, the MACE filter is
the solution of a constrained optimization problem that
seeks to minimize the average correlation energy while at
the same time satisfying the correlation peak constraints.
As a result, the output of the correlation plane will be
close to zero everywhere except at the locations of the
trained objects designated as authentic, where a peak is
produced [40].
3.1 Minimum Average Correlation Energy Filter (MACE)
Rentzeperis [40] gives an exhaustive review of the
correlation filter family. The following subsections discuss
the MACE correlation filter problem and its solution as
presented in [40]; see that work for further details.
Using Parseval’s relation, the optimization problem in
the frequency domain is

    min_H H^+ D H   s.t.   X^+ H = d,
    H ∈ C^{N×1}, X ∈ C^{N×N_t}, d ∈ C^{N_t×1}, D ∈ C^{N×N}    (1)
where X is a matrix whose columns contain the 2D-FFT
coefficients of the training images, D is a square matrix
that contains along its diagonal the average power spec-
trum of the training images, and H is a vector that
contains the 2D-FFT coefficients of the resulting filter.
3.2 Solution of the MACE constrained optimization problem
From [40], using Lagrange multipliers, the function to be
minimized becomes

    φ = H^+ D H − 2λ_1(H^+ X_1 − d_1) − ⋯ − 2λ_{N_t}(H^+ X_{N_t} − d_{N_t})    (2)

In order to minimize the function we set the gradient of φ
with respect to H equal to the zero vector:

    ∂φ/∂H = 0  ⇒  D H = λ_1 X_1 + ⋯ + λ_{N_t} X_{N_t}    (3)

Since the matrix D is diagonal by definition, it is invertible:

    H = D^{-1} Σ_{i=1}^{N_t} λ_i X_i    (4)

Defining L as [λ_1, …, λ_{N_t}]^T we get

    H = D^{-1} X L    (5)
Substituting (5) into X^+ H = d:

    X^+ D^{-1} X L = d    (6)

Solving for L:

    L = (X^+ D^{-1} X)^{-1} d    (7)

Substituting (7) into (5) we get the optimal solution:

    H = D^{-1} X (X^+ D^{-1} X)^{-1} d    (8)
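Equation (8) is compact enough to implement directly. The paper's filter was implemented in Matlab; the following NumPy sketch (our own function and variable names) builds X from the 2D-FFTs of the training images, forms the inverse of the diagonal matrix D from the average power spectrum, and evaluates H = D^{-1} X (X^+ D^{-1} X)^{-1} d:

```python
import numpy as np

def mace_filter(train_imgs, d=None):
    """Synthesize a MACE filter (equation 8) from training images.

    train_imgs : (Nt, rows, cols) real array of registered face images.
    d          : desired correlation peaks, one per image (default: all ones).
    Returns H as an (N,) complex vector of 2D-FFT coefficients, N = rows*cols.
    """
    Nt = train_imgs.shape[0]
    # X: column i holds the vectorized 2D-FFT of training image i.
    X = np.stack([np.fft.fft2(im).ravel() for im in train_imgs], axis=1)
    if d is None:
        d = np.ones(Nt)
    # D is diagonal (average power spectrum), so store only its inverse diagonal.
    D_inv = 1.0 / np.mean(np.abs(X) ** 2, axis=1)
    # H = D^-1 X (X^+ D^-1 X)^-1 d
    gram = X.conj().T @ (D_inv[:, None] * X)   # X^+ D^-1 X, an Nt x Nt matrix
    return (D_inv[:, None] * X) @ np.linalg.solve(gram, d)
```

By construction, the resulting filter satisfies the peak constraints X^+ H = d exactly on its training images.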
3.3 MACE Algorithm
Equation (8) gives the MACE filter. The correla-
tion peak at the origin is set to X_i^+ H = d_i, where the d_i
corresponding to the authentic class are set to one and the
rest are set to zero. For each class a single MACE
filter is synthesized. Once the MACE filter H(u, v) has
been determined, the input test image f(x, y) is cross-cor-
related with it in the following manner:
    c(x, y) = 2D-FFT^{-1}[ 2D-FFT[f(x, y)] ⊙ H^*(u, v) ]    (9)

where the test image is first transformed to the frequency
domain and reshaped into the form of a vector. The result
is then multiplied element-wise by the conjugate of the
MACE filter; this operation is equivalent to cross-correlation
with the MACE filter in the spatial domain. The output is
finally transformed back to the spatial domain.
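The operation in equation (9) reduces to one forward FFT, an element-wise product with the conjugate filter, and one inverse FFT. A minimal sketch (our own naming, using NumPy's FFT conventions):

```python
import numpy as np

def correlation_plane(test_img, H):
    """Cross-correlate a test image with a frequency-domain filter
    (equation 9): c = IFFT2( FFT2(f) * conj(H) )."""
    F = np.fft.fft2(test_img).ravel()
    c = np.fft.ifft2((F * np.conj(H)).reshape(test_img.shape))
    return np.real(c)
```

For example, when H is the FFT of the image itself, the plane is the (circular) autocorrelation, which peaks at zero lag.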
Fig. 1 (correlation filter block diagram) shows a filter de-
signed on N images for class i. When a test image from
class i is input to the system, the correlation output
yields a sharp peak [12].
The Peak to Sidelobe Ratio (PSR) is a metric that measures
the peak sharpness of the correlation plane. For the esti-
mation of the PSR the peak is located first. Then the mean
and standard deviation of the 20×20 sidelobe region,
excluding a 5×5 central mask centred at the peak, are
computed. PSR is then calculated as follows:

    PSR = (peak − mean(sidelobe region)) / σ(sidelobe region)    (10)
The class with the MACE filter that yields the highest PSR
is considered the most likely candidate [40]. The represen-
tation of equation (10) is depicted in Figure 2.
Fig. 2. Region for estimating the peak-to-sidelobe ratio
(PSR) [12].
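The PSR of equation (10) can be sketched as follows (our own code; the 20×20 sidelobe region and 5×5 mask sizes are those given in the text). Boundary handling is omitted: the sketch assumes the peak lies at least 10 cells from the border of the plane.

```python
import numpy as np

def psr(plane, region=20, mask=5):
    """Peak-to-sidelobe ratio (equation 10) of a correlation plane.

    Locates the peak, takes a region x region window around it,
    excludes a mask x mask central block, and normalizes the peak
    by the mean and standard deviation of the remaining sidelobe cells.
    """
    pr, pc = np.unravel_index(np.argmax(plane), plane.shape)
    peak = plane[pr, pc]
    r, m = region // 2, mask // 2
    side = plane[pr - r:pr + r, pc - r:pc + r].astype(float)
    side[r - m:r + m + 1, r - m:r + m + 1] = np.nan  # exclude the central mask
    return (peak - np.nanmean(side)) / np.nanstd(side)
```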
4 EXPERIMENT AND RESULTS OBTAINED
4.1 Scface Database
The Scface database mimics real-world conditions as
closely as possible. We believe that evaluating the per-
formance of the MACE correlation filter algorithm on this
database, which reflects real-world indoor face recognition
with standard video surveillance equipment, will reveal
the algorithm’s true strength.
The database contains images of 130 subjects, a
total of 4,160 images, taken in uncontrolled lighting (in-
cluding infrared night-vision images taken in the dark)
with five surveillance video cameras of different quality.
Subjects’ images were taken at three distinct distances
from the cameras, with the outdoor (sun) light coming
through a window on one side of the room as the only
source of illumination. The five surveillance cameras were
installed in one room at a height of 2.25 m; two of them
could also record in infrared (IR) night-vision mode, so IR
images were recorded as well. A sixth camera was in-
stalled in a separate, darkened room for capturing IR mug
shots; it was fixed in position and focused on a chair on
which participants sat during IR mug shot capture. The
IR part of the database is important since there are re-
search efforts in this direction. For database details such
as image naming conventions, camera set-up, camera
quality details and the experimental protocol, see the
Scface database paper [2].
4.2 Experimental Protocol
Face matching experiments were carried out on the Scface
surveillance database [2] using the MACE correlation
filter algorithm, which was implemented in Matlab for the
purpose of the study. The study introduced an evaluation
protocol quite different from that proposed by Grgic,
Delac and Grgic in [2], but one which we believe provides
a true test of the strength of the algorithm on a difficult
surveillance database like Scface.
A random selection of 10 subjects was used for
the evaluation. For each subject, 17 images were collected
from the Scface database into our own gallery, giving a
total of 170 images, while 14 probe images were used per
subject. The program performs identification by compar-
ing each probe image with the images in every class (a
class represents a subject in the gallery) and gives a score
for every class; hence, for every subject, the 14 probe
images yield 10 scores each.
The program outputs similarity scores between
the probe image and the gallery images in the database
and identifies the subject based on the highest score. All
the scores obtained are presented in tables, with one table
containing the scores obtained from performing identifi-
cation on a given subject. This produces a 10 by 14 matrix
for each subject, with each row representing a subject and
each column representing a given probe image. An extra
row at the bottom of each table contains 1s and 0s: a one
(1) represents an instance where the algorithm identifies
correctly (a true positive, TP) and a zero (0) a false nega-
tive (FN).
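The identification loop described above can be sketched as follows. The names are our own and `score_fn` is a hypothetical stand-in for the similarity score the Matlab program computes for a probe against a gallery class (e.g. the PSR of that class's MACE filter):

```python
import numpy as np

def run_identification(score_fn, n_subjects=10, n_probes=14):
    """Build, for each subject, the 10 x 14 score matrix and the bottom
    row of 1s (true positive) and 0s (false negative) described above.

    score_fn(true_subject, probe_idx, gallery_class) -> similarity score.
    """
    tables = {}
    for s in range(n_subjects):
        # Rows: gallery classes; columns: the subject's probe images.
        scores = np.array([[score_fn(s, p, c) for p in range(n_probes)]
                           for c in range(n_subjects)])
        # A probe counts as a TP when its own class has the highest score.
        tp_row = (scores.argmax(axis=0) == s).astype(int)
        tables[s] = (scores, tp_row)
    return tables
```

With a perfect scorer the bottom row is all ones; a probe whose highest score falls on the wrong class is recorded as a 0 (false negative).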
4.3 Results Obtained
The results obtained from the face verification experi-
ments and the metrics used for performance measurement
are presented in this subsection.
Table 3 shows, as an example, the result of the verifi-
cation experiment for subject one of the ten subjects. The
images from cam2_1, cam3_2, cam4_3 and cam7_2 returned
the highest verification score for subject 1, thus a true pos-
itive, marked with a one (1) below the table; a zero (0)
represents a false negative. The aggregate confusion matrix
shown in Table 4 presents the performance of the
MACE filter algorithm in terms of the correctly recognized
faces and the confused faces (faces recognized as a subject
different from the actual person), while Table 5 presents
the table of confusion for each subject as obtained from
the experiments. The aggregate table of confusion, ob-
tained by averaging the true positives (TP), false positives
(FP), true negatives (TN), and false negatives (FN), is
presented in Table 6.
Table 3. Result of the verification experiment for subject one. Images from cam2_1, cam3_2, cam4_3, and cam7_2 returned
the highest recognition score for images of subject one, hence a true positive, while images from L2, L4, R1, R3,
cam1_2, cam2_3, cam4_1, cam5_2, cam6_1, and cam6_3 returned a false negative, as the images with the highest recog-
nition scores were not subject one.
Table 4. Overall confusion matrix from the experiment. Results along the diagonal represent the total true posi-
tives (TP) for the respective subjects.
4.4 Performance Metrics
A MATLAB program was written to compute the various
metrics used for the evaluation of the system. The values
of all the metrics obtained from the output of the program
are presented in Table 7.
4.4.1 False Positive Rate
The false positive rate value of 0.0745 implies that, on
average, only a small fraction of the recognition outcomes
wrongly predicted a particular subject for each of the sub-
jects.
4.4.2 Sensitivity (True positive rate)
A sensitivity value of 0.3643 means that the proportion of
actual subjects predicted correctly by the algorithm is
rather small; this, though, is not an absolute scale for judg-
ing the algorithm, as the other metrics also have a role to
play.
4.4.3 Accuracy
An accuracy of 0.8678 suggests great performance but can
also be misleading in isolation, because this value could be
high even if the recognition rate for certain classes is zero.
4.4.4 Specificity
A specificity value as high as 0.9255 suggests the converse
of the 0.0745 false positive rate: a large proportion of
actual negative outcomes were labeled as such with
respect to each subject.
Table 5. Table of Confusion for All Subjects
Subject TP FN FP TN
1 4 10 15 111
2 6 8 3 113
3 5 9 3 113
4 1 13 5 121
5 9 5 28 77
6 4 10 3 123
7 6 8 5 121
8 11 3 5 121
9 2 12 6 120
10 3 11 5 121
Table 6. Aggregate Table of Confusion
TP=5.1 FN=8.9
FP=7.8 TN=114
Table 7. Overall Performance Metrics
Metric Value
False positive rate 0.0745
Sensitivity (True positive rate) 0.3643
Accuracy 0.8678
Specificity 0.9255
Positive predictive value 0.3592
Negative predictive value 0.9270
False discovery rate 0.6408
Matthew’s correlation coefficient 0.2880
F1 score 0.0748
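As an illustrative sketch (our own code), the aggregate table of confusion (Table 6) and the sensitivity in Table 7 can be reproduced from the per-subject counts in Table 5 using the standard definitions; the remaining Table 7 entries follow analogously from their usual formulas under the paper's evaluation protocol.

```python
import numpy as np

# Per-subject (TP, FN, FP, TN) counts, transcribed from Table 5.
confusion = np.array([
    [4, 10, 15, 111], [6, 8, 3, 113], [5, 9, 3, 113], [1, 13, 5, 121],
    [9, 5, 28, 77],   [4, 10, 3, 123], [6, 8, 5, 121], [11, 3, 5, 121],
    [2, 12, 6, 120],  [3, 11, 5, 121],
])

# Aggregate table of confusion (Table 6): the average of each count.
tp, fn, fp, tn = confusion.mean(axis=0)

# Standard metric definitions.
sensitivity = tp / (tp + fn)               # true positive rate
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + fp + tn)
```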
4.4.5 Positive Predictive Value
A PPV of 0.3592 is somewhat low and could mean that
the algorithm has a weak ability to positively predict each
subject’s identity.
4.4.6 Negative Predictive Value
The NPV is 0.9270, which means there is a high tendency
for the algorithm to rightly declare the outcome of recog-
nition as not being a particular subject for all subjects in
the test sample. This is consistent with the specificity and
FPR values.
4.4.7 False Discovery Rate
A value of 0.6408 implies that a rather large proportion of
recognition outcomes is expected to be false positives.
4.4.8 Matthew’s Correlation Coefficient
A value of 0.2880 indicates an identity recognition out-
come only slightly above random.
4.4.9 F1 Score
An F1 score of 0.0748 is low but cannot be used as the sole
criterion for measuring performance.
5 DISCUSSION
The ROC plot in Figure 3 indicates a point above the ran-
dom guess line, so the algorithm’s performance can be
considered only satisfactory under surveillance condi-
tions. This is simply because images in the Scface database
vary widely in quality and imaging conditions; hence the
algorithm would perform far better with high quality
images.
Fig. 3. ROC Curve showing the MACE correlation filter
algorithm’s performance on the Scface database.
It was observed that the algorithm failed to rec-
ognize any of the images taken at an angle of +22.5° to the
camera (i.e. image R1). This can be considered an anoma-
ly, as it is inconsistent with the performance obtained on
images taken at other angles: there was at least 40%
recognition for images taken at the three other angles. It
was expected that images taken at extreme angles (such
as +90° and −90°) would have lower recognition rates,
but the results of this experiment revealed the contrary,
as images taken at an angle of −90° (L4) yielded a 40%
recognition rate, the same as the rate for images taken at
−45° (L2) and higher than the rate for images taken at
+22.5°. This somewhat confirms the MACE filter’s invari-
ance to pose variation.
Results also revealed that the algorithm per-
formed best for frontal infrared images taken at a distance
of 1.00 meter from the camera (i.e. cam6_3), with an over-
all recognition rate of 60%. This is consistent with the re-
sult reported in [2], where night-time experiments per-
formed using frontal infrared mug shots yielded better
results relative to those obtained from daytime experi-
ments.
Fig. 4. Bar chart of recognition rates. Images taken at an angle of +22.5° to the right had a zero recognition rate,
while frontal infrared images from cam6_3 had the highest recognition rate.
A total of 43% of all image classes had a recogni-
tion rate of 40%; this is the modal recognition rate, i.e. the
majority of the image types yielded a 40% recognition rate.
The full set of recognition rates is presented in Figure 4.
6 CONCLUSION
This research work evaluated the MACE correlation filter
algorithm for face verification under a surveillance scenario
using the Scface database. Although the MACE correla-
tion filter has exhibited excellent face recognition perfor-
mance on many databases, the experimental results in this
study show that it performed only slightly above average,
because the Scface database mimics a genuinely complex
imaging environment, such as is expected when building
an airport security system or any other national security
system. The database is thus a difficult performance
benchmark for any face recognition algorithm.
It was discovered that the MACE filter algorithm
is not effective for surveillance scenarios, as variations in pose angle and distance from the camera caused serious degradation in performance. This is further confirmed by [40], which reports that the success rate of the MACE filter degrades severely as the face strays from its ideal position. The algorithm will, however, be very effective for one-to-one face verification applications with high quality probe and gallery pairs, since it performs better with high quality frontal shots.
For future study, more experiments need to be done in order to improve the MACE filter’s value in the face verification process and to strengthen its robustness to pose and noise, such as extreme image compression and low quality acquisition sensors.
ACKNOWLEDGMENT
The authors wish to thank Professors Mislav Grgic, Kresimir Delac and Sonja Grgic for the release of the SCface surveillance cameras face database.
REFERENCES
[1] John D. Woodward, Jr., Christopher Horn, Julius Ga-
tune, and Aryn Thomas, Biometrics - A Look At Facial
Recognition, RAND public safety and justice, Virginia,
2003.
[2] Mislav Grgic, Kresimir Delac and Sonja Grgic, SCface -
surveillance cameras face database, Multimedia Tools and
Applications (2011) 51:863-879, DOI 10.1007/s11042-009-0417-2.
[3] Luis Torres, Is There Any Hope For Face Recognition?
Technical University of Catalonia, Barcelona, Spain, 2004.
[4] Ion Marqués, Face Recognition Algorithms, Proyecto
Fin de Carrera, Universidad del País Vasco / Euskal
Herriko Unibertsitatea, 2010.
[5] Narayanan Ramanathan, Rama Chellappa (Center for
Automation Research and Dept. of Electrical & Computer
Engineering, University of Maryland) and Amit K. Roy
Chowdhury (Dept. of Electrical Engineering, University of
California), Facial Similarity Across Age, Disguise, Illu-
mination and Pose, 2003.
[6] Eric P. Kukula and Stephen J. Elliott, Evaluation of a
Facial Recognition Algorithm Across Three Illumination
Conditions, Purdue University, 2003.
[7] Wikipedia, Facial Recognition System, available online
at http://en.wikipedia.org/wiki/Facial_recognition
_system.
[8] G. Givens, J. R. Beveridge, B. A. Draper, P. Grother,
and P. J. Phillips, How features of the human face affect
recognition: a statistical comparison of three face recogni-
tion algorithms, CVPR, vol. 2, 2004.
[9] Richa Singh, Mayank Vatsa and Afzel Noore (2008).
Recognizing Face Images with Disguise Variations, Re-
cent Advances in Face Recognition, Kresimir Delac, Mis-
lav Grgic and Marian Stewart Bartlett (Eds.), ISBN: 978-953-
7619-34-3, InTech.
[10] B.V.K. Vijaya Kumar, Abhijit Mahalanobis and Ri-
chard D. Juday, Correlation Pattern Recognition, Cam-
bridge University Press, UK, November 2005.
[11] B.V.K. Vijaya Kumar, M. Savvides, K. Venkatarama-
ni, and C. Xie, ‚Spatial Frequency Domain Image
Processing For Biometric Recognition,‛ Proceedings IEEE
International Conference on Image Processing, pp. 53-56,
2002.
[12] M. Savvides, B.V.K. Vijaya Kumar, and P.K. Khosla,
"‘Corefaces’- Robust Shift Invariant PCA based correlation
filter for illumination tolerant face recognition,‛ Proceed-
ings IEEE Computer Vision and Pattern Recognition
(CVPR), pp. 834-841, June 2004.
[13] Andrew W. Senior and Ruud M. Bolle, Face Recogni-
tion and Its Applications, IBM T.J. Watson Research Cen-
ter, 2002.
[14] Face Recognition, available online
athttp://vismod.media.mit.edu/techreports/TR516/node7.
html
[15] Zhao, W.; Chellappa, R.; Rosenfeld, A.; Phillips, P. J.
(2003), Face Recognition: A Literature Survey, ACM Com-
puting Surveys, Vol. 35, Issue 4, December 2003, pp. 399-
458.
[16] Available online at http://www.biometrics cata-
log.org/ NSTCSubcommittee.
[17+ Kirby, M. and Sirovich, L. (1990): ‚Application of the
Karhunen-loeve procedure for the characterization of
human faces‛. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 12(1):103-108.
*18+ Turk, M. and Pentland, A. (1991): ‚Eigenfaces for rec-
ognition‛, Journal of Cognitive Neuroscience, 3(1):72-86.
[19] Zhao, W.; Chellappa, R.; Rosenfeld, A.; Phillips, P. J.
(2003), Face Recognition: A Literature Survey, ACM Com-
puting Surveys, Vol. 35, Issue 4, December 2003, pp. 399-
458.
[20] Li, S. Z., and Jain, A. K. (2005). eds., Handbook of
Face Recognition. Springer-Verlag, Secaucus, New York,
USA, 2005.
*21+ Fisher, R. A. (1938): ‚The statistical utilization of mul-
tiple measurements‛, Annals of Eugenics, 8:376-386.
[22] Wiskott, L.; Fellous, J.-M.; Kruger, N.; and Von der
Malsburg, C. (1997): ‚Face recognition by elastic bunch
graph matching‛. IEEE Transactions on Pattern Analysis
and Machine Intelligence, 19(7):775-779, 1997.
*23+ Lewis, J. P. (1995): ‚Fast normalized cross-
correlation‛, Vision Interface, pp 120-123.
[24] Hadid, A.; Ahonen, T.; and Pietikainen, M. (2004):
‚Face recognition with local binary patterns‛, In Proc.
European Conference on Computer Vision, pages 469-
481.
[25] Arca, S.;Campadelli, P. and Lanzarotti, R. (2003): A
face recognition system based on local feature analysis. In
Proc. Audio- and Video-Based Biometric Person Authen-
tication, pp 182-189.
[26] Zhou, S.; Krueger, V.; and Chellappa, R. (2003):
‚Probabilistic recognition of human faces from video‛,
Computer Vision and Image Understanding, Vol. 91, pp
214-245.
[27] Aggarwal, G.; Roy-Chowdhury, A. K.; and Chellappa,
R. (2004): ‚A system identification approach for video-
based face recognition‛, In Proc. International Conference
on Pattern Recognition, volume 4, pp 175-178.
[28+ Blanz, V. and Vetter, T. (2003): ‚Face recognition
based on fitting a 3d morphable model‛, IEEE Transac-
tions on Pattern Analysis and Machine Intelligence,
25(9):1063-1074.
*29+ Lu, X.; Jain, A. K.; and Colbry, D. (2006): ‚Matching
2.5d face scans to 3d models‛, IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, Vol. 28(1), pp31-
43.
*30+ Park, U. (2009): ‚Face Recognition: Face in Video, Age
Invariance, and Facial Marks‛, An unpublished Ph.D Dis-
sertation submitted to the Department of Computer
Science, Michigan State University, U.S.A.
*31+ Zhao, W. and Chellappa, R. (1999): ‚Robust face rec-
ognition using symmetric shape-from-shading‛, Technical
Report, Center for Automation Research, University of
Maryland.
[32] M. Savvides and B.V.K. Vijaya Kumar, "Quad-phase
minimum average correlation energy filters for reduced-
memory illumination-tolerant face authentication," Audio
and Visual Biometrics based Person Authentication
(AVBPA), 2003.
[33] M. Savvides, B.V.K. Vijaya Kumar and P.K. Khosla,
"Robust, Shift-Invariant Biometric Identification from Par-
tial Face Images", accepted for publication in Biometric
Technologies for Human Identification (OR51) 2004.
[34] M. Savvides, B.V.K. Vijaya Kumar and P.K. Khosla,
"Cancellable Biometric Filters for Face Recognition", ac-
cepted for publication in International Conference in Pat-
tern Recognition (ICPR) 2004.
[35] AbhijitMahalanobis, B. V. K. Vijaya Kumar and David
Casasent, ‚Minimum Average Correlation Energy Fil-
ters‚, Journal of Applied Optics, Vol. 26, No 17 , 1987.
[36] MariosSavvides, B. V. K. Vijaya Kumar and Pra-
deepKhosla, ‚Face Verification using Correlation Filters‛,
Proc. Of the Third IEEE Automatic Identification Ad-
vanced Technologies, 56-61, Tarrytown, NY, 2002.
[37] B. V. K. Vijaya Kumar, MariosSavvides, KrithikaVen-
kataramani and ChunyanXie, ‚Spatial Frequency Domain
Image Processing For Biometric Recognition‛, Proc. of
15
Intl. Conf. on Image Processing (ICIP), Vol. I, 53-56, 2002.
*38+ B. V. K. Vijaya Kumar, MariosSavvides, ‚Efficient
Design of Advanced Correlation Filters For Robust Dis-
tortion-Tolerant Face Recognition‛, Proc. of the IEEE Con-
ference on Advanced Video and Signal Based Surveil-
lance, 2003.
*39+ C. F. Hester and D. Casassent, ‚Multivariant tech-
nique for multiclass pattern recognition‛, Journal of Ap-
plied Optics 19, pp. 1758-1761, 1980.
*40+ Elias Rentzeperis (2003), ‚A Comparative Analysis of
Face Recognition Algorithms: Hidden Markov Models,
Correlation Filters and Laplacianfaces Vs. Linear sub-
space projection and Elastic Bunch Graph Matching‛, an
MSc. Thesis in Information Networking, Autonomic and
Grid Computing Group, Carnegie Mellon University,
USA.
Omidiora, E. O. received the B.Eng degree in Computer Engineering from Obafemi Awolowo University, Ile-Ife in 1992, an M.Sc in Computer Science from the University of Lagos, Lagos in 1998 and a Ph.D in Computer Science from Ladoke Akintola University of Technology, Ogbomoso, Nigeria in 2006. He is currently an Associate Professor in the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. He has published in reputable journals and learned conferences. Dr Omidiora is a full member of the Computer Professional (Registration) Council of Nigeria (CPN) and a registered engineer with COREN. His research interests are biometric algorithms and applications, image processing and microprocessor-based systems.

Olabiyisi, S. O. received his B.Tech, M.Tech and Ph.D degrees in Mathematics from Ladoke Akintola University of Technology, Ogbomoso, Nigeria, in 1999, 2002 and 2006 respectively. He also received an M.Sc degree in Computer Science from the University of Ibadan, Ibadan, Nigeria in 2003. He is currently an Associate Professor in the Department of Computer Science and Engineering, Ladoke Akintola University of Technology, Ogbomoso, Nigeria. He has published in reputable journals and learned conferences. Dr Olabiyisi is a full member of the Computer Professional (Registration) Council of Nigeria (CPN). His research interests are in computational mathematics, theoretical computer science, information systems, and performance modelling and simulation.

Ojo, J. A. received a B.Tech degree in Electronic and Electrical Engineering from Ladoke Akintola University of Technology (LAUTECH), Ogbomoso, Nigeria in 1998, an M.Sc in the same field from the University of Lagos, Nigeria in 2003 and a Ph.D from LAUTECH in 2011. He is presently a Senior Lecturer in the Department of Electronic and Electrical Engineering in the same institution. He is a member of the Nigerian Society of Engineers (NSE) and a registered engineer with COREN. His research interests include biometrics, security and surveillance systems, e-health and mobile-health, biomedical engineering and image pre-processing techniques.

Abayomi-Alli, A. obtained his B.Tech degree in Computer Engineering from Ladoke Akintola University of Technology (LAUTECH), Ogbomoso in 2005 and an M.Sc in Computer Science from the University of Ibadan, Nigeria in 2009. He is a registered engineer with COREN and a chartered information technology practitioner with the Computer Professional (Registration) Council of Nigeria (CPN). His current research interests include biometrics, image quality assessment and machine learning.

Akingboye, A. Y. graduated from Ladoke Akintola University of Technology, Ogbomoso with Master of Technology (M.Tech) and Bachelor of Technology (B.Tech) degrees in Computer Science in 2012 and 2005 respectively. His research interests include microprocessor systems, image processing and human-computer interaction. He is presently with the Department of Electrical and Computer Engineering at Igbinedion University Okada.

Izilein, F. A. has B.Eng and M.Eng degrees in Electrical Electronics. He presently lectures in the Department of Electrical and Computer Engineering at Igbinedion University Okada. He is a registered engineer with COREN and a member of the Nigerian Society of Engineers (NSE).

Ezomo, P. I. has B.Eng and M.Eng degrees in Electrical Electronics. He is presently a lecturer in the Department of Electrical and Computer Engineering at Igbinedion University Okada. He gained extensive experience in microprocessor systems and biometrics while working in the oil and gas industry for over twenty years. He is a registered engineer with COREN and a member of the Nigerian Society of Engineers (NSE).