
Pattern Recognition 34 (2001) 181–184

Eigenhill vs. eigenface and eigenedge

Alper Yilmaz, Muhittin Gökmen*

Department of Computer Engineering, Istanbul Technical University, Maslak, 80626 Istanbul, Turkey

Received 22 November 1999; accepted 18 January 2000

Abstract

In this study, we present a new approach to overcome the problems in face recognition associated with illumination changes by utilizing edge images rather than intensity values. However, using edges directly has its own problems. To combine the advantages of algorithms based on shading and edges while overcoming their drawbacks, we introduce "hills", which are obtained by covering edges with a membrane. Each hill image is then described as a combination of the most descriptive eigenvectors, called "eigenhills", spanning the hills space. We compare the recognition performances of the eigenface, eigenedge and eigenhills methods under illumination and orientation changes on the Purdue AR face database and show experimentally that our approach has the best recognition performance. © 2000 Pattern Recognition Society. Published by Elsevier Science Ltd. All rights reserved.

1. Introduction

Face recognition algorithms are based on the extraction of facial features. There are essentially two types of features: partial (nose, mouth, etc.) and holistic (where each feature is characteristic of the whole face). Many face recognition algorithms can be used for extracting both partial and holistic features. Yuille et al. [1] used a template matching approach for holistic feature extraction. Likewise, Pentland and Turk [2] used the Karhunen–Loève transformation (KLT) for extracting holistic features, which are called eigenfaces. In the eigenface algorithm, the goal is to describe the maximum variation among faces while reducing the high-dimensional face space to a low-dimensional eigenspace.

Though eigenfaces became popular among researchers, the idea itself is vulnerable to varying illumination, like many other face recognition algorithms. There have been several attempts to solve the illumination problem, based either on modeling the effect of illumination on faces or on finding illumination-free features to describe a face. Takács and Wechsler proposed an edge-based approach

*Corresponding author. Tel.: +90-212-285-6706; fax: +90-212-285-3025.

E-mail address: gokmen@cs.itu.edu.tr (M. Gökmen).

to recognize faces using the Hausdorff distance [3]. Belhumeur et al. [4] developed the Fisherface approach to overcome illumination changes.

In this study, we propose a new algorithm based on the KLT to overcome problems due to illumination variation and pose changes. Our approach relies on edges, which do not change considerably under varying illumination. However, edges bring their own problems: they are very sensitive to pose and orientation changes of the face. We overcome these problems by covering the edges with a membrane, which is related to regularization theory.

The organization of this paper is as follows. In the next section, a theoretical analysis of the effect of illumination on gray scales and edge maps is given. In the following section, we give a brief description of our method, the eigenhill. Then, performance comparisons on the Purdue face database and a conclusion are given.

2. Illumination effect on face and edges

Fig. 1. Ideal face (leftmost) and test faces.

Consider two point light sources, of which only one is on at a particular time; switching between them can alter the appearance of objects. To describe the effect of illumination variation, we will identify the differences between the two lighting patterns of an object under the two distinct light sources. In the case of a Lambertian surface, the image is determined by the image irradiance equation, $I(x, y) = R(p, q; p_s, q_s)$, where

$$R(p, q; p_s, q_s) = \frac{1 + p p_s + q q_s}{\sqrt{1 + p_s^2 + q_s^2}\,\sqrt{1 + p^2 + q^2}} \qquad (1)$$

and $p = \partial z/\partial x$, $q = \partial z/\partial y$, while $z$ is the depth map of the object. For two different light sources $(p_{s1}, q_{s1})$ and $(p_{s2}, q_{s2})$, where $p_{s2} = p_{s1} + d_{ps}$ and $q_{s2} = q_{s1} + d_{qs}$, the corresponding reflectance maps will be

$$R_1(p, q; p_{s1}, q_{s1}) = \frac{1 + p p_{s1} + q q_{s1}}{\sqrt{A}\,\sqrt{1 + p^2 + q^2}}, \qquad (2)$$

$$R_2(p, q; p_{s2}, q_{s2}) = \frac{1 + p p_{s1} + q q_{s1} + p\, d_{ps} + q\, d_{qs}}{\sqrt{B}\,\sqrt{1 + p^2 + q^2}}, \qquad (3)$$

where $d_{ps} \neq 0$, $d_{qs} \neq 0$, and

$$A = 1 + p_{s1}^2 + q_{s1}^2, \qquad (4)$$

$$B = 1 + p_{s1}^2 + q_{s1}^2 + 2 p_{s1} d_{ps} + 2 q_{s1} d_{qs} + d_{ps}^2 + d_{qs}^2. \qquad (5)$$
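As a quick numerical sanity check (a sketch, not part of the paper), note that Eqs. (2) and (3) are just Eq. (1) evaluated at the two light sources, with $A$ and $B$ of Eqs. (4) and (5) being the squared norms of the source vectors:

```python
import numpy as np

def reflectance(p, q, ps, qs):
    """Lambertian reflectance map of Eq. (1)."""
    return (1 + p*ps + q*qs) / (np.sqrt(1 + ps**2 + qs**2) * np.sqrt(1 + p**2 + q**2))

p, q = 0.7, -0.2                                # arbitrary surface gradient
ps1, qs1, dps, dqs = 0.3, 0.2, 0.4, -0.1        # arbitrary sources/perturbation
ps2, qs2 = ps1 + dps, qs1 + dqs

A = 1 + ps1**2 + qs1**2                                                # Eq. (4)
B = 1 + ps1**2 + qs1**2 + 2*ps1*dps + 2*qs1*dqs + dps**2 + dqs**2      # Eq. (5)

# Eqs. (2) and (3) agree with Eq. (1) at the respective sources.
R1 = (1 + p*ps1 + q*qs1) / (np.sqrt(A) * np.sqrt(1 + p**2 + q**2))
R2 = (1 + p*ps1 + q*qs1 + p*dps + q*dqs) / (np.sqrt(B) * np.sqrt(1 + p**2 + q**2))
assert np.isclose(R1, reflectance(p, q, ps1, qs1))
assert np.isclose(R2, reflectance(p, q, ps2, qs2))
```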

The edges of objects can be obtained via the zero crossings of the Laplacian operator, $zc(\nabla^2 R)$, where $\nabla^2 R = R_{xx} + R_{yy}$, and $R_{xx}$ and $R_{yy}$ represent the second derivatives of $R$ with respect to $x$ and $y$.

In the following sub-sections, an analysis of the changes between the edges and textures of objects is given for several cases.
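A zero-crossing edge map of this kind can be sketched as follows; this is a minimal illustration using SciPy's discrete Laplacian, not the generalized edge detector of Ref. [5]:

```python
import numpy as np
from scipy.ndimage import laplace

def zero_crossings(img):
    """Binary edge map zc(lap(img)): pixels where the discrete Laplacian
    changes sign against a horizontal or vertical neighbour."""
    s = np.sign(laplace(img.astype(float)))
    zc = np.zeros(img.shape, dtype=bool)
    zc[:, :-1] |= s[:, :-1] * s[:, 1:] < 0   # sign flip to the right
    zc[:-1, :] |= s[:-1, :] * s[1:, :] < 0   # sign flip below
    return zc

# A vertical step edge produces zero crossings along the step.
step = np.zeros((8, 8)); step[:, 4:] = 1.0
edges = zero_crossings(step)
```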

2.1. Planar object

On planar Lambertian surfaces, the depth $z$ does not change. Thus $p = z_x = 0$, $q = z_y = 0$, and the image is seen equally bright in all directions. The reflectance map corresponding to the planar case is $R(p_s, q_s) = 1/\sqrt{1 + p_s^2 + q_s^2}$. The Laplacian of $R(p_s, q_s)$ is $\nabla^2 R = R_{xx} + R_{yy} = 0 + 0 = 0$, so the planar case does not have any edges except on the boundaries. The two edge maps obtained for two different illuminations will be the same.

On the other hand, the difference of gray levels between two images of the same planar surface is

$$|R_1 - R_2| = \frac{|\sqrt{B} - \sqrt{A}|}{\sqrt{A}\,\sqrt{B}} \qquad (6)$$

and $\sqrt{B} - \sqrt{A} \neq 0$. Since $|zc(\nabla^2 R_1) - zc(\nabla^2 R_2)| < |R_1 - R_2|$, we can claim that edges are a more robust representation than intensity values for planar surfaces.
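The planar case is easy to verify numerically. The sketch below (with arbitrarily chosen light-source parameters) checks that the direct gray-level difference matches the closed form of Eq. (6):

```python
import numpy as np

ps1, qs1 = 0.3, 0.2        # first light source
dps, dqs = 0.4, -0.1       # perturbation giving the second source
A = 1 + ps1**2 + qs1**2
B = 1 + (ps1 + dps)**2 + (qs1 + dqs)**2   # equals Eq. (5) expanded

# Planar surface: p = q = 0, so R = 1/sqrt(1 + ps^2 + qs^2).
R1, R2 = 1/np.sqrt(A), 1/np.sqrt(B)

# Direct difference matches the closed form of Eq. (6).
assert np.isclose(abs(R1 - R2), abs(np.sqrt(B) - np.sqrt(A)) / np.sqrt(A * B))
```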

2.2. Spherical object

For spherical Lambertian objects, the depth information is $z^2 = r^2 - x^2 - y^2$. Let us define $C = r^2 - x^2 - y^2$. Thus, $R$ will be

$$R(p, q; p_s, q_s) = \frac{\sqrt{C} - x p_s - y q_s}{r \sqrt{A}}. \qquad (7)$$

The Laplacian of this reflectance map can be obtained as

$$\nabla^2 R = \frac{x^2 + y^2 - 2 r^2}{r \sqrt{A} \sqrt{C^3}}. \qquad (8)$$

Zero crossings of $\nabla^2 R$ will occur only when $x^2 + y^2 = 2 r^2$, which are the boundary locations of the sphere. Hence, the difference between the edge maps is $|zc(\nabla^2 R_1) - zc(\nabla^2 R_2)| = 0$. On the other hand, the difference between the intensity values is

$$|R_1 - R_2| = \left| \frac{\sqrt{C} - x p_{s1} - y q_{s1}}{r \sqrt{A} \sqrt{B}} (\sqrt{B} - \sqrt{A}) + \frac{x\, d_{ps} + y\, d_{qs}}{r \sqrt{B}} \right|, \qquad (9)$$

which is zero if and only if $d_{ps} = d_{qs} = 0$. This reveals that $|zc(\nabla^2 R_1) - zc(\nabla^2 R_2)| \le |R_1 - R_2|$ for spherical Lambertian objects.

We can conclude that edges are more robust features than intensity values for these specific objects. In the next section, we analyze the effect of illumination change on the edges of real images.
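The decomposition in Eq. (9) can likewise be checked numerically; the sketch below (arbitrary point on the sphere, arbitrary light sources) compares the direct difference of the two reflectances of Eq. (7) against the right-hand side of Eq. (9):

```python
import numpy as np

r = 2.0
x, y = 0.5, -0.3                                # a point on the sphere
ps1, qs1, dps, dqs = 0.3, 0.2, 0.4, -0.1
ps2, qs2 = ps1 + dps, qs1 + dqs

C = r**2 - x**2 - y**2
A = 1 + ps1**2 + qs1**2
B = 1 + ps2**2 + qs2**2

# Spherical reflectance of Eq. (7) for each light source.
R1 = (np.sqrt(C) - x*ps1 - y*qs1) / (r*np.sqrt(A))
R2 = (np.sqrt(C) - x*ps2 - y*qs2) / (r*np.sqrt(B))

# Right-hand side of Eq. (9) before taking the absolute value.
rhs = ((np.sqrt(C) - x*ps1 - y*qs1) / (r*np.sqrt(A)*np.sqrt(B)) * (np.sqrt(B) - np.sqrt(A))
       + (x*dps + y*dqs) / (r*np.sqrt(B)))
assert np.isclose(R1 - R2, rhs)
```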

2.3. Real face images

A human face can be considered as a combination of the planar and spherical surfaces discussed above. However, it does not have an analytical expression. In order to observe the effect of this non-convex structure on edges, we conducted experiments on real faces from the Purdue AR face database. We used the natural lighting conditions of 113 subject faces as the ideal set and three groups of 339 images as the test set, containing faces illuminated with left, right, and both left and right light sources. The three groups, the ideal face and the edges associated with them are shown in Fig. 1.

To quantitatively evaluate the effect of illumination variation on faces, we used the probabilistic measures Pr(IE|DE) and Pr(DE|IE) and the mean square distance (MSD), where IE is the ideal edge and DE is the detected edge. The probabilistic measures are plotted in Fig. 2 for 1×1 and 7×7 neighborhood masks. As seen from this figure, the 7×7 neighborhood gives a better performance than 1×1. The reason is the small displacement of edges due to pose and illumination changes. This reveals that edges cannot be directly utilized for robust face recognition due to their narrow structure. This problem is solved in our approach by using widened edges obtained by membrane fitting.

Fig. 2. Probabilistic measures, Pr(IE|DE) and Pr(DE|IE), for 1×1 and 7×7 masks.

Fig. 3. Hills.

Fig. 4. Eigenhills.

Fig. 5. (a) Learning set; (b) test set faces.
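The paper does not spell out how Pr(DE|IE) is computed with a neighborhood mask. One plausible reading, sketched below (the function name and exact definition are our assumptions), counts an ideal-edge pixel as matched when a detected-edge pixel falls anywhere inside its k×k neighborhood; k = 1 demands exact coincidence, while k = 7 tolerates small edge displacements:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def edge_match_prob(ref, other, k=7):
    """Fraction of edge pixels in `ref` with an edge pixel of `other`
    inside their k x k neighbourhood (k = 1: exact coincidence)."""
    tolerant = maximum_filter(other.astype(np.uint8), size=k).astype(bool)
    return (ref & tolerant).sum() / ref.sum()

# A one-pixel vertical shift fails the exact test but passes the 3x3 one.
ide = np.zeros((10, 10), dtype=bool); ide[5, 2:8] = True
det = np.zeros((10, 10), dtype=bool); det[6, 2:8] = True
```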

3. Eigenhills

Given a set of faces, the classification of faces can be achieved in a low-dimensional descriptive space found by the KLT, rather than in the high-dimensional face space.

A KLT-based approach can be applied to the faces directly [2]. However, this approach suffers from problems related to illumination variation and the presence of noise, since it depends strongly on the local texture of the faces.

Instead of using highly variable local information, a more robust descriptive property can be used. Edge maps, which can be obtained with the "generalized edge detector (GED)" [5], are important features that are not distorted by illumination changes, as shown in the previous section. However, a drawback of the edge-based approach is the locality of edges. Any change in facial expression, or a shift in edge locations due to a small rotation of the face, will degrade the recognition performance. To overcome the locality problem of edges, we introduce spread edge profiles obtained by the membrane functional of regularization theory,

$$E_m(f, \lambda) = \iint_\Omega (f - d)^2\, dx\, dy + \lambda (1 - \tau) \iint_\Omega (f_x^2 + f_y^2)\, dx\, dy. \qquad (10)$$
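Minimizing the membrane functional of Eq. (10) leads to the Euler-Lagrange equation $(f - d) - \lambda(1-\tau)\nabla^2 f = 0$, which can be solved by simple fixed-point (Jacobi) iteration. The sketch below is our own minimal illustration (periodic boundaries and the parameter values are assumptions, not the paper's choices); it spreads a single edge pixel into a "hill":

```python
import numpy as np

def membrane_fit(d, lam=4.0, tau=0.5, iters=500):
    """Jacobi iteration on (f - d) - lam*(1-tau)*laplacian(f) = 0,
    the Euler-Lagrange equation of Eq. (10): edges in d spread into hills."""
    mu = lam * (1.0 - tau)
    f = d.astype(float).copy()
    for _ in range(iters):
        nbr = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
               np.roll(f, 1, 1) + np.roll(f, -1, 1))   # 4-neighbour sum
        f = (d + mu * nbr) / (1.0 + 4.0 * mu)
    return f

# A single edge pixel becomes a hill: high at the edge, decaying around it.
d = np.zeros((15, 15)); d[7, 7] = 1.0
hill = membrane_fit(d)
```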

The spread edge profiles compose a ghostly face, the "hill". Hills have high values at boundary locations and decrease as we move away from the edges, as shown in Fig. 3. A low-dimensional representation of the hills can be determined by the KLT, following the algorithm derived in Ref. [2]. The eigenvectors of the covariance matrix defined for the hills are named "eigenhills", and they are shown in Fig. 4.
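The KLT step of Ref. [2] applied to hill images can be sketched as follows, including the Turk-Pentland trick of diagonalizing the small M×M matrix $XX^T$ instead of the full pixel-space covariance (function names and the toy data are ours):

```python
import numpy as np

def eigenhills(hills, k):
    """KLT of M hill images, each flattened to a vector, via the small
    M x M matrix X X^T; its eigenvectors lift to eigenvectors of X^T X."""
    X = hills.reshape(len(hills), -1).astype(float)
    mean = X.mean(axis=0)
    X = X - mean
    vals, vecs = np.linalg.eigh(X @ X.T)       # M x M, ascending eigenvalues
    order = np.argsort(vals)[::-1][:k]         # most descriptive first
    eig = X.T @ vecs[:, order]                 # lift to image space
    eig /= np.linalg.norm(eig, axis=0)         # unit-norm eigenhills
    return mean, eig

def project(img, mean, eig):
    """Weights of an image in the eigenhill space."""
    return eig.T @ (img.ravel().astype(float) - mean)

rng = np.random.default_rng(0)                 # stand-in hill images
mean, eig = eigenhills(rng.random((6, 8, 8)), k=3)
```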

4. Results and conclusion

To show that eigenhills is more robust to illumination changes than the eigenface and eigenedge approaches, we used real face images and two criteria: the recognition measure, $e = \|W_l - W_t\|^2$, where $l$ denotes the learning set and $t$ the test set, and the reconstruction error, $e = \|\Gamma_r - \Gamma_i\|^2$, where $\Gamma_r$ is the reconstructed face.
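Both criteria are straightforward to implement; a minimal sketch (names and shapes are our assumptions, with `eig` holding unit-norm eigenvectors as columns):

```python
import numpy as np

def recognise(w_test, W_learn):
    """Nearest neighbour under e = ||W_l - W_t||^2: index of the
    learning-set weight vector closest to the test weights."""
    e = ((W_learn - w_test)**2).sum(axis=1)
    return int(np.argmin(e))

def reconstruction_error(img_vec, mean, eig):
    """e = ||Gamma_r - Gamma_i||^2 between a face and its projection
    back from the eigenspace (distance to face space)."""
    w = eig.T @ (img_vec - mean)
    recon = mean + eig @ w          # Gamma_r
    return float(((recon - img_vec)**2).sum())
```

A face far from the span of the eigenvectors yields a large reconstruction error, which is exactly the "average distance to face space" criterion of Table 1.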

To observe the performance on real face images, we tested the three approaches on 126 individuals of the Purdue face database. In the experiments, we used the natural lighting conditions as the learning set, Fig. 5(a); the test set of 756 faces consists of six categories: left side light on, right side light on, both side lights on, and three facial expressions, angry, laughing and screaming, shown in Fig. 5(b).

The average reconstruction error given in Table 1 confirms that eigenhill performs better than eigenedges and eigenfaces. Eigenedges gave the poorest performance due to non-ideal alignment and low correlation among edge maps.

Recognition performances for the three systems are given in Table 2. From the table it can be observed that the recognition performance over the three lighting conditions is 86.4% for eigenhills, while it is only 69.8% for eigenface. The eigenedge method has the poorest recognition performance. The recognition performance of eigenhills under facial expression change is 88.8%, which is above eigenface's performance of 88.4%.


Table 1
Average distance to face space

           Both sidelight  Left sidelight  Right sidelight  Scream  Laugh  Anger
Eigenface  300             167             219              131     80     98
Eigenedge  250             252             250              258     236    252
Eigenhill  151             108             120              140     86     109

Table 2
Recognition performances

           Both sidelight %  Left sidelight %  Right sidelight %  Scream %  Laugh %  Anger %
Eigenface  44.7              85.8              79.1               83.8      93       89
Eigenedge  18.7              34.2              27.5               16.7      42       30
Eigenhill  80.2              91.6              87.5               79.8      95       92

Note that the overall recognition performance of eigenhill is 89.4%, which is above eigenface's overall recognition performance of 82.3%. Also, the overall performance of the eigenedge method, 38.8%, is below the acceptable range.

From the results of the experiments given above, we conclude that the eigenhills approach is more robust to illumination variation than eigenface and eigenedge for face recognition.

References

[1] A.L. Yuille, D.S. Cohen, P.W. Hallinan, Feature extraction from faces using deformable templates, Proceedings of CVPR, San Diego, CA, June 1989.

[2] M. Turk, A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience 3 (1) (1991) 71–86.

[3] B. Takács, H. Wechsler, Face recognition using binary image metrics, Proceedings of the Third International Conference on Automatic Face and Gesture Recognition, Nara, Japan, April 1998, pp. 294–299.

[4] P.N. Belhumeur, J.P. Hespanha, D.J. Kriegman, Eigenfaces vs. fisherfaces: recognition using class specific linear projection, Proceedings of the European Conference on Computer Vision, 1996.

[5] M. Gökmen, A.K. Jain, λτ-space representation of images and generalized edge detector, IEEE Trans. PAMI 19 (6) (1997) 545–563.
