

International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 16 (2018) pp. 12689–12701 © Research India Publications, http://www.ripublication.com

Robust Illumination and Pose Invariant Face Recognition System using Support Vector Machines

Pradip Panchal
Research Scholar, Charotar University of Science and Technology, Changa

Hiren Mewada

Associate Professor, Charotar University of Science and Technology, Changa

Abstract: The fundamental objective behind denoising is to get rid of the noise while retaining the significant signal features to the maximum possible extent. This issue only seems simple against the backdrop of realistic scenarios, where the category and quantity of noise and the kind of images are all variable parameters, and a solitary technique or approach is incompetent to yield reasonable results. A host of methods is employed to eliminate the noise in images and carry out the classification procedure efficiently. In the innovative approach, at the outset, the images are shortlisted from the database, and thereafter the technique flows through the following three phases: the pre-processing procedure, the feature extraction procedure and the classification procedure by means of the Support Vector Machine (SVM). In the feature extraction procedure, the Gray Level Co-occurrence Matrix (GLCM) traits such as the autocorrelation, contrast, cluster prominence, cluster shade, dissimilarity, energy, area, homogeneity, perimeter, circularity and entropy are extracted. Subsequently, the SVM is employed for the purpose of face recognition, because the optimal separating hyperplane can be obtained easily after ascertaining the inner product between feature vectors, which constitutes an exemplary quality of the SVM. The kernel functions are able to achieve only the inner product value in the feature space while being unaware of the nonlinear mapping.

Keywords: GLCM Features, Face recognition, Nonlinear function, Support Vector Machines.

INTRODUCTION

With several innovative biometric techniques like the fingerprint, iris, palm, gait and so on, face detection has become one of the most exciting domains to date [1]. The face is one of the organs most frequently employed by human beings to recognize one another. Right from its growth, the human brain has built up superbly specialized regions devoted to the examination of facial images. In the past decades, face recognition has emerged as a synergetic investigation zone with the launch of several kinds of innovative algorithms and methods intended to match the amazing skills of the human brain [2]. The everyday task that face detection performs for its users is one of the causes of the ever-zooming enthusiasm it has created among the investigating community as a whole during the past several decades. This has paved the way for the design and launch of several face recognition methods which, under homogeneous lighting and fixed frontal poses, are able to achieve amazing and consistent efficiency in execution [3].

Face recognition has emerged as one of the most dynamic investigation domains in pattern recognition. It plays a significant role in several application regions like the human-machine interface, validation and inspection [4]. The current automatic face recognition techniques are faced with a multitude of sources of within-class variation such as the pose, expression and illumination, in addition to occlusion or disguise. Intensive investigation by experimenters dedicated to pattern recognition has yielded good results in launching several novel techniques intended to successfully address each of these factors independently [5]. At present, it is


common knowledge that changes in the illumination settings are bound to have a significant effect on the face appearance, in such a way that the alterations between images of the identical face on account of lighting can be greater than image changes caused by a modification in the face identity [6]. It is widely expected that video-based face recognition techniques have immense potential in several applications, where motion can be deployed as a signal for face segmentation and tracking, and the incidence of added data necessarily leads to an enhancement in recognition efficiency.

Nevertheless, these techniques are troubled by their own hassles, such as the quality of the video sequence and the need for recognition techniques which are capable of integrating the data over the entire video [7]. The capacity of several techniques to tackle face pose and misalignment is generally decided by the quantity of overt geometric data they utilize in the face representations [8]. The vital object of the face recognition mechanism is to segregate the traits of a face which are decided by the inherent shape and color of the facial surface from the given circumstances of image generation [9], [10]. The illumination invariant in the non-subsampled contourlet transform domain extracts the geometric structure devoid of pseudo-Gibbs events around singularities and halo artifacts, which is attributable to the qualities of the non-subsampled contourlet transform [11]. The Illumination Robust Dictionary-Based Face Recognition is based on concurrent sparse approximations against the backdrop of changing lighting. In this case, a dictionary is learned for each face class in accordance with the specified training examples, which drastically reduces the representation fault under a sparseness restraint. Subsequently, a test image is projected onto the span of the atoms in each learned dictionary [12].

LITERATURE SURVEY

A lot of investigations have made their way into the domain of literature dedicated to face recognition. Given below is a concise account of some of the research works in this regard.

Mohan et al. [13] launched an innovative method of face recognition in accordance with the extraction of texture features, to tackle the challenge thrown by the factors which incredibly impact the face recognition technique, such as pose and illumination changes. With the intention of overcoming the intricacy of employing the texture features on the whole image, it segregated the face into four segments and assessed the texture features in each and every segment independently. The texture features, in turn, were obtained from the co-occurrence constraints with diverse orientations, leading to the easy performance of the face recognition, without any modification in the pose, illumination and rotation. The test outcomes on the FG-NET aging database and Google Images evidently emphasize the consistency, viability and effectiveness of the innovative technique.

Chen et al. [14] proposed a novel technique for face recognition or verification against pose, illumination, and expression (PIE) changes by employing modular face features. A sub-image in the low-frequency sub-band was extracted by a wavelet transform (WT) to curtail the image dimensionality. It was segmented into four parts for characterizing the local features and cutting back the PIE impacts, and a minute image at a coarse scale was produced through the WT, keeping the global face features intact. Altogether, five modular feature spaces were built up. The most distinguishing universal vectors in each feature space were located, and a nearest feature space-based (NFS-based) distance was evaluated for classification. A weighted summation was executed to integrate the five distances. The astounding test outcomes illustrated without doubt that the innovative technique was incredibly superior to the peer methods with regard to the recognition and validation rates.

J. Shermina and V. Vasudevan [15] gave a green light to an innovative face recognition technique which exhibited robustness in relation to pose and lighting changes. For the purpose of processing the pose-invariant image, the Locally Linear Regression (LLR) technique was employed to generate the virtual frontal-view face image from the non-frontal-view face image. In order to process the illumination-invariant image, minimal frequency components of the Discrete Cosine Transform (DCT) were utilized to customize the illuminated image. Taking into account the fact of identifying facial images which were both pose variant and illumination variant, the Fisher Linear Discriminant Analysis (FLDA) method and Principal Component Analysis (PCA) techniques were utilized. In the final stage, the scores of FLDA and PCA were integrated by means of a hybrid approach in accordance with the Feed Forward Neural Network (FFN). As per the scores accomplished in the preliminary recognition system, a weight was distributed to the image, which was distinguished by means of the corresponding weight allocated and the integration of scores. It was clear from the test outcomes on the


hybridization method that it was well equipped to distinguish the face images adequately and successfully.

Ajay et al. [16] assessed and contrasted the feats of several blends of edge operators and linear subspace techniques to ascertain the best combination for pose classification. To assess the efficiency in execution of the innovative technique, they performed several tests on the CMU-PIE database consisting of images with extensive changes in lighting and pose. They were able to find that the feat of the pose classification invariably depended on the selection of the edge operator and the linear subspace approach. The superb classification precision was achieved by the Prewitt edge operator and the Eigen feature regularization method. With a view to successfully addressing the lighting oscillations, they deployed adaptive histogram equalization as a pre-processing measure, leading to an incredible improvement in the performance with the exception of that of the Robert's operator.

S. Muruganantham [17] launched an innovative technique which furnished an up-to-date assessment of the vital human face recognition investigation. They were able to offer a summary of face recognition and its applications. Thus, a literary assessment of the largely employed face recognition methods was furnished. Explanations and constraints of the face databases which were employed to assess the efficiency in execution of the related face recognition techniques were offered. The most significant factors impacting the face recognition mechanism were the pose, illumination, identity, occlusion and expression. In the document, they spotlighted a critical analysis of the modern investigations linked to the face detection procedure. They offered an extensive assessment of vital investigations on the face recognition procedure dependent on several constraints. Moreover, a summarizing account of the face recognition procedure, together with the methods associated with the several constraints which have a telling impact on the face recognition procedure, was presented.

PROBLEM DEFINITION

Human activity identification has remained an unsolved issue in spite of several monumental investigations having been carried out in this direction. Human motion analysis in computer vision detects human actions. It includes a wide range of applications like the security watch, human-machine interactions, video annotations, sports, therapeutic diagnostics, and passage and way-out control. Nevertheless, it remains an exceptionally troublesome undertaking to distinguish human actions, in view of their variable looks and the broad extent of poses they can assume. In the case of classification, at times, linear classifiers are considered unsuitable for realistic issues, as certain issues are endowed with the nonlinear property in the input space. However, with the help of a nonlinear map, information may be mapped from the data space into a higher dimensional feature space. The issue is that the nonlinear mapping itself is unequipped for making any calculations. These issues are tackled by the SVM by bringing in the kernel functions. When all the deficiencies in the literary works are resolved, the efficiency of our system can be considerably enhanced. However, the absence of any solutions for such deficiencies has motivated me to perform the investigational work in this regard.

Proposed methodology of Face Recognition System of Invariant Pose, Expression and Illumination using Modified Kernel based SVM

Let Db represent the database consisting of N number of frames, and Fi symbolize the database frames Fi = (f1, f2, . . . , fN) of size M × N. Thus, after furnishing an input (frame) from the database Db, the user has to indicate the category of the image, such as the Expression, Left, Looking Down, Looking Up, Normal and Right poses. In accordance with the captioned specification, the innovative works are exhibited below in comprehensive sections.

The innovative technique includes the following threephases.

• Pre-processing
• Feature extraction
• Classification

PRE-PROCESSING

The input image is initially treated with a set of pre-processing tasks so that the image is adapted to be suitable for the additional processing. In the innovative technique, the pre-processing process is initiated, in which the color image is changed into a gray image to cut back the evaluation complication. In the case of the color image, each image has diverse contrast and intensity values, and hence we have changed the image into a gray image. In the gray


image, all the pixel values lie in a single channel between 0 and 1, and the assessment complexity gets diminished in the gray image.
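As a sketch of this conversion step (the paper does not specify the exact formula; the BT.601 luminance weights below are an assumption, as is the scaling into [0, 1]):

```python
import numpy as np

def to_gray(rgb):
    """Convert an H x W x 3 RGB image (uint8) to a gray image in [0, 1].

    The BT.601 luminance weights are an assumption; the paper only states
    that the color image is converted to gray to reduce complexity.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb.astype(np.float64) @ weights) / 255.0
```

Since the weights sum to 1, a pure white pixel maps to exactly 1.0 and black to 0.0.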

Denoising using Gaussian Filter

Let us suppose that the database Db is tainted with noise, which paves the way for a reduction in the classification accuracy of the frames in the shape of various invariant poses like the Expression, Left, Looking Down, Looking Up and Right poses. Taking into account these tasks, the Gaussian filter is elegantly employed for the purpose of carrying out the function of denoising. As regards the pre-processing task, the Gaussian filter acts as an effective filter in which the Gaussian function is devoted to the elimination of the noise. The technique obtains the input image, which is subjected to the pre-processing function, where the noise is eradicated by means of the Gaussian filter, leading to the accomplishment of the zero-noise output.

G(x) = (1/(√(2π) σ)) e^(−x²/(2σ²)) (1)

The pre-processed frames from the database are expressed by means of Equations 2 and 3 appearing below.

Db = Fi (2)

Fi = f1, f2, . . . , fN (3)

Db - denotes the database of the innovative technique
Fi - symbolizes the set of frames in the database
f1, f2, . . . , fN - characterizes the converted frames
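A minimal sketch of the Gaussian denoising step, sampling the kernel of Equation (1) and applying it separably to rows and columns. The truncation radius of 3σ and the zero-padded borders are implementation assumptions not stated in the paper:

```python
import numpy as np

def gaussian_kernel(sigma):
    # Sample G(x) of Equation (1) on integer offsets and normalize the
    # weights to sum to 1, so flat image regions are preserved.
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1, dtype=np.float64)
    g = np.exp(-x ** 2 / (2 * sigma ** 2))
    return g / g.sum()

def gaussian_denoise(img, sigma=1.0):
    # Separable 2-D filtering: convolve every row, then every column.
    k = gaussian_kernel(sigma)
    rows = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, rows, k, mode="same")
```

Away from the borders, a constant image passes through unchanged, which is a quick sanity check on the kernel normalization.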

Feature Extraction using GLCM

One of the limitations of real-time face recognition systems is the computational complexity. In image analysis, one requires a feature extraction method to reduce the processing time and complexity. The feature extraction is done in order to get the most important features in the image. Features are properties which describe the whole image and serve as an important piece of information that is used to solve the computational task related to a specific application. For each face image, a feature vector is formed by converting the generated gray-level co-occurrence matrix (GLCM) to a vector, and this vector is then used for classification. The gray-level co-occurrence matrix (GLCM) is a statistical method that examines textures by taking into account the spatial relationship of the pixels.

Compared to GLCM, Principal Component Analysis (PCA) is a standard technique used in statistical pattern recognition and signal processing for dimensionality reduction and feature extraction. The GLCM method is very competitive with state-of-the-art face recognition techniques such as Linear Discriminant Analysis, Gabor Wavelets and Local Binary Patterns (LBP). Using a smaller number of gray levels (bins) shrinks the size of the GLCM, which reduces the computational cost of the algorithm and at the same time preserves the high recognition rates. This can be due to the process of quantization, which helps in suppressing the noise of the images at higher gray levels. Moreover, GLCM is a robust method for face recognition with competitive performance.

The Gray Level Co-Occurrence Matrix (GLCM) represents the numerical strategy of investigating textures which takes into account the spatial relationship of the pixels. The GLCM qualities symbolize the composition of an image by assessing the frequency of occurrence of pairs of pixels with determined values in a specific spatial relationship in an image, creating a GLCM, and in this manner extracting the statistical measures from the related matrix. The graycomatrix function in MATLAB creates a gray-level co-occurrence matrix (GLCM) by assessing the frequency of a pixel with the intensity (gray level) i occurring in a specific spatial relationship to a pixel with the value j. By default, the spatial relationship is defined as the pixel of significance and the pixel to its immediate right; however, it is additionally possible to specify other spatial relationships between the two pixels as desired.
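The counting procedure described above can be sketched as follows. This is a plain re-implementation for illustration, not the MATLAB graycomatrix routine itself; the default offset (0, 1) corresponds to the "pixel to its immediate right" relationship:

```python
import numpy as np

def glcm(img, num_levels, offset=(0, 1)):
    """Count co-occurrences T[x, y]: pixel value x with neighbor value y
    at the given (row, column) offset. img holds integer gray levels."""
    dr, dc = offset
    T = np.zeros((num_levels, num_levels), dtype=np.float64)
    rows, cols = img.shape
    # Iterate only over positions whose neighbor stays inside the image.
    for r in range(max(0, -dr), rows - max(0, dr)):
        for c in range(max(0, -dc), cols - max(0, dc)):
            T[img[r, c], img[r + dr, c + dc]] += 1.0
    return T
```

For a 2 x 2 checkerboard of levels 0 and 1, the rightward offset produces exactly one (0, 1) pair and one (1, 0) pair.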

FIGURE 1. Process of GLCM Matrix

Gray Level Co-Occurrence Matrix (GLCM)

A GLCM constitutes a matrix, as shown in Figure 1 [18], in which the size of the matrix is identical to the


number of gray levels, GM, in the image. The matrix element Txy(x, y | Δp, Δq) describes the co-occurrence of gray levels x and y separated by a pixel distance (Δp, Δq). The matrix element is likewise denoted as Txy(x, y | d, θ), which holds the second-order probability values for the variation between the gray levels x and y at separation d and a particular angle θ. Presently, a few attributes are obtained from the GLCM. GM symbolizes the quantity of gray levels utilized. Tx(x) relates to the xth row entry attained by aggregating the rows of Txy(x, y).

Tx(x) = ∑y Txy(x, y) and Ty(y) = ∑x Txy(x, y) (4)

ψp = ∑x x Tx(x) and ψq = ∑y y Ty(y) (5)

ζp = ∑x (Tx(x) − ψp(x))² (6)

ζq = ∑y (Ty(y) − ψq(y))² (7)

With the efficient employment of the ensuing equations, we are able to estimate the diverse traits which can be effectively employed to train the classifier. In the current research, the noteworthy traits are shortlisted for execution by appropriately deploying them.

Area:
The plain shape descriptor employed in the innovative technique is the area. The area descriptor of a specific image is computed by means of Equation 8 shown as follows.

Area, E = Ig / Id (8)

Where Ig represents the image height and Id denotes the image width.

Perimeter:

T1 = 2(Ig + Id ) (9)

Circularity:
The shape descriptor known as the circularity relates the area to the perimeter of an image, and is computed by means of the following Equation 10.

Circularity, U = E² / T (10)

FIGURE 2. Proposed Methodology of our System

Where E characterizes the area and T symbolises the perimeter, which is calculated by the following Equation 11.

T = 2π √(((Id/2)² + (Ig/2)²)/2) (11)
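Equations (8)-(11) can be evaluated directly from the image dimensions. The sketch below transcribes the formulas exactly as printed (including Eq. 8's ratio form), so it is a transcription of the paper's definitions rather than a general shape-analysis routine:

```python
import math

def shape_descriptors(Ig, Id):
    """Ig = image height, Id = image width (the paper's notation)."""
    E = Ig / Id                                # Eq. (8): "area" descriptor
    T1 = 2 * (Ig + Id)                         # Eq. (9): perimeter
    T = 2 * math.pi * math.sqrt(((Id / 2) ** 2 + (Ig / 2) ** 2) / 2)  # Eq. (11)
    U = E ** 2 / T                             # Eq. (10): circularity
    return E, T1, T, U
```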

Auto Correlation:
The autocorrelation assesses the non-linear dependency of the gray levels of neighbouring pixels. Digital image correlation is an optical method which uses the changes in the images. It is habitually employed to evaluate deformation, displacement, strain and optical flow, with a very usual application being the estimation of the motion of an optical mouse. It is furnished by the following Equation 12.

Ar = (∑x ∑y (x · y) Txy(x, y) − ψp ψq) / (ζp ζq) (12)

Contrast:
The contrast characterizes the variance of the gray level and is the difference between the maximum and the minimum values of a set of pixels. The GLCM contrast is invariably very much associated with spatial frequencies. It is calculated by Equation 13 shown below.

S = ∑x ∑y (x − y)² Txy(x, y) (13)


Cluster Prominence:

CP = ∑x,y ((x − ψx) + (y − ψy))⁴ Txy(x, y) (14)

Cluster Shade:

CS = ∑x,y ((x − ψx) + (y − ψy))³ Txy(x, y) (15)

Dissimilarity:

Dis = ∑x,y |x − y| Txy(x, y) (16)

Homogeneity:

Hom = ∑x,y Txy(x, y) / (1 + (x − y)²) (17)

FIGURE 3. Extracted Features used for classification of output

Energy:
The Angular Second Moment is otherwise called the Uniformity or Energy. It is the aggregate of the squares of the entries in the GLCM. It evaluates the image homogeneity and is found to be high when the image possesses excellent homogeneity or when the pixels are very similar.

Energy = ∑x,y Txy(x, y)² (18)

Entropy:
This constraint effectively evaluates the disorder of an image. When the image is not texturally uniform, many GLCM elements possess insignificant values, which indicates that the entropy is exceedingly large.

Entropy = −∑x,y Txy(x, y) log(Txy(x, y)) (19)
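Given a co-occurrence matrix, the texture features of Equations (13) and (16)-(19) can be computed in a few vectorized lines. Normalizing T to joint probabilities and guarding the entropy's logarithm against zero entries are implementation choices not spelled out in the paper:

```python
import numpy as np

def glcm_features(T):
    P = T / T.sum()                     # normalize counts to joint probabilities
    x, y = np.indices(P.shape)
    nz = P > 0                          # avoid log(0) in the entropy
    return {
        "contrast":      float(((x - y) ** 2 * P).sum()),          # Eq. (13)
        "dissimilarity": float((np.abs(x - y) * P).sum()),         # Eq. (16)
        "homogeneity":   float((P / (1 + (x - y) ** 2)).sum()),    # Eq. (17)
        "energy":        float((P ** 2).sum()),                    # Eq. (18)
        "entropy":       float(-(P[nz] * np.log(P[nz])).sum()),    # Eq. (19)
    }
```

A diagonal GLCM (perfectly uniform texture) gives zero contrast and dissimilarity and homogeneity 1, as expected.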

SUPPORT VECTOR MACHINES

The SVM symbolizes a machine learning technique designed in accordance with statistical learning theory and is fruitfully employed for classification and regression in high-dimensional spaces. The SVM classification technique is targeted at locating an optimal hyperplane. The optimal hyperplane represents the segregation between two classes devoid of discrete faults, and incredibly enhances the separating margin. The SVM algorithm was designed to locate the optimal hyperplane separating two classes with insufficient data. Nevertheless, a complete test vector is highly essential for classification. To estimate the values of the missing elements from those in the entire set, a linear least squares technique is employed. The vital objective of the SVM technique is to locate the hyperplane which considerably enhances the margin, and this needs the solution of the following optimization issue. Considering linearly separable data, the target of maximum margin classification is to separate the two classes by a hyperplane such that the distance to the support vectors is maximized. This hyperplane is known as the optimal separating hyperplane (OSH). The OSH equation is outfitted as follows.

f(x) = ∑i=1..l ξi zi (Vi · V) + b (20)

Where ξ and b represent the solution of a quadratic programming issue.
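Equation (20) evaluates as a weighted sum of inner products with the support vectors. A minimal sketch, with ξi playing the role of the learned coefficients, zi the ±1 labels, and made-up numbers in the test values:

```python
import numpy as np

def osh_decision(x, support_vectors, z, xi, b):
    # f(x) = sum_i xi_i * z_i * (V_i . x) + b ; sign(f) gives the class.
    return sum(a * zi * float(np.dot(v, x))
               for a, zi, v in zip(xi, z, support_vectors)) + b
```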

The SVM tries to locate a separating hyperplane in the feature space, a Hilbert space, for a binary classification issue. The soft-margin SVM algorithm depends on the following constrained minimization problem:

min (1/2) rᵀr + M ∑k=1..m ξk (21)

Subject to xk(rᵀϕ(Yk) + a) ≥ 1 − ξk (22)

ξk ≥ 0, k = 1, . . . , m (23)

Where r is a vector normal to the hyperplane, a is a bias term such that a/‖r‖ represents the distance between the hyperplane and the origin, M is the soft-margin parameter, ϕ : P4 → H is a nonlinear mapping function, and the ξk are slack variables [ξ1, . . . , ξm]ᵀ which control the training errors, with M acting as a penalty parameter for tuning the generalization ability.


Also, the general form of the kernel function is given as:

K(U, V) = ϕ(U)ᵀ ϕ(V) (24)

The normally utilized kernel functions are the linear kernel, polynomial kernel, quadratic kernel, sigmoid and radial basis function. The expressions for these kernel functions are represented below.

For Linear Kernel:

Klin(U, V) = uᵀv + c (25)

Where u and v are the input vectors whose inner product is taken in the linear kernel, and c is a constant.

For Quadratic Kernel:

Kquad(U, V) = 1 − ‖u − v‖² / (‖u − v‖² + c) (26)

Where u, v are the vectors of the kernel function in the input space.

For Polynomial Kernel:

Kpoly(U, V) = (λuᵀv + c)^d, λ > 0 (27)

For Sigmoid Kernel:

Ksig(U, V) = tanh(λuᵀv + c), λ > 0 (28)

The adequacy of the SVM relies upon the choice of kernel, the kernel's parameters, and the soft margin M. The binary SVM can be extended to multiclass problems. Multiclass SVMs are generally executed by combining several two-class SVMs, either by the one-versus-all technique or the one-versus-one strategy. If the feature space is linearly inseparable, it must be mapped into a high dimensional space through the radial basis function kernel, so that the issue becomes linearly separable. The combination of any two kernel functions can give better accuracy than using any single kernel function.
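The one-versus-one strategy mentioned above can be sketched as majority voting over all pairwise classifiers. The decision functions in the usage example below are toy nearest-centroid stand-ins, not trained SVMs:

```python
from itertools import combinations
import numpy as np

def one_vs_one_predict(x, classes, decision_fns):
    """decision_fns[(a, b)](x) > 0 casts a vote for class a, otherwise b;
    the class collecting the most votes wins."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):
        votes[a if decision_fns[(a, b)](x) > 0 else b] += 1
    return max(votes, key=votes.get)

# Toy usage: pairwise "classifiers" that prefer the nearer class centroid.
centroids = {0: np.array([0.0, 0.0]), 1: np.array([5.0, 0.0]), 2: np.array([0.0, 5.0])}
fns = {(a, b): (lambda x, ca=centroids[a], cb=centroids[b]:
                float(np.sum((x - cb) ** 2) - np.sum((x - ca) ** 2)))
       for a, b in combinations(centroids, 2)}
```

With K classes this requires K(K−1)/2 pairwise classifiers, each trained on only two classes' data.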

Modified Support Vector Machines (Multiclass Classification)

In our Modified Support Vector Machines (MSVM) classification, two kernel functions, the linear and quadratic kernel functions, are integrated to obtain a better performance ratio. By integrating Equations 25 and 26, the average is found, which is what is proposed in this technique. The integrated kernel function is used in the modified SVM, and the average of the kernel functions, Kavg(U, V), is given as follows:

Kavg(U, V) = (1/2)(Klin(U, V) + Kquad(U, V)) (29)

Kavg(U, V) = (1/2)((uᵀv + c) + (1 − ‖u − v‖² / (‖u − v‖² + c))) (30)

In this technique, the color image is furnished as input, and the color image is converted into a gray image to avoid computational complication. Thereafter, for the gray image the Gray Level Co-Occurrence Matrix (GLCM) technique is employed, followed by the Support Vector Machine (SVM) classifier procedure [19], [20], [21]. However, in this modified Support Vector Machine we consider two kernels, linear and quadratic, to locate the hyperplane. By integrating these two outcomes, the average of the outcomes is obtained and employed to locate the hyperplane. The linear kernel function gives better performance on large data sets, whereas the quadratic kernel function gives better accuracy and precision. Other than the average of these two kernels, kernels with weight parameters ζ and ς can contribute better results, given as follows:

Kwei(U, V) = ζ(uᵀv + c) + ς(1 − ‖u − v‖² / (‖u − v‖² + c)) (31)

Where, ζ = δ and ς = 1 − δ, 0 < δ < 1.
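A sketch of the kernel family above (c = 1 and the test vectors are arbitrary choices for illustration). At δ = 0.5 the weighted kernel of Equation (31) reduces to the averaged kernel of Equation (29), which is a quick sanity check on the implementation:

```python
import numpy as np

def k_lin(u, v, c=1.0):
    return float(np.dot(u, v)) + c                        # Eq. (25)

def k_quad(u, v, c=1.0):
    d2 = float(np.sum((u - v) ** 2))
    return 1.0 - d2 / (d2 + c)                            # Eq. (26)

def k_avg(u, v, c=1.0):
    return 0.5 * (k_lin(u, v, c) + k_quad(u, v, c))       # Eqs. (29)/(30)

def k_wei(u, v, delta, c=1.0):
    # Eq. (31) with zeta = delta and varsigma = 1 - delta, 0 < delta < 1.
    return delta * k_lin(u, v, c) + (1 - delta) * k_quad(u, v, c)
```

Note that for identical inputs the quadratic kernel attains its maximum of 1, while the linear kernel grows with ‖u‖².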

RESULTS AND DISCUSSION

This section summarizes the results obtained with the Modified Support Vector Machine (MSVM). The experimental setup and the recognition results are presented below. In this case, a medical image database is used for the face recognition process.

The proposed face recognition method is implemented on a system with an i5 processor, 8 GB RAM and a 32-bit operating system, using MATLAB version 2014a. The parameters employed for assessing the efficiency of the technique are described below.


The sample input medical images gathered from the medical database are employed for evaluating the novel technique, as illustrated in Figure 4. The proposed technique is first discussed thoroughly on this database and then extended to standard data sets.

FIGURE 4. Database Images

PERFORMANCE EVALUATION

The performance assessment of the proposed technique is carried out by estimating various parameters such as the accuracy, sensitivity and specificity of the method; the related values are evaluated by means of the following expressions.

Sensitivity = TP / (TP + FN)    (32)

Specificity = TN / (FP + TN)    (33)

Accuracy = (TP + TN) / (TP + FN + FP + TN)    (34)

Where,
True positive (TP) represents the number of pertinent images which are accurately classified.
True negative (TN) characterizes the number of immaterial images which are accurately classified.
False positive (FP) signifies the number of immaterial images which are erroneously classified as pertinent images.
False negative (FN) relates to the number of pertinent images which are erroneously classified as immaterial images.
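Equations (32)–(34) translate directly into code; a small sketch operating on raw confusion counts:

```python
def sensitivity(tp, fn):
    # Equation (32): fraction of pertinent images correctly classified
    return tp / (tp + fn)

def specificity(tn, fp):
    # Equation (33): fraction of immaterial images correctly classified
    return tn / (fp + tn)

def accuracy(tp, tn, fp, fn):
    # Equation (34): fraction of all images correctly classified
    return (tp + tn) / (tp + fn + fp + tn)
```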

Selection of points for classification

The SVM classifier classifies the input image by taking the GLCM features and the pixel intensity values. The pixel intensity values are obtained by selecting significant points: the areas of the eyebrows (10 points), eyes (10 points), mouth (5 points), nose (5 points) and around the face (10 points). The output classified image is then obtained by comparing the pixel intensity values and the GLCM features.
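A small, self-contained sketch of a gray-level co-occurrence matrix and three of the texture features named earlier (contrast, energy, homogeneity). The offset and number of gray levels here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    # Count co-occurrences of gray levels at offset (dx, dy), then normalize
    p = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            p[img[y, x], img[y + dy, x + dx]] += 1
    return p / p.sum()

def glcm_features(p):
    # Contrast, energy and homogeneity from a normalized GLCM
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    energy = float(np.sum(p ** 2))
    homogeneity = float(np.sum(p / (1.0 + (i - j) ** 2)))
    return contrast, energy, homogeneity
```

The remaining GLCM traits listed in the abstract (autocorrelation, cluster prominence, entropy, etc.) are weighted sums over the same normalized matrix and follow the same pattern.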

Input/Output Images

Certain image samples chosen from the medical database are employed for each category. The images are categorized with regard to several invariant poses such as Expression, Left, Looking Down, Looking Up, Pose and Right, shown in Fig. 5.

The consequent output images for the specified inputs, obtained by means of MATLAB, are exhibited in Table 1. The outcomes achieved by the proposed technique are superior with respect to the invariant poses and illumination. Further, the diverse outputs for the specified input image are achieved and contrasted with the input images, which also exhibited superior outcomes for the proposed technique compared with the modern methods.

PERFORMANCE EVALUATION OF PROPOSED METHOD

The execution efficiency of the proposed technique is assessed with the assistance of several performance measures such as the specificity, sensitivity and accuracy, which are exhibited in Figure 6. Further, the proposed technique is contrasted with the modern approaches; the comparison


FIGURE 5. Input Images bearing (a)-(c) Expression (d)-(f) Left (g)-(i) Looking Down (j)-(l) Looking up (m)-(o) Pose (p)-(r) Right

illustrated considerable improvement in terms of all the performance measures.

The proposed technique, along with several performance measures such as the Accuracy, Sensitivity and Specificity for all the kernel functions, is exhibited in Tables 2 and 3. The kernel functions considered are the quadratic, RBF (Radial basis function), polynomial and linear functions. The Accuracy, Specificity and Sensitivity metrics are found to be superior for the quadratic function, at 89.12%, 92.40% and 77.78% respectively. Moreover, the accuracy values for the RBF, Polynomial and linear functions are found to be 82.54%, 83.45% and 82.31% respectively. As regards the specificity measure, the corresponding values for the RBF, Polynomial and linear functions are 86.55%, 88.01% and 86.26% respectively, and the sensitivity values for the three functions are observed to be

Table 1. Resultant image comparison between the existing kernels and the proposed kernel

Table 2. Performance Measures of Existing Techniques

Performance Metrics   Quadratic   RBF     Polynomial   Linear
Accuracy              89.12       82.54   83.45        82.31
Sensitivity           77.78       68.69   67.68        68.69
Specificity           90.40       86.55   88.01        86.26


Table 3. Performance Measures of Proposed Approach

Performance Metrics   Proposed Approach
Accuracy              91.6
Sensitivity           85.86
Specificity           92.69

68.69%, 67.68% and 68.69% respectively. The proposed technique takes the average of the quadratic and linear kernel functions to achieve superior outcomes.

FIGURE 6. Performance Measures of Proposed approach with Existing Techniques

Further, the comparison of all the modern kernel functions with regard to the performance metrics is furnished in Fig. 6, illustrating that the proposed technique attains superior efficiency, outperforming the modern methods, as verified on the YALE, JAFFE, PIE and FEI standard databases.

Performance verification on standard databases

The YALE face database contains 165 images of 15 persons (11 images per person). The images have different expressions such as happy, sad, sleepy, surprised and wink, under different lighting conditions. In this simulation, the first 5 images of each person are taken as training images and the remaining are used as testing images. Overall, 75 face images are used as training images and 90 images as testing images, classified into pose, expression and illumination categories as shown in Table 4. Table 5 shows a comparison of the performance of different methods, Eigenfaces, ICA, 2DPCA, Kernel Eigenfaces and the proposed model, using the PIE and YALE databases. The tabular results show that the proposed model achieved the highest accuracy in comparison with the other methods.
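The 75/90 split described above can be sketched as index lists. This is a hypothetical helper, assuming the 165 images are stored person-by-person in one flat list:

```python
def partition(n_persons=15, images_per_person=11, n_train=5):
    # First n_train images of each person go to training, the rest to testing
    train, test = [], []
    for p in range(n_persons):
        base = p * images_per_person
        train += list(range(base, base + n_train))
        test += list(range(base + n_train, base + images_per_person))
    return train, test
```

With the defaults this yields 15 × 5 = 75 training indices and 15 × 6 = 90 testing indices, matching the partition used in the first YALE simulation.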

In the second simulation, the first 6 of the 7 images per person considered in the previous simulation (total: 6 × 10 = 60 images) in different expressions were used for training, and the remaining 153 were used for testing. In all, the simulation

Table 4. Data partition on YALE database for performing various experiments

SOE*  Category   Training           Testing
1     YALE_A1    Any one randomly   All remaining (i.e., the remaining ten images except the one selected for the training)
2     YALE_A2    Random 2 or 3      Remaining 9 or 8
3     YALE_A3    Random 6           Remaining 5
4     YALE_B1    a                  f, g, h, i (testing consists of experiments against illumination variation)
5     YALE_C1    a                  b, c, d, e, j, k (testing consists of experiments against expression variation)

Table 5. Comparison of the Performance of different methods using YALE database (Note that ICA is tested using Euclidean distance in [22])

Method                  Total images   Recognized images   Recognition Accuracy (%)
Eigenfaces [23]         165            118                 71.52
ICA [23]                165            118                 71.52
Kernel Eigenfaces [23]  165            120                 72.73
2DPCA [24]              165            139                 84.24
Proposed method         165            147                 89.09

Table 6. Performance analysis after data partition of other databases for performing various experiments

Data Sets   Total images   Recognized images   Recognition Accuracy (%)
PIE         636            617                 97.04
YALE        165            162                 98.6
JAFFE       153            147                 96.07
FEI         173            145                 83.81

is carried out 7 times. Similarly, the performance analysis after data partition of the other databases for performing various experiments is shown in Table 6.

The JAFFE database [25] contains 213 images of 7 facial expressions (6 basic facial expressions and 1 neutral) of 10 females. There are 3 or 4 images for each expression. First, a simulation is performed using the 7 images per person in different expressions for training and the remaining images for testing. Thus, the total number of training and testing images became 70 and 143 respectively. Figure 7


FIGURE 7. Seven training images of face, JAFFE database [25]

Table 7. Recognition Accuracy of JAFFE database when training images per class varies from 7 to 1

Training images/class   Total images   Recognized images   Recognition Accuracy (%)
7                       143            141                 98.60
6                       153            147                 96.07
5                       163            156                 95.70
4                       173            145                 83.81
3                       183            148                 80.87
2                       193            134                 69.43
1                       203            136                 66.99

Table 8. Recognition Accuracy of methods across four datasets

Data sets   LSVM   PCA   ICA   EF    PA
PIE         83%    76%   80%   85%   97%
YALE        81%    82%   84%   87%   98%
JAFFE       89%    84%   81%   91%   96%
FEI         84%    83%   88%   87%   83%

shows the seven training images of one female from the JAFFE database. Each face image has a different expression.

Table 7 shows the recognition accuracy for training images per class varying from 7 to 1, along with the number of correctly recognized images out of the total number of test images. The tabular results show that the recognition accuracy is more than 80% when the system is trained using 3 or more images per class. The recognition accuracy reduces to about 66% when only a single image is used to recognize a person in different expressions. Table 8 shows the recognition accuracy computed on the various data sets.

On the PIE data set, which covers illumination, expression and pose variations of 60 persons, the proposed approach gives 97% accuracy. Similar performance is achieved for the 10 to 15 persons in the other data sets listed in Table 8; the proposed approach (PA) attains the maximum performance in comparison with the other existing methods, LSVM, PCA, ICA and EF. In the simulation results, the accuracy of the algorithm on a standard data set is calculated with different experiments using one image for training and the remaining four in the test set.

FIGURE 8. Simulation results as a face recognized in different face poses: (a-e) Left side face (f-j) 45° on the left (k-o) 45° on the right (p-t) Right side face (u-y) Front face, from FEI data set.

FIGURE 9. Simulation results as face recognized in different illuminations: (a-e) Very low (f-j) Low (k-o) Medium (p-t) High.

Figure 8 shows the different frames of the simulation results in which the face is correctly recognized in different poses. These poses include the left and right side face, the front face, and faces angled 45° to the left and right. Similarly,


figure 9 shows the different frames of the simulation results in which the person is correctly recognized under variation of illumination, which varies from very low to high.

CONCLUSION

In this work, with a view to fine-tuning the performance of face recognition, face images are detected and assessed under various pose, expression and illumination scenarios. To assess the face recognition, the image is first chosen from the database, and the technique then proceeds through three phases: the pre-processing, feature extraction and classification processes by means of the SVM. In the feature extraction procedure, the GLCM features are extracted. Subsequently, the SVM performs the face recognition. The proposed method furnishes superior visibility for various pose and illumination scenarios. The face recognition method is implemented in the MATLAB working platform. The performance of the proposed approach is assessed and contrasted with modern methods, verified on various standard data sets (YALE, FEI, PIE and JAFFE), showing that the proposed scheme generates considerable improvement in recognition.

REFERENCES

[1] Gary A. Atkinson, C. Lakshmi Deepika, A. Kandaswamy, Pradipta K. Banerjee, Jayanta K. Chandra, and Asit K. Datta, "Proceedings of the international conference and exhibition on biometrics technology: a frequency domain face recognition technique based on correlation plane features as input to a regression neural network," Procedia Computer Science, vol. 2, pp. 75–82, 2010.

[2] Manoj Kumar Naik and Rutuparna Panda, "A novel adaptive cuckoo search algorithm for intrinsic discriminant analysis based face recognition," Applied Soft Computing, vol. 38, pp. 661–675, 2016.

[3] N. J. Cheung, X. M. Ding, and H. B. Shen, "A non-homogeneous cuckoo search algorithm based on quantum mechanism for real parameter optimization," IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1–12, 2016.

[4] Poonam Sharma, K. V. Arya, and R. N. Yadav, "Efficient face recognition using wavelet-based generalized neural network," Signal Processing, vol. 93, no. 6, pp. 1557–1565, 2013.

[5] Jordi Mansanet, Alberto Albiol, and Roberto Paredes, "Local deep neural networks for gender recognition," Pattern Recognition Letters, vol. 70, pp. 80–86, 2016.

[6] Sung-Hoon Yoo, Sung-Kwun Oh, and Witold Pedrycz, "Optimized face recognition algorithm using radial basis function neural networks and its practical applications," Neural Networks, vol. 69, pp. 111–125, 2015.

[7] Shye-Chorng Kuo, Cheng-Jian Lin, and Jan-Ray Liao, "3D reconstruction and face recognition using kernel-based ICA and neural networks," Expert Systems with Applications, vol. 38, no. 5, pp. 5406–5415, 2011.

[8] Pradipta K. Banerjee and Asit K. Datta, "Generalized regression neural network trained preprocessing of frequency domain correlation filter for improved face recognition and its optical implementation," Optics and Laser Technology, vol. 45, pp. 217–227, 2013.

[9] Gary A. Atkinson, C. Lakshmi Deepika, A. Kandaswamy, K. Rama Linga Reddy, G. R. Babu, and Lal Kishore, "Proceedings of the international conference and exhibition on biometrics technology: face recognition based on eigen features of multi scaled face components and an artificial neural network," Procedia Computer Science, vol. 2, pp. 62–74, 2010.

[10] N. Sudha, A. R. Mohan, and P. K. Meher, "A self-configurable systolic architecture for face recognition system based on principal component neural network," IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 8, pp. 1071–1084, 2011.

[11] Z. Huang, S. Shan, R. Wang, H. Zhang, S. Lao, A. Kuerban, and X. Chen, "A benchmark and comparative study of video-based face recognition on COX face database," IEEE Transactions on Image Processing, vol. 24, no. 12, pp. 5967–5981, 2015.

[12] K. J. Hsu, Y. Y. Lin, and Y. Y. Chuang, "Augmented multiple instance regression for inferring object contours in bounding boxes," IEEE Transactions on Image Processing, vol. 23, no. 4, pp. 1722–1736, 2014.

[13] Sh. Ch. Pang and Zh. Zh. Yu, "Face recognition: a novel deep learning approach," J. Opt. Technol., vol. 82, no. 4, pp. 237–245, 2015.

[14] Y. Xu, X. Fang, X. Li, J. Yang, J. You, H. Liu, and S. Teng, "Data uncertainty in face recognition," IEEE Transactions on Cybernetics, vol. 44, no. 10, pp. 1950–1961, 2014.

[15] M. Jian and K. M. Lam, "Simultaneous hallucination and recognition of low-resolution faces based on singular value decomposition," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 11, pp. 1761–1772, 2015.

[16] B. F. Klare, M. J. Burge, J. C. Klontz, R. W. Vorder Bruegge, and A. K. Jain, "Face recognition performance: Role of demographic information," IEEE Transactions on Information Forensics and Security, vol. 7, no. 6, pp. 1789–1801, 2012.


[17] M. Uzair, A. Mahmood, and A. Mian, "Hyperspectral face recognition with spatiospectral information fusion and PLS regression," IEEE Transactions on Image Processing, vol. 24, no. 3, pp. 1127–1137, 2015.

[18] T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," TPAMI, vol. 24, 2002.

[19] V. Kellokumpu, G. Zhao, and M. Pietikainen, "Human activity recognition using a dynamic texture based method," BMVC, 2008.

[20] Yaguo Lei, Zongyao Liu, Xionghui Wu, Wu Chen, Naipeng Li, and Jing Lin, "Health condition identification of multi-stage planetary gearboxes using a mRVM-based method," Mechanical Systems and Signal Processing, Elsevier, pp. 289–300, 2015.

[21] Chengliang Wang, Libin Lan, Yuwei Zhang, and Minjie Gu, "Face recognition based on principle component analysis and support vector machine," 2011 3rd International Workshop on Intelligent Systems and Applications (ISA), pp. 1–4, 2011.

[22] A. Smola and B. Scholkopf, "On a kernel-based method for pattern recognition, regression, approximation and operator inversion," Technical Report 1064, GMD First, Berlin and MPI, Tubingen, Germany, 1997.

[23] Yu Su, Shiguang Shan, Xilin Chen, and Wen Gao, "Hierarchical ensemble of global and local classifiers for face recognition," IEEE 11th International Conference on Computer Vision (ICCV 2007), pp. 1–8, Oct 2007.

[24] M. S. Bartlett, Javier R. Movellan, and T. J. Sejnowski, "Face recognition by independent component analysis," IEEE Transactions on Neural Networks, vol. 13, no. 6, pp. 1450–1464, Nov 2002.

[25] "Full-body person recognition system," Pattern Recognition, vol. 36, no. 9, pp. 1997–2006, 2003.
