Committees - Gunadarma


Committees

General Chair

Teddy Mantoro, APTIKOM, Indonesia

Program Co-Chairs

Achmad Benny Mutiara, APTIKOM, Indonesia

Gerard Borg, Australian National University, Canberra, Australia

Publication Co-Chairs

Achmad Nizar Hidayanto, University of Indonesia, Indonesia

Suryono, Diponegoro University, Indonesia

Publicity Co-Chairs

Prihandoko, Universitas Gunadarma, Indonesia

M. Izman Herdiansyah, Bina Darma University, Indonesia

Technical Program Committee Chair

Media Anugerah Ayu, Sampoerna University, Indonesia

Organizing/Local Committee Co-Chairs

Dwiza Riana, Universitas BSI, Indonesia

Widya Cholil, Bina Darma University, Indonesia

TPC members

Abdullah Alkalbani, University of Buraimi, Sultanate of Oman

Achmad Benny Mutiara, Universitas Gunadarma, Indonesia

Adamu Ibrahim, International Islamic University Malaysia, Malaysia

Agus Buono, Bogor Agricultural University, Indonesia

Agus Harjoko, Gadjah Mada University, Indonesia

Ahmad Zeki, Bahrain University, Bahrain

Akram M. Zeki, International Islamic University Malaysia, Malaysia

Alamin Mansouri, Universite de Bourgogne, France

Anton Prabuwono, King Abdulaziz University, Saudi Arabia

Asep Juarna, Universitas Gunadarma, Indonesia

Ayu Purwarianti, Bandung Institute of Technology, Indonesia

Bharanidharan Shanmugam, Charles Darwin University, Australia

Christophoros Nikou, University of Ioannina, Greece

Dwiza Riana, Universitas BSI, Indonesia


Eko Kuswardono Budiardjo, University of Indonesia, Indonesia

Eri Prasetyo Wibowo, Gunadarma University, Indonesia

Evizal Abdul Kadir, Universitas Islam Riau, Indonesia.

Frederic Ezerman, Nanyang Technological University, Singapore

Fredy Purnomo, Binus University, Indonesia

H. Dawid, Universitaet Bielefeld, Germany

Heru Suhartanto, University of Indonesia, Indonesia

Iping Supriana Suwandi, Bandung Institute of Technology, Indonesia

Ismail Khalil, Johannes Kepler University, Linz, Austria

Kridanto Surendro, Bandung Institute of Technology, Indonesia

Lukito Edi Nugroho, Gadjah Mada University, Indonesia

Michel Paindavoine, Burgundy University, France

Moedjiono, Budi Luhur University, Indonesia

Mohammad Essaaidi, Chair of IEEE Morocco Section, Morocco

Muhammad Zarlis, University of Sumatera Utara, Indonesia

Murni Mahmud, International Islamic University Malaysia, Malaysia

Naufal M. Saad, Universiti Teknologi Petronas, Malaysia

Normaziah Azis, International Islamic University Malaysia, Malaysia

Norshida Mohammad, Prince University, Saudi Arabia

Paulus Insap Santosa, Gadjah Mada University, Indonesia

Prihandoko, Gunadarma University, Indonesia

Rila Mandala, Bandung Institute of Technology, Indonesia

Sabir Jacquir, Universite de Bourgogne, France

Salwani BTE Mohd Daud, Universiti Teknologi Malaysia, Malaysia

Shelvie Neyman, Institut Pertanian Bogor, Indonesia

Supriyanto, Universitas Gunadarma, Indonesia

Tole Sutikno, Ahmad Dahlan University, Indonesia

Tubagus Maulana Kusuma, Gunadarma University, Indonesia

Untung Rahardja, STIMIK Rahardja Banten, Indonesia

Vincent Vajnovzski, Universite de Bourgogne, France

Waralak Siricharoen, University of the Thai Chamber of Commerce, Thailand

Wendi Usino, Budi Luhur University, Indonesia

Wisnu Jatmiko, University of Indonesia, Indonesia

Youssef Zaz, Abdelmalek Essaadi University, Morocco

Yusuf Yudi Prayudi, Universitas Islam Indonesia, Yogyakarta, Indonesia

Yugo Isal, University of Indonesia, Indonesia


PROGRAM SCHEDULE

16 October 2018

13:00-18:00 Transfer from airport to main hotel and registration

17 October 2018

07:30-08:15 Registration

08:15-09:15 Opening session of the ICIC 2018*

08:15-08:22 Welcoming Remark by General Chair Prof. Dr. Teddy Mantoro

08:22-08:30 Remark from Aptikom Prof. Dr. Zainal Hasibuan

08:30-08:40 Welcoming Remark from the Host, Rector of UBD

08:40-08:55 Traditional Dance

08:55-09:05 Opening Remark by the Governor of South Sumatera or Mayor of Palembang

09:05-09:15 Official Opening of the ICIC 2018 and Photo Session

09:15-09:30 Coffee Break

09:30-10:25 Keynote Speech 1**

10:30-12:00 Technical Session 1 – TS1 (Parallel: 6 tracks)

12:00-13:05 Lunch Break

13:10-14:10 Keynote Speech 2**

14:15-15:55 Technical Session 2 – TS2 (Parallel: 6 tracks)

15:55-16:15 Coffee Break

16:15-18:10 Technical Session 3 – TS3 (Parallel: 6 tracks)

18:10-19:00 Break

19:00-21:30 Gala Dinner

18 October 2018

07:15-08:55 Technical Session 4 – TS4 (Parallel: 6 tracks)

08:55-09:55 Keynote Speech 3**

09:55-10:10 Coffee Break

10:10-11:30 Technical Session 5 – TS5 (Parallel: 6 tracks)

11:30-12:00 Closing and Best Papers Announcement*

12:00-13:00 Lunch and Disperse (Go to Munas Aptikom Opening)


Technical Session Schedule

Technical Session-1

Track 1

PaperID114 Texture Feature Extraction Based On GLCM and DWT for Beef Tenderness Classification
Sigit Widiyanto, Sarifuddin Madenda, Eri Prasetyo Wibowo, Yuhara Sukra and Dini Tri Wardani

PaperID119 Data Mining Classification of Intelligence Quotient in High School Students
Des Suryani, Ause Labellapansa, Setia Wulandari and Ahmad Hidayat

PaperID134 Feature Extraction Using Histogram of Oriented Gradient and Hu Invariant Moment for Face Recognition
Eri Prasetyo Wibowo, Sandi Agung Harseno and Robby Kurniawan Harahap

PaperID148 Deep Learning Long-Short Term Memory for Indonesian Speech Digit Recognition using LPC and MFCC Feature
Ericks Rachmat Swedia, Achmad Benny Mutiara, Muhammad Subali, Ernastuti

Track 2

PaperID280 Educational Data Mining (EDM) as a Model for Students' Evaluation in Learning Environment
Nurul Hidayat, Retantyo Wardoyo and Azhari SN

PaperID289 Critical Success Factors for Project Management Office: an Insight from Indonesia
Teguh Raharjo, Betty Purwandari, Riri Satria and Iis Solichah

PaperID293 Webuse Usability Testing for Farmer and Farmer Group Data Collection System
Halim Budi Santoso, Rosa Delima and Wahyuni

PaperID296 Comparison of Two Methods Between TOPSIS and MAUT In Determining BIDIKMISI Scholarship
Ramadiani, Heliza Rahmania Hatta, Nurlia Novita and Azainil

PaperID301 Evaluation of User Engagement in E-learning Standardization and Conformity Assessment Using Subjective and Objective Measurement
Lintang Yuniar Banowosari and Komang Anom Budi Utama

Track 3

PaperID236 Integration of Region-based Open Data Using Semantic Web
A.A. Gede Yudhi Paramartha, Kadek Yota Ernanda Aryanto and Gede Rasben Dantes

PaperID173 Cloud-based e-Business Framework for Small and Medium Enterprises: Literature Review
Ni Made Satvika Iswari, Harry B. Santoso, and Zainal A. Hasibuan

PaperID180 Usability Evaluation and Development of a University Staff Website
Andre Valerian, Harry Budi Santoso, Gladhi Guarddin and Martin Schrepp

PaperID183 The Ontology of SMEs' Form Application for Interoperability Systems
Masodah Wibisono, Aris Budi Setyawan, Dini Tri Wardani and Sigit Widiyanto

Track 4

PaperID213 Comparative Evaluation of Object Tracking with Background Subtraction Methods
Dennis Aprilla Christie and Topan Sukma

PaperID225 Peripapillary Atrophy Detection in Fundus Images Based on Sector with Scan Lines Approach
Anindita Septiarini, Agus Harjoko, Reza Pulungan and Retno Ekantini

PaperID227 Drivers' Visual Search Behaviour: Eye Tracking Analysis Approach (Case Study: on Ir. H. Juanda Street, Depok)
Dian Kemala Putri, Mohammad Iqbal, Karmilasari and Kemal Ade Sekarwati

PaperID246 The Generalized Learning Vector Quantization Model to Recognize Indonesian Sign Language (BISINDO)
Tri Handhika, Ilmiyati Sari, Revaldo Ilfestra Metzi Zen, Dewi Putrie Lestari and Murni

PaperID250 Algorithm for Simple Sentence Identification in Bahasa Indonesia
Dina Anggraini, Achmad Benny Mutiara, Tb. Maulana Kusuma and Lily Wulandari

Track 5

PaperID3 Template Matching Algorithm for Noise Detection in Cargo Container
Doni Setio Pambudi, Ruktin Handayani and Lailatul Hidayah

PaperID22 Genetic Algorithm Modification of Mutation Operators in Max One Problem
Ummul Khair, Adidtya Perdana, Arief Budiman, Yuyun Dwi Lestari and Dody Hidayat

PaperID24 Meme Opinion Categorization by Using Optical Character Recognition (OCR) and Naïve Bayes Algorithm
Amalia Amalia, Amer Sharif, Fikri Haisar, Dani Gunawan and Benny B Nasution

PaperID79 Improving Naïve Bayes in Sentiment Analysis for Hotel Industry in Indonesia
Tata Sutabri, Agung Suryatno, Dedi Setiadi and Edi Surya Negara

PaperID109 Early Identification of Leaf Stain Disease in Sugar Cane Plants Using Speeded-Up Robust Features Method
Romi Fadillah Rahmat, Dani Gunawan, Sharfina Faza, Karina Ginting and Erna Budhiarti Nababan

Track 6

PaperID188 Design of Orchid Monitoring System Based on IoT
Farid Al Rafi, N.S. Salahudin, Anacostia Kowanda and Trini Septriani

PaperID190 Remote Sensing System of Odometry and Telemetry Data in Real-Time
Dnur Fathurrochman, Purnawarman Musa, Dinda Desita Wimananda, Octarina Budi Lestari

PaperID199 Framework for Identifying Agent's Role in Multi-agent Based Self-healing System
Falahah, Iping S. Suwardi and Kridanto Surendro

PaperID207 Fuzzy Rule-Based System for Monitoring Traffic Congestion using Radio Frequency Identification Technology
Arif Wicaksono Septyanto, Suryono Suryono and Isnaini Rosyida

PaperID209 Prediction of Smartphone Charging using K-Nearest Neighbor Machine Learning
Faza Ghassani, Maman Abdurohman and Aji Gautama Putrada


Feature Extraction Using Histogram of Oriented Gradient and Hu Invariant Moment for Face Recognition

Eri Prasetyo Wibowo, Information Technology, Gunadarma University, Depok, Indonesia ([email protected])

Sandi Agung Harseno, Management Information System, Gunadarma University, Depok, Indonesia ([email protected])

Robby Kurniawan Harahap, Computer Engineering, Gunadarma University, Depok, Indonesia ([email protected])

Abstract—Face recognition is one of the most popular fields in image processing and computer vision. It faces many problems, such as pose variation, illumination, and image quality, and many methods have been developed to solve them. Some methods use a holistic approach that takes the whole face as the feature, while others use a feature-based approach that takes local features such as the eyes, nose, and mouth. The proposed method combines two techniques, HOG and Hu invariant moments, as feature extraction, and has been tested on three databases (Markus's, ORL/AT&T, and our own database) under three testing scenarios. The proposed method achieves an average recognition rate of 79.82%, and its best recognition rate is 97.22%.

Index Terms—Combined features, Face Recognition, Feature-Based Approach, Holistic Approach, HOG, Hu Invariant Moment

I. INTRODUCTION

Image processing and computer vision have become interesting research topics, and one of their widely used applications is biometrics. In [1], the application of modern statistical methods to the measurement of biological objects is called biometrics. Biometrics covers the face, fingerprint, iris, hand geometry, signature, etc. Among the many ways of using biometrics to recognize a person, the face is a popular object because each individual face has unique features or characteristics, such as the eyes [2]. Face recognition (or facial recognition) uses the face as the object to recognize: the identification of a human face by means of its visible characteristics [1].

To recognize a person's face, something unique to the face, called a feature, is needed; features are obtained with feature extraction techniques. There are two main categories of feature extraction techniques: the holistic approach and the feature-based approach [3]. The Histogram of Oriented Gradients (HOG) descriptor was introduced by Dalal and Triggs; using HOG as the feature and an SVM as the classifier, they obtained very good results for person detection [4]. HOG has also been applied to face recognition by Alberto Albiol in [1], whose work combines HOG with Elastic Bunch Graph Matching (EBGM), prioritizing the eyes by placing 25 key points around them. O. Deniz in [5] also uses HOG features, but extracted from the whole face and then processed with Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA), obtaining better results.

Moment invariants are shape descriptors, introduced in [6] and used as basic feature descriptors for images. In [7], Hu's moment invariants are combined with Legendre moments to produce the feature vector for a recognition process. This set of moments describes the shape of an object and is invariant to translation, rotation, scale, and position; it contains seven values that describe the object. Previous work either uses the holistic approach, which takes the whole face as the feature, or the feature-based approach, which takes local features such as the eyes [2], nose, and mouth. This work therefore combines the two: the holistic approach with the feature-based approach.

This paper presents another method for face recognition by combining two methods: the Histogram of Oriented Gradients [4] and the Hu invariant moments [6]. The goal of this combination is to obtain the best recognition rate for the face recognition problem.

This paper is organized as follows: the explanation and analysis of face recognition and the Histogram of Oriented Gradients (HOG) as a literature study in Section II; the combination of the two methods as the proposed method in Section III; the evaluation and results of the proposed method in Section IV; and, finally, the conclusions in Section V.

II. LITERATURE STUDY

A. Face Recognition

Face detection and face recognition are two closely related basic terms; both are part of facial processing, which belongs to image processing. Face detection is the first step, finding faces before the face recognition process. In [8], the facial recognition process normally has four interrelated phases or steps: (1) face detection, (2) normalization, (3) feature extraction, and (4) face recognition. Facial processing faces challenges such as face position, facial expressions, and the imaging conditions (illumination). Ramchandra and Kumar [8] categorize these as image quality, the illumination problem and approaches to minimize it, and pose variation.

B. Histogram of Oriented Gradients

The Histogram of Oriented Gradients (HOG) is a feature descriptor used in computer vision and image processing. HOG counts occurrences of gradient orientations in localized portions of an image. Its main idea is that local object appearance and shape in an image can be described by the distribution of intensity gradients or edge directions.

Dalal and Triggs implemented HOG for human detection with an SVM classifier, dividing the image into small spatial regions (cells) [4]. Each cell has N x N pixels; this step is useful as a dimension reduction, because it compresses N x N pixels into one cell. For each cell, the gradient vector is calculated and accumulated into a local 1D histogram of gradient directions (edge orientations) over the pixels within the cell. There are two types of histogram range: unsigned (0 to 180 degrees) and signed (0 to 360 degrees), and each bin covers total range / number of bins. For example, with the signed range (0 to 360 degrees) and 16 bins, each bin covers 360/16 = 22.5 degrees. To make the descriptor more invariant to illumination, shadowing, and contrast, the local responses are normalized before use; this is done by accumulating the local histograms of the cells into larger spatial regions called blocks. The Dalal and Triggs method is shown in Fig. 1.

Fig. 1. HOG feature extraction chain of Dalal and Triggs [4]

Their work achieved good results with the following properties: RGB colour space with no gamma correction; a [-1, 0, 1] gradient filter with no smoothing; linear gradient voting into 9 orientation bins over the unsigned range (0 to 180 degrees); 8 x 8 pixel cells and 2 x 2 cell blocks (16 x 16 pixels per block); L2-Hys block normalization; a 64 x 128 detection window; and a linear SVM classifier.
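As a concrete illustration of the cell-and-bin computation described above, here is a minimal NumPy sketch (ours, not the authors' code): it uses the [-1, 0, 1] gradient filter, unsigned orientations (0 to 180 degrees), magnitude-weighted voting, and 8 x 8 cells, but for brevity it normalizes each cell's histogram directly instead of over the 2 x 2 blocks used by Dalal and Triggs.

```python
import numpy as np

def hog_cell_histograms(image, cell=8, bins=9):
    """Per-cell HOG histograms in the spirit of Dalal and Triggs:
    [-1, 0, 1] gradients, unsigned 0-180 degree range, magnitude-weighted
    votes. Simplified: per-cell L2 normalization instead of 2 x 2 blocks."""
    img = image.astype(np.float64)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]   # [-1, 0, 1] filter, x direction
    gy[1:-1, :] = img[2:, :] - img[:-2, :]   # [-1, 0, 1] filter, y direction
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # fold into unsigned range

    rows, cols = img.shape[0] // cell, img.shape[1] // cell
    hist = np.zeros((rows, cols, bins))
    bin_width = 180.0 / bins                 # e.g. 9 bins -> 20 degrees each
    for i in range(rows):
        for j in range(cols):
            m = mag[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            a = ang[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell].ravel()
            idx = np.minimum((a // bin_width).astype(int), bins - 1)
            np.add.at(hist[i, j], idx, m)    # magnitude-weighted voting
            norm = np.linalg.norm(hist[i, j])
            if norm > 0:
                hist[i, j] /= norm           # simple L2 normalization
    return hist.ravel()                      # concatenated feature vector
```

For a horizontal intensity ramp, every gradient points along x, so all votes fall into the first orientation bin.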

Albiol in [1] combined HOG with EBGM (Elastic Bunch Graph Matching) for face recognition. This method (the HOG-EBGM algorithm) consists of three steps:

1) Image normalization, to reduce the variability produced by changes in illumination, scale, and rotation.

2) Creation of face graphs: not every facial area contributes equally to face recognition; [9] shows that the areas around the eyes and nose are very important for recognition, so more facial landmarks are placed in these two areas.

3) Graph matching, using a nearest-neighbour classifier to check whether graphs match.

This work uses the full face to take some of the local features contained on the face, with the eyes as the focus.

The method finds the exact location of both eyes and then places landmark points: 25 locations (points) are placed on the face, as shown in Fig. 2. It uses 2 x 2 cells, where each cell is a square of 5 x 5 pixels. This size is chosen according to the distance between the eyes of the normalized face, which in their work is 40 pixels. Tested on the FERET database, this method obtained better results than other algorithms such as PCA Euclidean, Bayesian MAP, and Gabor-EBGM.

Fig. 2. The 25 locations on the face in Albiol's method [1]

Deniz in [5] also implemented HOG for face recognition. Whereas HOG-EBGM first localizes 25 facial landmarks using EBGM and then extracts HOG features from the vicinity of each of these landmarks (also using a nearest-neighbour classifier), [5] chooses instead to extract features from the whole face rather than from local features as [1] does. The decision was taken because, if landmarks are placed in advance, the final error depends on the reliability of the landmark localization. Their hypothesis is that an approach such as [1] may not work well when landmarks are not precisely localized due to occlusion, strong illumination, or pose changes. Tested on the standard FERET test, the results of [5] show better recognition rates than other algorithms.


C. Invariant Moments

Moment invariants were introduced by Hu in [6]; these moments are defined in 2D space on a digital image f(x, y) of size M x N. In [10], moments describe characteristics of an object that are needed to segment images; the moments correspond to the mean, variance, and skewness of the probability density of the image [10].

The moment M_{pq} of a digital image, of geometric order (p + q), is defined as in Eq. (1) [6]:

M_{pq} = \sum_x \sum_y x^p y^q f(x, y)   (1)

where p and q represent the order of the moments (0, 1, 2, ...) and f(x, y) is the pixel value at position (x, y) of the image.
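Eq. (1) translates directly into code. The sketch below (function names ours) computes a raw geometric moment; the centroid (M10/M00, M01/M00) then gives the central moments from which Hu's seven invariants are built.

```python
import numpy as np

def raw_moment(image, p, q):
    """Geometric moment M_pq of Eq. (1): sum of x^p * y^q * f(x, y),
    where x runs over columns and y over rows of the image array."""
    f = image.astype(np.float64)
    y, x = np.mgrid[0:f.shape[0], 0:f.shape[1]]
    return float(np.sum((x ** p) * (y ** q) * f))

def centroid(image):
    """Object centroid (x_bar, y_bar) = (M10 / M00, M01 / M00)."""
    m00 = raw_moment(image, 0, 0)
    return raw_moment(image, 1, 0) / m00, raw_moment(image, 0, 1) / m00
```

For a binary image, M00 is simply the number of foreground pixels (the object's area).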

III. OVERVIEW OF THE METHOD

The process is divided into two main parts: the training/classification process and the recognition process. Fig. 3 shows the steps of the classification and recognition processes. Basically, the two processes have the same stages, but the classification process writes the result of classification to a database, while the recognition process reads the classification result from the database and uses it as a reference to make the decision.

The process starts by reading the image: the image is loaded from storage in grayscale. The second stage is the preprocessing step, consisting of Histogram Equalization (HE), Face Detection and Localization (FDL), and Size Normalization (SN). HE stretches the contrast in the image, FDL detects the face using the Haar-like algorithm by Viola-Jones [11], and SN resizes the face to 200 x 200 pixels.
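The HE step, for instance, is the standard CDF remapping of an 8-bit image. A minimal sketch (ours, not necessarily the exact variant used in the paper):

```python
import numpy as np

def equalize_histogram(gray):
    """Histogram equalization for an 8-bit grayscale image: remap every
    intensity through the normalized cumulative histogram (CDF), which
    stretches the contrast toward the full 0-255 range."""
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[np.nonzero(cdf)][0]        # first occupied intensity level
    span = max(cdf[-1] - cdf_min, 1.0)       # avoid /0 for constant images
    lut = np.round((cdf - cdf_min) / span * 255.0).clip(0, 255)
    return lut.astype(np.uint8)[gray]        # apply lookup table per pixel
```

A low-contrast input occupying only a narrow intensity band comes out spanning the full 0-255 range, with the ordering of intensities preserved.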

The third stage is feature extraction, which uses the two methods, HOG and Hu invariant moments. Before the HOG features are extracted, the image must be deskewed (angle normalization); before the Hu invariant moments are extracted, the image must be converted to a binary image using Otsu binarization. After the feature vectors from HOG and the Hu invariant moments are obtained, they are concatenated. The last step is the classification and/or recognition process.
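Two of these steps can be sketched as follows. The Otsu threshold below is the standard histogram formulation; the nearest-neighbour decision is an assumption on our part, since the text does not spell out the classifier, and the combined feature is simply the concatenation of the HOG and Hu vectors (np.concatenate([hog_vec, hu_vec])).

```python
import numpy as np

def otsu_threshold(gray):
    """Otsu binarization threshold: pick t maximizing the between-class
    variance of the 256-bin grayscale histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    sum_all = np.dot(np.arange(256.0), hist)
    best_t, best_var, w0, sum0 = 0, -1.0, 0.0, 0.0
    for t in range(256):
        w0 += hist[t]                         # pixels at or below t
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * hist[t]
        m0 = sum0 / w0                        # mean of the dark class
        m1 = (sum_all - sum0) / (total - w0)  # mean of the bright class
        var_between = w0 * (total - w0) * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def recognize(probe_feature, gallery):
    """1-nearest-neighbour decision over stored (label, feature) pairs,
    using Euclidean distance on the concatenated feature vectors."""
    labels, feats = zip(*gallery)
    dists = np.linalg.norm(np.asarray(feats) - probe_feature, axis=1)
    return labels[int(np.argmin(dists))]
```

Binarization is then `gray > otsu_threshold(gray)`, and `recognize` plays the role of reading the stored classification results and returning the closest subject.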

A. Database

Three databases are used in this work:

1) Markus Weber's database, collected by Markus Weber at the California Institute of Technology in 1999 [12]. It contains 450 face images of size 896 x 592 pixels, in JPEG format, in frontal pose. There are 27 or so unique people under different lighting, expressions, and backgrounds.

2) The ORL Database of Faces. This database contains a set of face images taken between April 1992 and April 1994 [13]. There are 10 different images of each of 40 distinct subjects. The images were taken at different times, varying the lighting, facial expressions (open/closed eyes, smiling/not smiling), and facial details (glasses/no glasses). All images were taken against a dark homogeneous background, with the subjects in an upright, frontal position (with tolerance for some side movement).

3) Our database, containing 2 subjects with 35 images per subject: 8 images per subject for learning and 27 for testing. The rules for the samples taken for learning are as follows:

• The subject looks straight ahead in the direction of the camera.

• The subject looks straight to the right of the camera, at an angle of about 5 degrees.

• The subject looks straight to the left of the camera, at an angle of about 5 degrees.

• The subject looks straight into the camera, with the face tilted to the right at an angle of about 5 degrees, as if glancing sideways at a blackboard.

• The subject looks straight into the camera, with the face tilted to the left at an angle of about 5 degrees, as if glancing sideways at a blackboard.

• The subject looks straight into the camera with eyes closed.

• The subject looks straight into the camera with a smile showing teeth.

• The subject stares straight up past the camera, as if looking at a clock on the wall.

IV. RESULTS

The tests were performed using the following three scenarios:

A. First Scenario

Testing was performed on each database separately, with 8 samples per subject as the training set. The bin size of the histogram of oriented gradients was varied over 1, 2, 4, 8, 16, 32, and 64. Each database was tested only with the subjects in that database. The results are as follows:

1) Markus's database. This database contains 27 or so unique subjects, but only 19 were used: those with more than 18 samples each. 8 samples per subject were used as the training set and 10 as the testing set. On this database, the best recognition rate was 88.42%, at a histogram bin size of 64.

2) ORL database. This dataset contains 40 subjects with 10 sample images each, but only 18 subjects were used. 8 samples per subject were used as the training set and 2 as the testing set. On this database, the best recognition rate was 97.22%, at histogram bin sizes of 16 and 32.

3) Our database. This database contains 2 subjects with 35 images each: 8 images per subject were used for training and 27 for testing. On this database, the best recognition rate was 92.59%, at histogram bin sizes of 8 and 16.
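The per-database recognition rates above are plain accuracies over the held-out samples. The bin-size sweep can be sketched as follows, where run_fold is a hypothetical stand-in for one train/test split with the combined HOG + Hu features:

```python
import numpy as np

def recognition_rate(predicted, actual):
    """Recognition rate as reported in the tables: correct / total, in %."""
    return 100.0 * np.mean(np.asarray(predicted) == np.asarray(actual))

def sweep_bin_sizes(run_fold, bin_sizes=(1, 2, 4, 8, 16, 32, 64)):
    """Evaluate one train/test fold at each HOG bin size used in the paper.
    `run_fold(b)` must return (predicted_labels, actual_labels)."""
    return {b: recognition_rate(*run_fold(b)) for b in bin_sizes}
```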

TABLE I
TESTING RESULTS, FIRST SCENARIO

Bin Size   Markus's Database (%)   ORL Database (%)   Our Database (%)
1          32.11                   66.67              90.74
2          57.37                   83.33              90.74
4          72.11                   88.89              75.93
8          84.21                   91.67              92.59
16         85.26                   97.22              92.59
32         86.32                   97.22              90.74
64         88.42                   94.44              85.19

B. Second Scenario

In this scenario, the test combined all the databases into one training set, giving 39 subjects. The test used these 39 subjects, with 2 samples each. The bin size of the histogram of oriented gradients was again varied over 1, 2, 4, 8, 16, 32, and 64. The results are in Table II.

TABLE II
TESTING RESULTS, SECOND SCENARIO

Bin Size   Recognition Rate (%)
1          34.62
2          65.38
4          82.05
8          87.18
16         87.18
32         88.46
64         89.74

C. Third Scenario

In the third scenario, the test again combined all the databases into the training set, giving 39 subjects, but testing was done separately for each database. The bin size of the histogram of oriented gradients was again varied over 1, 2, 4, 8, 16, 32, and 64. The results are in Table III.

1) Markus's database. This database contains 27 or so unique subjects; only the 19 subjects with more than 18 samples each were used. 8 samples per subject were used as the training set and 10 as the testing set.

2) ORL/AT&T database. This database contains 40 subjects with 10 sample images each; only the 18 subjects whose 10 samples were all detected as faces by the Haar-like Viola-Jones algorithm were used. 8 samples were used as the training set and 2 as the testing set.

3) Our database. This database contains 2 subjects with 35 sample images each; 8 samples were used as the training set and 27 as the testing set.

TABLE III
TESTING RESULTS, THIRD SCENARIO

Bin Size   Markus's Database (%)   ORL Database (%)   Our Database (%)
1          32.11                   44.44              38.89
2          57.37                   80.56              81.48
4          72.11                   88.89              72.22
8          84.21                   88.89              92.59
16         85.26                   91.67              90.74
32         86.32                   91.67              90.74
64         88.42                   88.89              85.19

Based on the three tested scenarios, the results are: in the first scenario, the best recognition rate is 97.22%, at bin sizes 16 and 32, on the ORL database; in the second scenario (the three databases combined), the best recognition rate is 89.74%, at bin size 64; in the third scenario, the best recognition rate is 92.59%, at bin size 8, on our database. The number of histogram bins affects the recognition rate, for better or worse.

V. CONCLUSION

Combining HOG and Hu invariant moments as feature extraction for face recognition has been successfully implemented, and faces can be recognized. Tested on three databases under three scenarios, our method gives an average recognition rate of 79.82%, and the best result of this research is a 97.22% recognition rate, at bin sizes 16 and 32. Using the whole face (holistic approach) together with local features (feature-based approach) can give good results. As for future work: although our method shows good results, the databases used are small, with fewer than 50 subjects. To be more confident in the results, testing should be tried on a larger database with 50 or more subjects.

ACKNOWLEDGMENT

We would like to thank the Rector of Gunadarma University for supporting and funding this research and for providing accommodation at the International Conference ICIC 2018, Palembang, Indonesia.

REFERENCES

[1] A. Albiol, D. Monzo, A. Martin, J. Sastre, and A. Albiol, "Face recognition using HOG-EBGM," Pattern Recognition Letters, vol. 29, no. 10, pp. 1537-1543, 2008.

[2] E. P. Wibowo, K. Sari, W. S. Maulana, and S. Madenda, "Ocular biometric system focused on iris localization and embedded matching algorithm," International Journal of Computer and Electrical Engineering, vol. 2, no. 6, p. 1086, 2010.

[3] A. Behara and M. Raghunadh, "Real time face recognition system for time and attendance applications," Int. J. Electr. Electron. Data Commun., vol. 1, no. 4, pp. 2320-2084.

[4] N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, vol. 1. IEEE, 2005, pp. 886-893.

[5] O. Deniz, G. Bueno, J. Salido, and F. De la Torre, "Face recognition using histograms of oriented gradients," Pattern Recognition Letters, vol. 32, no. 12, pp. 1598-1603, 2011.

[6] M.-K. Hu, "Visual pattern recognition by moment invariants," IRE Transactions on Information Theory, vol. 8, no. 2, pp. 179-187, 1962.

[7] G. Sanchez and M. Rodriguez, "Cattle marks recognition by Hu and Legendre invariant moments," ARPN J. Eng. Appl. Sci., vol. 11, pp. 607-614, 2016.

[8] A. Ramchandra and R. Kumar, "Overview of face recognition system challenges," International Journal of Scientific & Technology Research, vol. 2, no. 8, 2013.

[9] W. Zhao, R. Chellappa, P. J. Phillips, and A. Rosenfeld, "Face recognition: A literature survey," ACM Computing Surveys (CSUR), vol. 35, no. 4, pp. 399-458, 2003.

[10] S. Leem, H. Jeong, Y. Lee, and S. Kim, "A robust hand gesture recognition using combined moment invariants in hand shape," in 4th International Conference on Interdisciplinary Research Theory and Technology, 2016, pp. 89-94.

[11] P. Viola and M. J. Jones, "Robust real-time face detection," International Journal of Computer Vision, vol. 57, no. 2, pp. 137-154, 2004.

[12] "Markus Weber's database." [Online]. Available: http://www.vision.caltech.edu/html-files/archive.html

[13] "ORL Database of Faces." [Online]. Available: http://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html

Fig. 3. Classification and Recognition Process (Research Framework)
