


Computers in Human Behavior 35 (2014) 307–314

Contents lists available at ScienceDirect

Computers in Human Behavior

journal homepage: www.elsevier.com/locate/comphumbeh

Extreme expression of sweating in 3D virtual human

http://dx.doi.org/10.1016/j.chb.2014.03.013
0747-5632/© 2014 Elsevier Ltd. All rights reserved.

* Corresponding author at: Interactive Media and Human Interface Lab., Department of Informatics, Faculty of Information Technology, Institut Teknologi Sepuluh Nopember Surabaya.

E-mail address: [email protected] (A.H. Basori).

Ahmad Hoirul Basori a,b,*, Ahmed Zuhair Qasim b

a Interactive Media and Human Interface Lab, Department of Informatics, Faculty of Information Technology, Institut Teknologi Sepuluh Nopember Surabaya
b Faculty of Computing, Universiti Teknologi Malaysia

Article info

Article history: Available online 29 March 2014

Keywords: 3-D modelling; Facial animation; Facial expression; Extreme expressions; Sweating

Abstract

Displaying extreme expressions such as being scared until sweating is not an easy task to accomplish in 3D games and real-time facial animation simulation. Researchers face several difficulties, such as the complexity of the simulation, which involves the physical properties of the muscles, the emotions, the properties of fluids and the texture elements. A further problem is how to control the facial animation together with one of the two sweating generators simultaneously. This research presents techniques for generating extreme expressions in 3D facial animation. The Facial Action Coding System (FACS) is employed to describe and generate facial expressions. It breaks down facial actions into minor units known as Action Units (AUs). Facial expressions are generated by combining specific independent Action Units. The generated expressions cover sadness, anger, happiness and fear. The first type of sweat is dropping sweat using a particle system. The other type of sweat is texture based: when sweating is stimulated, the texture drop flows down the surface of the forehead area. The techniques presented in this paper are believed to give more realism to the virtual human by providing extreme expression features.

© 2014 Elsevier Ltd. All rights reserved.

1. Introduction

Virtual humans have been developed in various forms of computer applications such as talking heads, facial animation and virtual humans with realistic hair or skin (Yee, Bailenson, & Rickertsen, 2007). Currently, an avatar is able to interact naturally thanks to the considerable progress in artificial intelligence, diverse sensing technology and advanced computer graphics (Lee et al., 2010). Facial animation is used in a large number of critical areas: virtual surgery, military virtual training, etc. It helps to bring human traits and representations of expressions closer to human and social reality, and, most significantly, to emotional expression in many computer games. Because of complex geometric detail as well as better animation, virtual characters in computer games and simulations have become very similar to real people. As a result of motion capture and methods for automatic blending, body and facial motions together can be presented persuasively (Tol & Egges, 2009). Facial animation is one of the effective communication tools in the field of creative animation that produces convincing virtual avatars or social agents. It is considered very challenging work because it requires animation at an advanced level for the components of the face, such as the facial bones and muscles, and lip synchronization during speech. Constructing a human face at high accuracy requires great effort, time and skill in the animation industry. On the other hand, it is very important because facial expressions carry a lot of important information, allowing us to observe people's feelings and mental states. As the supervisor at Blur Studio, Jeff Wilson, said, ''Generating credible facial animation is very vital because the face is essential to understand the emotions'' (Liu, 2009). In physiological arousal, emotions correlate with changes in the physiological processes that occur inside the human body. There are many such changes: changes in the levels of some neurotransmitters in the brain, changes in metabolism, changes in muscle tension and, finally, changes in the digestive system as a result of altered digestion. Other stimulated physiological processes can lead to a change in the color of the face, piloerection, facial expressions and other signs that represent emotion (Ettinger, 2008). Finally, emotions also involve behavioural responses. Emotions always motivate the human to express his feelings or act out. These expressions may range from screaming or crying to verbal expressions such as laughing or smiling. Common signs of emotion include tone of voice as well as posture and other types of body language (Ettinger, 2008). Emotion and stress are closely



related: the symptom of sweating can be stimulated by stress associated with the scared/fear emotion (Ettinger, 2008).

2. Related work

Facial modelling for humans is a research field that includes many complexities and challenges, and has connections with other fields such as medicine, engineering, animation and computer graphics (Gladilin, Zachow, Deuflhard, & Hege, 2004). Furthermore, producing realistic facial animation that can imitate real human expression is still a challenge that researchers need to solve. During the past few years, the establishment and development of facial animation technology has made it possible to produce the geometry of the human face in detail through 3D photometric techniques and scanners. As a result of this technological progress in modelling human faces, we have become less tolerant of the imperfections in animation and models that occurred in the past (Ypsilos & Ypsilos, 2004). The production of computer facial animation with appropriate and complex expressions is still difficult work and fraught with problems. Interaction with characters is very important to create an effective connection with the user. Therefore, virtual characters use a variety of emotional displays, such as facial expressions or diverse body motion styles, to make this connection effectively, but these emotions seem to be only slightly affected by the interaction of the user in some games. For example, a game that uses emotion to create a conversation with computer-controlled characters is Oblivion (2009). This game contains four instances of emotions that a character can represent. However, these are represented only by very fixed expressions. An example is shown in Fig. 1.

The Oblivion characters are still not realistic enough because the difference between facial expressions is too small. Expression is controlled automatically according to the game scenario; however, more work needs to be done if aiming for a realistic virtual human.

The examples mentioned above are created through geometric animation limited to the face area only. Therefore, the range of emotions of some virtual characters in these computer games is still quite limited. These characters need to support displays of expressions such as loud laughing, crying, or sweating with fear in the best manner, as movies do when using this type of expression to engage the viewer. Some weaknesses of facial animation for affective emotion are a lack of fidelity to the real expression of the human and the human face

Fig. 1. Representation of the four expressions used in the Oblivion game: Elder Scrolls IV (Oblivion, 2009).

appearance as well. The process of generating these distinct expressions in real-time 3D often becomes a complex task. It requires skin deformation and precise modelling of the muscles for facial animation, often through geometric deformation combined with a standard facial animation approach like the MPEG-4 Facial Animation standard. This standard uses a limited set of parameters to recreate and describe actual expressions for the animator (Tol & Egges, 2009). Another system, named FACS (Facial Action Coding System), describes the units of facial action in terms of the muscles of the human face. It was developed initially for the purposes of psychology, to describe and recognize expressions and to link them with emotions. Examples of the actions encoded by this standard are ''Brow Lowerer'' or ''Lip Tightener''. Later, it was adapted for use in movies to generate realistic expressions. In both methods, the standard works on deformations of the face and does not take into consideration other emotional displays, like crying or sweating, which occur during anger, sadness or happiness (Ekman, 1999). The construction and animation of realistic three-dimensional human faces are still significant problems in the field of computer graphics (Ilie, Negrescu, & Stanomir, 2011). In the field of facial animation synthesis, the existing technologies are still not able to build facial expressions in an effective and realistic manner, including the underlying emotional content such as anger, happiness, fear, sadness, surprise and disgust as basic emotions (Zhang, Ji, Zhu, & Yi, 2008). Furthermore, sweating, fear, anger, wrinkles, blushing and tears form a set of facial expressions that have been used to express physiological emotions in humans (de Melo & Gratch, 2009).
However, the simulation of sweat during the occurrence of fear does not represent reality as expressively as its representation in 3D movies, where it is rendered in a style very similar to reality. Many previous studies suffer from the weakness that they cannot render an extreme expression. For example, the work of Ilie et al. (2011) is capable of producing 3D morphing of facial animation that generates several facial expressions, but is still lacking in extreme expressions like crying or sweating. Zhang et al. (2008) focus more on synthesizing dynamic facial expressions based on the MPEG-4 standard; their work is still limited to facial expressions, not complex expressions. Some studies have displayed fluids using a preprocessing method, thus not giving real-time feedback. Examples of this kind of simulation can be seen in some 3D movies, where the effect is set up before the rendering process.




As a result, current research in facial animation still has many weaknesses in terms of simulating fluids in real time and in terms of realism. Some research, such as de Melo and Gratch (2009), presents being scared until sweating in facial animation, but the sweating does not look very close to real sweating on a virtual human, as shown in Fig. 2. The work of de Melo and Gratch (2009) was still at an early stage of research; they used a static texture to show the fluid around the forehead area. This sweat cannot change in real time and the user cannot change the volume of sweat because it is predefined.

2.1. Research contribution

The contribution of this research is to create extreme expressions such as fear resulting in sweat, a phenomenon of virtual human emotion: the character can show a wide variety of emotions, generate sweating based on textures and on a particle system, and combine the facial animation with one of these two techniques to achieve the goal of this research. Thus, the facial animation works together with one of the two techniques for simulating sweat in real time.

Fig. 3. Research methodology.

3. Methodology

The methodology of this research describes the approaches used to carry out the computer graphics techniques for achieving realism and generating real-time sweating in computer facial animation. Fig. 3 shows the methodology of this research.

3.1. Facial features of extreme expression

Facial animation alone is not enough to simulate the extreme emotions of a virtual human. An extreme expression is a kind of emotional expression that is highly stimulated by a strong emotion; one example is being scared until sweating. In order to provide these kinds of features, some additional elements, such as a fluid mechanism and a texture-based technique, are needed. The combination of facial animation with these two elements (texture-based and fluid techniques) is discussed in the following sections.

Fig. 2. Expression of sweating (de Melo & Gratch, 2009).

3.2. Fluid generator

This section discusses a technique for making a realistic fluid simulation of moving sweat caused by extreme expressions. To create a small quantity of water, small puddles, splashing fluid, or runnels, a particle system is an efficient candidate. Mixing height-field approximations with particle-based fluids can produce a further set of fascinating effects. The next subsections discuss particle-based methods in further detail.

3.2.1. Simple particle system

A simple particle system is probably the best choice for generating splashes, spray or water particles. In those simple cases it is usually not even important to simulate the interaction of particles with each other. A simple particle system is a particle system without particle–particle interaction. Such a system can be implemented efficiently, which means that a great number of particles can be simulated in real time. It only requires a set of N particles 0 ≤ i < N with masses mi, positions xi, velocities vi and accumulated external forces fi.
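As an illustrative sketch (not the authors' implementation), such a simple particle system can be advanced with explicit Euler integration; the emitter position, jitter and gravity constant below are assumptions:

```python
import random

GRAVITY = (0.0, -9.81, 0.0)  # accumulated external force per unit mass

class Particle:
    def __init__(self, pos, vel, mass=1.0):
        self.pos, self.vel, self.mass = list(pos), list(vel), mass

def spawn(n, emitter=(0.0, 1.8, 0.0)):
    """Create n sweat particles near an emitter point with a small random jitter."""
    return [Particle((emitter[0] + random.uniform(-0.02, 0.02),
                      emitter[1], emitter[2]),
                     (0.0, 0.0, 0.0)) for _ in range(n)]

def step(particles, dt):
    """Explicit Euler step: no particle-particle interaction is simulated."""
    for p in particles:
        for k in range(3):
            p.vel[k] += GRAVITY[k] * dt   # apply external force
            p.pos[k] += p.vel[k] * dt     # integrate position
```

Because particles never interact, the cost per frame is linear in N, which is what makes real-time simulation of many drops feasible.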

3.2.2. Particle interactions

If we want to simulate small bodies of water with particles, it is important that the particles feel each other; otherwise, the particles will aggregate at a single spot along an edge or in a corner. Generally, a particle interaction force has the following form, Eq. (1):

f(xi, xj) = F(|xi − xj|) · (xi − xj)/|xi − xj|    (1)

Common selections for the magnitude function F are forces derived from the Navier–Stokes equations solved directly on the particles, or potentials of the kind usually used in molecular dynamics simulations (Bridson & Müller-Fischer, 2007).
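A hedged sketch of Eq. (1): the linear repulsive magnitude function F and its radius below are illustrative assumptions, not the paper's choice:

```python
import math

def interaction_force(xi, xj, rest=0.05, k=10.0):
    """Eq. (1): pairwise force along the direction (xi - xj).
    F(r) is chosen here as a linear repulsion inside the radius `rest`."""
    d = [a - b for a, b in zip(xi, xj)]
    r = math.sqrt(sum(c * c for c in d))
    if r == 0.0 or r >= rest:
        return [0.0, 0.0, 0.0]             # no interaction beyond the radius
    magnitude = k * (rest - r)             # F(|xi - xj|)
    return [magnitude * c / r for c in d]  # scale the unit direction vector
```

Swapping xi and xj flips the sign of the force, so momentum is conserved between each pair of particles.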

3.2.3. The Navier–Stokes equations for incompressible flow

Fluid flow animators' interests are dominated by the well-known incompressible Navier–Stokes equations (Müller & Gross, 2003). The Navier–Stokes equations are a group of two differential equations which describe a fluid's velocity field u



over time. They are called differential equations since they specify the velocity field's derivatives rather than the velocity field itself. The first equation arises because we are simulating incompressible fluids, and is presented as Eq. (2):

∇ · u = 0    (2)

This equation states that the amount of fluid that flows into any volume in space must be equal to the amount that flows out. The second Navier–Stokes equation is somewhat more complicated. It shows how u changes through time, and is presented as Eq. (3):

∂u/∂t = −(u · ∇)u − (1/ρ)∇p + ν∇²u + F    (3)

The second equation computes the fluid's motion through space, along with any external or internal forces that act on the fluid. The terms of this equation are defined in Eqs. (4)–(7), starting with Eq. (4):

∂u/∂t    (4)

3.2.3.1. The derivative of velocity with respect to time. This term is computed at all grid points that contain fluid at each simulation time step. The next term is Eq. (5):

−(u · ∇)u    (5)

3.2.3.2. The convection term. This term arises because of momentum conservation. The fluid's momentum must be moved through space along with the fluid itself, since the velocity field is sampled at fixed spatial locations. The next term is Eq. (6):

−(1/ρ)∇p    (6)

3.2.3.3. The pressure term. It captures forces created by pressure differences within the fluid. ρ is the fluid density, which is held constant, and p refers to the pressure. The pressure term must be combined with the rest of Eq. (3) to ensure the flow remains incompressible, since incompressible fluids are being simulated. The final term is Eq. (7):

ν∇²u    (7)

3.2.3.4. The viscosity term. In thick fluids, friction forces pull the velocity of the fluid toward the neighborhood average. The viscosity term captures this relationship using the Laplacian operator ∇². The variable ν is called the fluid's kinematic viscosity and it determines the thickness of the fluid.

3.2.3.5. F: external forces. This term includes any external force, such as gravity or contact forces with objects. Practically, the external force term lets us set or modify velocities in the grid to any wanted value as a step of the simulation. The fluid at that point then moves naturally, as if external forces had acted on it.
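To make the role of each term in Eq. (3) concrete, here is a minimal, hedged sketch of one explicit time step on a 2D periodic grid. The pressure projection is omitted for brevity, so this alone does not keep the field divergence-free; grid size, ν and dt are illustrative assumptions:

```python
import numpy as np

def laplacian(f, dx):
    """Five-point Laplacian on a periodic grid (viscosity term)."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2

def ns_step(u, v, dt, dx, nu, fx, fy):
    """One explicit Euler step of Eq. (3) without the pressure term."""
    du_dy, du_dx = np.gradient(u, dx)   # axis 0 = y, axis 1 = x
    dv_dy, dv_dx = np.gradient(v, dx)
    conv_u = u * du_dx + v * du_dy      # (u . grad) u, x component
    conv_v = u * dv_dx + v * dv_dy      # (u . grad) u, y component
    u_new = u + dt * (-conv_u + nu * laplacian(u, dx) + fx)
    v_new = v + dt * (-conv_v + nu * laplacian(v, dx) + fy)
    return u_new, v_new
```

A full solver would follow this with a pressure-projection step enforcing Eq. (2).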

3.3. Texture based

Simulation of sweat involves modelling the water's properties and its dynamics. As for the former, the water material was defined to have a very high specular component, a low diffuse component (e.g. an RGB color of [10, 10, 10]) and a null ambient component. Then, using bump mapping, the water is rendered with a normal map of a typical pattern of sweat. The normal map's alpha channel is set to a nonzero value in the sweating zone and set to zero elsewhere. This channel is then used to place the sweat on top of the virtual human's face. Regarding dynamics, our second approach for simulating the sweat, based on texture, can be

explored by generating random sweat textures at the top of the head of the virtual human's face.
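A hedged sketch of building such an alpha layer for the sweating zone; the image size and the forehead rectangle coordinates are illustrative assumptions:

```python
import numpy as np

def sweat_alpha_mask(width=512, height=512, zone=(180, 60, 330, 140)):
    """Alpha layer for the sweat normal map: nonzero alpha only inside
    the forehead rectangle (x0, y0, x1, y1), zero everywhere else."""
    x0, y0, x1, y1 = zone
    alpha = np.zeros((height, width), dtype=np.uint8)
    alpha[y0:y1, x0:x1] = 255   # opaque inside the sweating zone
    return alpha
```

At render time this mask would be sampled to decide where the bump-mapped water material is composited onto the face texture.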

4. Implementation

A study was conducted to evaluate the influence of the sweating model on the perception of fear. The previous section discussed how to simulate sweat using a particle system and a texture-based approach, in combination with the Facial Action Coding System technique and its AUs. The next step is to apply this combination in a graphics rendering framework such as XNA to display the facial animation. XNA was chosen because the future application will be deployed on the Xbox platform.

4.1. Facial Action Coding System (FACS) and its Action Units

The Facial Action Coding System is an attempt to model all distinguishable visual facial actions, as well as actions associated with speech, as results of facial muscle movements; FACS describes all facial movements via an anatomic framework (Ekman, 1999). A FACS professional has to be familiar with all the facial muscles and the effect of their movements on the facial features. Action Units (AUs) is the term FACS uses to describe the facial movement produced by the movement of one, two, or more muscles. In the process of integrating a FACS-based expression model into a facial animation system, the animator needs to place all the muscles at exact locations to yield the wanted facial effects. Furthermore, computing the facial deformations that correspond to muscle movement intensity is a compound mission. The Facial Action Coding System describes the facial muscle and tongue/jaw movements resulting from a facial anatomy analysis. It breaks down facial actions into minor units known as Action Units (AUs). Every single AU represents the action of a specific muscle, or of a small group of muscles: a single distinguishable facial posture. In sum, FACS categorizes 66 AUs that together can produce gradable and well-defined facial expressions. Therefore, FACS has been widely utilized in facial animation over the past decade to help animators interpret and create facial expressions that are more realistic. FACS describes the group of all potential basic AUs that the human face can perform, and Fig. 4 shows the natural human face represented by the Action Units which are responsible for changing the appearance of specific areas of the face according to FACS. Tables 1 and 2 present sample Action Units and the prime expressions created by the Action Units.

Facial expressions are generated by combining independent Action Units. For instance, combining AU1 (Inner Brow Raiser), AU2 (Outer Brow Raiser), AU4 (Brow Lowerer), AU5 (Upper Lid Raiser), AU15 (Lip Corner Depressor), AU20 (Lip Stretcher) and AU26 (Jaw Drop) generates a fear expression, as shown in Fig. 5.
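The AU combinations of Table 2 can be sketched as a simple lookup; the dictionary below mirrors the table, while the weight function and its names are illustrative assumptions rather than the paper's implementation:

```python
# Basic expressions as combinations of FACS Action Units (per Table 2).
EXPRESSIONS = {
    "fear":      {1, 2, 4, 5, 15, 20, 26},
    "anger":     {2, 4, 9, 10, 17, 20, 26},
    "happiness": {1, 6, 12, 14},
    "sadness":   {1, 4, 15, 23},
}

def au_weights(expression, intensity=1.0):
    """Map an expression name to per-AU activation weights in [0, 1],
    which a blend-shape rig could consume to deform the face mesh."""
    return {au: intensity for au in EXPRESSIONS[expression]}
```

Scaling the intensity gives gradable expressions, consistent with FACS's notion of graded Action Unit activation.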

4.2. Extreme expression

To simulate extreme emotion in the virtual avatar, additional elements are needed to make the performance more realistic in computer facial animation. Extreme expressions happen in a virtual human as a result of strong emotions that make the human act out or express his feelings at that moment. This study focuses on expressing being scared until sweating. The generation of this type of expression cannot be performed using FACS only, because FACS is not enough to fully simulate extreme expressions. Therefore, in order to provide these kinds of expressions, some additional elements, like a fluid mechanism and a texture-based technique, are needed. These two generators help to create an effective performance in a 3D game. These two elements are explored in Sections 4.2.1 and 4.2.2.


Fig. 4. Human face classified by the Action Units that control the changing appearance of the eyebrows, eye cover fold, forehead, and lower and upper eyelids.

Table 1. Representation of the sample Action Units.

AU 1   Inner Brow Raiser       AU 2   Outer Brow Raiser      AU 4   Brow Lowerer
AU 5   Upper Lid Raiser        AU 6   Cheek Raiser           AU 9   Nose Wrinkler
AU 10  Upper Lip Raiser        AU 12  Lip Corner Puller      AU 14  Dimpler
AU 15  Lip Corner Depressor    AU 16  Lower Lip Depressor    AU 17  Chin Raiser
AU 20  Lip Stretcher           AU 23  Lip Tightener          AU 26  Jaw Drop

Table 2. The basic expressions based on AU combinations.

Basic expression    Involved Action Units
Fear                AU 1, 2, 4, 5, 15, 20, 26
Anger               AU 2, 4, 9, 10, 17, 20, 26
Happiness           AU 1, 6, 12, 14
Sadness             AU 1, 4, 15, 23

Fig. 5. Fear expressions based on FACS classification.

Fig. 6. Fluid rendering using particle system.


4.2.1. Fluid generation

This study provides two kinds of sweat animators. The first kind of sweat animation is dropping sweat, as shown in Fig. 6. The fluid generation creates the sweat based on a particle system, with the sweat falling to the ground instead of falling down the skin. The sweat falls continually as time increases, without a skin surface, so it looks as if it were generated randomly and without control.

The sweat in the previous figure is white and placed randomly. Fig. 7 shows several frames of falling sweat after giving some particles transparency, to show the difference between white particles and partially transparent ones; the fluid here also falls through a specific region so that the sweat resembles natural sweat in terms of color and falling.

4.2.2. Texture based

The second type of sweat is texture-based sweat. This type of sweat flows along the surface of the skin in the forehead area of the face. The sweat drops in this kind of generator move on the skin, but they are individual objects. Fig. 8 shows how the sweat is generated before connecting it to the face and how it falls, without any surface, in a specific area. Assume that the chosen area for generating sweat lies between the vertices (330, 460) and (100, 180), sampled randomly as in the code in Fig. 9.

The code shown in Fig. 9 generates sweat only for the specified area, without combining it with any surface, so it appears to fall down to the ground. In this type of generator the color and size
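In the spirit of Fig. 9's pseudocode (reproduced only as an image in the paper), random spawning within the chosen rectangular region might be sketched as follows; the region coordinates come from the text above, everything else is an assumption:

```python
import random

def spawn_sweat_drop(x_range=(100, 330), y_range=(180, 460)):
    """Pick a random spawn position inside the chosen vertex region."""
    x = random.uniform(*x_range)
    y = random.uniform(*y_range)
    return (x, y)
```

Each call yields one drop position; repeating it per frame populates the region with unattached drops, matching the description above.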


Fig. 7. Fluid rendering for several different frames using different alpha transparency values.

Fig. 8. Four selected frames of a running cycle for sweat.

Fig. 10. Four selected frames of real-time sweating synthesis in forehead area.

Fig. 11. Generating particle system for fear expression.


of sweat can be changed depending on the requirements of the application and the shape we want to combine it with.

4.2.3. Generating sweat using a particle system

After the combination between the facial animation and the fluid generator, the fluid generator uses the particle system to generate the sweat and control it so that it flows on the forehead surface area as a result of the scared expression of the virtual avatar. Sweating in humans comes in different kinds and degrees. This study offers three different scenarios of sweating using a particle system.

1st Scenario: the system generates the fluid in the side areas of the forehead, because in some cases human sweating occurs in the side areas of the forehead only. Fig. 10 presents a real-time synthesis of this phenomenon in the virtual avatar. As the sweat falls, time increases and the sweat drops' shape continually changes over time. The code in Fig. 11 implements this type of sweat.
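A hedged sketch of this first scenario (Fig. 11's code appears only as an image): emitters restricted to the two side regions of the forehead, with drops that slide down and change shape over time. All coordinates and rates below are assumptions:

```python
import random

# Assumed side regions of the forehead in texture coordinates:
SIDE_REGIONS = [((100, 150), (180, 230)),   # left side:  (x range, y range)
                ((280, 330), (180, 230))]   # right side

def emit_side_sweat(n_per_side=3):
    """Spawn sweat drops only in the two side areas of the forehead."""
    drops = []
    for (x0, x1), (y0, y1) in SIDE_REGIONS:
        for _ in range(n_per_side):
            drops.append({"pos": [random.uniform(x0, x1),
                                  random.uniform(y0, y1)],
                          "size": 1.0})
    return drops

def update_drops(drops, dt, fall_speed=40.0, growth=0.5):
    """Drops slide down the forehead and grow slightly as time increases."""
    for d in drops:
        d["pos"][1] += fall_speed * dt   # move down (y grows downward)
        d["size"] += growth * dt         # shape changes over time
```

The growth term is one simple way to realize the observation that the drops' shape changes continually as time increases.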

2nd Scenario: in some cases of human sweating, heavy sweating appears on the sides of the forehead and a little in the middle as a result of fear. For this, the sweating generation system generates sweating in the virtual character similar to the second type postulated in this research. Fig. 12 shows the second type of sweating generator based on a particle system.

Fig. 9. Generating random sweat in vertex 2 with positions between (330, 460) and (100, 180).

The second type of sweating generation based on the particle system can be implemented following the pseudocode shown in Fig. 13.

3rd Scenario: the third and last type created in this research generates sweating all over the forehead. In this type, sweating is generated over the whole forehead at the same time as the fear expression, in real time, to represent the extreme expression of the virtual avatar. Fig. 14(a) shows the normal face; the simulation of sweat in real time based on the particle system is shown in Fig. 14(b) and (c).

After the combination between the facial animation and the fluid generator, this third kind of sweat simulation can be implemented following the pseudocode in Fig. 15.

4.2.4. Generating sweats using texture

The sweat drops then have to blend smoothly with the skin surface and move down the skin over time. The procedure we created for generating textured sweat consists of the following steps:

Fig. 12. Real-time synthesis of the second type of sweating, generated with the particle system.


Fig. 13. Pseudocode for creating the real-time sweating simulation for the second type of sweating generator.

Fig. 14. Simulation of sweating in real time based on the particle system.

Fig. 16. The combination of the facial animation and texture-based sweat.

Fig. 17. Pseudocode for generating random sweat and updating it on the forehead zone.

A.H. Basori, A.Z. Qasim / Computers in Human Behavior 35 (2014) 307–314 313

First: selecting the zone on the face that the rolling sweat will pass through, and then specifying this region as an individual surface. The forehead area is identified by translating the coordinates of the face dimensions and splitting the face by those coordinates to locate the forehead precisely.
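This coordinate split can be sketched as slicing the top band off the face bounding box; the top-fraction value below is an assumed proportion, not one stated in the paper.

```python
def forehead_region(face_x, face_y, face_w, face_h, top_fraction=0.25):
    """Return (x_min, y_min, x_max, y_max) for the forehead surface:
    the top slice of the face bounding box (y grows downward).
    """
    return (face_x, face_y,
            face_x + face_w,
            face_y + face_h * top_fraction)
```

Once this rectangle is fixed, every later step (spawning, rolling, culling) can work against the same region without re-examining the face mesh.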

Second: having identified the base surface on which the sweat rolls down, the construction of the sweat objects that will fall on the forehead starts from the combination with the expression that triggers sweat generation in real time. The pseudocode in Fig. 16 shows the combination of the facial animation and the sweat.

Third: after combining the facial animation and the sweat generator, the fluid generator produces sweat randomly over the forehead area in real time together with the fear expression. The pseudocode in Fig. 17 shows how random sweat is generated and updated on the forehead zone.
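The spawn-and-update step could be written as follows in Python; the spawn probability and the function name are illustrative assumptions.

```python
import random

def update_forehead_drops(drops, dt, speed, forehead,
                          spawn_chance=0.3, rng=random):
    """Move existing drops down the forehead at `speed`, cull those
    that have rolled past the brow line (y_max), and randomly spawn
    a new drop inside the forehead rectangle.
    """
    x0, y0, x1, y1 = forehead
    survivors = [(x, y + speed * dt)
                 for x, y in drops
                 if y + speed * dt <= y1]        # cull drops past the brow
    if rng.random() < spawn_chance:
        survivors.append((rng.uniform(x0, x1), y0))  # new drop at the hairline
    return survivors
```

Calling this once per frame keeps the drop population bounded: old drops leave at the brow line roughly as fast as new ones appear at the hairline.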

Fig. 15. Pseudocode for implementing the third kind of sweating phenomenon in the virtual avatar based on the particle system generator.

Fourth: the sweat generated with the extreme expression should resemble the natural sweating phenomenon in terms of speed. Therefore, the sweat speed in this research is calibrated to match that of a natural human, in order to increase the realism of the virtual avatar. The pseudocode in Fig. 18 shows the falling speed of sweat in the virtual avatar:
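One way to cap the fall speed at a human-plausible rate is a terminal-velocity clamp, sketched below; the gravity and terminal constants (in pixels per second) are illustrative, not measured values from the paper.

```python
def fall_step(y, velocity, dt, gravity=980.0, terminal=60.0):
    """One integration step for a sweat drop: accelerate it, but clamp
    its speed to `terminal` so it rolls down at a rate resembling real
    sweat rather than free-falling.
    """
    velocity = min(velocity + gravity * dt, terminal)
    return y + velocity * dt, velocity
```

Without the clamp, drops driven by plain gravity cross the forehead in a few frames; the terminal limit is what slows them to a believable crawl.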

The steps above have been used to generate the sweating for the virtual avatar. An example of the sweat simulation, in which the system combines the facial animation with the texture-based sweat simulation, is shown in Fig. 19.

A few frames of the sweat simulation can be seen in Fig. 20, where the running cycle of sweat generation during the extreme expression is implemented.

The proposed method is evaluated by benchmarking it against previous work. Fig. 21 shows the fear-emotion

Fig. 18. Pseudocode for controlling the speed of a sweat drop.


Fig. 19. An example of a sweat simulation based on the texture generator.

Fig. 20. Four frames taken from a sweating simulation using the texture-based method.

Fig. 21. Fear emotion for the Alice expression (Balci et al., 2007).


keyframes used for the Alice model (Balci, Not, Zancanaro, & Pianesi, 2007).

Fig. 21 shows the facial expression of fear on a virtual human. However, this expression is limited to a regular facial expression and cannot handle extreme expressions, because extreme expressions involve additional parameters such as sweat, tears, and skin color change. Thus, the technique proposed in this paper overcomes this limitation and provides an avatar capable of expressing certain extreme expressions such as sweating.

5. Conclusions

This paper presents an innovative approach for generating sweat as part of extreme expression. The method and formula for generating animated, realistic fluid sweat based on a particle system, in which drops continually fall and change shape as time increases, have been introduced. The procedure and method for generating real-time textured sweat have also been presented. The real-time sweating simulation method integrates two main approaches: the particle system and the texture-based method. The results show that the user can choose the type of sweating, particle or texture, and can also change the sweating parameters freely in real time.

Acknowledgements

This research is supported by the Department of Informatics, Institut Teknologi Sepuluh Nopember Surabaya (ITS), and the Ministry of Science and Technology (MOSTI), in collaboration with the Research Management Center (RMC), Universiti Teknologi Malaysia (UTM).

References

Balci, K., Not, E., Zancanaro, M., & Pianesi, F. (2007). Xface open source project and SMIL-Agent scripting language for creating and animating embodied conversational agents. In Proceedings of MM'07, September 23–28, 2007, Augsburg, Bavaria, Germany. ACM.

Bridson, R., & Müller-Fischer, M. (2007). Fluid simulation: SIGGRAPH 2007 course notes. In ACM SIGGRAPH 2007 courses, San Diego, California.

de Melo, C., & Gratch, J. (2009). Expression of emotions using wrinkles, blushing, sweating and tears. In Z. Ruttkay, M. Kipp, A. Nijholt, & H. Vilhjálmsson (Eds.), Intelligent virtual agents (Vol. 5773, pp. 188–200). Berlin, Heidelberg: Springer.

Ekman, P. (1999). Basic emotions. In The handbook of cognition and emotion (pp. 45–60). UK: John Wiley & Sons, Ltd.

Ettinger, R. H. (2008). Emotion and stress. In D. M. Parker (Ed.), Understanding psychology (2nd ed., pp. 325–361). Horizon Textbook Publishing.

Gladilin, E., Zachow, S., Deuflhard, P., & Hege, H. (2004). Anatomy- and physics-based facial animation for craniofacial surgery simulations. Medical and Biological Engineering and Computing, 42(2), 167–170. http://dx.doi.org/10.1007/bf02344627.

Ilie, M. D., Negrescu, C., & Stanomir, D. (2011). Circular interpolation for morphing 3D facial animations. Romanian Journal of Information Science and Technology, 14(2), 131–148.

Lee, S., Carlson, G., Jones, S., Johnson, A., Leigh, J., & Renambot, L. (2010). Designing an expressive avatar of a real person.

Liu, C. (2009). An analysis of the current and future state of 3D facial animation techniques and systems. Master of Science thesis, Simon Fraser University.

Müller, M., Charypar, D., & Gross, M. (2003). Particle-based fluid simulation for interactive applications. In Proceedings of the 2003 ACM SIGGRAPH/Eurographics symposium on computer animation, San Diego, California.

Oblivion (2009). The Elder Scrolls IV: Oblivion. <http://www.elderscrolls.com/games/oblivion/overview.htm> (accessed 07.09).

Tol, W., & Egges, A. (2009). Real-time crying simulation. In Proceedings of the 9th international conference on intelligent virtual agents, Amsterdam, The Netherlands.

Yee, N., Bailenson, J. N., & Rickertsen, K. (2007). A meta-analysis of the impact of the inclusion and realism of human-like faces on user experiences in interfaces. In Proceedings of the SIGCHI conference on human factors in computing systems, San Jose, California, USA.

Ypsilos, C. I. A. (2004). Capture and modelling of 3D face dynamics. University of Surrey, Guildford, Surrey GU2 7XH, UK.

Zhang, Y., Ji, Q., Zhu, Z., & Yi, B. (2008). Dynamic facial expression analysis and synthesis with MPEG-4 facial animation parameters. IEEE Transactions on Circuits and Systems for Video Technology, 18(10), 1383–1396.