
Avatar-based Sign Language Training Interface for Primary School Education

Rabia Yorganci 1   Ahmet Alp Kındıroğlu 2   Hatice Kose 1
1 Istanbul Technical University   2 Boğaziçi University

34469 Istanbul, Turkey   34342 Istanbul, Turkey
{rabia.yorganci,hatice.kose}@itu.edu.tr, [email protected]

ABSTRACT

The aim of this study is to present usable educational tools for students with special needs. Hearing-impaired students cannot use the text- and speech-based technological educational material that is becoming a crucial tool in modern education. Like videos of signers, a signing avatar for TİD (Turkish Sign Language) allows information to be communicated in the form of visual gestures, and thus remains usable when the use of text is unfeasible. Compared to videos, animated avatars offer several advantages, such as easy reproduction of gesture sequences, control of the point of view, adjustment of the signing speed, and smaller storage and bandwidth requirements. For this study, a signing avatar was developed to represent a portion of the social sciences course of the Turkish primary education curriculum. To analyze the success and effectiveness of the interface, the performance of text-based and avatar-based interaction in a human-computer interaction scheme was compared with elementary school students. The results demonstrate that avatar-based tutoring was more effective than text-based material in assessing the children's knowledge of certain sign language words. Since the aim was to make the signing avatar as comprehensible as possible, the results demonstrate that this goal has been achieved.

KEYWORDS

Human-computer interaction, Sign language synthesis, Children's education, Language education, Avatar

1 INTRODUCTION

In the cognitive development process of children, primary school education plays a crucial role. Studies indicate that a lack of access to primary school education for deaf children causes serious problems, including a lack of language skills, the presence of cognitive impairments, problems in socially engaging with their peers, and over-dependence on family members [1]. Unlike hearing children, who mostly learn their spoken language skills before starting primary school, deaf children often start primary school without knowing a spoken or sign-based language. This can negatively affect age-appropriate literacy learning, making an extra effort in education necessary.

One of the common approaches for the education of hearing-impaired children is to teach sign-language-based communication first and then to teach written and spoken languages with the help of sign language. Sign languages are the communication medium of the hearing impaired. They are visual languages that rely on the positioning and movements of the hands, arms, and upper body, as well as facial gestures, to convey the meanings of concepts. Being designed to convey meaning rapidly in 3D space, sign languages do not have a practical, well-defined written form. For that reason, educational support material such as books and worksheets designed to aid and assess learning is mostly created using videos, which are hard to create, edit, store, and transfer.

In primary school education, classical written materials and technological materials are important because they allow students to continue the learning and assessment processes without the direct attention of the instructor. In addition, it has been shown that using technologies in the classroom can increase student learning and motivation. However, deaf children who cannot use and understand books or computer-based material only participate in learning when an instructor or parent directly engages with them face to face. The exception to this is sign language videos, which instructors use to create materials and replay signs. However, the creation, storage, and usage of videos all require great effort.

In this study, a signing-avatar-based tool that serves as a method for generating video-like materials for the primary school education of hearing-impaired children is introduced. Compared to video-based materials, the avatar makes it easier and less costly to create large amounts of material, easier to edit content, easier to transfer materials to students so that they can study at home with their parents, and easier to store them. Content from the social sciences book of the Turkish primary school curriculum was generated using the avatar, and tests from the book were used to evaluate the system. The avatar was evaluated in terms of comprehensibility by comparing it to textual input with school children. Children from grades three to five were asked to comprehend social studies questions presented either as avatar-based TİD signs or as textual questions. The students then chose the correct answer from among multiple answers in the form of images. The rest of the paper is organized as follows: in Section 2, related work in the field is presented; in Section 3, the signing avatar and the social science quiz methodology are explained; finally, in Sections 4 and 5, the experimental results are given and conclusions are presented.

2 RELATED WORKS

Using technological tools is a preferred method in education research, especially for researchers working with children. There are various commercial games for the education and therapy of babies and infants, both on mobile devices [2], [3] and on desktop computers [4]. Several gaming-based projects have also been designed for the therapy and education of children with special needs, including studies developed for hearing-impaired children [5]–[8]. The ICICLE (Interactive Computer Identification and Correction of Language Errors) project aimed to create an educational system for deaf children to provide them with individual lectures and guidelines through computer-aided commands [9]. There are also various assistive applications for children with autism focusing on imitation, face and emotion recognition, and speech therapy [10]–[12].

Unlike spoken languages, which can be represented through text or speech, sign languages are complex linguistic structures composed of intricate visual and spatial elements. Multiple parts of the human body and their motion in the 3D space around the signer cannot be represented efficiently in written form. For that reason, generating novel sign language material is often a tedious task that requires people to perform gestures in front of a camera.

While there exist markup languages such as HamNoSys or SiGML [13] to represent sign languages, they are often inefficient both for writing and for reading. Therefore, to digitally generate sign language material and for interaction purposes, the use of human-like digital output mediums becomes a necessity. The development of virtual-reality human models, or avatars, for sign language is a relatively novel research area. With the progress of research in computer graphics, it is technologically possible to create human animations capable of performing sign languages. In computer graphics, many types of animated human characters have been built for different purposes, from automobile and aircraft crash simulators to motion analysis, gaming, and entertainment. In [14], Webber et al. summarize the desired properties of human models as being structured like a human, moving like a human, looking like a human, and having the same size and movement restrictions as a human. In the market for human model development, there exist many modeling tools capable of creating high-quality models. These can be grouped by their focus: Blender, ZBrush, 3ds Max, and Maya are more artistic tools, while Rhino, SolidWorks, and Autodesk are more engineering-oriented. The models created by these tools can be rendered and animated using graphics libraries like OpenGL and DirectX, or used with 3D engines such as Unity or Unreal 4. The quality of computer-generated human models has long since passed the point that allows building interfaces which can present information from computers to deaf users via avatars in real time. However, such signing avatars are not readily available in everyday life. The main hurdle for the development of such software lies in the challenge of providing applications with the right instructions to generate linguistically comprehensible information.

According to Kipp et al., existing methods to translate written languages into animation can be grouped into two broad approaches: concatenative and articulatory methods [15]. In the concatenative approach, the movements for individual concepts in sign language are created separately. Therefore, to represent a sentence in sign language, these individual signs must be concatenated, with possible smoothing to make transitions natural. There are several examples of avatars using concatenative methods [16]–[18]. As a more natural alternative, articulatory methods involve computationally generating the sequence of signs at runtime. There exist many avatars using articulatory synthesis, such as GesSyCa [19] and SignSynth [20].

Articulatory approaches were also used in the EU projects ViSiCAST and eSIGN to allow the conversion of written material into avatar animation for the hearing impaired. The ViSiCAST project [21] focused on converting written material into sign languages and used the HamNoSys notation to represent signs. In the eSIGN project, which followed ViSiCAST, the transition from concatenated to computational methods was achieved through a markup language called SiGML. Motion and sign capture tools were used to capture upper-body movements and facial expressions and to represent them on a 3D avatar. In this project, text written in English was translated into the British, German, and Dutch sign languages (BSL, GSL, DSL). In the Text and Sign Support Assistant project, developed as a part of ViSiCAST, a post office application was built using an avatar signing in BSL [22]. The eSIGN project was commissioned with the goal of translating written online materials in English into BSL using an avatar. As an output of the project, an application with a weather-forecast-themed vocabulary was developed. In the development phase of the application, the content of the original web page was first translated into BSL by linguistic specialists. The corpus generated by this translation was then manually animated using the created avatar and added to a dictionary. The final output of the website was produced by displaying the corresponding content found in the avatar dictionary on the web page. Another EU project following in the footsteps of ViSiCAST and eSIGN was Dicta-Sign [23]. This project made use of video recordings of human signers instead of avatar-generated synthetic videos. It recorded corpora in Greek, British, German, and French Sign Languages and focused on the conversion of these into one another and into their corresponding written languages.

3 SOCIAL SCIENCE QUIZ

The aim of this project is to create avatar-based interactive software for the education of deaf children. To understand their attitude toward interaction with other people, especially in their educational environment, and to ensure the correctness of the animations, consultants whose native language is TİD were consulted, and each step of the project proceeded with their guidance and approval.

3.1 Research Questions and Hypothesis

Avatar-based education tools and assessment methods were designed as part of the sign language training of deaf children at early ages. The interface was customized specifically for users who are hearing impaired and have reading difficulties. The experiments were designed to find answers to two major research questions:

• The use of facial expressions in TİD: Differentiating the non-verbal components of signs from facial expressions that are independent of the performed sign is an important part of corpus generation. Distinguishing the minimum required modalities of a sign from user-specific additions allows the construction of an avatar that is concise. This minimizes the confusion of children when they observe signs performed by an avatar.

• Effectiveness of TİD in primary school education: Deaf children learn spoken languages more slowly than sign languages. In elementary education, teaching tools and the curriculum are designed with the spoken language in mind. Therefore, when learning novel content, deaf children have to learn it in a language that they are not comfortable with. The use of a signing avatar enables instructors to teach their course to deaf children without the language barrier the children would face when following the same curriculum using books.

3.2 Animation Generation

In this project, Mery (www.meryproject.com), a character free for non-commercial use, is employed. She was created in Maya (www.autodesk.com/products/maya/), 3D animation, modeling, simulation, and rendering software developed by Autodesk. The software offers advanced modeling tools to create scenes and animations. Animations can be created either manually or automatically through a motion capture device. Maya supports the Python and MEL scripting languages for developing animations systematically. The animations used in the mobile and web-based applications were created with Maya. Besides Maya, MotionBuilder (www.autodesk.com/products/motionbuilder/), another piece of software from Autodesk, is an option for creating automatic animations.

3.3 Implementation on the Avatar

To generate the signs on the avatar, Mery, experiments were performed with both automatic and manual approaches. The primary software used for the generation of the avatar is Maya. In the studies prepared for this paper, the manual implementation method was used.

3.3.1 Manual generation of TİD sentences using Maya software

Maya lets the user create animations frame by frame. For upper-body joints other than the face, rotation values (about the X, Y, and Z axes) are used to create animations. For facial gestures (such as the eyes, mouth, and nose), translation values (mostly along the Y axis) are used. In addition, by using a tool called the Trax Editor, a library of animation clips can be created for a character set, and the clips can be reused on different signs whenever they are needed. This tool made the manual creation of sentences in Maya easier.
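As a concrete illustration of this frame-by-frame workflow, the following is a minimal sketch using Maya's Python cmds module (it runs only inside Maya); the control name and angle values are hypothetical placeholders rather than the actual controls of the Mery rig.

```python
# Minimal sketch of frame-by-frame keyframing with Maya's Python API.
# The control name and angles are hypothetical, not the actual Mery rig.
import maya.cmds as cmds

def key_rotation(control, frame, rx, ry, rz):
    """Set X/Y/Z rotation values on a control and keyframe them at the given frame."""
    cmds.setAttr(control + ".rotateX", rx)
    cmds.setAttr(control + ".rotateY", ry)
    cmds.setAttr(control + ".rotateZ", rz)
    cmds.setKeyframe(control, attribute=["rotateX", "rotateY", "rotateZ"], time=frame)

# Example: raise the right arm over 12 frames at the start of a sign.
key_rotation("R_shoulder_ctrl", 1, 0.0, 0.0, 0.0)
key_rotation("R_shoulder_ctrl", 12, -45.0, 10.0, 0.0)
```

Per-joint keyframes of this kind are the raw material that tools such as the Trax Editor and Studio Library package into reusable clips and poses.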


Besides the Trax Editor, different plugins were available; Studio Library (www.studiolibrary.com) was used during the implementation phase.

3.3.2 Sign Library

To avoid recreating the same hand shapes, arm postures, and facial expressions for different signs, a library of poses and animations was created using Studio Library in Maya. The sign language library is divided into three main sections: Animation (torso and arm animations, Fig. 1(a)), Face (poses, Fig. 1(b)), and Hand (poses, Fig. 1(c)). These poses and animations can be combined in any order. To create a sign or a sign sentence from a number of previously created poses, the necessary body parts and time frame are selected. Interpolations between poses are then calculated for each body part, and animations for the selected joints are generated. Using this method, the process of creating new signs or sentences was greatly accelerated; a minimal illustrative sketch of this pose-blending step is given after Fig. 1.

Figure 1. Examples from the Sign Library: (a) Animation, (b) Face, (c) Hand
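As a rough sketch of the interpolation step described above, the following shows a linear blend between two stored poses; in the project this runs inside Maya on Studio Library poses, so the pose format and joint names here are simplified placeholders.

```python
# Illustrative pose blending: linearly interpolate joint rotations between two
# library poses. Pose format and joint names are placeholders, not the
# project's actual Studio Library data.
from typing import Dict, List, Tuple

Pose = Dict[str, Tuple[float, float, float]]  # joint/control name -> (rx, ry, rz)

def interpolate_poses(start: Pose, end: Pose, steps: int) -> List[Pose]:
    """Return steps+1 in-between poses from start to end (linear interpolation)."""
    frames = []
    for i in range(steps + 1):
        t = i / steps
        frame = {
            joint: tuple(a + t * (b - a) for a, b in zip(start[joint], end[joint]))
            for joint in start
        }
        frames.append(frame)
    return frames

# Example: 10 in-betweens from a neutral hand pose to a pointing pose.
neutral = {"index_finger": (0.0, 0.0, 0.0)}
pointing = {"index_finger": (0.0, 0.0, -60.0)}
clip = interpolate_poses(neutral, pointing, 10)
```

Each in-between pose can then be keyed onto the selected joints, which is what accelerates the creation of new signs and sentences.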

3.4 Preliminary Test Scenarios

To evaluate the usability and acceptability of Mery in a sign language education environment, two test cases, targeting the torso and the face, were designed. During the preparation phase of these tests, manual generation of signs using the Maya software was used.

The first test included only the torso [24]. These tests were designed to teach signs to non-signers and to evaluate the comprehensibility of those TİD gestures without facial expressions. The selected words were prepared in Maya, and a questionnaire was drawn up within the framework whose details are given in [7].

The second set of tests aimed to increase the comprehensibility of the avatar by adding facial expressions. In sign language, facial gestures include the facial expressions of the six primary emotions. After observing the sign language translation of the primary school written educational material, it was concluded that there are also other facial expressions frequently used in TİD, such as boredom and questioning. These expressions were implemented on Mery using Maya, in combination with torso gestures [25].

3.5 Interaction Scenario

Based on the results of the isolated-sign preliminary tests on the face and torso, sentence tests were created. These sentences were generated with the aid of the Sign Library. The trials began with demographic questions: name, age, gender, and level of TİD knowledge. To evaluate the success of Mery, three test versions were created, containing only text, text and a human signer's video, or text and Mery's video. With this test, the aim was to observe the effectiveness of the signing avatar as an educational tool (text-only vs. text with videos) and to measure the comprehensibility of the animated character (human vs. Mery). An example test question is given in Fig. 2; a sketch of how one such question might be represented is given below.
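As an illustration only (the paper does not specify the questionnaire's internal format), a single quiz item across the three variants could be represented roughly as follows; the field names, file paths, and marked correct option are hypothetical.

```python
# Illustrative sketch of one quiz item in the web-based questionnaire.
# Schema, media paths, and the correct option are assumptions, not the
# project's actual format.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class QuizQuestion:
    text: str                          # written question taken from the class book
    video_url: Optional[str] = None    # None for the text-only variant
    answer_images: List[str] = field(default_factory=list)  # image answer choices
    correct: List[int] = field(default_factory=list)        # indices; several may be correct

# Avatar-video variant of the fourth question.
q4 = QuizQuestion(
    text="Please find Anıtkabir, the resting place of Atatürk.",
    video_url="media/mery/q4.mp4",
    answer_images=["img/q4_a.png", "img/q4_b.png", "img/q4_c.png", "img/q4_d.png"],
    correct=[0],  # placeholder index
)
```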

4 EXPERIMENTS AND RESULTS

4.1 Experimental Setup

Figure 2. Example test question from the Social Science Quiz

A Social Science Course test was designed for primary school children. First, questions were selected from the class book offered by the primary school curriculum of Turkey. These sentences were performed by a sign language instructor in front of a video camera. In addition, skeleton coordinates of all the words in the book were collected with a Kinect 2 sensor as isolated signs.

The selected sentences were created on Mery using Maya. After the creation phase, web-based questionnaires were prepared. Three versions of the questionnaire were designed, which differed by question format: 1. only text, 2. text and a human signer's video, and 3. text and Mery's video. In the experiments, the first and the third formats were tested.

The experiments with primary school children were conducted in a room at a school for the hearing impaired. One experimenter had an assistive role, explaining the rules to each child one by one. The children completed the test one at a time; therefore, they were not exposed to the questions before the test. The images serving as answers to the questions were copied directly from the social studies class book. 25 students aged 9–12 (average age: 10.12 ± 1.37) from Dost Eller Primary School for Hearing Impaired Children completed the tests. Of the 25 attendees, 13 were girls and 12 were boys, and 15 of the attendees specified that they were familiar with TİD.

4.2 Results

In this study, questions accompanied by avatar-based sign videos and text were compared with text-only questions, and preliminary results were obtained. The questions in this experiment were multiple-choice questions, meaning any question could have more than one correct answer. The number of correct and wrong answers per question is given in Table 1.

Table 1. Number of correct and wrong answers in the sentence-based test

# Correct Wrong

1 2 2

2 2 2

3 2 1

4 1 1

5 1 1

6 2 1

7 2 2

8 1 2

9 1 2

10 1 2

Total 15 16

Every participant completed the test session without any interruption or help from their instructor. The questions in the experiments were taken from the first-grade social studies book. However, since hearing-impaired children acquire the communication skills required for the book only in the third to fifth grades, the attendees of the experiment were selected from those classes. Overall accuracy ratios of the experiments are presented in Table 2. As can be observed from the table, the ratio of correct answers was higher for the experiments with the avatar-based sign videos and lower for the text-only version (Fig. 3).

Figure 3. Correct and wrong answers per question

Table 2. Overall accuracy ratios

Correct Wrong

Text-only 45.33% 32.50%

Text&Mery 66.11% 27.08%

4.3 Discussion

The social science test with the avatar was prepared in three phases: a word-based sign video test involving the arms and torso [24], face gesture tests [25], and a sentence-based test to evaluate the effect of material supporting textbooks for primary school students. The first two phases were initial steps toward the final test.

Hearing-impaired children have adaptation problems with a regular curriculum, which is usually text-based. Learning literacy is a challenging obstacle for these children. Although most students learn it in the first year of primary school, the average age of learning is higher for these special children. Because of this, hearing-impaired children lag behind their peers in education. According to teachers, parents of deaf children refuse sign language training in schools, and this attitude postpones sign language courses in primary schools. Although the participants were approximately 10 years old, almost half of them (Table 3) could not communicate in TİD. In addition, most of these children had little ability in spoken Turkish. Even for some of the students who claimed to know TİD, a lack of ability was observed during the test.

The participating children did not know some fundamental concepts, such as natural disasters, and some abstract concepts. While they were taking the test, some of them asked the meaning of “earthquake”, “hibernation”, etc., both in spoken Turkish and in TİD. The curriculum assumes that these concepts are covered in the first grade, which causes a greater problem for deaf children than for their hearing counterparts.

Table 3. TİD knowledge level of the participants

Text-only Text&Human Text&Mery Overall

Do not know 2 0 1 3

I know a little 3 1 3 7

I know 5 2 8 15

The fourth question (“Please find Anıtkabir, the resting place of Atatürk.”) had the highest accuracy rate in both scenarios. Besides the overall accuracy ratio, the gap between the accuracy ratios of the first and fifth questions (“Which tools do you think the kids in the picture require to paint a painting? Please check the appropriate boxes.” and “Which of the children shown below has not been inoculated?”) and those of the other questions shows that the children did not know the concepts present in these questions.

5 CONCLUSIONS

The avatar-based sign language training interface is part of an ongoing nationally funded project that aims to create a TİD-based educational tool and dictionary specialized for the primary school curriculum. As preliminary experiments, usability tests for the arms and facial expressions were conducted. Before collecting responses from the participants, sentences selected from the first-year primary school Social Science Exercise Book were created on the avatar. The book was used for usability studies of the avatar with primary-level students. The test was designed with the aim of comparing the educative effectiveness of the avatar with that of textual educational tools.

The results indicate that avatar-based tutoring was more effective than textual assessment in evaluating the children's knowledge of certain sign language words. One major source of problems in both the avatar-based and the textual assessment was that the children were not familiar with the curriculum; for example, they did not know what an earthquake was. These experiments demonstrated that the signing avatar is a viable alternative to textual educational materials for hearing-impaired children.

An overall look at the interaction experiment scheme demonstrates that the use of virtual, non-textual tools is useful in the learning process of these children. Avatar technology, which has been shown to be a close and successful imitation of videos, may be used to fill that gap, as it may provide more readily available and cheaper support for the education of primary school children with special needs.

6 FUTURE WORKS

One of the limitations of this work was that the corpora were limited to a small vocabulary size, and the interaction and learning experiments with children did not exceed a limited time. While that was sufficient to support the hypothesis of this project, the completion of the overall project corpora will allow researchers to examine the effect of using technological aids over an entire semester. Such work would require a significant amount of collaboration and monitoring, with randomized tests to keep track of student improvement at different intervals.

One step that will be completed toward achieving the overall project corpora is the implementation of fully automatic generation of avatar-based content using a Kinect camera and Leap Motion sensors. By reducing Maya-based involvement to a minimum, avatar videos will be produced much faster by linguists, without the need for designer involvement.
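To make the intended pipeline concrete, the sketch below replays captured joint rotations (e.g., exported from a Kinect or Leap Motion recording) as avatar keyframes; the file format, joint names, and mapping are assumptions, since the automatic pipeline is still future work.

```python
# Hypothetical sketch of the planned automatic pipeline: recorded joint
# rotations are mapped onto avatar controls and keyframed. The recording
# format and joint mapping are assumptions.
import json

def load_recording(path):
    """Load a recording as a list of frames: {sensor joint name: [rx, ry, rz]}."""
    with open(path) as f:
        return json.load(f)

def replay_on_avatar(frames, joint_map, key_rotation):
    """Keyframe each mapped avatar control with the captured rotations, frame by frame."""
    for frame_index, frame in enumerate(frames, start=1):
        for sensor_joint, avatar_control in joint_map.items():
            if sensor_joint in frame:
                rx, ry, rz = frame[sensor_joint]
                key_rotation(avatar_control, frame_index, rx, ry, rz)

# Example (inside Maya, reusing the key_rotation helper sketched earlier):
# frames = load_recording("sign_earthquake.json")
# replay_on_avatar(frames, {"ShoulderRight": "R_shoulder_ctrl"}, key_rotation)
```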

Acknowledgement

This study is part of an ongoing project supported by the Scientific and Technological Research Council of Turkey under contract TUBITAK FATIH 114E263.

REFERENCES

[1] C. Vaccari and M. Marschark, "Communication between parents and deaf children: implications for social-emotional development," 1997. doi: 10.1111/j.1469-7610.1997.tb01597.x.

[2] Lingling language learning game.

[3] MyBabyDrum Android application. [Online]. Available: http://www.dooet.com/android/My-baby-drum-3376814129.

[4] Little Mozart 2 - LIREC project. [Online]. Available: http://mozart.imagina.pt/.

[5] N. Adamo-Villani, "A virtual learning environment for deaf children: design and evaluation," IJASET - International Journal of Applied Science, Engineering, and Technology, vol. 16, pp. 18–23, 2006.

[6] P. Uluer, N. Akalin, R. Yorganci, H. Kose, and G. Ince, "A new robotic platform as sign language tutor," in IEEE/RSJ International Conference on Robots and Intelligent Systems (IROS) International Workshop on Assistance and Service Robotics in a Human Environment, 2013.

[7] A. Ozkul, H. Kose, R. Yorganci, and G. Ince, "RoboStar: an interaction game with humanoid robots for learning sign language," in IEEE International Conference on Robotics and Biomimetics (ROBIO), IEEE, 2014, pp. 522–527.

[8] H. Kose, P. Uluer, N. Akalin, R. Yorganci, A. Ozkul, and G. Ince, "The effect of embodiment in sign language tutoring with assistive humanoid robots," International Journal of Social Robotics, vol. 7, no. 4, pp. 537–548, 2015.

[9] C. Greenbacker and K. McCoy, "The ICICLE project: an overview," in First Annual Computer Science Research Day, Department of Computer and Information Sciences, University of Delaware, Tech. Rep., 2008.

[10] Autism Speaks mobile application.

[11] K. Dautenhahn, C. L. Nehaniv, M. L. Walters, B. Robins, H. Kose-Bagci, N. A. Mirza, and M. Blow, "KASPAR - a minimally expressive humanoid robot for human-robot interaction research," Special Issue on "Humanoid Robots", Applied Bionics and Biomechanics, vol. 6, no. 3, pp. 369–397, 2009.

[12] S. Powell, Helping Children with Autism to Learn. London: David Fulton Publishers, 2000.

[13] R. Elliott, J. Glauert, R. Kennaway, and K. Parsons, "ViSiCAST deliverable D5-2: SiGML definition," ViSiCAST project, Tech. Rep., 2001.

[14] B. L. Webber, C. B. Phillips, and N. I. Badler, "Simulating humans: computer graphics, animation, and control," Center for Human Modeling and Simulation, p. 68, 1993.

[15] M. Kipp, A. Heloir, and Q. Nguyen, "Sign language avatars: animation and comprehensibility," in Intelligent Virtual Agents, 2011, pp. 113–126.

[16] J. A. Bangham, S. J. Cox, R. Elliott, J. R. W. Glauert, I. Marshall, S. Rankov, and M. Wells, "Virtual signing: capture, animation, storage and transmission - an overview of the ViSiCAST project," in IEEE Seminar on Speech and Language Processing for Disabled and Elderly People, IET, 2000, pp. 1–6.

[17] M. Verlinden, C. Tijsseling, and H. Frowein, "A signing avatar on the WWW," in Gesture and Sign Language in Human-Computer Interaction, Springer, 2001, pp. 169–172.

[18] H. Sagawa, M. Ohki, T. Sakiyama, E. Oohira, H. Ikeda, and H. Fujisawa, "Pattern recognition and synthesis for a sign language translation system," Journal of Visual Languages and Computing, vol. 7, no. 1, pp. 109–127, 1996. doi: 10.1006/jvlc.1996.0007.

[19] T. Lebourque and S. Gibet, "High level specification and control of communication gestures: the GESSYCA system," in Computer Animation 1999, Proceedings, IEEE, 1999, pp. 24–35.

[20] A. B. Grieve-Smith, "SignSynth: a sign language synthesis application using Web3D and Perl," Synthesis, pp. 134–145, 2002, issn: 16113349. doi: 10.1007/3-540-47873-6_14.

[21] ViSiCAST at UEA. [Online]. Available: http://www.visicast.cmp.uea.ac.uk (visited on 04/30/2016).

[22] S. Cox, M. Lincoln, and J. Tryggvason, "TESSA, a system to aid communication with deaf people," in Proceedings of the Fifth International ACM Conference on Assistive Technologies, ACM, 2002.

[23] E. Efthimiou, S. E. Fontinea, T. Hanke, J. Glauert, R. Bowden, A. Braffort, C. Collet, P. Maragos, and F. Goudenove, "Dicta-Sign: sign language recognition, generation and modelling: a research effort with applications in deaf communication," in Proceedings of the 4th Workshop on the Representation and Processing of Sign Languages: Corpora and Sign Language Technologies, 2010, pp. 80–83.

[24] R. Yorganci, N. Akalin, and H. Kose, "Avatar tabanlı etkileşimli işaret dili oyunları" (Avatar-based interactive sign language games), in Uluslararası Engelsiz Bilişim 2015 Kongresi, Manisa, 2015.

[25] E. Aklan and N. Tarlakazan, "Avatar based interaction studies: gestures and facial expressions in sign language with virtual avatars," Istanbul Technical University, Tech. Rep., 2016, Graduation Project.