
The Haptic Deictic System—HDS: Bringing Blind Students to Mainstream Classrooms

Francisco CMB Oliveira, Francis Quek, Member, IEEE, Heidi Cowan, and Bing Fang

Abstract—Mathematics instruction and discourse typically involve two modes of communication: speech and graphical presentation. For the communication to remain situated, dynamic synchrony must be maintained between the speech and the dynamic focus in the graphics. In sighted people, vision is used for two purposes: access to graphical material and awareness of embodied behavior. This embodiment awareness keeps communication situated with visual material and speech. Our goal is to assist students who are blind or visually impaired (SBVI) in accessing such instruction/communication. We employ the typical approach of sensory replacement for the missing visual sense. Haptic fingertip reading can replace visual material. We want to make the SBVI aware of the deictic gestures performed by the teacher over the graphic in conjunction with speech. We employ a haptic glove interface to facilitate this embodiment awareness. We address issues from the conception through the design and implementation to the effective and successful use of our Haptic Deictic System (HDS) in inclusive classrooms.

Index Terms—Haptic glove, multimodal interaction, discourse, gaming, embodied skill, assistive technology.


1 INTRODUCTION

MATHEMATICS instruction and discourse typically involve two modes of communication: speech and graphical presentation. For the communication to remain situated, dynamic synchrony must be maintained between the speech and the dynamic focus in the graphics. For individuals with typical sight, vision is used for two purposes: access to graphical material and awareness of the embodied behavior of the instructor. This embodiment awareness keeps communication situated between visual material and speech. Our goal is to assist students who are blind or have visual impairments (SBVI) in accessing such instruction/communication. We employ the typical approach of sensory replacement for the missing visual sense by using haptic fingertip reading to replace visual material. We propose that the use of haptic gloves paired with computer vision-based tracking will help the SBVI maintain reading focus on a raised-line representation of a print graphic to which the instructor points while speaking.

We performed perception-related experiments [1] that ascertained that:

1. the glove is able to convey a sense of direction;
2. the glove does not interfere with fingertip reading;
3. a person can navigate with the help of this system while listening to a story; and
4. it is possible to fuse the information received from both senses.

Our studies suggest that assistive technology use must become embodied and automatic before it can support dynamic, fluent instructional discourse. We investigated a strategy whereby participants were able to develop skills in using our technology in a fun and challenging way through a computer game. Skills developed through this training game were shown to be transferable to a complex multimodal situated discourse condition.

Subsequently, our system was employed in mathematics classes attended by both SBVI and students with typical sight. For instructors, the technology allowed them to: 1) adjust the pace of the lecture to ensure that all students were following them; 2) better understand the students' signs of confusion and act upon them to ensure their understanding; and 3) act more naturally, as they did not have to think of how to verbalize the information displayed on the graphs. Overall, instructors agreed that the use of the technology improved the quality of instruction. For the SBVI, the system made teachers pay more attention to them and did not make them more tired than not using it. For the sighted students, the system: 1) improved lecture fluidity; 2) made the SBVI more participative in classroom discussions; and 3) did not make the instructors pay less attention to them. These studies and findings are detailed in the following sections.

2 THE HAPTIC-DEICTIC SYSTEM (HDS) ARCHITECTURE AND DESIGN

Fig. 1 is a collage of pictures of the instruction scenario that contextualizes our ensuing discussion.

Fig. 1A shows a classroom scene with the instructor pointing to a graphic displayed on a poster and a pair of seated students (one SBVI and one sighted) receiving instruction. The instructor's pointing gestures (with a wand in the figure, but the system is capable of tracking an unadorned hand) are tracked by the camera in the iMac. Fig. 1B shows the two students, with the SBVI reading a raised-line version of the graphic on the poster. A down-looking camera tracks the student's reading hand (a frame of the down-looking camera video is pictured in Fig. 1E). Fig. 1F shows the internal detail of the haptic glove worn by the SBVI. Fig. 1D shows the screen of the iMac, in which a video of the instructor is shown with the reading location of the SBVI highlighted as a green dot. During instruction, an iMac camera constantly tracks the instructor's pointing behavior, and the student's down-looking camera tracks the position of the reading hand. The system determines the disparity between where the instructor is pointing and where the student's reading hand is positioned. Then, it computes the direction (north, northeast, etc.) the student needs to move her hand to read the correct location on her raised-line graphic. Finally, a vibrating actuator array in the glove activates in the appropriate pattern to guide the student to where the instructor is pointing. The system assumes that the student's palm is down, as shown in Fig. 1E. If not, the system will not "see" the marker on top of the student's reading finger and will stop sending directional signals. In essence, our Haptic Deictic System provides the student with awareness of the instructor's pointing behavior. Simultaneously, the iMac screen gives the instructor awareness of where the student is reading and allows him to adapt instruction to the SBVI's point of attention the way he would adapt to the gaze behavior of sighted students.

Fig. 1. The Haptic Deictic System—HDS.
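The paper does not give the disparity-to-direction computation itself, but the step it describes can be sketched in a few lines. The following Python sketch is ours, not the authors' code; the coordinate convention, dead-zone radius, and function names are assumptions made for illustration.

```python
import math

# The eight cardinal and ordinal directions the glove can signal,
# ordered counterclockwise from east, one per 45-degree sector.
DIRECTIONS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]

def direction_signal(instructor_xy, student_xy, dead_zone=10.0):
    """Map the disparity between the instructor's pointing target and the
    student's reading hand (both mapped into the raised-line graphic's
    coordinate frame) to one of eight directions, or None when on target."""
    dx = instructor_xy[0] - student_xy[0]
    dy = instructor_xy[1] - student_xy[1]
    if math.hypot(dx, dy) < dead_zone:      # close enough: stop signaling
        return None
    angle = math.degrees(math.atan2(dy, dx)) % 360
    return DIRECTIONS[int((angle + 22.5) // 45) % 8]  # nearest 45-degree sector

def update(instructor_xy, marker_xy):
    """If the fingertip marker is not visible (palm not down), send nothing
    rather than a stale direction, mirroring the behavior described above."""
    if marker_xy is None:
        return None
    return direction_signal(instructor_xy, marker_xy)
```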

The color-based tracking is preferred because it eliminates noise, is computationally cheap, and its calibration procedure increases system portability. Before its first use in a given environment, a special program is run to capture "snapshots" of both teacher and student pointing and to build a color model of the wand tip and of a marker on top of the student's reading finger. No special camera or other equipment is required.
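The tracking code itself is not published here; the sketch below shows one common way such calibration-plus-color-tracking can be implemented with OpenCV. The HSV percentile thresholds and function names are our assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def build_color_model(snapshots, roi):
    """Calibration: from snapshot frames with the wand tip (or the finger
    marker) inside a known region of interest, derive an HSV color range."""
    x, y, w, h = roi
    samples = np.vstack([
        cv2.cvtColor(f[y:y+h, x:x+w], cv2.COLOR_BGR2HSV).reshape(-1, 3)
        for f in snapshots
    ])
    lo = np.percentile(samples, 5, axis=0).astype(np.uint8)
    hi = np.percentile(samples, 95, axis=0).astype(np.uint8)
    return lo, hi

def track(frame, color_model):
    """Per-frame tracking: threshold on the calibrated color range and
    return the centroid of the matching pixels, or None if not seen."""
    lo, hi = color_model
    mask = cv2.inRange(cv2.cvtColor(frame, cv2.COLOR_BGR2HSV), lo, hi)
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])
```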

2.1 The Haptic Glove

Fig. 2 is a collage of pictures showing relevant aspects of our haptic glove.

Fig. 2A shows the ideal placement of the actuators on the palm to convey each of the cardinal and ordinal directions. Figs. 2B through 2D display the glove from different vantage points. One can observe that the fingertips are free, some bending is possible, and the glove covers almost the entire palm. This is the result of placing the actuators as far apart from each other as possible. The glove is built in layers to increase its robustness and facilitate maintenance (Fig. 2E). The external layer of the glove provides high robustness. This is needed because of the extra wear placed on the glove when it is pulled on and off of the hand and adjusted for comfort. The internal layer, where the wires and motor connections reside, is protected from external manipulation. For maintenance, all one has to do is remove the outer layer to access the wiring and motors. The cylinder shown in Fig. 2F is a copper tube in which the vibrating motors are placed. The copper tubes, containing the vibrating motors, are kept within pockets to facilitate access to one motor without compromising the wiring. Also in Fig. 2F, one can see a piece of hard plastic coming out of the black cloth. This plastic has two main functions. The first is to serve as extra protection for the wires and their connections to the motors. The second is to provide a surface to be used for a gentle increase in the pressure between the motors and the palm. The hard plastic, in conjunction with the straps, helps the user adjust the pressure of the motors on the hand. The sponge (Fig. 2G) allows the glove to conform to individual differences in hand anatomy. As the user pulls the straps, the sponge helps the glove's internal surface mold to the palm. The motors are mounted on the side of the glove in contact with the palm. The wires that come out of the motors pass through the sponge and are soldered on the other side, protected by the hard plastic.

Fig. 2. Our haptic glove.

Each motor or actuator (Jameco part number 256241, 100 leads, 5 mm diameter) has a unique address, which allows individual control of vibration time and intensity. The actuators are controlled by a PIC18F452-I/P microprocessor. The PIC18F452-I/P has four independent 8-pin output control registers for simultaneous control of up to 32 devices. We employ the pulse-width control mechanism of the microcontroller to produce a sense of varying intensity. With a duty cycle of 10 msec, the microcontroller sets the motor to vibrate at its highest intensity (11,000 RPM); shorter duty cycles produce less intense vibrations. Accelerometers are normally used to measure vibration. However, we cannot put accelerometers on the gloves once they are assembled. Therefore, we were not able to quantify the actual vibration the glove user experiences.

Because each actuator can be controlled individually, we can create and test different signaling patterns to convey a particular direction. Patterns are stored in data files, allowing easy maintenance. A pattern is stored as a list of 3-tuples (ID, V, td) representing an "activation command": ID specifies the actuator ID; V the intensity of vibration (duty cycle in msec); and td the time delay before performing the next command. A time delay of zero milliseconds represents simultaneous activation. For example, "5 10 0, 7 10 30, 3 8 0, 4 8 10, 5 0 0, 7 0 30, 3 0 0, 4 0 0" tells a glove to fire up actuators 5 and 7 at intensity 10 simultaneously, wait 30 msec and then activate actuators 3 and 4 at intensity 8, wait 10 msec and stop actuators 5 and 7, and wait 30 msec then stop actuators 3 and 4.
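The 3-tuple format above is concrete enough to sketch an interpreter for it. The parsing and playback below are our illustration of the stated semantics; send_command is a hypothetical stand-in for the serial link to the PIC18F452 controller.

```python
import time

def parse_pattern(text):
    """Parse a stored pattern string of comma-separated 'ID V td' triples."""
    return [tuple(int(v) for v in chunk.split()) for chunk in text.split(",")]

def play_pattern(commands, send_command):
    """Execute activation commands in order. td = 0 means the next command
    fires simultaneously; V is the duty cycle in msec, with 0 stopping the
    actuator."""
    for actuator_id, intensity, delay_ms in commands:
        send_command(actuator_id, intensity)
        if delay_ms > 0:
            time.sleep(delay_ms / 1000.0)

# The example from the text: pulse actuators 5 and 7, then 3 and 4.
pattern = parse_pattern("5 10 0, 7 10 30, 3 8 0, 4 8 10, 5 0 0, 7 0 30, 3 0 0, 4 0 0")
play_pattern(pattern, lambda aid, duty: print(f"actuator {aid} -> duty {duty} ms"))
```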

3 LITERATURE REVIEW

3.1 Multimodal Discourse, Mathematics Instruction, and the Blind

Human discourse is embodied: when we speak, we employ our physical resources of gesture, gaze, head orientation, and body to convey meaning not available in speech [2, p. 128], and to situate discourse within the interlocutor's physical environment [3], [4, pp. 43-46]. Visual-spatial reasoning is essential to understanding and producing mathematics and science discourse. In "The Emperor's New Mind," the mathematician and physicist Roger Penrose wrote: "almost all my mathematical thinking is done visually and in terms of nonverbal concepts, although the thoughts are quite often accompanied by inane and almost useless verbal commentary, such as 'that thing goes with that thing and that thing goes with that thing'" [5, p. 424]. Penrose is arranging his ideas in some kind of conceptual space, which is indexed by deictic expressions [6, p. 40]. The use of gestures and embodied behavior in mathematics discourse has been well documented [2, pp. 164-168]. Although SBVI do not have the opportunity to observe gestures, it has been shown that they use gestures themselves. Iverson and Goldin-Meadow [7] showed that congenitally blind speakers gesture at the same rate as the sighted. Goldin-Meadow [8] also found that speakers who are blind gesture routinely even though they themselves have never seen gestures. For McNeill, the fact that the congenitally blind can gesture is evidence of the speech-gesture bond [6, p. 26]. Hence, we posit that an impediment to SBVI is the barrier to participation in mathematics and science imposed by their lack of access to the salient content in multimodal instructional discourse.

By multimodal instruction, we mean that the student has to fuse three information streams: the speech stream, the tactile rendering of some graphic that accompanies the instruction, and the deictic gestures of the instructor that situate the speech with the graphic both temporally and spatially (i.e., pointing that synchronizes with the vocal utterance). Of the three information sources, providing the SBVI access to the teacher's gestural actions poses the most significant research challenge.

3.2 Sensory Substitution

Vision and touch use the same strategies to observe the geometry of surfaces, which can be flat or curved [9, p. 9]. In vision, Gibson [10] argues, variations in color and contrast give us clues to identify objects by delineating their edges. Touchers also employ the "edge strategy" to identify objects. Edges appear in line drawings as they do in vision [9, p. 6]. For Bach-y-Rita [11], edges seem to convey information crucial to pattern recognition. D'Angiulli et al. [12] showed that congenitally blind and blindfolded sighted individuals have similar performance when recognizing tactile images if they are properly guided. This suggests that SBVI should not be excluded solely on the grounds of an inability to perceive graphical instructional media. In addition, it has been shown that congenitally blind adults process spatial images in a similar way to the sighted [13], although this processing requires slightly less time among the sighted [14]. The strategy of substituting haptics for vision in accessing inherently visual media poses significant challenges, which manifest themselves in inevitable delays in teaching SBVI as compared with the sighted [15]. Contributing to these challenges are the facts that vision is a highly parallel sense [16] while touch is sequential [10], and that the skin has far poorer resolution than the retina [17].

3.3 Haptic Systems and Teaching Mathematics to the SBVI

In this section, we start by summarizing other haptic systems designed to help SBVI learn mathematics. At its end, we compare our solution to the systems presented.

McGookin and Brewster [18] discuss their work on MultVis, a project to allow visually impaired users to build and explore mathematical graphs. They developed a software system, the graph builder, that works in conjunction with SensAble Technologies' PHANTOM haptic device [19]. PHANTOM is a force-feedback device that makes it possible for users to touch and manipulate virtual objects. The graph builder programs the PHANTOM so that the SBVI can explore graphs through kinesthetic interaction. The authors, however, were not encouraged by the study results: ". . . it appears they (the SBVI) do not perceive the same benefit in using graphs as would a sighted person."

The Talking Tactile Tablet (T3) [20] is a tablet on which a raised-line document can be placed. Users can explore the graph in much the same way they do with the raised-line paper alone. The difference is the interaction: the user can press on the graphic, which plays a prerecorded explanation of that area of the graph.

The VTPlayer [21] mouse has a collection of pins that raise according to the values of the pixels directly surrounding the mouse pointer. The VTPlayer, the Talking Tactile Tablet, and the PHANTOM were compared by Wall and Brewster [22], [23]. The authors reported that participants who used the T3 to explore graphs had the best performance, followed by those who used the PHANTOM. VTPlayer users confused the mouse with refreshable braille displays; they did not like the display size, and its low resolution made it almost impossible to understand the image they were investigating [23]. Jansson and Pedersen also found that blind subjects had difficulties with the haptic mouse [24].

Manshad and Manshad [25] put actuators on the fingertips of users with visual impairments to guide them while exploring graphics plotted on a touch screen. The participants were asked to locate a line, a circle, and a parabola. The authors also asked the participants to find similar functions, but this time using sound cues. Those who navigated with the help of the glove located the graphs faster.

All the haptic systems discussed above provide means of exploring graphical content. However, this is not enough: teaching and learning constitute a cooperative process. The most important difference between the HDS and the other haptic systems designed to teach mathematics to the SBVI is that ours enables teacher/student real-time collaboration. To promote this collaboration, the HDS gives the teacher and the SBVI simultaneous access to the same illustration content, to which both parties can make deictic references. The importance of such shared representation in a collaborative task involving blind and sighted users was identified earlier by Winberg and Bowers [26]. This is corroborated by Lohse [27], who found that different graph representations can have a large impact on the time and effort needed to extract information, even if they represent the same information.

3.4 The Choice of the Hands for Conveying Tactile Stimuli

The hands, along with the face, have the advantage of being the skin regions of the human body with the lowest frequency-detection thresholds [29]. This is important because it gives the opportunity to convey information (e.g., distance) by varying frequency.

The two-point discrimination threshold (TPDT) is the oldest and simplest measure of tactile spatial acuity [30, pp. 384-386]. It is usually defined as the minimal distance at which two simultaneous stimuli are distinguishable from a single stimulus [31]. The areas of the body with the lowest TPDT are: middle finger (2.5 mm), index finger (3.0 mm), thumb (3.5 mm), upper lip (5.5 mm), nose (8.0 mm), and palm (11.5 mm) [30, p. 386]. This is important because glove directional signals should be clearly perceived even by children wearing small gloves.

3.5 Other Haptic Gloves

In a comprehensive survey of glove-based systems, DiPietro et al. [32] discuss a plethora of applications in different fields. Several of the systems described are commercially available, like the CyberGlove [33], Human Glove [34], and 5DT data glove [35]. Most are data-acquisition devices and can capture different aspects of hand movement. In our case, we use the glove as a display. We found only two research projects using haptic gloves for that objective: one by Zelek [34] and the other by Manshad and Manshad [25].

Zelek's [34] glove is part of a portable navigation system that receives input from a camera. Once obstacles or other objects of interest are identified, their positions and distances are signaled to the user through vibrations transmitted by the glove. Zelek placed the glove's actuators on the dorsal part of the fingers. Acknowledging that the dorsal part is less sensitive than the ventral, he argues that the fingertips need to be free to read braille. Vibration on the pinky means the presence of an obstacle to the left; on the index finger, an obstacle to the front; on the thumb, an obstacle to the right. The stronger the vibration, the closer the obstacle is.

The Manshads' glove is part of a system designed to help SBVI in a multimodal exploration of graphs. Like Zelek's, the Manshads' glove has its vibrating motors placed on the dorsal part of the fingers. The signaling patterns were also simple and similar to Zelek's scheme, with the difference that there was no change in vibration to convey target proximity.

Both gloves were successful in conveying directional signals to their users while performing their tasks. The scenario/task in which our glove will be used is different in that it involves instructor/student interaction during concept conveyance. The student will have to fuse information she receives from the instructor's discourse and from the haptic exploration of raised-line instruction material. This is a complex, multimodal, multitask setting.

4 ON TO PERCEPTION

In this section, we discuss how previous research influenced our consideration of alternatives for conveying the Disparity Vector (DV)—direction and distance—through our haptic glove.

Tan et al. [35] designed a 3-by-3 vibro-tactile array that was sewn between two supporting layers of fabric so it could be draped over the back of an office chair. With this configuration, eight directions could be sent: north, south, east, west, northeast, northwest, southeast, and southwest. In their experiments, participants were able to distinguish the directional signals without being briefed on them. Van Erp [36] also employed the "eight directions strategy" in his waist vest and likewise reports good results. In addition, Kaczmarek and Webster [31] found that the mapping from the sensory experience to the direction itself can be easily learned.

The strategies discussed above seem simple and are proven to work. However, we need to be cautious: ours is a different device, to be worn on a different body part, and it will be used in a different task. It is important that the haptic-aided navigation not compete for attentional resources with attending the lecture. For that, we need maximum perceptual salience, and that is worth a deeper investigation.

In his Guidelines for the use of vibro-tactile displays in human computer interaction, Van Erp [37] suggests the examination of four parameters related to vibro-tactile perception: magnitude, frequency, timing, and location. We shall consider each parameter in the light of our specific device/task.

Magnitude is related to the intensity level of the vibration. Craig [38] showed that it is possible for the user to perceive different levels of vibration, and therefore using such differences to convey information seems feasible. Van Erp [37], on the other hand, suggests caution in the use of this parameter, advocating the use of no more than four different levels of vibration. A potential use for this parameter is to signal target distance; Van Erp used it in his waist vest [36], but his participants did not benefit from it. Tan and colleagues did not use different vibration intensities. During our pilot testing, we could distinguish several levels of vibration intensity. However, we decided to follow Van Erp's advice and be very economical in this matter: only one level was used, the highest. This is because, according to Van Erp's experiment, the different levels were useless when it came to signaling distance. We need maximum signal salience to facilitate discrimination and improve task performance. As Van Erp posited, direction is more important than distance.

It is hard to distinguish frequency from magnitude in a vibro-tactile stimulus [39], and it is nearly impossible to make such distinctions using the type of motors we use. In our case, it all comes down to the time electric current is sent to the motor: if it is long enough, the motor will spin at its full speed. That means that at top speed the magnitude will be at its peak, and so will the frequency. In all cases, both parameters go hand in hand, and we therefore make no distinction between them. From this point on, we refer to both parameters as intensity level, meaning how the human user perceives the level of magnitude/frequency.

When it comes to temporal sensitivity, we found during pilot testing that if cell-phone vibrating motors receive electrical current for as little as 30 ms, they produce detectable vibration. This is below the 50 ms delay that Jay et al. [40] found capable of affecting task performance. Furthermore, humans can perceive two vibro-tactile stimuli as two, as long as they are separated by an interval as short as 10 ms [41], [42]. We began testing different ways of signaling the DV based on these parameters.

Temporal enhancement occurs when a stimulus on a different part of the body replaces the current one. When such replacement happens, the second stimulus is perceived with a greater magnitude than the "old" one [43]. This phenomenon can benefit the student: during navigation, the second signal will always indicate the new direction to move the hand, and the sooner the student is aware of that, the better.

There is also the temporal summation phenomenon: when the vibrotactile stimulus time increases, the threshold drops [44], increasing the possibility of detection. To benefit from this finding, we decided to keep the glove vibrating at all times. However, the user can always stop and resume the vibration whenever she wants. To stop the vibration, all the user has to do is move her hand out of the tracking area; conversely, moving the hand back into the area will cause the vibration to resume.

Another interesting phenomenon in tactile signaling is sensory saltation, first observed at the Princeton Cutaneous Communication Laboratory. The researchers put three mechanical tactors on the participant's forearm and delivered three short pulses to three different positions on the arm. The first position was closest to the wrist, the third closest to the elbow, and the second midway between them. The participants felt as if a little rabbit were hopping on their arms from wrist to elbow, reports Geldard [45]. Kirman [46] also observed that apparent movement increases as a power function of increasing stimulus duration.

Considering the discussion above, we tried several different patterns for each direction to find one that produces a strong, clear, and short signal. We wanted to increase signal salience and thereby reduce memory load [47, p. 41]: the lower the memory load, the better our chances of enabling multimodal interaction. The direction was clearly perceived when signaled through the saltatory pattern. However, this approach costs precious milliseconds to completely send a directional signal, because the start-stop interval between the vibrations has to be long enough for the sequencing to be perceived. This pattern, although very easily perceived, was therefore abandoned because of the long duration of the signal. To keep the signal short, we decided to use only one actuator per direction. The vibration duration was empirically set to 30 milliseconds; with less than that, the motors do not spin at full speed, making the vibration less perceivable.
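Putting these design decisions together, each direction reduces to a single 30 ms full-intensity pulse on one actuator. The sketch below expresses that in the pattern format of Section 2.1; the actuator ID assignments are placeholders, as the real mapping follows the palm layout of Fig. 2A.

```python
# Hypothetical actuator IDs for the eight palm positions of Fig. 2A.
DIRECTION_TO_ACTUATOR = {
    "N": 1, "NE": 2, "E": 3, "SE": 4,
    "S": 5, "SW": 6, "W": 7, "NW": 8,
}

PULSE_MS = 30        # below 30 ms the motors do not reach full speed
FULL_INTENSITY = 10  # only the highest level is used, for maximum salience

def pattern_for(direction):
    """Build a single-actuator start/stop pattern string in the same
    'ID V td' format used by the glove's pattern files."""
    actuator = DIRECTION_TO_ACTUATOR[direction]
    return f"{actuator} {FULL_INTENSITY} {PULSE_MS}, {actuator} 0 0"
```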

4.1 First Formal Studies

The tasks and findings (all statistically significant) presented in this section are detailed in [48]. We went through four cycles of research and development until we arrived at the glove shown in Fig. 2. In the first generation, we had actuators assembled in three different configurations (round, rectangular, and square). Twenty-five sighted participants wore those original glove models on either their dominant or nondominant hand in a between-subjects study (each participant with a particular glove/hand configuration). Sighted participants helped in these initial studies mainly due to recruiting and experimental design issues: it is much easier to recruit sighted participants in numbers that would give us the statistical power to reach significant conclusions. Such an approach was backed by the consensus that any advantage of the blind is due not to heightened sensitivity, but rather to the development and refining of perceptual skills with practice [49]. Sighted participants were used only in these early studies.


We devised three perception-related tasks. In the first task, a random signal (north, northeast, etc.) was sent through the glove and participants were required to tap on the arrow corresponding to the direction perceived, as quickly and accurately as possible (Fig. 3a). A one-way ANOVA (α = 0.05) found a main effect of glove configuration, F(2,21) = 10.2170, p = 0.0008. The rectangular configuration yielded better results: mean hit rate 79.16 percent (n = 8, lower 95 percent: 67.25 percent, upper 95 percent: 89.00 percent, std. error: 5.22). The analysis found no effect of hand dominance, F(1,22) = 0.8033.

The second task was designed to test whether the vibration on the palm would somehow impair tactile reading (Fig. 3b). Blindfolded participants successfully read the information from a tactile board while the glove was vibrating. A one-way ANOVA (α = 0.05) found no significant difference between glove configurations, F(2,24) = 1.1053, p = 0.3496. More importantly, the hit rate was well above chance, with the worst case being the round configuration with a mean hit rate of 87.50 percent (n = 8, lower 95 percent: 75.14 percent, upper 95 percent: 99.86 percent). Furthermore, no loss of sensitivity was reported.

In the third task, stories were told to the blindfolded participants. A typical sentence in the stories would be: "My friend lives in a . . . bedroom apartment." The missing information (the number of bedrooms) had to be acquired by reading the number embossed on a tactile board (Fig. 3b). The haptic glove helped the participants navigate to the number while they were listening to the story. Four questions were asked about each story right after it was told. Over 80 percent of the questions were correctly answered (μ = 3.21 of four questions per story, n = 113, σ = 0.8603). To give the right answer, participants had to use information acquired both from hearing the story and from tactile exploration of the numbers presented on the board.

Although we did not find any statistically significant differences between the glove configurations in tasks II and III, the round configuration always lagged behind and the rectangular was always ahead. Hence, we decided to evolve only the latter configuration and produce new designs aimed at increasing robustness and signal salience.

5 ON TO DISCOURSE

Prior experiments established the efficacy of the device/system in passive interaction situations. However, the success of complex combinations of perception using haptics for sensory replacement is not obvious [50]. Our approach requires students and instructors to engage in dynamic interactive discourse. Discourse engages a broad band of cognitive resources, and multimodal discourse requires tacit fusion of information while engaging in comprehension and discourse maintenance. More than discourse, learning is only possible through dialog between tutor and apprentice. In a dialog, there is social pressure to keep the conversation flowing. Furthermore, the burden is heavier on the listener's side [51, p. 140], which in our case is typically the student. Wilson [52] suggested that "when confronted with novel cognitive or perceptuo-motor problems, humans predictably fall apart under time pressure." Therefore, support for fluent instructional discourse needed to be demonstrated.

For that, we devised a game we call the phrase charade, which probes the capacity for such active engagement.

5.1 Charade Game and Study Design

The charade game is played in pairs: a sighted guide and a follower who is blind. The guide points at letters on a poster placed at the front of the room (Fig. 4). The follower has a Braille version of the letter grid on top of her desk. The guide points to a letter and the follower, using the guidance from the glove's vibrations, has to find and read the letter. Once the letter is found, the guide points to the next letter. The follower has to put the letters together and form words. Words that make up a "clue phrase" are hidden in the grid as horizontal, vertical, or diagonal letter chains that can run in any direction. Clue phrases have some association with a catch phrase (the charade's solution). For instance, the clue phrase "Blink blink small sun" is associated with the familiar catch phrase "Twinkle twinkle little star." The cognitive activity of the discourse dynamic is represented by the catch phrase that the followers have to discover from the clue phrase. The HDS was deployed to facilitate guide-follower dialogue.
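To make the grid mechanics concrete, the sketch below shows how a clue word hidden in the letter grid can be located, giving the guide the ordered cells to point at. This is our illustration of the game's layout rules, not software from the study.

```python
# The eight orientations a hidden word may run in: horizontal, vertical,
# or diagonal, forward or backward.
STEPS = [(0, 1), (0, -1), (1, 0), (-1, 0), (1, 1), (1, -1), (-1, 1), (-1, -1)]

def find_word(grid, word):
    """Return the (row, col) cells spelling `word` in the letter grid, in
    pointing order, or None if the word is not present."""
    rows, cols = len(grid), len(grid[0])
    for r in range(rows):
        for c in range(cols):
            for dr, dc in STEPS:
                cells = [(r + i * dr, c + i * dc) for i in range(len(word))]
                if all(0 <= rr < rows and 0 <= cc < cols and
                       grid[rr][cc] == word[i]
                       for i, (rr, cc) in enumerate(cells)):
                    return cells
    return None
```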

5.2 First Charade Study

Four Wright State University (WSU) undergraduates, all with no usable vision, answered our recruitment efforts and participated in the first charade study. Two females (N, blind at the age of 16, and M, blind at 2) and two males (R, blind from birth, and G, blind at 13) played the role of followers. Two sighted WSU graduate students (one male and one female) with teaching experience were recruited as guides. Each guide worked with a male and a female follower. Each guide-follower pair was given instruction on the HDS and the game. They were permitted to familiarize themselves with the technology and the charade by playing a practice game: one pair of catch/clue phrases was randomly selected and used for the practice. Each pair then played the three remaining catch/clue phrase games.

Fig. 3. Perception-related tasks.

Fig. 4. The Phrase Charade game.

5.2.1 Data Coding and Analysis

We recorded the exchange and coded the speech, gesture, Braille-reading activity, and the synchronous activity using the MacVisSTA [53] system. The discourse was coded for "reference chains" at three levels: object level, metalevel, and para level [54]. At the object level, the referential focus of the dialog is the conversants' joint project [55], taking direct actions to solve the charade, whereas at the metalevel, the subject is the discourse itself. Turns related to discourse repair were also coded as metalevel. Para-level utterances relate to individual experience and to references to people and objects vividly present in the speaker's environment. Words of encouragement like "There you go," "take your time," etc., were coded as belonging to the para-level reference chain. This coding allows us to understand the joint problem-solving activity of the participants, the fluency of the discourse (whether there was excessive metalevel conversation to repair the discourse), and the degree to which the discourse focused on the problem or on the technology itself. A usability postquestionnaire was also completed by participants.

5.2.2 Study Results

After the trials, we asked the followers 23 questions and the guides 19. The questions took the form of a set of statements to which the participants responded on an agreement Likert scale (1 = strongly disagree to 5 = strongly agree, with 3 being no opinion). We shall now discuss how they perceived the experience.

All followers strongly agreed that they could perform better with practice, and that they are willing to participate in future experiments because they believe this technology will help SBVI. All but one follower said they would rather use the system than have someone physically moving their hand to indicate particular places on the raised-line diagrams. All but one follower reported that they could read the letters while talking to the guide and that using the system did not interfere with their thinking about the charade solution.

As for the guides, they all agreed (or strongly agreed) that: the followers could access where they were pointing; they used pointing to clarify themselves and avoid misunderstandings; they were able to speak less and be more precise; the followers' pointing was precise (all strongly agreed); and the instructor's display helped them match their speech to the follower's pace.

While all participants were able to complete the study, we found the discourse rather cumbersome. An overall mean of 60.30 percent (n = 12, σ = 7.309) of the conversational turns were coded as belonging to the Object Level thread. With such a low percentage, we judged the system to have failed, in the first charade study, to facilitate fluent instructional discourse. Much of the non-Object Level discourse was centered on the use of the glove. This is particularly troubling for an assistive device, because research shows that when conversants feel that the technology is "getting in the way" of their interaction, they are likely to quit using it [56].

5.3 Embodied Skill

Analysis of the first charade study shows that deciphering directional signals was performed by participants as a secondary task. This task needs to become automatic in order for the system to become practical for student use. The acquisition of embodied skill [57] may be conceptualized as the automatizing of tasks that were previously under conscious control. The difference, according to Wickens and Hollands [47, p. 175], is that the controlled task demands attentional resources whereas the automatic one does not. According to the same authors, extensive perceptual experience and consistency of responses are necessary ingredients for a task to become automatic [47, p. 169]. Extensive training also helps to eliminate decrements in sensitivity [58]. Furthermore, training improves task performance by improving tactile discrimination, increasing activation of the somatosensory cortical areas representing the stimulated body part [59]. Such training can be achieved by game playing, as long as the game is engaging.

5.3.1 Designing a Challenging Activity

The skill-training game is designed to produce a set of graduated challenges that engage the subject at successive levels of difficulty as their skill improves. To aid in this design, we employed the flow concept of optimal experience developed by Csikszentmihalyi [60], whose model has been adapted to the design of computer games [61]. Flow is a feeling of "complete and energized focus in an activity, with a high level of enjoyment and fulfillment" [60]. When we are in the flow zone, we are so engaged and focused that we lose track of time and worries [61].

We modeled our game in a fantasy context patterned loosely on the television and movie series Mission Impossible. Table 1 shows the game setup, which is read to the player who is blind using text-to-speech when the game starts. We make extensive use of speech and sound effects as a means to promote engagement by providing a rich multimodal experience. The subject has to move her hand over a "gameboard" under the guidance of the HDS. When her hand comes within some level-defined distance from the target, the "bomb" is disarmed. Target coordinates are randomly assigned. Direct and immediate feedback is provided by the tracking and the haptic glove. The need for concentration is clear: there is a goal to be accomplished within a time frame, and the player has to focus on the task or she will fail.

TABLE 1. Game Instructions for the First Phase.

The game comprises three levels (Fig. 5). Each level lasts 2 minutes, and the player must find the codes and disarm a number of warheads at each level within that time. In level one, 10 targets must be found; in levels two and three, the numbers of targets to be found are 20 and 40, respectively. As the level of game play increases, the required distance of the hand from the target point decreases successively. When the player gets to the target, the system emits the sound of a small detonation indicating the destruction of the target device. Together with the Mission Impossible theme music, this audio feedback is designed to enhance the sense of immersion in the game.
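The level structure lends itself to a small configuration table. In the sketch below, the target counts and the 2-minute limit come from the description above, while the shrinking hit radii are illustrative placeholders, since the actual distances are not given.

```python
from dataclasses import dataclass

@dataclass
class Level:
    targets: int       # warheads to disarm within the time limit
    hit_radius: float  # required hand-to-target distance, shrinking per level

LEVEL_SECONDS = 120
LEVELS = [Level(targets=10, hit_radius=40.0),   # level one
          Level(targets=20, hit_radius=25.0),   # level two
          Level(targets=40, hit_radius=15.0)]   # level three
```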

The faster the player navigates, the more points she earns. The program records the highest score each player accumulates, along with the most advanced level attained. To add a degree of competition, the system speaks the name and score of the top player before articulating the game instructions. Five students, each with no usable vision, participated in the game studies: three females (A, N, and O) and two males (R and G). N, R, and G had also participated in the first charade study.

5.3.2 Methodology and Study Results

Each participant was required to play the game for three one-hour-long sessions on different days. We detected important performance gains in all participants. The mean times to target were 6,205.94, 5,029.79, and 4,685.09 milliseconds for trials 1, 2, and 3, respectively. The difference from day 1 to day 2 was statistically significant (paired Student's t, n = 1,334, df = 2, t = 1.801, p < 0.0001). Our glove can convey either eight different directional signals (north, northeast, east, etc.) or only four (N, E, S, W), so we wanted to know which signaling pattern would yield better performance. Among the three trials, the participants were required to test both signaling patterns: they could do two trials with one pattern and one with the other. When the glove signaled eight directions, participants reached targets in less time (one-way ANOVA, α = 0.05, F(1,1549) = 5.4304, p = 0.0199).

All participants reached level 2 in their first trial and three of them reached level 3. Interestingly, some of the trials lasted more than one hour because the participants wanted to break the current record. No signs of desensitization or fatigue were observed. A complete discussion of the game study can be found in [1].

Fig. 5. The Mission Impossible game.

5.4 Revisiting the Phrase Charade

Our results show performance gains for participants exposed to the training game. However, we needed to know whether the gains were transferable to the lecture-like task scenario. To answer this, we ran the charade experiment again, conducting this second charade study in the late spring of 2009.

Among those who participated in the second charade study, three participants (R, N, and G) were also present in the first charade. We had one dropout since the first charade: M. M was not invited to the following studies (the game and second charade) because she has other disabilities that cause loss of sensation in her palm. On the other hand, two new participants joined our studies starting from the game (A and O). Therefore, we first compare the pre- and postgame performances of R, N, and G. We report our analysis of the overall performance times, general discourse content, and game experience from the postcharade questionnaires.

5.5 Performance Gains and Task Focus

We compared the overall completion times of the phrase charade performed by our three repeat subjects in the spring of 2007 with those of our second phrase charade study. Each participant solved three charades in each of our studies, giving us nine sets of results in our 2007 study and potentially 15 sets of results from our 2009 study. One data set from a repeat subject in the second charade had to be excluded because of a system malfunction during that study. This yields eight sets of data from the repeat participants and six from the new participants in the 2009 study.

The completion times were markedly decreased in the second charade. For our repeat subjects, the new mean completion time was 238.50 seconds, as opposed to 443.88 seconds in the 2007 study. This is a 1.86-fold time difference (paired Student's t, df = 8, t = 2.1650, p-value = 0.0020). When we include all five subjects in the 2009 study, we have a mean completion time of 251.92 seconds. Our observations of the discourse content in the 2009 study showed virtually no references to the technology; almost all the speech was dedicated to solving the charade. In other words, the mean percentage of conversational turns devoted to solving the problem was 96.68 percent among the repeat subjects. In the first charade trial, the mean was 66.70 percent, a significant difference (paired Student's t, df = 8, t = 2.058, p-value < 0.0001). When we include all participants, we have a mean of 97.14 percent of the turns dedicated to problem solving. Fig. 6 shows the percentages of problem-solving (object level) turns in the two charade studies (before and after playing the game).

6 ON TO INSTRUCTION

It is important to note that, apart from training the SBVI, both the system as a whole and the haptic device were improved: a much more stable tracking algorithm was implemented and new glove models were introduced. We had participants who were able to use the system in deixis-laden discourse; however, instructional discourse is different from regular day-to-day conversation. The main difference is in the rate at which new higher-level information is transferred from instructor to student, and in the student's need to use it right away to understand the dynamically unfolding instructional discourse. We needed to test the system in actual instructional situations.

6.1 Inclusive Classrooms

We understand that our solution can be used to promote inclusive classrooms, where students with disabilities attend classes with their nondisabled peers. Inclusive classrooms are beneficial, in many respects, for all students [15], [62]. Furthermore, such inclusive instruction is required by law (e.g., the Individuals with Disabilities Education Act Amendments (IDEA, 1997) and the No Child Left Behind Act (NCLB, 2001)). Therefore, the discussion is not whether inclusive classrooms are good, but how to make them work. Reducing the achievement gap between typical students and those with disabilities requires effective educational methods for all students [63]. One of the most promising practices for helping students with disabilities succeed in the classroom is the use of technology [64].

6.2 Study Design

Our five trained SBVI (denoted b1 to b5 in Table 2) and 23 sighted WSU undergraduates (s1 to s23 in Table 2), who were majoring in non-mathematics-related programs, constituted seven groups. Five of the groups had one SBVI and three sighted students; the other two, with only sighted students, functioned as status-quo controls. All groups attended two three-lesson mathematics minicourses, on trigonometry (course A) and planar geometry (course B). Two instructors (t1 and t2 in Table 2) taught the courses (one instructor per course). The course A instructor was a WSU mathematics graduate student in their mid-20s with little teaching experience; course B was taught by a high school mathematics teacher with more than 30 years of experience. In a counterbalanced design, the groups who attended course A with the system took course B without it, while those who attended course A without the system took course B with it. Consequently, instructors also had the chance to teach the same course to inclusive classrooms both with and without the system. Student participants took oral exams on the courses' subjects both before and after classes. The exam audio was de-identified and given to an independent grader. Upon course completion, all participants were interviewed. The questions took the form of a set of statements to which they responded on an agreement Likert scale (1 = strongly disagree to 5 = strongly agree, with 3 being no opinion). They were also encouraged to comment on their choices.

Although designed and performed with scientific rigor, this study must be seen as exploratory due to the small number of trained SBVI, the different curricula, and the instructors' differing backgrounds.

6.3 Instruction Study Results

Both instructors strongly agreed that the HDS raised their awareness of the SBVIs' behavior. They mentioned that such awareness gave them the opportunity to: 1) adjust the pace of the lecture to ensure that all students were following them; 2) better understand the students' signs of confusion and therefore act upon them to ensure their understanding; and 3) act more naturally, as they did not have to think of how to verbalize the information displayed on the graphs. Instructors also noted that the system afforded more fluidity in the lecture because they did not have to walk up to the SBVI and physically replace her reading hand to ensure she was attending to the area of the graph under discussion. Both instructors agreed (or strongly agreed) that the use of the system should have a positive effect on learning due to the better instruction it afforded.

The SBVI were able to navigate faster than originally expected. They might have benefited from coherent multimodal cues: glove directional signals, the teachers' discourse, and the raised-line drawings. All the SBVI agreed (or strongly agreed) that the use of the system did not make them more tired than not using it, and none saw the system as an impediment to keeping up with the lecture. They all agreed (or strongly agreed) that instructors paid more attention to them in classes where the system was used. Three of them reported that, due to the system, they lost track of the instructor fewer times, while the other two had no opinion. The same three found it easier to understand mathematics concepts when the system was used, and the others were undecided. One of the "undecided" said that the charade experiment helped them get ready for instruction only up to some degree, that attending class is different, and that attending more classes (than the three lessons) would help.

Fig. 6. Percentage of conversational turns dedicated to solving the charade, before and after playing the Mission Impossible game.

TABLE 2. Instruction Study Design.

The sighted students agreed (or strongly agreed) that the system: 1) improved lecture fluidity; 2) made the SBVI more participative in classroom discussions; and 3) did not make the instructors pay less attention to them.

Both instructors, all the SBVI, and all the sighted students agreed (or strongly agreed) that the technology will be useful in the classroom.

Despite the congruent and positive experience of instructors and students, we could not reach a conclusion on learning: when we compared the grades from the oral exams, no statistically significant difference was found. This might be due to the study limitations discussed in the previous section. However, we understand that the introduction of the HDS made more perceptual evidence available for both instructor and SBVI to understand each other's behavior. This lowered the instructor/SBVI communicative effort, leaving more cognitive resources available for the conveyance and understanding of concepts.

6.4 Other Relevant Observations

Instructors need to understand the system's limitations and the impact of their actions on the SBVI's understanding when the HDS is used. For instance, in this study, the HDS was set to track a wand that instructors used to point at relevant portions of the figures. On some occasions, instructors motioned with the wand while performing gestures other than pointing. The HDS tracked and delivered those gestures, confusing the students.

It also became clear that the use of the HDS demands careful lecture planning. Curricula and lecture notes (including figures) were prepared by our research team. During debriefing sessions, instructors suggested the use of more figures. In the "all sighted" lectures, instructors drew several figures on the board to convey specific concepts; in the inclusive trials, those concepts were conveyed using specific areas of the figures given. The "ad hoc" figures were simpler and showed only the graphical aspect relevant to the concept being conveyed. This is corroborated by an instructor's opinion: "I would have prepared much more graphs." Thus, future curricula should have more, and simpler, figures. Another important issue relates to calculations performed during the lecture. Obviously, the SBVI could not follow the instructor's writing on the board, so the SBVI's class notes should include all the calculations the instructor will perform during the lecture. Furthermore, when calculations make references to elements of a figure, that figure must remain accessible throughout all the steps of the calculation.

7 CONCLUSIONS

We have presented our work on supporting instructional discourse where the verbal component is situated within a graphical presentation with deixis. We overviewed our system design and the results of our perception-action tests, which indicated that the technology was able to provide direction in conjunction with speech and fingertip reading. When we applied the system to a real discourse situation with heavier cognitive requirements, represented by our phrase charade game with our participants who are blind, we found that the discourse was labored, that inordinate attention was paid to the technology, and that insufficient resources were dedicated to the substance of the discourse.

We posited that the use of the assistive technology must become fully embodied and automatic before it can support dynamic, fluent instructional discourse. We advanced the idea of employing a skill training game to encourage the development of the necessary level of skill. The results of our skill training game studies suggest that this strategy is effective in encouraging skill development within a framework of fun.

We performed a second phrase charade study to determine whether the skill acquired through our skill training game would transfer into a discourse environment. The results of our second charade study show an almost twofold increase in speed over our first charade study, conducted before the skill training game. Furthermore, the portion of discourse devoted to discussing the technology practically disappeared.

Subsequently, our system was employed in mathematics classes attended both by SBVI and students with typical sight. For instructors, the technology allowed them to: 1) Adjust the pace of the lecture to ensure that all students were following them; 2) Better understand the students' signs of confusion and act upon them to ensure their understanding; and 3) Act more naturally, as they did not have to think of how to verbalize the information displayed on the graphs. Overall, instructors agreed that the use of the technology improved the quality of instruction. For the SBVI, the system made teachers pay more attention to them and did not make them more tired than not using it. For the sighted students, the system: 1) Improved lecture fluidity; 2) Made the SBVI more participative in classroom discussions; and 3) Did not make the instructors pay less attention to them.

One might question whether the amount of training required for the SBVI to benefit from the system in inclusive classrooms pays off. We argue that it does: once the embodied skill is developed, it frees cognitive resources, and the benefit is permanent [65]. Therefore, the earlier the training (in terms of school years), the better. Furthermore, we saw that such skill can be acquired in one context (the game) and used in another (mathematics instruction). Thus, it is reasonable to assume that such a system is helpful in teaching any subject that makes use of graphical illustrations.

The studies reported in this paper should be followed by a longitudinal one, in which more SBVI take full courses with and without the HDS. Such courses would be preceded by careful curricula and class-notes preparation, along with instructor training. The game and charade strategies should also be used as means to develop the necessary skills to ensure effective use of the technology during instruction.

The SBVI should be able to revisit relevant parts of the lecture as she prepares for exams. Future HDS versions will store the lecture audio and the instructor's tracking coordinates in a database. This will allow the SBVI to create markers at points of interest in the lecture simply by clicking a mouse or striking a key on a one-hand keyboard. Upon the "click," the HDS will store the time elapsed from the beginning of the lecture to the moment of the "click." An HDS sister system will be developed for study time. It will let the SBVI access the lecture database and revisit the points she marked as relevant. Such a system will also require the glove and the tracking of the SBVI's reading hand, as it will both direct the SBVI to where the instructor was pointing and play the audio of that particular point in the lecture. Such a feature should help the SBVI recreate some of the perceptual experience she had during the lecture and might help the memorization process. The SBVI would also be able to add her own audio notes to the lecture database.
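To make the proposed replay workflow concrete, the sketch below shows one way such a lecture store might be organized, assuming Python and SQLite purely for illustration. The paper specifies only that lecture audio, the instructor's tracking coordinates, and elapsed-time markers will be stored; the table layout, method names, and sampling scheme here are our assumptions.

```python
import sqlite3
import time

class LectureStore:
    """Hypothetical store for the proposed replay feature: wand
    coordinates are logged against elapsed lecture time, and a
    marker records only the elapsed time at the student's click."""

    def __init__(self, path="lectures.db"):
        self.db = sqlite3.connect(path)
        self.db.executescript("""
            CREATE TABLE IF NOT EXISTS track(
                lecture_id INTEGER, t REAL, x REAL, y REAL);
            CREATE TABLE IF NOT EXISTS marker(
                lecture_id INTEGER, t REAL, note TEXT);
        """)
        self.t0 = time.time()  # lecture start; times stored as t - t0

    def log_pointer(self, lecture_id, x, y):
        # Called for each tracked wand sample during the lecture.
        self.db.execute("INSERT INTO track VALUES (?,?,?,?)",
                        (lecture_id, time.time() - self.t0, x, y))
        self.db.commit()

    def add_marker(self, lecture_id, note=""):
        # Called when the SBVI clicks the mouse or strikes a key;
        # only the elapsed time (and an optional note) is kept.
        self.db.execute("INSERT INTO marker VALUES (?,?,?)",
                        (lecture_id, time.time() - self.t0, note))
        self.db.commit()

    def pointer_at(self, lecture_id, t):
        # At study time: find the wand position nearest a marker's
        # timestamp so the glove can guide the reading hand there
        # while the matching slice of audio is replayed.
        return self.db.execute(
            "SELECT x, y FROM track WHERE lecture_id=? "
            "ORDER BY ABS(t-?) LIMIT 1", (lecture_id, t)).fetchone()
```

At study time, the sister system would iterate over a lecture's markers, look up the nearest wand position for each (as in pointer_at above), drive the glove toward those coordinates, and cue the audio from the stored elapsed time.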

REFERENCES

[1] F. Oliveira, H. Cowan, B. Fang, and F. Quek, "Fun to Develop Embodied Skill: How Games Help the Blind to Understand Pointing," Proc. Third Int'l Conf. Pervasive Technologies Related to Assistive Environments (PETRA '10), 2010.
[2] D. McNeill, Hand and Mind: What Gestures Reveal about Thought. Univ. of Chicago Press, 1992.
[3] H.H. Clark and C.R. Marshall, "Definite Reference and Mutual Knowledge," Psycholinguistics: Critical Concepts in Psychology, p. 414, 2002.
[4] H.H. Clark, Using Language. Cambridge Univ. Press, 1996.
[5] R. Penrose, The Emperor's New Mind. Oxford Univ. Press, 1989.
[6] D. McNeill, Gesture and Thought. Univ. of Chicago Press, 2005.
[7] J.M. Iverson and S. Goldin-Meadow, "Why People Gesture as They Speak," Nature, vol. 396, p. 228, 1998.
[8] S. Goldin-Meadow, "The Role of Gesture in Communication and Thinking," Trends in Cognitive Sciences, vol. 3, no. 11, pp. 419-429, 1999.
[9] J.M. Kennedy, Drawing and the Blind. Yale Univ. Press, 1993.
[10] J.J. Gibson, "Observations on Active Touch," Psychological Rev., vol. 69, no. 6, pp. 477-491, 1962.
[11] P. Bach-y-Rita, Brain Mechanisms in Sensory Substitution. Academic Press, 1972.
[12] A. D'Angiulli, J. Kennedy, and M. Helle, "Blind Children Recognizing Tactile Pictures Respond like Sighted Children Given Guidance in Exploration," Scandinavian J. Psychology, vol. 39, no. 3, pp. 187-190, 1998.
[13] M. Jeannerod, "The Representing Brain: Neural Correlates of Motor Intention and Imagery," Behavioral and Brain Sciences, vol. 17, no. 2, pp. 187-245, 1994.
[14] N. Kerr, "The Role of Vision in 'Visual Imagery' Experiments: Evidence from the Congenitally Blind," J. Experimental Psychology: General, vol. 112, no. 2, pp. 265-277, 1983.
[15] T. Dick and E. Kubiak, "Issues and Aids for Teaching Mathematics to the Blind," Math. Teacher, vol. 90, no. 5, pp. 344-349, 1997.
[16] Y. Hatwell, A. Streri, and E. Gentaz, Touching for Knowing, pp. 67-83. John Benjamins Publishing Co., 2000.
[17] H. Segond, D. Weiss, and E. Sampaio, "Human Spatial Navigation via a Visuo-Tactile Sensory Substitution System," Perception, vol. 34, pp. 1231-1249, 2005.
[18] D. McGookin and S. Brewster, "MultiVis: Improving Access to Visualizations for Visually Impaired People," Proc. CHI Extended Abstracts on Human Factors in Computing Systems, 2006.
[19] Sensable, "Products and Services," http://www.sensable.com/, last checked 19 Mar. 2009.
[20] L. Wells and S. Landau, "Merging of Tactile Sensory Input and Audio Data by Means of the Talking Tactile Tablet," Proc. Eurohaptics, 2003.
[21] VTPlayer, http://vtplayer.sourceforge.net/, last checked 4 Jan. 2010.
[22] S.A. Wall and S. Brewster, "Sensory Substitution Using Tactile Pin Arrays: Human Factors, Technology and Applications," Signal Processing, vol. 86, no. 12, pp. 3674-3695, 2006.
[23] S.A. Wall and S.A. Brewster, "Tac-Tiles: Multimodal Pie Charts for Visually Impaired Users," Proc. Fourth Nordic Conf. Human-Computer Interaction: Changing Roles, 2006.
[24] G. Jansson and P. Pedersen, "Obtaining Geographical Information from a Virtual Map with a Haptic Mouse," Proc. 22nd Int'l Cartographic Conf. (ICC '05), 2005.
[25] M.S. Manshad and A.S. Manshad, "Multimodal Vision Glove for Touchscreens," Proc. 10th Int'l ACM SIGACCESS Conf. Computers and Accessibility, 2008.
[26] F. Winberg and J. Bowers, "Assembling the Senses: Towards the Design of Cooperative Interfaces for Visually Impaired Users," Proc. ACM Conf. Computer Supported Cooperative Work, pp. 332-341, 2004.
[27] G.L. Lohse, "Models of Graphical Perception," Handbook of Human-Computer Interaction, Elsevier, 1997.
[28] E. Mynatt and G. Weber, "Nonvisual Presentation of Graphical User Interfaces," Proc. ACM SIGCHI Conf. Human Factors in Computing Systems (CHI), 1994.
[29] R. Verrillo and G. Gescheider, "Perception via the Sense of Touch," Tactile Aids for the Hearing Impaired, pp. 1-36, John Wiley and Sons, 1992.
[30] R.J. Christman, Sensory Experience. Harper and Row, 1979.
[31] K. Kaczmarek and J. Webster, "Electrotactile and Vibrotactile Displays for Sensory Substitution Systems," IEEE Trans. Biomedical Eng., vol. 38, no. 1, pp. 1-16, Jan. 1991.
[32] L. Dipietro, A.M. Sabatini, and P. Dario, "A Survey of Glove-Based Systems and Their Applications," IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Rev., vol. 38, no. 4, pp. 461-482, July 2008.
[33] A. Mulder and S. Fels, "Sound Sculpting: Manipulating Sound through Virtual Sculpting," Proc. Western Computer Graphics Symp., 1998.
[34] J.S. Zelek, "A Haptic Glove as a Tactile-Vision Sensory Substitution for Wayfinding," J. Visual Impairment & Blindness, vol. 97, no. 10, pp. 1-24, 2003.
[35] H. Tan, A. Lim, and R. Traylor, "A Psychophysical Study of Sensory Saltation with an Open Response Paradigm," Proc. Ninth Int'l Symp. Haptic Interfaces for Virtual Environment and Teleoperator Systems, 2000.
[36] J.B.F. van Erp et al., "Waypoint Navigation with a Vibrotactile Waist Belt," ACM Trans. Applied Perception, vol. 2, no. 2, pp. 106-117, 2005.
[37] J.B.F. van Erp, "Guidelines for the Use of Vibro-Tactile Displays in Human Computer Interaction," Proc. Eurohaptics, 2002.
[38] J. Craig, "Difference Threshold for Intensity of Tactile Stimuli," Perception & Psychophysics, vol. 11, no. 2, pp. 150-152, 1972.
[39] G. Goff, "Differential Discrimination of Frequency of Cutaneous Mechanical Vibration," J. Experimental Psychology, vol. 74, no. 2, pp. 294-299, 1967.
[40] C. Jay, M. Glencross, and R. Hubbold, "Modeling the Effects of Delayed Haptic and Visual Feedback in a Collaborative Virtual Environment," ACM Trans. Computer-Human Interaction, vol. 14, p. 8, 2007.
[41] G. Gescheider, S. Bolanowski, and R. Verrillo, "Temporal Relations in Cutaneous Stimulation," Proc. Conf. Cutaneous Comm. Systems and Devices, pp. 33-37, 1974.
[42] L. Petrosino and D. Fucci, "Temporal Resolution of the Aging Tactile Sensory System," Perceptual and Motor Skills, vol. 68, no. 1, pp. 288-290, 1989.
[43] R. Verrillo and G. Gescheider, "Enhancement and Summation in the Perception of Two Successive Vibrotactile Stimuli," Perception & Psychophysics, vol. 18, no. 2, pp. 128-136, 1975.
[44] R. Verrillo, "Temporal Summation in Vibrotactile Sensitivity," J. Acoustical Soc. Am., vol. 37, no. 5, pp. 843-846, 1965.
[45] F.A. Geldard, Sensory Saltation: Metastability in the Perceptual World. Lawrence Erlbaum Assoc., 1975.
[46] J. Kirman, "Tactile Apparent Movement: The Effects of Interstimulus Onset Interval and Stimulus Duration," Perception & Psychophysics, vol. 15, no. 1, pp. 1-6, 1974.
[47] C.D. Wickens and J.G. Hollands, Engineering Psychology and Human Performance. Prentice Hall, 2001.
[48] F. Oliveira and F. Quek, "A Multimodal Communication with a Haptic Glove: On the Fusion of Speech and Deictic over a Raised Line Drawing," Proc. First Int'l Conf. Pervasive Technologies Related to Assistive Environments (PETRA '08), 2008.
[49] K. Sathian, "Practice Makes Perfect: Sharper Tactile Perception in the Blind," Neurology, vol. 54, no. 12, pp. 2203-2204, 2000.
[50] C. Spence, M. Nicholls, and J. Driver, "The Cost of Expecting Events in the Wrong Sensory Modality," Perception & Psychophysics, vol. 63, no. 2, pp. 330-336, 2001.
[51] H.H. Clark, Arenas of Language Use. Univ. of Chicago Press, 1992.
[52] M. Wilson, "Six Views of Embodied Cognition," Psychonomic Bull. and Rev., vol. 9, no. 4, pp. 625-636, 2002.
[53] R.T. Rose, F. Quek, and Y. Shi, "MacVisSTA: A System for Multimodal Analysis," Proc. Sixth Int'l Conf. Multimodal Interfaces, 2004.
[54] D. McNeill, "Gesture, Gaze, and Ground," Proc. Second Int'l Workshop Machine Learning for Multimodal Interaction, 2006.
[55] A. Monk, Common Ground in Electronically Mediated Communication, pp. 265-286. Morgan & Claypool Publishers, 2003.
[56] D.G. Tatar, G. Foster, and D.G. Bobrow, "Designing for Conversation: Lessons from Cognoter," Int'l J. Man-Machine Studies, vol. 34, no. 2, pp. 185-209, 1991.
[57] P. Dourish, Where the Action Is: The Foundations of Embodied Interaction. MIT Press, 2001.
[58] A.D. Fisk and A. Schneider, "Controlled and Automatic Processing During Tasks Requiring Sustained Attention," Human Factors, vol. 23, no. 6, pp. 737-750, 1981.
[59] A. Hodzic et al., "Improvement and Decline in Tactile Discrimination Behavior After Cortical Plasticity Induced by Passive Tactile Coactivation," J. Neuroscience, vol. 24, no. 2, pp. 442-446, 2004.
[60] M. Csikszentmihalyi, Flow: The Psychology of Optimal Experience. Harper & Row, 1990.
[61] J. Chen, "Flow in Games (and Everything Else)," Comm. ACM, vol. 50, no. 4, pp. 31-34, 2007.
[62] D. Staub and C. Peck, "What Are the Outcomes for Nondisabled Students?," Educational Leadership, vol. 52, no. 4, pp. 36-40, 1994.
[63] E. Baker, M. Wang, and H. Walberg, "The Effects of Inclusion on Learning," Educational Leadership, vol. 52, no. 4, pp. 32-35, 1994.
[64] R. Baer, R.W. Flexer, and R.K. McMahan, "Transition Models and Promising Practices," Transition Planning for Secondary Students with Disabilities, Pearson/Merrill Prentice Hall, 2005.
[65] C. Smith, A. Walton, A. Loveland, G. Umberger, R. Kryscio, and D. Gash, "Memories that Last in Old Age: Motor Skill Learning and Memory Preservation," Neurobiology of Aging, vol. 26, no. 6, pp. 883-890, 2005.

Francisco CMB Oliveira received the BS degree in computer science from Universidade Estadual do Ceará, Brazil (1990), the master's degree in applied informatics from Universidade de Fortaleza, Brazil (2002), and the PhD degree in computer science from Virginia Tech (2010). His research interests include haptics, multimodal interaction, electronically mediated human-to-human communication, assistive technologies, and computer vision. He has extensive industry experience, especially in managing large-scale projects.

Francis Quek received the BSE degree summa cum laude (1984) and the MSE degree (1984) in electrical engineering from the University of Michigan, and the PhD degree in computer science from the same university in 1990. He is a professor in the Center for Human-Computer Interaction (CHCI) at Virginia Tech. The CHCI is a "university-level" center drawing faculty from across the Virginia Tech campus; there are currently 25 faculty members in the CHCI with a broad portfolio of research. He also directs the Vision Interfaces and Systems Laboratory at the CHCI. He is a member of the IEEE and the ACM.

Heidi Cowan is a faculty member at Wright State University. Her research interests are in the teaching and learning of school mathematics for students with visual impairments. She has worked on several US National Science Foundation (NSF) grants which aim to make science education accessible to students with disabilities. This work has included the position of assistant director of Creating Laboratory Access for Science Students, an NSF initiative through Wright State University.

Bing Fang received the BS degree in computer science from Tsinghua University, Beijing, China (2003), and started the PhD degree in computer science and applications at Virginia Tech in 2004. His research interests are image processing, computer vision and applications, and human-computer interaction.

