
CHAPTER 36 VIRTUAL ENVIRONMENTS*

Kay M. Stanney, Design Interactive, Inc., Oviedo, Florida

Joseph V. Cohn, Office of Naval Research, Arlington, Virginia

1 INTRODUCTION
2 SYSTEM REQUIREMENTS
2.1 Hardware Requirements
2.2 Software Requirements
3 DESIGN AND IMPLEMENTATION STRATEGIES
3.1 Cognitive Aspects
3.2 Content Development
3.3 Product Liability
3.4 Usage Protocols
4 HEALTH AND SAFETY ISSUES
4.1 Cybersickness, Adaptation, and Aftereffects
4.2 Social Impact
5 VIRTUAL ENVIRONMENT USABILITY ENGINEERING
5.1 Usability Techniques
5.2 Sense of Presence
6 APPLICATION DOMAINS
6.1 VE as a Selection and Training Tool
6.2 VE as an Entertainment and Education Tool
6.3 VE as a Medical Tool
7 CONCLUSIONS
ACKNOWLEDGMENTS
REFERENCES

1 INTRODUCTION

Virtual environments (VEs) aim to immerse users in realistic settings, allowing them to engage in an intuitive and intimate manner with their digital universe. Even a decade ago, VE technologies and their applications were immature and few in number. This has changed dramatically: a recent analysis of the virtual simulation training market reveals that commercial and customized virtual simulation and training products now abound (King, 2009). Virtual environment technologies have many advantages, including the ability to provide adaptable, modest-cost, deployable, and safe training solutions; to offer rehabilitation and medical applications that reach far beyond the conventional; and to create learning and game-based virtual experiences that would otherwise be impossible to explore. Yet limitations do exist: virtual experiences cannot fully replace the benefits of real experiences, there is not yet a full understanding of how best to use this technology, and the technology does not always meet the expectations of its users due to issues such as cybersickness and lack of presence. The future looks bright, however, as the gaming industry is pushing the realm of the possible and making it ever more feasible to "learn by doing," "train like we fight," and "involve me and I understand." This chapter reviews the current state of the art in VE technology, provides design and implementation strategies, discusses health and safety concerns and potential countermeasures, and presents the latest in VE usability engineering approaches. Current efforts in a number of application domains are reviewed. The chapter should enable readers to better specify design and implementation requirements for VE applications and prepare them to use this advancing technology in a manner that minimizes health and safety concerns.

* This chapter represents a modified version of K. M. Stanney and J. V. Cohn, "Virtual Environments," in Handbook of Human-Computer Interaction, 3rd ed., J. Jacko, Ed., Taylor & Francis, 2012.

2 SYSTEM REQUIREMENTS

A VE is a computer-generated immersive environment that can simulate both real and imaginary worlds, oftentimes in three dimensions. Current VE applications are primarily intriguing visual and auditory experiences, with a smaller number incorporating additional sensory modalities, such as haptics and smell. These worlds are driven by hardware, which provides the hosting platform and multimodal presentation, allows for physical interaction, and tracks the whereabouts of users as they traverse the virtual world, and by software, which models and generates the virtual world and its autonomous agents and supports the communication networks that link multiple users (see Figure 1).

Figure 1 Hardware and software requirements for virtual environment generation. Hardware: tracking devices, display devices, interaction techniques. Software: modeling software, autonomous agents, communication networks.

More specifically, hardware interfaces consist primarily of:

• Interface devices used to present multimodal information and sense the VE

• Tracking devices used to identify head and limb position and orientation

• Interaction techniques that allow users to navigate through and interact with the virtual world

Software interfaces include:

• Modeling software used to generate VEs

• Autonomous agents that inhabit VEs

• Communication networks used to support multiuser virtual environments

2.1 Hardware Requirements

Virtual environments require very large physical memories, high-speed processors, high-bandwidth mass storage capacity, and high-speed interface ports for interaction devices (Durlach and Mavor, 1995). These requirements are easily met by today's high-speed, high-bandwidth computing systems, many of which have surpassed the gigahertz barrier. The future looks even brighter, with promises of massive parallelism in multicore and many-core processor architectures (Holmes et al., 2010), which will allow tomorrow's computing systems to be exponentially faster than their ancestors. With the rapidly advancing ability to generate complex and large-scale virtual worlds, hardware advances in multimodal input/output (I/O) devices, tracking systems, and interaction techniques are needed to support generation of increasingly engaging virtual worlds. In addition, the coupling of augmented cognition and VE technologies can lead to substantial gains in the ability to evaluate their effectiveness.

2.1.1 Multimodal I/Os

To present a multimodal VE (see Chapter 14), multiple devices are used to present information to VE users. In terms of VE projection systems, the one that has received the greatest attention, both in hype and disdain, is almost certainly the head-mounted display (HMD). One benefit of HMDs is their compact size: an HMD coupled with a head tracker can provide a visual experience similar to that of the multitude of bulky displays associated with spatially immersive displays (SIDs) and desktop solutions. In addition, HMDs are suggested to enhance situation awareness, enable correct decision making, and reduce workload by allowing users to turn their head and eyes to fully perceive the environment, decreasing multimodal clutter, providing an intuitive means of presenting spatialized multimodal warnings and alerts, and redundantly coding critical cues (e.g., external threats, navigational waypoints), for example, by using audio cues to direct visual attention (Melzer and Rash, 2009).

There are three main types of HMDs: monocular (i.e., one image source viewed by a single eye), biocular (i.e., one image source viewed by both eyes), and binocular (i.e., stereoscopic viewing via two image generators, with each eye viewing an independent image source) (Melzer et al., 2009). A monocular HMD design is best when projecting moving maps or text information that must be read on the move (e.g., by a dismounted Warfighter) or to allow viewing of imagery with the simplest, lightest (in terms of head-supported weight/mass), and least costly (both monetarily and in terms of power consumption) solution. The downsides of monocular displays are that they have a small field of view (FOV), convey no stereoscopic depth information, have the potential for a laterally asymmetric center of mass (CM), and may have issues associated with focus, eye dominance, binocular rivalry, and ocular-motor instability. For a wide FOV, more effective target recognition, and a more comfortable viewing experience, a biocular or binocular solution is needed. Biocular solutions present no interocular rivalry and are lighter, easier to adjust, and less expensive than binocular solutions; compared to monocular displays, however, they are heavier, more complex to align, focus, and adjust, and have reduced luminance. Binocular displays have a symmetrical CM and can present stereo viewing (via field-sequential single-screen displays with shutter glasses, single-screen polarized displays, or dual-screen HMDs), which provides better depth information than monocular and biocular solutions. On the downside, binocular solutions are heavy, require more complex alignment, focus, and adjustment than monocular displays, and are expensive. Biocular and binocular solutions are particularly well suited to fully immersive VEs for gaming or training systems, as their large FOV provides a more compelling sense of immersion.

When coupled with tracking devices, HMDs can be used to present three-dimensional (3D) visual scenes that are updated as a user moves his or her head about a virtual world. Although this often provides an engaging experience, due to poor optics, sensorial mismatches, and slow update rates these devices are also often associated with adverse effects such as eyestrain and nausea (Stanney and Kennedy, 2008). In addition, while HMDs have come down substantially in weight, rendering them more suitable for extended wear, they are still hindered by cumbersome designs, obstructive tethers, suboptimal resolution, and insufficient FOVs. These shortcomings may be the reason why, in a review of HMD devices, approximately a third had been discontinued by their manufacturers (Bungert, 2007). Nevertheless, of the HMDs available, there are several low- to mid-cost models that are relatively lightweight and provide a horizontal FOV and resolution far exceeding predecessor systems.

Low-technology stereo viewing VE display options include anaglyph methods, in which a viewer wears glasses with distinctly colored filters, usually with the left-image data placed in the red channel of an electronic display and the right-image data in the blue channel; parallel or cross-eyed methods, in which right and left images are displayed adjacently (parallel or crossed), requiring the viewer to actively fuse the separate images into one stereo image; parallax barrier displays, in which an image is made by interleaving columns of two images from the left- and right-eye perspectives of a 3D scene; polarization methods, in which the images for the left and right eyes are projected on a plane through two orthogonal linearly polarizing filters (e.g., the right image is polarized horizontally, the left vertically) and glasses with matching polarization filters are donned to see the 3D effect; Pulfrich methods, in which an image of a scene moves sideways across the viewer's FOV and one eye is covered by a dark filter so that the darkened image reaches the brain later, causing stereo disparity; and shutter glass methods, in which images for the right and left eyes are displayed in quick alternating sequence and special shutter glasses are worn that "close" the right or left eye at the correct time (Konrad and Halle, 2007; Vince, 2004). All of these low-technology solutions are limited in terms of their resolution, the maximum number of views they can display, and clunky implementation; they can also be associated with pseudoscopic images (e.g., the depth of an object can appear to flip inside out).
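To make the anaglyph method concrete, the sketch below builds a red-blue anaglyph from a left- and right-eye image pair in the manner just described (left view in the red channel, right view in the blue channel). It is a minimal illustration using NumPy and Pillow; the file names are hypothetical.

    import numpy as np
    from PIL import Image

    def make_anaglyph(left_path, right_path):
        # Load each eye's view as grayscale intensity.
        left = np.asarray(Image.open(left_path).convert("L"))
        right = np.asarray(Image.open(right_path).convert("L"))
        rgb = np.zeros((*left.shape, 3), dtype=np.uint8)
        rgb[..., 0] = left    # red channel carries the left-eye image
        rgb[..., 2] = right   # blue channel carries the right-eye image
        return Image.fromarray(rgb)

    # Viewed through glasses with a red filter over the left eye and a blue
    # filter over the right, each eye sees only its own image.
    make_anaglyph("left.png", "right.png").save("anaglyph.png")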

Other options in visual displays include SIDs (e.g., displays that surround viewers physically with panoramic large-FOV imagery, generally projected via fixed front- or rear-projection display units; Konrad and Halle, 2007; Majumder, 2003), desktop stereo displays, and volumetric displays that fill a volume of space with a "floating" image (Konrad and Halle, 2007). Examples of SIDs include the Cave Automatic Virtual Environment (CAVE) (Cruz-Neira et al., 1993), Blue-c, ImmersaDesk, PowerWall, Infinity Wall, and VisionDome (Majumder, 1999). Issues with SIDs include a stereo view that is correct for only one or a few viewers, noticeable overlaps between adjacent projections, and image warp on curved screens. Blue-c addresses some of these concerns by combining simultaneous acquisition of multiple 3D video streams with advanced 3D projection technology (Gross et al., 2003). Desktop display systems have advantages over SIDs because they are smaller, easier to configure in terms of mounting cameras and microphones, easier to integrate with gesture and haptic devices, and more readily provide access to conventional interaction devices, such as mice, joysticks, and keyboards. Issues with such displays include stereo that is only accurate for one viewer and a limited display volume. Volumetric displays provide visual accommodation depth cues and vertical parallax, which are particularly useful for scenes that require viewing from a multitude of viewing angles, generally without the need for goggles; however, they do not maintain accurate occlusion cues (often considered the strongest depth cues) for all viewers (Konrad and Halle, 2007). Perspecta is an example of a swept-volume display that uses a flat, double-sided screen with a rotating projected image to sweep out a hemispherical image volume (Favalora, 2005). DepthCube is an example of a static-volume display that uses electronically addressable elements [i.e., a digital micromirror device (DMD) imaging system] to scan out the image volume (Sullivan, 2004). Issues with volumetric displays include low resolution and the tendency for transparent images to lose interposition cues. Also, view-independent shading of objects is not possible with volumetric displays, and current solutions do not exhibit arbitrary occlusion by interposition of objects (Konrad and Halle, 2007). The way of the future seems to be direct virtual retinal displays, where images are projected directly onto the human retina with a low-energy laser or liquid crystal displays (LCDs) (McQuaide et al., 2003), as well as displays that represent the physical world around us, such as autostereoscopic omnidirectional light field displays, which present interactive 3D graphics to multiple simultaneous viewers 360° around the display (Jones et al., 2007). If designed effectively, these next-generation devices should eliminate the tethers and awkwardness of current designs while enlarging the FOV and enhancing resolution.

When virtual environments provide audio (see Chapter 9), the interactive experience is generally greatly enhanced (Shilling and Shinn-Cunningham, 2002). Audio can be presented via spatialized or nonspatialized displays. Just as stereo visual displays are a defining factor for VE systems, so are "interactive" spatialized audio displays (e.g., those with "on-the-fly" positioning of sounds). VRSonic's SoundScape3D (http://www.vrsonic.com/), Firelight's FMod (http://www.fmod.org/), and AuSIM3D (http://ausim3d.com/) are examples of positional 3D audio technology. There have been promising developments in new sound modeling paradigms (e.g., VRSonic's ViBe technology) and sound design principles that will hopefully lead to a new generation of tools for designing effective spatial audio environments (Fouad, 2004; Fouad and Ballas, 2000; Jones et al., 2005).

Developers must decide if sounds should be presented via headphones or speakers. For nonspatialized audio, most audio characteristics (e.g., timbre, relative volume) are generally considered to be equivalent whether projected via headphones or speakers. This is not so for spatialized audio, in which the presentation technique impacts how audio is rendered for the display and presents the developer with important design choices.

While in the past headphone spatialization required expensive, specialized hardware to achieve real-time rates, modern multicore processors as well as the availability of powerful graphics processing units (GPUs) have made it possible to render complex audio environments over headphones using general-purpose computers. With binaural rendering, a sound can be placed in any location, right or left, up or down, near or far, via the use of a head-related transfer function (HRTF) to represent the manner in which sound sources change as a listener moves his or her head (Begault, 1994; Butler, 1987; Cohen, 1992). For optimal results, however, the HRTFs used for rendering must be personalized for each individual user. One method of doing this is to actually measure each user's HRTF for use in rendering, which generally involves a fairly lengthy measurement procedure using specialized hardware. Recently, there have been efforts to develop fast and low-cost approaches to HRTF measurement (Zotkin et al., 2006) that may, in the future, make personalized HRTF rendering practical for general use. An alternative approach to measured HRTFs is a best-fit HRTF selection process, in which one finds the nearest matching HRTF in a database of candidate HRTFs by either comparing the physiological characteristics of stored HRTFs to those of a target user (Algazi et al., 2001) or using a subjective selection process (Seeber, 2003). Other considerations should also be taken into account when choosing headphone rendering: for immersive displays, head trackers must be used to achieve proper relative positioning of sound sources, and rendering spatial audio over headphones may not be practical for more than a few simultaneous users.
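As a sketch of the binaural rendering pipeline just described, the following convolves a mono source with a left and right head-related impulse response (HRIR, the time-domain form of the HRTF) for the desired direction; the HRIR arrays are assumed to come from a measured or best-fit database.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Filter the source through each ear's impulse response.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        stereo = np.stack([left, right], axis=1)
        # Normalize so the result can be written to a sound device safely.
        peak = np.max(np.abs(stereo))
        return stereo / peak if peak > 0 else stereo

In an interactive VE, the HRIR pair would be swapped or interpolated as head-tracking data changes the source's direction relative to the listener.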

An alternative approach to headphone spatialization is the use of loudspeaker arrays (Ballas et al., 2001). Loudspeaker arrays can range in size from relatively small surround-sound configurations with 2, 4, 5, 7, or 10 loudspeakers up to hundreds of loudspeakers. The differentiating factors among loudspeaker arrays are the speaker layouts, the number of loudspeakers comprising the array, and the algorithms used to render spatial audio. Generally speaking, increasing the number of loudspeakers in the array results in more accurate spatialization. The manner in which loudspeakers are laid out in the listening area is closely related to the size of the array. Planar loudspeaker configurations require a smaller number of loudspeakers but are only capable of creating a 2D sound field. Volumetric configurations, on the other hand, can create a 3D sound field but require a larger number of loudspeakers and a more elaborate setup. Recently, VRSonic introduced a spherical loudspeaker array system called the AcoustiCurve, which provides a volumetric array in a spherical configuration around the listening space.

The rendering algorithm used for spatialization is also closely tied to the loudspeaker array size and configuration. Pairwise panning algorithms are the simplest form of spatialization and create a positional sound source by manipulating the amplitude of the signal arriving at two adjacent loudspeakers in the array (Mouba, 2009). An extension of this idea is vector base amplitude panning (VBAP), where the source is panned among three loudspeakers forming a triangle in a volumetric array (Pulkki, 1997). Another spatialization algorithm that is gaining popularity is wave field synthesis (WFS), a technique based on Huygens's principle (Spors and Ahrens, 2010). The WFS technique creates a positional source within the listening space by re-creating the incident wave front of a virtual source using a loudspeaker array. The advantage of WFS is that it does not suffer from the "sweet spot" problem, so listeners can get an accurate impression of the synthesized sound field at any location within the listening space; this is not the case with pairwise panning (Shilling and Shinn-Cunningham, 2002). The primary drawback of WFS is that it requires a large number of loudspeakers and considerable processing power to re-create the incident wave front.
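A minimal sketch of the VBAP idea (Pulkki, 1997): the gains for the three loudspeakers of the enclosing triangle are found by expressing the source direction as a linear combination of the loudspeaker direction vectors and then normalizing for constant power. All vectors are unit length; the speaker layout below is illustrative.

    import numpy as np

    def vbap_gains(source_dir, speaker_dirs):
        L = np.asarray(speaker_dirs, dtype=float)  # rows: 3 loudspeaker unit vectors
        p = np.asarray(source_dir, dtype=float)    # source direction unit vector
        g = p @ np.linalg.inv(L)                   # solve g . L = p
        g = np.clip(g, 0.0, None)                  # negative gain: source outside triangle
        return g / np.linalg.norm(g)               # normalize for constant power

    # Source panned inside a triangle of speakers in front of the listener.
    speakers = [[0.0, 1.0, 0.0],                   # straight ahead
                [0.7071, 0.7071, 0.0],             # front right
                [0.0, 0.7071, 0.7071]]             # front, elevated
    print(vbap_gains([0.3, 0.9, 0.3], speakers))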

Whether using headphones or loudspeaker arrays, spatialization is only one component of simulating a sound field, and developers should carefully consider the level of fidelity required by the application when choosing an audio rendering system. Properly synthesizing a virtual soundscape requires modeling the full propagation path of sound, including the source model, spreading loss, air absorption, material absorption, and material reflection. Accurately modeling the full propagation path in real time is beyond the capabilities of current computers. There is, however, promising research in the use of GPU processors to achieve real-time rates using ray casting methods (Jedrzejewski, 2004).
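Parts of the propagation path reduce to simple arithmetic. Spreading loss for a point source follows the inverse-square law, about 6 dB per doubling of distance, and air absorption adds a roughly linear, frequency-dependent term. The sketch below combines the two; the absorption coefficient is an illustrative placeholder, not a value from any standard.

    import math

    def attenuation_db(distance_m, ref_m=1.0, absorption_db_per_m=0.005):
        # Inverse-square spreading loss: 20 * log10(d / d_ref).
        spreading = 20.0 * math.log10(distance_m / ref_m)
        # Linear air-absorption term (frequency dependent in reality).
        absorption = absorption_db_per_m * (distance_m - ref_m)
        return spreading + absorption

    # A source 32 m away: ~30.1 dB spreading loss plus ~0.16 dB absorption.
    print(attenuation_db(32.0))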


While not as commonly incorporated into VEs as visual and auditory interfaces, haptic devices (see Chapter 10) can be used to enhance aspects of touch and movement of the hand or body segments while interacting with a virtual environment. Haptic devices have been classified as passive (unidirectional, e.g., keyboard, mouse, trackball) versus active (bidirectional, thereby supporting two-way communication between human and interactive system, e.g., a force-reflecting robotic arm; Hale and Stanney, 2004), grounded (e.g., joystick) versus ungrounded (e.g., exoskeleton-type haptic devices), net-force (e.g., PHANTOM device or textured surfaces) versus tactile devices (e.g., tactile pin arrays), and impedance control (i.e., the user's input motion is measured and an output force is returned) versus admittance control (i.e., the user's input forces are measured and motion is fed back to the user) (Basdogan and Loftin, 2008). In general, haptic displays are effective at alerting people to critical tasks (e.g., warning), providing a spatial frame of reference within one's personal space, and supporting hand-eye coordination tasks. Texture cues, such as those conveyed via vibrations or varying pressures, are effective as simple alerts and may speed reaction time and aid performance in degraded visual conditions (Akamatsu, 1994; Biggs and Srinivasan, 2002; Massimino and Sheridan, 1993; Mulgund et al., 2002). Kinesthetic devices are advantageous when tasks involve hand-eye coordination (e.g., object manipulation), where haptic sensing and feedback are key to performance. Currently available haptic interaction devices include static displays (e.g., conveying deformability or Braille); vibrotactile, electrotactile, and pneumatic displays (e.g., conveying tactile sensations such as surface texture and geometry, surface slip, and surface temperature); force feedback systems (e.g., conveying object position and movement distances); and exoskeleton systems (e.g., enhancing object interaction and weight discrimination) (Hale and Stanney, 2004). Minamizawa et al. (2008) suggest that to provide natural haptic feedback, such interfaces should be bimanual and wearable and should aim to enhance the existence and operability of virtual objects while not disturbing the motion and behavior of users. Currently, there are several wearable haptic displays that can be used in virtual environments, such as CyberGlove Systems' CyberGlove, CyberTouch, CyberGrasp, and CyberForce (http://www.cyberglovesystems.com/) and Immerz's KOR-fx (Kinetic Omnidirectional Resonance effect) acousto-haptic technology, the latter of which translates the audio signals from an interactive environment into vibrations that can be felt throughout the body and experienced as the sensation of rain, wind, weight shift, and G-forces (www.Immerz.com). Beyond supporting hand-eye coordination tasks and conveying simple alerts, haptics can be used to communicate grammar-structured strings of tactile symbols (Fuchs et al., 2008). Such a tactile language has been used at a concept level to support urban military operations, specifically in support of unit coordination and room-clearing tasks (Johnston, Hale, and Axelsson, 2010). Beyond communicating a command-based vocabulary, haptics can also be used to provide exteroceptive feedback, for example, by presenting tactile cues to enhance situation awareness or optimize human performance. It has been suggested that such a solution could more closely couple operators with unmanned aerial systems (Johnston et al., 2010). The future may bring volumetric haptic displays, which project a touch-based representation of a surface onto a 3D volumetric space and allow users to feel the projected surface with their hands (Acosta and Liu, 2007) through haptic rendering techniques (Basdogan et al., 2008); "tearables" that allow users to experience the real sense of tearing paper (Maekawa et al., 2009); and other such interactive tactile solutions.
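The impedance-control scheme defined above (measure motion, return force) is commonly illustrated with a "virtual wall": when the measured device position penetrates the wall, a spring-damper force pushes back. The sketch below is one-dimensional, and the stiffness and damping gains are illustrative rather than tuned for any particular device.

    def wall_force(position_m, velocity_mps, wall_m=0.0,
                   k_n_per_m=800.0, b_ns_per_m=2.0):
        penetration = wall_m - position_m      # > 0 once inside the wall
        if penetration <= 0.0:
            return 0.0                         # free space: no force rendered
        # Spring pushes out of the wall; damper removes energy for stability.
        force = k_n_per_m * penetration - b_ns_per_m * velocity_mps
        return max(force, 0.0)                 # the wall can only push, not pull

An admittance-controlled device inverts this loop: it measures the force the user applies and commands a motion in response.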

The "vestibular system can be exploited to create, prevent, or modify acceleration perceptions" in virtual environments (Lawson et al., 2002, p. 137). For example, by simulating acceleration cues, a person can be psychologically transported from his or her veridical location, such as sitting in a chair in front of a computer, to a simulated location, such as the cockpit of a moving airplane. While vestibular cues can be stimulated via many different techniques in VEs, three of the most promising methods are physical motion of the user (e.g., motion platforms), wide-FOV visual displays that induce vection (i.e., an illusion of self-motion), and locomotion devices that induce illusions of self-motion without physical displacement of the user through space (e.g., walking in place, treadmills, pedaling, foot platforms) (Hettinger, 2002; Hollerbach, 2002; Lawson et al., 2002). Of these options, motion platforms are probably the most advanced. For example, Sterling et al. (2000) integrated a small motion-based platform with a VE designed for helicopter landing training and found it to be comparable to a high-cost, large-scale helicopter simulator in terms of training effectiveness. Motion platforms are generally characterized by their range of motion/degrees of freedom (DOF) and actuator type (Isdale, 2000). In terms of range of motion, motion platforms can move a person in many combinations of translational (e.g., surge: longitudinal motion; sway: lateral motion; heave: vertical motion) and rotational (e.g., roll, pitch, yaw) DOF. A single-DOF translational motion system might provide a vibration sensation via a "seat shaker." A common 6-DOF configuration is a hexapod, which consists of a frame with six or more extendable struts (actuators) connecting a fixed base to a movable platform. In terms of actuators, electrical actuators are quiet and relatively maintenance free; however, they are not very responsive, and they cannot hold the same load as hydraulic or pneumatic systems. Hydraulic and pneumatic systems are smoother, stronger, and more accurate; however, they require compressors, which may be noisy. Servos are expensive and difficult to program.
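Because a platform's travel is tiny compared to the vehicle it simulates, motion cueing typically passes the commanded acceleration through a "washout" filter (a technique not detailed in the text above but standard in motion-platform control): onset cues get through, while sustained acceleration is washed out so the platform can creep back to neutral. A minimal first-order, single-axis sketch; real washout algorithms are higher order and add tilt coordination.

    import numpy as np

    def washout(accel, dt, tau=2.0):
        # Discrete first-order high-pass filter over an acceleration trace.
        alpha = tau / (tau + dt)
        out = np.zeros_like(accel)
        for i in range(1, len(accel)):
            out[i] = alpha * (out[i - 1] + accel[i] - accel[i - 1])
        return out

    # A sustained 1 m/s^2 push starting at t = 1 s: the platform cue
    # spikes at onset and then decays back toward zero.
    accel = np.concatenate([np.zeros(100), np.ones(900)])
    cue = washout(accel, dt=0.01)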

Olfaction could be added to VE systems to stimulate emotion or enhance recall (Basdogan and Loftin, 2008). There have been several efforts to support advances in olfactory interaction (Gutierrez-Osuna, 2004; Jones et al., 2004; Washburn and Jones, 2004; Washburn et al., 2003). One example of an olfactory system is the Scent Pallet (http://www.enviroscent.com/), a computer-peripheral universal serial bus (USB) device that uses up to eight scent cartridges, fans, and an air compressor to deliver different types of scents. This system has been incorporated into the Full Spectrum Virtual Iraq/Afghanistan PTSD Therapy Application to provide the smell of rubber, cordite, garbage, body odor, smoke, diesel fuel, gunpowder, and other scents of the battlefield (S. Rizzo et al., 2006). These scents can be used as direct stimuli (e.g., the scent of burning rubber) or as general cues to increase immersion (e.g., ethnic food cooking). The Scent Pallet was used to present vanilla, pizza, coffee, whiskey, beer, brandy, tequila, gin, scotch, red wine, white wine, cigarette smoke, and pine tree scents in an alcohol cue reactivity assessment system, which was found to be highly effective in stimulating subjective alcohol cravings (Bordnick et al., 2008). While several authors have mentioned the incorporation of gustatory stimulation, there are currently no functioning systems (Basdogan and Loftin, 2008).

2.1.2 Tracking Systems

Tracking systems allow determination of a user's head or limb position and orientation, or the location of handheld devices, in order to allow interaction with virtual objects and traversal through 3D computer-generated worlds (Foxlin, 2002). Tracking is what allows the visual scene in a VE to coincide with a user's point of view, thereby providing an egocentric real-time perspective. Tracking systems must be carefully coupled with the visual scene, however, to avoid unacceptable lags (Kalawsky, 1993). Advances in tracking technology have been realized in terms of drift-corrected gyroscopic orientation trackers, outside-in optical tracking for motion capture, and laser scanners (Foxlin, 2002). The future of tracking technology likely lies in hybrid tracking systems (http://www.intersense.com/hybrid_technology.aspx), such as optical-inertial, GPS-inertial, magnetic-inertial, digital acoustic-inertial, and optical-magnetic hybrid solutions.
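A common building block of such hybrid trackers is a complementary filter: the inertial sensor supplies low-latency updates, while a slower absolute sensor (optical, magnetic, acoustic, or GPS) is blended in to cancel the gyro's drift. A single-axis sketch, with the blend factor chosen purely for illustration:

    def fuse_orientation(angle_prev_deg, gyro_rate_dps, optical_deg, dt, k=0.02):
        # Fast inertial prediction: integrate the gyro rate over one step.
        predicted = angle_prev_deg + gyro_rate_dps * dt
        # Pull the estimate toward the absolute optical measurement;
        # k trades drift correction against optical noise and latency.
        return (1.0 - k) * predicted + k * optical_deg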

Tracking technology also allows for gesture recognition, in which human position and movement are tracked and interpreted to recognize semantically meaningful gestures (Turk, 2002). Gestures can be used to specify and control objects of interest, direct navigation, manipulate the environment, and issue meaningful commands. Gesture tracking devices that are worn (e.g., gloves, bodysuits) are currently more advanced than passive techniques (e.g., computer vision), yet the latter hold much promise for the future, as they can provide more natural, noncontact, and less obtrusive solutions than those that must be worn; limitations need to be overcome, however, in terms of accuracy, processing speed, and generality (Erol et al., 2007).

2.1.3 Interaction Techniques

While one may think of joysticks and gloves when considering VE interaction devices, there are many techniques that can be used to support interaction with and traversal through a virtual environment. Interaction devices support traversal, pointing to and selection of virtual objects, tool usage (e.g., through force and torque feedback), tactile interaction (e.g., through haptic devices), and environmental stimuli (e.g., temperature, humidity) (Bullinger et al., 2001).

Supporting traversal throughout a VE, via motion interfaces, is of primary importance (Hollerbach, 2002). Motion interfaces are categorized as either active (e.g., locomotion) or passive (e.g., transportation). Active-motion interfaces require self-propulsion to move about a virtual environment (e.g., treadmill, pedaling device, foot platforms). Passive-motion interfaces transport users within a VE without significant user exertion (e.g., inertial motion, as in a flight simulator, or noninertial motion, such as in the use of a joystick or gloves). The utility, functionality, cost, and safety of locomotion interfaces beyond traditional options (e.g., joysticks) have yet to be proven. In addition, beyond physical training, concrete applications for active-motion interfaces have yet to be clearly delineated. There are, however, some example applications, such as Arch-Explore, a real-walking user interface that uses redirected walking to allow exploration of large-scale virtual models of architectural scenes within a room-sized virtual environment (Bruder et al., 2009).
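Redirected walking, as used by systems such as Arch-Explore, exploits the dominance of vision over vestibular cues: if the scene rotates slightly more or less than the user's physical head turn, the user unconsciously corrects, so the real walking path can be bent to fit the room while the virtual path feels straight. A one-line sketch; the gain value is illustrative, though detection-threshold studies report that modest gains go unnoticed.

    def redirected_heading(virtual_heading_deg, real_turn_deg, gain=1.2):
        # gain > 1: the virtual world turns more than the head does, so the
        # user physically under-rotates relative to the virtual rotation.
        return (virtual_heading_deg + gain * real_turn_deg) % 360.0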

Another interaction option is speech control (see Chapters 8 and 35). Continuous speech recognition systems are currently under development, such as Parakeet (Vertanen and Kristensson, 2009), PocketSphinx (Huggins-Daines et al., 2006), and PocketSUMMIT (Hetherington, 2007). For these systems to provide effective interaction, however, additional advances are needed in acoustic- and language-modeling algorithms to improve the accuracy, usability, and efficiency of spoken language understanding; such systems are still a long way from offering conversational speech.

To support natural and intuitive interaction, a variety of interaction techniques can be coupled. For example, combining speech interaction with nonverbal gestures and motion interfaces can provide a means of interaction that closely captures real-world communication.

2.1.4 Augmented Cognition Techniques

Augmented cognition is an emerging computing paradigm in which users and computers are tightly coupled via physiological gauges that measure the cognitive state of users and adapt interaction to optimize human performance (Stanney et al., 2009). If incorporated into VE applications, augmented cognition could provide a means of evaluating their validity and compelling nature. For example, neuroscience studies have established that different areas of the brain are engaged when learning different types of materials and that the areas of the brain that are activated change with increasing competence (Carroll et al., 2010a; Kennedy et al., 2005). Thus, if VE users were immersed in an educational experience, augmented cognition technology could be used to gauge whether targeted areas of the brain were being activated and to dynamically modify the content of a VE learning curriculum if desired activation patterns were not being generated. Physiological measures could also be used to detect the onset of cybersickness (see Section 4.1) and to assess the engagement, awareness, and anxiety of VE users, thereby potentially providing much more robust measures of immersion and presence (see Section 5.2). Such techniques could prove invaluable to entertainment VE applications (cf. Badique et al., 2002) that seek to provide the ultimate experience, military training VE applications (cf. Knerr et al., 2002) that seek to emulate the "violence of action" found during combat, medical training applications (Wiecha et al., 2010) that seek to enhance traditional lab-based and classroom training practices, and therapeutic VE applications (cf. North et al., 2002; Strickland et al., 1997) that seek to overcome disorders such as fear of heights or flying.

2.2 Software Requirements

Software development of VE systems has progressed tremendously, from proprietary and arcane systems to development kits that run on multiple platforms (from general-purpose operating systems to workstations). Virtual environment system components have become modular and distributed, thereby allowing VE databases (e.g., editors used to design, build, and maintain virtual worlds) to run independently of visualizers and other multimodal interfaces via network links. Standard APIs (application program interfaces) (e.g., OpenGL, Open Inventor, Direct3D, Mesa3D) allow multimodal components to be hardware independent. Virtual environment programming languages are maturing, with APIs, libraries (e.g., OpenGL Performer), and scripting languages (e.g., JavaScript, Lua, Linden, Mono, Perl, Python, Ruby) allowing nonprogrammers to develop virtual worlds (Stanney and Zyda, 2002). Advances are also being made in the modeling of autonomous agents and in the communication networks used to support multiuser virtual environments.

2.2.1 Modeling

A VE consists of a set of geometry, the spatial relationships between the geometry and the user, and the changes in geometry invoked by user actions or the passage of time (Kessler, 2002). Generally, modeling starts with building the geometry components (e.g., graphical objects, sensors, viewpoints, animation sequences) (Kalawsky, 1993). These are often converted from computer-aided design (CAD) data. These components then get imported into the VE modeling environment and rendered when appropriate sensors are triggered. Color, surface textures, and behaviors are applied during rendering. Programmers control the events in a VE by writing task functions, which become associated with the imported components.
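The sensor-plus-task-function pattern described here can be sketched in a few lines; all names below are hypothetical rather than drawn from any particular VE toolkit.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Component:
        name: str
        sensor: Callable[[float], bool]          # e.g., timer or proximity sensor
        tasks: List[Callable] = field(default_factory=list)

        def update(self, sim_time):
            # When the sensor triggers, run the programmer-written task
            # functions associated with this imported component.
            if self.sensor(sim_time):
                for task in self.tasks:
                    task(self)

    # A door whose open animation is triggered after 5 simulated seconds.
    door = Component("door", sensor=lambda t: t > 5.0,
                     tasks=[lambda c: print(c.name, "-> play open animation")])
    for t in (1.0, 6.0):    # one update call per frame of the render loop
        door.update(t)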

A number of 3D modeling languages and toolkits are available that provide intuitive interfaces and run on multiple platforms and renderers (e.g., 3D Studio Max, AC3D, ZBrush, modo 401, Nexus, AccuRender, 3D ACIS Modeler, Ashlar-Vellum's Argon/Xenon/Cobalt, Carrara, CINEMA 4D, DX Studio, EON Studio, solidThinking) (Ultimate 3D Links, 2010). In addition, there are scene management engines (e.g., OpenSceneGraph, NVIDIA's SceniX) and game engines (e.g., Real Virtuality) that allow programmers to work at a higher level, defining characteristics and behaviors for more holistic concepts (Karim et al., 2003; Menzies, 2002). There have also been advances in photorealistic rendering tools (e.g., EI Technology's Amorphium), which are evolving toward full-featured physics-based global illumination rendering systems (e.g., RenderPark). Taken together, these advances in software modeling allow for the generation of complex and realistic VEs that can run on a variety of platforms, permitting access to VE applications by both small- and large-scale application development budgets.

2.2.2 Autonomous Agents

Autonomous agents are synthetic or virtual human entities that possess some degree of autonomy, social ability, reactivity, and proactiveness (Allbeck and Badler, 2002; see also Chapter 15). There are several types of agents (Serenko and Detlor, 2004), including user agents (i.e., agents that assist users by interacting with them, knowing their preferences and interests, and acting on their behalf), service agents (i.e., agents that seamlessly collaborate with different parts of a system and perform more general tasks in the background, unbeknownst to users), embedded agents (i.e., agents that interact with user and system to hide task complexity and make the overall user experience more exciting and enjoyable), and stand-alone agents (i.e., agents that employ leading-edge technologies and lay down the foundation for new architectures, standards, and innovative formats of agent-based computing). Autonomous agents can take many forms (e.g., human, animal), which are rendered at various levels of detail and style, from cartoonish to physiologically accurate models, and the form of the agent has been found to influence behavior both during and after VE exposure (i.e., the Proteus effect, whereby people infer their expected behaviors and attitudes from observing the appearance of their avatar; Yee et al., 2009). Such agents are a key component of many VE applications involving interaction with other entities, such as adversaries, instructors, or partners (Stanney and Zyda, 2002). Considerable work is being done to enhance the believability of such agents. For example, Heylen et al. (2008) found that when humanlike eye gaze behavior was incorporated into agents, users communicated with the agents more effectively and, of utmost importance, human performance was also enhanced. As our understanding of how best to design autonomous agents evolves, such principles will be important to incorporate into their design to enhance the overall engagement and effectiveness of virtual worlds.
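Much of an agent's moment-to-moment autonomy reduces to reactive steering updates of the kind sketched below: a generic "seek" behavior that is a common building block in the agent literature (it is not the algorithm of DI-Guy or any other specific product).

    import numpy as np

    def seek_step(pos, vel, target, max_speed=1.5, max_force=0.3, dt=0.05):
        desired = target - pos
        dist = np.linalg.norm(desired)
        if dist > 1e-9:
            desired = desired / dist * max_speed   # full speed toward target
        steer = desired - vel
        n = np.linalg.norm(steer)
        if n > max_force:
            steer = steer / n * max_force          # cap turning/acceleration
        vel = vel + steer
        speed = np.linalg.norm(vel)
        if speed > max_speed:
            vel = vel / speed * max_speed
        return pos + vel * dt, vel                 # next position and velocity

Calling seek_step once per frame moves an agent smoothly toward a waypoint; layering behaviors such as obstacle avoidance and separation on top yields the autonomous navigation described above.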

There has been significant research and development in the modeling of embodied autonomous agents. As with object geometry, agents are generally modeled off-line and then rendered during real-time interaction. While the required level of detail varies, modeling of hair and skin adds realism to an agent's appearance (Allbeck and Badler, 2002). There are a few toolkits available to support agent development, with one of the most notable offered by Boston Dynamics, Inc. (BDI) (http://www.bostondynamics.com/bd_diguy.html/), a spin-off from the MIT Artificial Intelligence Laboratory. BDI's DI-Guy allows VE developers to quickly integrate humans into their VEs, providing artificial intelligence to the characters and thereby enabling agents to autonomously navigate and react to their changing environment. Another option is ArchVision's 3D Rich Photorealistic Content (RPC) People (http://www.archvision.com/RPCPeople.cfm).

2.2.3 Networks

Distributed networks allow multiple users at diverse locations to interact within the same virtual environment. Improvements in communication networks are required to allow realization of such shared experiences, in which users, objects, processes, and autonomous agents from diverse locations interactively collaborate (Durlach and Mavor, 1995). Yet the foundation for such collaboration has been built within Internet2 (http://www.internet2.edu/), a next-generation network that delivers production network services for research and education institutions. This optical network could meet the high-performance demands of VEs, as it allows user-based allocation of high-capacity data circuits over a fiber-optic network. In addition, the Large Scale Networking (LSN) Coordinating Group (http://www.nitrd.gov/subcommittee/lsn.aspx) aims to develop leading-edge networking technologies and services, including programs in network security, new network architectures, heterogeneous networking (optical, mobile wireless, sensornet, etc.), federation across networking domains, and grid and collaboration networking tools and services, with the goal of assuring that the next generation of the Internet will be scalable, trustworthy, and flexible. There are additional novel network technologies, including IP multicasting (i.e., a routing technique for one-to-many communication over an IP infrastructure), quality of service (i.e., resource reservation control mechanisms), and IPv6 [also called IPng (IP Next Generation), a next-generation IP addressing system], that could support distributed VE applications, which can leverage the special capabilities (e.g., high bandwidth, low latency, low jitter) of these advancing network technologies to provide shared virtual worlds.
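IP multicasting maps naturally onto multiuser state sharing: one pose update is sent to a group address, and the network delivers it to every subscribed participant. A minimal standard-library sketch; the group address and port are arbitrary examples from the administratively scoped range.

    import socket
    import struct

    GROUP, PORT = "239.1.2.3", 5007

    def open_receiver():
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind(("", PORT))
        # Join the multicast group on all interfaces.
        mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
        return s

    def send_update(payload: bytes):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        s.sendto(payload, (GROUP, PORT))   # one send reaches all group members

    send_update(b"avatar:42;pos:1.0,0.0,2.5")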

3 DESIGN AND IMPLEMENTATION STRATEGIES

While many conventional HCI techniques can be used to design and implement VE systems, there are unique cognitive, content, product liability, and usage protocol considerations that must be addressed (see Figure 2).

3.1 Cognitive Aspects

The fundamental objective of VE systems is to provide multimodal interaction or, when sensory modalities are missing, perceptual illusions that support human information processing in pursuit of a VE application's goals, which could range from training to entertainment. Ancillary yet fundamental to this goal is to minimize cognitive obstacles, such as navigational difficulties, that could render a VE application's goals inaccessible.

Figure 2 VE design and implementation strategies: cognitive aspects (multimodal interaction design, perceptual illusions, navigation and wayfinding), content development (training, game-based learning, medical), products liability (cybersickness, postural instability, and other aftereffects), and usage protocols (1. design the VE stimulus to minimize adverse effects; 2. quantify stimulus intensity; 3. identify the individual capacity of target users; 4. set exposure duration and intersession interval; 5. educate users regarding potential risks of VE exposure; etc.).

3.1.1 Multimodal Interaction Design

Virtual environments are designed to provide users with immersive experiences that allow for direct manipulative and intuitive interaction with multisensory stimulation (Bullinger et al., 2001). The goals of providing this multimodal interaction within a VE are to achieve human-human communication and human-system interaction that is as natural as possible and to increase the robustness of this interaction by using redundant or complementary cues (Reeves et al., 2004). If designed effectively, engagement in such immersive multimodal VE experiences can lead to high levels of situation awareness and, in turn, high levels of human performance; however, the multimodal interaction within the VE must be appropriately designed to lead to this enhanced awareness. Specifically, the number of sensory modalities stimulated and the quality of this multisensory interaction are critical to the immersiveness and potential effectiveness of VE systems (Popescu et al., 2002). There are some emerging guidelines for the design of such multimodal interaction. For example, Stanney et al. (2004) provided a set of preliminary cross-modal integration rules. These rules consider aspects of multimodal interaction including (a) temporal and spatial coincidence, (b) working memory capacity, (c) intersensory facilitation effects, (d) congruency, and (e) inverse effectiveness. When multimodal sensory information is provided to users, it is essential to consider such rules governing the integration of multiple sources of sensory feedback. VE users have adapted their perception-action systems to "expect" a particular type of information flow in the real world; VEs run the risk of breaking these perception-action couplings if the full range of sensory feedback is not supported or if it is supported in a manner that is not contiguous with real-world expectations. Such pitfalls can be avoided through consideration of the coordination between sensing and user command and the transposition of senses in the feedback loop. Specifically, command coordination treats user input as primarily monomodal and feedback to the user as multimodal. Designers need to consider which input modalities are most appropriate to support execution of a given task within the VE, whether there is any need for redundant user input, and whether users can effectively handle such parallel input (Stanney et al., 1998a, 2004). Additional multimodal design guidelines have been provided by Hale et al. (2009), who outlined how a number of sensory cues may effectively be used to enhance specific situation awareness (SA) components (i.e., object recognition, spatial, temporal) within a VE, with the goal of optimizing SA development.

A limiting factor in supporting multimodal sensory stimulation in VEs is the current state of interface technologies. With the exception of the visual modality, current levels of technology simply cannot begin to reproduce virtually those sensations, such as haptics and olfaction, that users expect in the real world. One solution to current technological shortcomings, sensorial transposition, occurs when a user receives feedback through senses other than those expected, which may occur because a command coordination scheme has substituted available sensory feedback for feedback that cannot be generated within a virtual environment. Sensorial substitution schemes may be one for one (e.g., visual for force) or more complex (e.g., visual for force and auditory; visual and auditory for force). If designed effectively, command coordination and sensory substitution schemes should provide multimodal interaction that allows for better user control of the virtual environment. On the other hand, if designed poorly, these solutions may in fact exacerbate interaction problems.

3.1.2 Perceptual Illusions

When sensorial transpositions are used, there is an opportunity for perceptual illusions to occur. With perceptual illusions, certain perceptual qualities perceived by one sensory system are influenced by another sensory system (e.g., one "feels" a squeeze when seeing one's hand "grab" a virtual object). Such illusions can simplify and reduce the cost of VE development efforts (Storms, 2002). For example, when attending to a visual image coupled with a low-quality auditory display, auditory-visual cross-modal perception allows for an increase in the perceived quality of the visual image. Thus, in this case, if the visual image is the focus of the task, there may be no need to use a high-quality auditory display.

There are several types of perceptual illusions that can be used in the design of virtual environments (Steinicke and Willemsen, 2010). Visual illusions can be used to substitute for missing proprioceptive and vestibular senses, as vision usually dominates these senses. For example, vection (i.e., a compelling illusion of self-motion throughout a virtual world) is known to be enhanced via a number of visual display factors, including a wide field of view and high spatial frequency content (Hettinger, 2002), as well as visual jitter (Kitazaki et al., 2010). In addition, change blindness (i.e., failing to notice alterations in a visual scene) can be used to apply subtle manipulations to the geometry of a VE and direct movement behavior, such as redirecting a user's walking path throughout a virtual environment (Suma et al., 2010). Other such illusions exist and could likewise be leveraged if perceptual and cognitive design principles are identified that can be used to trigger and capitalize on these illusory phenomena. For example, acoustic illusions (e.g., a fountain sound; Riecke et al., 2009) could also be used to create a sense of vection in a VE, even when no visual motion is provided. In addition, haptic illusions (Hayward, 2008) could be used to provide users with the impression of actually feeling virtual objects when they are in fact touching real-world props or traveling along a trajectory path that may vary in size, shape, weight, or surface from its virtual counterpart without users perceiving these discrepancies (e.g., feeling an illusory bump when actually touching a flat surface [Robles-De-La-Torre and Hayward, 2001], or feeling an illusory sharp edge when the hand actually travels along a smooth trajectory [Portillo-Rodríguez et al., 2006]).

3.1.3 Navigation and Wayfinding

Effective multimodal interaction design and use of perceptual illusions can be impeded if navigational complexities arise. Navigation is the aggregate of wayfinding (e.g., cognitive planning of one's route) and the physical movement that allows travel throughout a virtual environment (Darken and Peterson, 2002). A number of tools and techniques have been developed to aid wayfinding in virtual worlds, including maps, landmarks, trails, and direction finding. These tools can be used to display current position, display current orientation (e.g., compass), log movements (e.g., "breadcrumb" trails), demonstrate or access the surround (e.g., maps, binoculars), or provide guided movement (e.g., signs, landmarks) (Chen and Stanney, 1999). For example, Burigat and Chittaro (2007) found 3D arrows to be particularly effective in guiding navigation throughout an abstract virtual environment. Darken and Peterson (2002) provided a number of principles concerning how best to use these tools. If effectively applied to VEs, these principles should lead to reduced disorientation and enhanced wayfinding in large-scale virtual environments.
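Guidance aids such as the 3D arrows studied by Burigat and Chittaro ultimately reduce to a relative-bearing computation, sketched here in two dimensions:

    import math

    def arrow_angle_deg(user_xy, user_heading_deg, landmark_xy):
        # Signed turn (degrees, counterclockwise positive) that the arrow
        # should indicate so the user comes to face the landmark.
        dx = landmark_xy[0] - user_xy[0]
        dy = landmark_xy[1] - user_xy[1]
        bearing = math.degrees(math.atan2(dy, dx))
        return (bearing - user_heading_deg + 180.0) % 360.0 - 180.0

    # User at the origin facing +x; landmark straight "north": turn left 90.
    print(arrow_angle_deg((0, 0), 0.0, (0, 10)))   # -> 90.0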

3.2 Content Development

Content development is concerned with the design and construction of the virtual objects and synthetic environment that support a VE experience (Isdale et al., 2002). While this medium can leverage existing HCI design principles, it has unique design challenges that arise due to the demands of real-time, multimodal, collaborative interaction. In fact, content designers are just starting to appreciate and determine what it means to create a full sensory experience with user control of both point of view and narrative development. Aesthetics is thought to be a product of agency (e.g., pleasure of being), narrative potential, presence and co-presence (e.g., existing in and sharing the virtual experience), as well as transformation (e.g., assuming another persona) (Murray, 1997). Content development should be about stimulating perceptions (e.g., sureties, surprises) as well as contemplation over the nature of being (Isdale et al., 2002).

Existing design techniques, for example, from entertainment, video games, and theme parks, can be used to support VE content development (see Chapters 43 and 46). Game development techniques that can be leveraged in VE content development include, but are not limited to, providing a clear sense of purpose, emotional objectives, perceptual realism, intuitive interfaces, multiple solution paths, challenges, a balance of anxiety and reward, and an almost unconscious flow of interaction (Isdale et al., 2002). From theme park design, content development suggestions include (a) having a story that provides the all-encompassing theme of the VE and thus the "rules" that guide design, (b) providing location and purpose, (c) using cause and effect to lead users to their own conclusions, and (d) anchoring users in the familiar (Carson, 2000a, 2000b). While these suggestions provide guidelines for VE content development, considerable creativity is still an essential component of the process.

While the content incorporated into the virtual worlds of today is mostly quite separate from the real world, in recent years life and technology have become more tightly coupled, the result being that computers are starting to have an awareness of themselves as well as of the people who interact with them in 3D virtual spaces that are evolving into a "second life." Virtual worlds are in fact penetrating our native space, and content development for future generations will likely aim to allow us to seamlessly use our own native language, with its wide range of verbal and physical gestures and emotions, thereby more fully entwining our first and second (virtual) lives (Rolston, 2010).

3.3 Product Liability

Those who implement VE systems must be cognizant of potential product liability concerns. Exposure to a VE system often produces unwanted side effects that could render users incapable of functioning effectively upon return to the real world. These adverse effects may include nausea and vomiting, postural instability, visual disturbances, and profound drowsiness (Stanney et al., 1998b). As users subsequently take on their normal routines, unaware of these lingering effects, their safety and well-being may be compromised. If a VE product occasions such problems, liability of VE developers or system administrators could range from simple accountability (e.g., reporting what happened) to full legal liability (e.g., paying compensation for damages) (Kennedy and Stanney, 1996; Kennedy et al., 2002). In order to minimize their liability, manufacturers and corporate users should design systems and provide usage protocols to minimize risks, warn users about potential aftereffects, monitor users during exposure, assess users' risk, and debrief users after exposure.

3.4 Usage Protocols

To minimize product liability concerns, VE usage protocols should be carefully designed. A comprehensive VE usage protocol will involve the following activities (see Stanney et al., 2005):

1. Designing the VE stimulus to minimize adverse effects by minimizing lags and latencies, optimizing frame rates, and providing an adjustable interpupillary distance on the visual display.

2. Quantifying the stimulus intensity of a VE system using the Simulator Sickness Questionnaire (Kennedy et al., 1993) or other means and comparing the outcome to other systems (see Stanney et al., 2005). If a given VE system is of high intensity (say the 50th or higher percentile) and is not redesigned to lessen its impact, significant dropouts can be expected.

3. Identifying the individual capacity of the target user population to resist adverse effects of VE exposure via the Motion History Questionnaire (Kennedy and Graybiel, 1965) or other means.


4. Setting exposure duration and intersession interval to minimize adverse effects by limiting the duration of initial exposures, setting intersession exposure intervals two to five days apart, and moderating the stimulus intensity of virtual experiences (see Stanney et al., 2005).

5. Educating users regarding potential risks of VE exposure (e.g., inform users they may experience nausea, malaise, disorientation, headache, dizziness, vertigo, eyestrain, drowsiness, fatigue, pallor, sweating, increased salivation, and vomiting).

6. Educating users regarding potential adverse aftereffects of VE exposure (e.g., inform users they may experience disturbed visual functioning, visual flashbacks, and unstable locomotor and postural control for prolonged periods post-exposure).

7. Instructing users to terminate VE interaction if they start to feel ill.

8. Providing adequate air flow and comfortable thermal conditions.

9. Adjusting equipment to minimize fatigue.

10. For strong VE stimuli, warning users to avoid extraordinary maneuvers (e.g., flying backward or experiencing high rates of linear or rotational acceleration) during initial interaction.

11. Providing an attendant to monitor users' behavior and ensure their well-being.

12. Specifying the amount of time post-exposure that users must remain on premises before driving or participating in other such high-risk activities. Do not allow individuals who fail post-exposure tests or experience adverse aftereffects to conduct high-risk activities until they have recovered (e.g., have someone else drive them home).

13. Calling users the next day or having them call to report any prolonged adverse effects.

Regardless of the strength of the stimulus or the susceptibility of the user, following a systematic usage protocol can minimize the adverse effects associated with VE exposure.
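As an illustration of how such a protocol might be operationalized in software, the following is a minimal sketch that enforces two of the scheduling elements above: a capped initial exposure duration and the two-to-five-day intersession interval (item 4). The duration cap and all function names are hypothetical; only the interval bounds come from the protocol itself.

```python
# A minimal sketch of usage-protocol enforcement. MAX_INITIAL_EXPOSURE is an
# assumed cap (not specified in the text); the 2-5-day intersession interval
# is taken from item 4 of the protocol above.
from datetime import datetime, timedelta

MAX_INITIAL_EXPOSURE = timedelta(minutes=15)  # assumed first-session cap
MIN_INTERSESSION = timedelta(days=2)
MAX_INTERSESSION = timedelta(days=5)

def may_start_session(last_session_end, now):
    """First sessions are always allowed; repeat sessions should fall
    two to five days after the previous one."""
    if last_session_end is None:
        return True
    return MIN_INTERSESSION <= (now - last_session_end) <= MAX_INTERSESSION

def initial_exposure_over(session_start, now):
    """True once a first session reaches the assumed duration cap,
    signaling the attendant (item 11) to end exposure."""
    return now - session_start >= MAX_INITIAL_EXPOSURE

# Example: a second session one day after the first is too soon.
first_end = datetime(2011, 6, 1, 10, 0)
print(may_start_session(first_end, datetime(2011, 6, 2, 10, 0)))  # False
print(may_start_session(first_end, datetime(2011, 6, 4, 10, 0)))  # True
```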

4 HEALTH AND SAFETY ISSUES

The health and safety risks associated with VE exposure complicate usage protocols and lead to product liability concerns. It is thus essential to understand these issues when utilizing VE technology. There are both physiological and psychological risks associated with VE exposure, the former being related primarily to sickness and aftereffects and the latter primarily being concerned with the social impact.

4.1 Cybersickness, Adaptation, and Aftereffects

Motion-sickness-like symptoms and other aftereffects (e.g., balance disturbances, visual stress, altered hand–eye coordination) are unwanted byproducts of VE exposure (Stanney and Kennedy, 2008). The sickness related to VE systems is commonly referred to as "cybersickness" (McCauley and Sharkey, 1992). Some of the most common symptoms exhibited include dizziness, drowsiness, headache, nausea, fatigue, and general malaise (Kennedy et al., 1993). More than 80% of users will experience some level of disturbance, with approximately 12% ceasing exposure prematurely due to this adversity (Stanney et al., 2003). Of those who drop out, approximately 10% can be expected to have an emetic response (e.g., vomit); however, only 1–2% of all users will have such a response. These adverse effects are known to increase in incidence and intensity with prolonged exposure duration (Kennedy et al., 2000). While most users will experience some level of adverse effects, symptoms vary substantially from one individual to another as well as from one system to another (Kennedy and Fowlkes, 1992). These effects can be assessed via the Simulator Sickness Questionnaire (Kennedy et al., 1993), with values above 20 requiring due caution (e.g., warn and observe users) (Stanney et al., 2005).
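Because the SSQ's Total Severity score anchors the threshold just mentioned, a minimal scoring sketch may help. It follows the published scheme (Kennedy et al., 1993): each of 16 symptoms is rated 0–3, symptoms load onto nausea, oculomotor, and disorientation subscales (some on more than one), and Total Severity is 3.74 times the sum of the three raw subscale sums. The helper names and example ratings below are illustrative; consult the original instrument for the authoritative item key.

```python
# A minimal sketch of SSQ Total Severity scoring (Kennedy et al., 1993).
# Each symptom is rated 0 (none) to 3 (severe); some symptoms load on
# more than one subscale, so they are intentionally double counted.
NAUSEA = ["general discomfort", "increased salivation", "sweating", "nausea",
          "difficulty concentrating", "stomach awareness", "burping"]
OCULOMOTOR = ["general discomfort", "fatigue", "headache", "eyestrain",
              "difficulty focusing", "difficulty concentrating",
              "blurred vision"]
DISORIENTATION = ["difficulty focusing", "nausea", "fullness of head",
                  "blurred vision", "dizziness (eyes open)",
                  "dizziness (eyes closed)", "vertigo"]

def ssq_total(ratings: dict) -> float:
    """Total Severity = (N_raw + O_raw + D_raw) * 3.74."""
    n = sum(ratings.get(s, 0) for s in NAUSEA)
    o = sum(ratings.get(s, 0) for s in OCULOMOTOR)
    d = sum(ratings.get(s, 0) for s in DISORIENTATION)
    return (n + o + d) * 3.74

ratings = {"nausea": 2, "eyestrain": 1, "dizziness (eyes open)": 1}
total = ssq_total(ratings)  # 22.44, above the caution threshold of 20
print(total, "warn and observe users" if total > 20 else "within typical range")
```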

To overcome such adverse effects, individuals generally undergo physiological adaptation during VE exposure. This adaptation is the natural and automatic response to an intersensorily imperfect VE and is elicited due to the plasticity of the human nervous system (Welch, 1978). Due to technological flaws (e.g., slow update rate, sluggish trackers), users of VE systems may be confronted with one or more intersensory discordances (e.g., visual lag, a disparity between seen and felt limb position). In order to perform effectively in the VE, they must compensate for these discordances by adapting their psychomotor behavior or visual functioning. Once interaction with a VE is discontinued, these compensations persist for some time after exposure, leading to aftereffects.

Once VE exposure ceases and users return to their natural environment, they are likely unaware that interaction with the VE has potentially changed their ability to effectively interact with their normal physical environment (Stanney and Kennedy, 1998). Several different kinds of aftereffects may persist for prolonged periods following VE exposure (Welch, 1997). For example, hand–eye coordination can be degraded via perceptual–motor disturbances (Kennedy et al., 1997; Rolland et al., 1995), postural sway can arise (Kennedy and Stanney, 1996), as can changes in the vestibulo-ocular reflex (VOR), or one's ability to stabilize an image on the retina (Draper et al., 1997). The implications of these aftereffects are:

1. VE exposure duration may need to be minimized.

2. Highly susceptible individuals or those from clinical populations (e.g., those prone to seizures) may need to avoid or be banned from exposure.

3. Users should be closely monitored during VE exposure.


4. Users' activities should be closely monitored for a considerable period of time post-exposure to avoid personal injury or harm.

4.2 Social Impact

Virtual environment technology, like its ancestors (e.g., television, computers), has the potential for negative social implications through misuse and abuse (Kallman, 1993; also see Chapter 61). Yet violence in VEs is nearly inevitable, as evidenced by the violent content of popular video games. Such animated violence is a known favorite over the portrayal of more benign emotions such as cooperation, friendship, or love (Sheridan, 1993; also see Chapter 4). The concern is that users who engage in what seems like harmless violence in the virtual world may become desensitized to violence and mimic this behavior in the look-alike real world.

Currently, it is not clear whether or not such violent behavior will result from VE exposure; early research, however, is not reassuring. Calvert and Tan (1994) found VE exposure to significantly increase the physiological arousal and aggressive thoughts of young adults. Perhaps more disconcerting was that neither aggressive thoughts nor hostile feelings were found to decrease due to VE exposure, thus providing no support for catharsis. Such increased negative stimulation may then subsequently be channeled into real-world activities. The ultimate concern is that VE immersion may potentially be a more powerful perceptual experience than past, less interactive technologies, thereby increasing the negative social impact of this technology (Calvert, 2002). A proactive approach is needed that weighs the risks and potential consequences associated with VE exposure against the benefits. Waiting for the onset of harmful social consequences should not be tolerated. Koltko-Rivera (2005) suggests that a proactive approach would involve determining (1) the types and degree of VE content (e.g., aggressive, sexual), (2) the types of individuals or groups exposed to this content (e.g., their mental aptitude, mental conditioning, personality, worldview), (3) the circumstances of exposure (e.g., private experience, family, religion, spiritual), and (4) the effects of exposure on psychological, interpersonal, or social function.

5 VIRTUAL ENVIRONMENT USABILITY ENGINEERING

Most VE user interfaces are fundamentally different from traditional graphical user interfaces, with unique I/O devices, perspectives, and physiological interactions. Thus, when developers and usability practitioners attempt to apply traditional usability engineering methods to the evaluation of VE systems, they find few if any that are particularly well suited to these environments (for notable exceptions see Gabbard et al., 1999; Hix and Gabbard, 2002; Stanney et al., 2000). There is a need to modify and optimize available techniques to meet the needs of VE usability evaluation as well as to better characterize factors unique to VE usability, including sense of presence.

5.1 Usability Techniques

Assessment of usability for VE systems must go beyond traditional approaches, which are concerned with the determination of effectiveness, efficiency, and user satisfaction (Bowman et al., 2002; see Chapter 55). Evaluators must consider whether multimodal input and output are optimally presented and integrated, navigation is supported to allow the VE to be readily traversed, object manipulation is intuitive and simple, content is immersive and engaging, and the system design optimizes comfort while minimizing sickness and aftereffects. The affective elements of interaction also become important when evaluating VE systems (see Chapter 58). It is an impressive task to ensure that all of these criteria are met.

Gabbard et al. (1999) have developed a taxonomy of VE usability characteristics that can serve as a foundation for identifying and evaluating usability criteria particularly relevant to VE systems. Stanney et al. (2000) used this taxonomy as the foundation on which to develop an automated system, MAUVE (Multicriteria Assessment of Usability for Virtual Environments), which assesses VE usability in terms of how effectively each of the following is designed: (a) navigation, (b) user movement, (c) object selection and manipulation, (d) visual output, (e) auditory output, (f) haptic output, (g) presence, (h) immersion, (i) comfort, (j) sickness, and (k) aftereffects. MAUVE can be used to support expert evaluations of VE systems, similar to the manner in which traditional heuristic evaluations are conducted. Due to such issues as cybersickness and aftereffects, it is essential to use these or other techniques (cf. Modified Concept Book Usability Evaluation Methodology; Swartz, 2003) to ensure the usability of VE systems, not only to avoid rendering them ineffective but also to ensure that they are not hazardous to users. Recently, guidelines have been evolving for enhancing the design of social VEs (e.g., Second Life by Linden Labs, Whyville by Numedeon, Inc.), such as those promoted by the Centers for Disease Control and Prevention (CDC, 2010) for reaching individuals with timely health information that may relate to campaigns and upcoming events.
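MAUVE itself is an automated tool, so the sketch below illustrates only the general multicriteria idea behind such instruments: score each usability category separately from a checklist of heuristic judgments, so that weak areas stay visible rather than being collapsed into a single number. The category list comes from the text above; the heuristics, scores, and function names are invented.

```python
# An illustrative sketch of multicriteria usability scoring, not MAUVE's
# actual algorithm. Each category is scored 0-1 as the fraction of its
# heuristic checklist items judged satisfied; unscored categories are None.
CATEGORIES = ["navigation", "user movement",
              "object selection and manipulation", "visual output",
              "auditory output", "haptic output", "presence", "immersion",
              "comfort", "sickness", "aftereffects"]

def category_scores(checklist: dict) -> dict:
    """checklist maps category -> list of pass/fail heuristic judgments."""
    return {cat: (sum(items) / len(items) if items else None)
            for cat, items in checklist.items()}

evaluation = {cat: [] for cat in CATEGORIES}
evaluation["navigation"] = [True, True, False]       # 2 of 3 heuristics met
evaluation["comfort"] = [True, False, False, False]  # 1 of 4 heuristics met
print(category_scores(evaluation))
```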

5.2 Sense Of Presence

A usability criterion unique to VE systems is sense of presence. Virtual environments have the unique advantage of leveraging the imaginative ability of individuals to psychologically "transport" themselves to another place, one that may not exist in reality (Sadowski and Stanney, 2002). To support such transportation, VEs provide physical separation from the real world by immersing users in the virtual world via, for example, an HMD, then imparting sensorial sensations via multimodal feedback that would naturally be present in the alternate environment. Focus on generating such presence is one of the primary characteristics distinguishing VEs from other means of displaying information.

Presence has been defined as the subjective perception of being immersed in and surrounded by a virtual world rather than the physical world one is currently situated in (Stanney et al., 1998b). Virtual environments that engender a high degree of presence are thought to be more enjoyable, effective, and well received by users (Sadowski and Stanney, 2002). High-presence VEs are also suggested to be effective learning environments (Mantovani and Castelnuovo, 2003), as well as to enhance behavioral modeling outcomes and lead to greater imitation in the physical world (Fox et al., 2009b). Presence can be "broken" (i.e., lost) by external interference (e.g., people talking in the real world during VE exposure), internal interference (e.g., daydreaming), inconsistent mediation (e.g., lag, distortions), contradictory mediation (e.g., when the virtual does not behave like the real), or unrefined mediation (e.g., information overload; Chertoff, Schatz, McDaniel, and Bowers, 2008). To enhance and maintain presence, designers of VE systems should spread detail around a scene, let user interaction determine when to reveal important aspects, maintain a natural and realistic, yet simple appearance, and utilize textures, colors, shapes, sounds, and other features to enhance realism (Kaur, 1999). To generate the feeling of immersion within the environment, designers should isolate users from the physical environment (use of an HMD may be sufficient), provide content that involves users in an enticing situation supported by an encompassing stimulus stream, provide natural modes of interaction and movement control, and utilize design features that enhance vection (Stanney et al., 2000). To enhance presence in learning environments, the design of perceptual features (i.e., perceptual realism, interactivity, and control), individual factors (i.e., imagination and suspension of disbelief, identification, motivation and goals, emotional state), content characteristics (i.e., plot, story, narration, and dramaturgy), and interpersonal, social, and cultural context should be carefully considered (Mantovani and Castelnuovo, 2003). Presence can be assessed via Witmer and Singer's (1998) Presence Questionnaire or techniques used by Slater and Steed (2000), as well as a number of other means (Sadowski and Stanney, 2002).

6 APPLICATION DOMAINS

Virtual environments have been adopted by an ever-growing number of domains. Originally used primarily as a training platform, VEs have recently appeared in areas as diverse as operating rooms and courtrooms. These applications can provide adaptable, modest cost, deployable, and safe selection and training solutions, create game-based and learning virtual experiences that would otherwise be impossible to explore, and offer rehabilitation and medical applications that reach far beyond the conventional.

6.1 VE as a Selection and Training Tool

If one looks at training as a continuum across which a trainee matures in their declarative, procedural, and strategic knowledge, as well as psychomotor skills and attitudes, then VE training is thought to be most suitable once the trainee has foundational declarative knowledge instantiated and some rudimentary procedural knowledge (Cohn et al., 2007). In general, van Merrienboer and Kester (2005) recommend following the "fidelity principle," where learning is supported via a gradual increase in the fidelity of the training environment. Similarly, many training strategies reduce fidelity early in the training lifecycle to minimize complexity and avoid overloading the trainee (Regian et al., 1992). These suggestions are paralleled by stress-exposure training paradigms that suggest moving trainees through three stages, with an early focus on information provision and knowledge acquisition, followed by skills acquisition, and then culminating with practice of acquired skills under conditions that gradually approximate the stress environment (Driskell and Johnston, 1998). Taken together, these theories suggest the following (Cohn et al., 2007):

• Classroom lectures and low-fidelity training solutions (e.g., schematics, mock-ups) are most suitable for initial acquisition of declarative knowledge (i.e., general facts, principles, rules, and concepts) (Kelly et al., 1985; Rouse, 1982, 1991). VE training simulators would generally be less effective for such initial training, as they can be overly complex and confusing (Andrews, 1988; Boreham, 1985; Jones, 1990).

• Medium-fidelity VE training solutions, such as desktop VEs, are suggested to be suitable for training basic procedural knowledge and problem-solving skills, and practice of such skills to mastery (Patrick, 1992; Pappo, 1998).

• High-fidelity VE training solutions, such as fully immersive VEs, can be used for consolidation of learned declarative knowledge and basic skills and procedures, practice of acquired knowledge and skills (e.g., mission rehearsal), as well as development of more advanced strategic knowledge and tactical skills (Forrest et al., 2002; Maran and Glavin, 2003; Vozenilek et al., 2004).

• High-fidelity VE training solutions, which are fully immersive and multisensory, may also be suitable for behavioral conditioning with stressors. Once basic knowledge and skills are mastered, attitudes and stress-induced behaviors are likely most appropriately trained in immersive and engaging solutions, which have the authenticity to generate realistic responses from trainees; yet there is limited research on this topic (Driskell and Johnston, 1998).

Beyond the ability of various forms of VEs to support several stages along the trainee continuum, they also offer the ability to immerse trainees in multiple contexts. This is important because learning is context specific (Anderson et al., 2000). By providing training in multiple contexts and from multiple points of view, VEs can be used in an effort to avoid the "reductive tendency" of learners to oversimplify new concepts, especially those gleaned in dynamic, highly interactive environments, as well as the development of "knowledge shields" erected to confirm simplified beliefs and understandings (Feltovich et al., 2004). Consequently, while high-fidelity VE training simulations oftentimes may be considered the ultimate training solution, they are not suitable for all types of training, and it is essential to determine the optimal level of fidelity required for a given training solution.

In terms of applying VE to enhance human performance, training is thus actually the second stage of a two-stage process. Ideally, one would like to select those individuals that have a certain degree of "performance capability" and are, in turn, ready for immersive VE training. Traditional approaches to selection focus on social and psychophysical assessments. For example, aptitude tests, ranging from traditional pen-and-paper-type to psychomotor tests to computer-based (but not VE!) assessments (Carretta and Ree, 1993, 1995), have all been used with varying levels of success. The single most important criticism of each of these approaches is that they are designed to be predictive of future performance and as such are more often than not abstractions of aspects of the larger task(s) for which the individual is being selected. An alternate approach would be to provide selectees with a method that provides a direct indication of their performance abilities. This distinction, essentially between a test being predictive of performance ability versus indicative of performance ability, has a great impact on selection. A meta-analysis performed within the aviation domain, where much of selection research has focused, found that typical predictive validities (most often reported as either the correlation coefficient r or the multiple correlation coefficient R and representing the degree to which a given predictor or set of predictors and performance metrics are related) for such assessments range from a low of 0.14 to a "high" of about 0.40 (Martinussen, 1996). Yet, when a virtual simulation component is added to this mix, these values have been shown to improve considerably, pushing correlations towards the 0.60 level (Gress and Willkomm, 1996). This suggests that VE systems should be used as part of a comprehensive performance enhancement program that focuses on selecting those users with the correct set of knowledge, skills, and abilities (KSAs) and then providing, when needed, training to fine-tune those KSAs. One approach to accomplishing this goal would be to assess trainee readiness by immersing them in virtual game-based cognitive assessment tools. For example, CogGauge immerses trainees in a mock spaceship cockpit in which the trainees must perform cognitive tasks at various celestial bodies and in so doing gain rewards that can culminate in the creation of a space station (Carpenter et al., 2010; Johnston, Carpenter, and Hale, 2011). The engaging nature of CogGauge serves to motivate trainees, making the KSA assessment process more interesting and engaging. Such immersive assessment tools can be used to determine where in the progression from novice to expert a given trainee is and then provide the most suitable form of training.
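Since the argument above turns on predictive validity coefficients, a small worked sketch of the statistic may be useful: the Pearson r between selection-test scores and a later performance criterion. The data below are invented purely for illustration.

```python
# A minimal sketch of the predictive-validity statistic discussed above:
# the Pearson correlation r between selection scores and a later
# performance criterion. All data are hypothetical.
from math import sqrt

def pearson_r(x: list, y: list) -> float:
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

aptitude = [52, 61, 58, 70, 66, 49]           # hypothetical selection scores
performance = [0.4, 0.7, 0.5, 0.9, 0.8, 0.3]  # hypothetical training criterion
print(round(pearson_r(aptitude, performance), 2))
```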

The suitability of virtual training solutions has been explored in a wide variety of areas, such as perceptual and cognitive performance (Carroll et al., 2010b), decision making under stress (Carroll et al., 2010a; Hill et al., 2003), operational readiness (Barba et al., 2006), and cross-cultural communication (Deaton et al., 2005; Stanney et al., 2010). Similarly, VE applications are being used as interactive tools for teaching medical students, nurses, and doctors knowledge and skills as varied as the basics of human anatomy, complicated surgical procedures, communication skills, decision-making skills, and the location of medical equipment within critical care vehicles (Grantcharov et al., 2004; Johnsen et al., 2005; Jones et al., 2010; Segal and Fernandez, 2009; Fried et al., 2010; Hassinger et al., 2010). As VE technology has matured, the breadth of VE training applications has likewise grown (King, 2009).

Flight skills are often trained in simulators and virtual environments (e.g., Aerosim's Virtual Flight Deck™, Microsoft Flight Sim, RealFlight®). Such applications provide a good example for demonstrating a key advantage of immersive training, which is the breadth of performance data that can be collected to evaluate training effectiveness. For example, behavioral and neurophysiological measures can be assessed during VE training and used to assess a learner's perceptual and cognitive processes (Carroll et al., 2010b, 2010c). These data can include such things as measurement and synching of eye tracking and electroencephalography (EEG) data with behavioral metrics (e.g., the actions taken in the VE) to capture unobservable performance characteristics, such as learner cognitive state (i.e., workload, engagement, distraction) and perceptual performance (i.e., scan data) during VE training. These data can, in turn, be used to identify the root cause of performance breakdowns and present instructors/learners with performance summaries, such as scan data heat maps and cognitive replays illustrating how perceptual and cognitive processes contributed to performance breakdowns. The analysis can go even further, spatially correlating eye tracking data with VE scenario-specific objects (e.g., specific flight gauges) and then diagnosing such things as the appropriateness of attention allocation (e.g., is the pilot scanning relevant instruments at the correct time?), the root cause of performance errors (e.g., is the error due to inadequate scanning, lack of detection of critical events, or inappropriate actions?), and issues with cognitive state (e.g., is the pilot disengaged, overloaded, or distracted?). Thus, interactive VE training solutions not only allow trainees to consolidate their knowledge and practice their skills but also support adaptive training based on individual learner performance. The granular level of performance data available to VE training applications can also support "precision" training of various aspects of performance, such as perceptual skills used in search and detection tasks (e.g., security baggage screening, imagery analysis, threat detection, medical diagnostics; Carroll et al., 2010b, 2010c; Hale et al., 2007). In such applications, trainees can be shown not only the "right" way to search a scene within a VE but also how their specific search techniques differed from an expert's, and be shown the types of strategies that would be most helpful in improving an individual trainee's search strategies.
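A minimal sketch of the kind of gaze-to-object analysis described above follows: raw eye-tracking samples are mapped onto areas of interest (e.g., flight gauges) and summarized as dwell proportions, a first step toward the scan diagnostics discussed. The AOI bounds, sample data, and names are all invented for illustration.

```python
# A minimal sketch of mapping gaze samples to scenario objects and
# summarizing dwell per area of interest (AOI). Coordinates are normalized
# screen positions; AOI rectangles and gaze data are hypothetical.
AOIS = {                                # name -> (x_min, y_min, x_max, y_max)
    "airspeed": (0.05, 0.60, 0.20, 0.75),
    "altimeter": (0.25, 0.60, 0.40, 0.75),
    "horizon": (0.45, 0.55, 0.65, 0.80),
}

def dwell_proportions(samples: list) -> dict:
    """samples: (x, y) gaze points at a fixed rate; returns share per AOI."""
    counts = {name: 0 for name in AOIS}
    for x, y in samples:
        for name, (x0, y0, x1, y1) in AOIS.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
                break
    total = len(samples) or 1
    return {name: c / total for name, c in counts.items()}

gaze = [(0.1, 0.65), (0.12, 0.7), (0.3, 0.62), (0.5, 0.6), (0.9, 0.1)]
print(dwell_proportions(gaze))  # off-AOI samples simply dilute the shares
```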

One might ask if the benefits of VE selection and training are worthwhile given the level of effort necessary to develop such immersive applications. In the U.S. Air Force, the cost of a single student pilot failing to complete basic flight school can run to $100,000 (Siem et al., 1988). Student failure can be attributed to both inadequate selection techniques and deficient training techniques. Clearly, both selection and training play critical roles in producing effective pilots. The challenge is to develop a training program that ensures a smooth union between the two, a solution that identifies the best candidates and then provides the optimum training. VE training solutions can be used to address both of these aspects of training by using immersive cognitive assessment tools to ensure the trainee is indeed ready for the complexities presented by immersive training and then providing detailed performance diagnostics that can realize "precision" training solutions that are uniquely tailored to a given trainee's performance deficiencies.

While VE applications may prove effective in selecting training candidates and providing them with training that is tailored to their given capabilities, the bottom line for assessing the value of VE selection and training likely lies in how transferable the skills from such training are to their target domain. A constant thread in training research is the notion that, in order for training to be effective, the basic skills being taught must show some degree of transfer to real-world performance. Over 100 years ago, Thorndike and Woodworth (1901) laid down the most basic training transfer principle when they proposed that transfer was determined by the degree of similarity between any two tasks. Applying this heuristic to VE design, one might conclude that the most basic way to ensure perfect transfer is to ensure that the real-world performance elements that are meant to be trained are replicated perfectly in the virtual environment. This notion of "identical elements" could easily create a serious challenge for system designers even by today's technology standards, as VEs are still not able to perfectly duplicate the wide range of sensorial stimuli encountered during daily interactions with our world (Stoffregen et al., 2003). Countering this somewhat simplistic design approach is Osgood's (1949) principle that greater similarity between any two tasks along certain dimensions will not guarantee wholesale, perfect transfer. The challenge, as noted by Roscoe (1982), is to find the right balance between technical fidelity and training effectiveness. These issues are explored in detail by Stanney and Cohn (2012).

6.2 VE as an Entertainment and Education Tool

Virtual environments have reached beyond their original applications, primarily as military training tools, and have extended into a wide variety of entertainment applications. From interactive arcades to cybercafes, the entertainment industry has leveraged the unique characteristics of the VE medium, providing dynamic and exciting experiences in a multitude of forms. Virtual environment entertainment applications have found their way into games, sports, movies, art, online communities, location-based entertainment, theme parks, and other venues (Badique et al., 2002; Nakatsu et al., 2005). By exploiting the unique interactive characteristics of VEs compared to more traditional entertainment media (e.g., film, play), VE technology provides a more immersive medium for entertainment through the use of simple artificial virtual characters (i.e., avatars), engaging narrative, and dynamic control to create an immersive interactive experience.

There are many forms of virtual entertainment, including:

• Video games. Immersive video games have become omnipresent (Chatfield, 2010). Generally, these games require their users to formulate hypotheses, learn game rules via trial and error, multitask, interactively develop strategies, and dynamically solve problems. These are skills that have life relevance, and thus the use of video games for edutainment has been widely considered. In fact, serious games, which are video games aimed at learning and other productive endeavors, are being taken much more seriously in recent years (Gunter, Kenny, and Vick, 2008). So, while some lambaste the amount of time young people are engaged in video games and suggest they are living in a media-saturated world (Rideout, Foehr, and Roberts, 2010), others are focused on leveraging this intense interest in productive ways, with one of the primary focuses being the use of interactive games in education (Aldrich, 2009; Squire, 2005). Some even suggest that such games can be used to get over the prepubescent literacy slump that leads to educational failures (Gee, 2008).

• Computer role-playing games. Computer role-playing video games (CRPGs; e.g., Dungeons & Dragons, Ultima Underworld, Might and Magic, The Elder Scrolls, Diablo) involve players in controlling one or more characters (i.e., a "party") as they seek to fulfill a series of quests (Barton, 2007). They involve fantasy, storytelling, and narrative progression, as well as evolving player character development (e.g., health, dexterity, strength). In 3D CRPGs, players typically navigate the game world from a first- or third-person perspective. As with video games, CRPGs have been adapted to education purposes such as teaching literacy skills (Adams, 2009) and helping students craft interactive short stories (Carbonaro et al., 2008).

• Massively multiplayer online role-playing games (MMORPGs). MMORPGs (e.g., EverQuest, Meridian 59, Ultima Online, Final Fantasy, World of Warcraft) are a genre of CRPGs that involve a very large number of players interacting with one another within a virtual game world (Bartle, 2003). They are similar to CRPGs in their makeup but are differentiated by the volume of players involved and the persistence of the virtual world, which evolves continuously, even when players are offline. The psychology of these games has become a topic of interest for academic researchers, with players being classified into various psychological groups (i.e., achievers, explorers, socializers, killers; Bartle, 2003) and categories of motivation (i.e., immersion, cooperation, achievement, competition; Radoff, 2011). MMORPGs are starting to be used for educational applications, such as teaching science and English (Eustace et al., 2004), as well as supporting cooperative learning activities and exploring research questions (Childress and Braswell, 2006).

• Massively multiplayer virtual worlds (MMVWs). Massively multiplayer virtual worlds (e.g., Second Life, Active World, Twinity, Smeet) have been developed that lack the inbuilt narrative, goals/objectives, and rule-based structure of games and instead provide the opportunity to explore and engage with other residents and avatars of the world through socializing, participating in activities, exploring other lands, and creating and trading virtual property and services with one another (Guest, 2008). Such virtual worlds have been used as a platform for educational purposes, scientific research, and the arts, as well as for launching personal relationships that can even lead to marriage (Dickey, 2005; Hayes, 2006).

• Alternate reality games (ARGs). ARGs (e.g., Dreadnot, The Art of the Heist, I Love Bees, The Beast) are virtual games that involve intense player involvement with a real-time story that evolves according to the types of actions participants take in the virtual world (Kim, Allen, and Lee, 2008). Players, who can form into guilds (associations of players), interact directly with characters, which are controlled by game designers (as opposed to AI), to solve plot-based challenges and puzzles. The ARG community can reach beyond the virtual through such means as websites, email messages, faxes, and voicemail messages, with players working together to analyze the story and coordinate real-life as well as online activities. ARGs have extended into interactive television (e.g., The Fallen Alternate Reality, ReGenesis Extended Reality). An intriguing extension is to serious ARGs that focus on real-world problem solving, such as World Without Oil, which focused on solving the issue of a global oil shortage (Egner, 2009), and Foldit, an ARG that reframes nettlesome scientific challenges as a competitive multiplayer computer game, the latter of which amazingly led to a breakthrough in HIV research (Khatib et al., 2011).

The crossover from pure entertainment to edutainment and real-world problem solving suggests that virtual interactive games have the potential to harness the ingenuity of game players into a formidable force that can be directed toward educational purposes, as well as solving a wide range of scientific problems. The future of interactive games is thus most intriguing . . . one cannot help but wonder where this creative energy will be directed in the future.

6.3 VE as a Medical Tool

What makes virtual reality application development in the assessment, therapy, and rehabilitation sciences so distinctively important is that it represents more than a simple linear extension of existing computer technology for human use. Virtual reality offers the potential to create systematic human testing, training, and treatment environments that allow for the precise control of complex, immersive, dynamic 3D stimulus presentations, within which sophisticated interaction, behavioral tracking, and performance recording are possible (A. Rizzo et al., 2006, p. 36).

Much has been written about applications for VE within the medical arena (cf. Moline, 1995; Satava and Jones, 2002). While some of these applications represent unique approaches to harnessing the power of VE, many other applications, such as simulating actual medical procedures, reflect training applications and therefore will not be discussed anew here. One area of medical application for which VE is truly coming into its own is medical rehabilitation. In particular, two areas of rehabilitation, behavioral/cognitive and motor, show strong promise.

6.3.1 Behavioral/Cognitive Rehabilitation Applications

In terms of behavioral rehabilitation applications, VE applications have been gaining prominence in behavioral science research over the past several years. For example, VE cue reactivity programs have been successfully tested for feasibility in nicotine- (Bordnick et al., 2005), opiate- (Kuntze et al., 2001), and alcohol-dependent (Bordnick et al., 2008) individuals, as well as those with eating disorders (Gutierrez-Maldonado et al., 2006). VE applications have also shown promise in modifying exercise behavior (Fox and Bailenson, 2009) and retirement savings behavior (Ersner-Hershfield et al., 2008) and managing pain (Dahlquist et al., 2007; Gold et al., 2007; Hoffman et al., 2008). Perhaps the fastest growing application for VEs in behavioral rehabilitation is in the area of exposure therapy (Fox et al., 2009a; Gregg and Tarrier, 2007; Parsons and Rizzo, 2008; Powers and Emmelkamp, 2008). For example, VE applications have been used to treat acrophobia (fear of heights; Coelho et al., 2006), agoraphobia (fear of open spaces; Botella et al., 2007), arachnophobia (fear of spiders; Cote and Bouchard, 2005), aviophobia (fear of flying; Rothbaum et al., 2000), combat-related posttraumatic stress disorder (Reger and Gahm, 2008), panic disorder (Botella et al., 2007), public speaking anxiety (Harris et al., 2002), and social phobia (Roy et al., 2003). The reason for this broad use of VE technology for exposure therapy is likely due to the ideal matching between VE's strengths (presenting evolving information with which users can interact in various ways) and such therapy's basic requirements (incremental exposure to the offending environment). Importantly, compared to previous treatment regimens, which oftentimes simply required patients to mentally revisit their fears, VEs offer a significantly more immersive experience. In fact, it is quite likely that many of VE's shortcomings, such as poor visual resolution, inadequate physics modeling underlying environmental cues, and failure to fully capture the wide range of sensorial cues present in the real world, will be ignored by the patient, whose primary focus is on overcoming anxiety engendered by her or his specific phobias. On a practical level, VEs enable patients to virtually visit their therapist's office, where they can be provided an individually tailored multimodal treatment experience (Rothbaum et al., 1996; Emmelkamp et al., 2001; Anderson et al., 2003).

Beyond behavioral rehabilitation, VE applications are being developed for the study, assessment, and rehabilitation of various types of cognitive processes, such as perception, attention, and memory. For example, VE applications are being used as perceptual skills trainers, such as for elderly drivers who have degraded visual scanning behavior (Romoser and Fisher, 2009) and for rehabilitating stroke victims who suffer from unilateral spatial neglect, where an individual fails to perceive stimuli presented to the contralesional hemivisual field even though they are not "blind" to this area (Katz et al., 2005). In terms of attention, attention-deficit hyperactivity disorder (ADHD) is an example of a cognitive dysfunction that has been addressed via VE rehabilitation applications (Parsons et al., 2007). Brooks and Rose (2003) suggest that VE rehabilitation applications can be used both in terms of assessment of memory impairments and memory remediation (e.g., use of reorganization techniques), where it has been found to promote procedural learning of those with memory impairments; importantly, this learning has been found to transfer to improved real-world performance. Examples of memory remediation in VEs include its use to enhance the ability of stroke victims to remember to perform actions in the future (Brooks et al., 2004), as well as its use in enhancing the performance of an individual with age-related impairment in memory-related cognitive processes (Optale et al., 2001). VEs have also been shown to uncover subtle cognitive impairments that might otherwise go undetected (Tippett et al., 2009). In general, VE applications can provide precisely controlled means of assessing cognitive impairments that are not available using more traditional evaluation methods. Specifically, VEs can deliver an assessment environment where controlled stimuli can be presented at varying degrees of perception/attention/memory challenge and the level of deficit can be assessed. This level of experimental control allows for the development of both cognitive impairment assessment and rehabilitation applications that have a high level of specificity and ecological validity.

6.3.2 Motor Rehabilitation Applications

Many of VE's qualities that make it an ideal tool for providing medical training—such as tactile feedback and detailed visual information (Satava and Jones, 2002)—also make it an ideal candidate for supplementing motor rehabilitation treatment regimens for such conditions as stroke (Deutsch and Mirelman, 2007; Yeh et al., 2007), cerebral palsy (Bryanton et al., 2006), and amblyopia (i.e., lazy eye; Eastgate et al., 2006). Specifically, Fox et al. (2009a) suggest that VEs have three features that make them uniquely suited to facilitating motor rehabilitation: the ability to review one's physical behavior and interactively examine one's progress, see one's own avatar from a third-person perspective in real time, and safely re-create real environments that cannot otherwise be experienced (e.g., crossing a busy intersection). In determining how best to apply VE in physical rehabilitation treatment regimens, Holden (2005) suggested considering three practical areas in which VE is strongest: repetition, feedback, and motivation. All three elements are critical to both effective learning and regaining motor function. The application of VE, in each case, provides a powerful method for rehabilitation specialists to maximize the effect of a treatment regimen for a given session and, because it may reduce the time investment required by therapists (one can simply immerse the patient, initiate the treatment, and then allow the program to execute), to also expand the access of such treatments to a wider population.

Since VE is essentially computer based, patients can effectively have their attention drawn to a specific set of movement patterns they may need to make to regain function; conducting this in a "loop" provides unlimited ability to repeat a pattern while using additional visualization aids, such as a rendered cursor or "follow-me" types of cues, to force the patient into moving a particular way (cf. Chua et al., 2003). As well, it is a relatively simple matter to digitize movement information, store it, and then, based on comparisons to previously stored, desired movement patterns, provide additional feedback to assist the patient. In terms of motivation, treatment scenarios can be tailored to capture specific events that individual patients find most motivating: a baseball fan can practice her movement in a baseball-like scenario; a car enthusiast can do so in a driving environment.
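The store-and-compare loop just described can be illustrated with a short sketch: digitized movement samples are compared point by point against a stored reference pattern, and the deviation drives simple feedback. The trajectories, threshold, and cue wording below are invented, and the sketch assumes time-aligned recordings of equal length.

```python
# A minimal sketch of comparing a digitized movement to a stored reference
# ("desired") pattern and turning the deviation into simple feedback.
# Assumes time-aligned, equal-length 3D trajectories; all values invented.
from math import sqrt

def rms_deviation(actual: list, reference: list) -> float:
    """Root-mean-square distance between matched 3D trajectory samples."""
    sq = [sum((a - r) ** 2 for a, r in zip(pa, pr))
          for pa, pr in zip(actual, reference)]
    return sqrt(sum(sq) / len(sq))

reference = [(0.0, 0.0, 0.0), (0.1, 0.2, 0.0), (0.2, 0.4, 0.0)]
attempt = [(0.0, 0.0, 0.0), (0.15, 0.18, 0.0), (0.3, 0.35, 0.0)]

error = rms_deviation(attempt, reference)   # ~0.07 in scene units
# A hypothetical tolerance decides whether to replay the "follow-me" cue.
print("repeat with follow-me cue" if error > 0.05 else "pattern within tolerance")
```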

There are certain caveats that must be considered when exploiting VE for rehabilitation purposes, most significantly the potentially rapid loss of motor adaptations following VE exposure. Lackner and DiZio (1994) demonstrated that certain basic patterns of sensorimotor recalibrations learned in a given physical environment can diminish within an hour postexposure, although subsequent findings (DiZio and Lackner, 1995) suggest that there are certain transfer benefits that are longer lasting. Brashers-Krug et al. (1996) provided additional evidence that sensorimotor recalibrations of the type likely to be required for rehabilitation have postexposure periods in excess of 4 h during which their effects can be extinguished. Most importantly, Cohn et al. (2000) demonstrated that such recalibrations, when learned in VE, have essentially no transfer to real-world conditions postexposure. Clearly, more research is needed to understand the conditions under which such transfer effects can be made most effective within the clinical setting.

This is just a small glimpse of the potential applications for VE technology, the limit of which is bound only by our imagination. Recently, they have even been suggested as having potential value as a simulation of experience to enhance courtroom practice (Leonetti and Bailenson, 2010). It will be interesting to see the vast variety of future VE applications that evolve.

7 CONCLUSIONS

Virtual environments have made substantial advances over the past decade, both in terms of the hardware and software used to generate them, as well as the breadth of their application. Yet, despite significant revolutions in component technology, many of the challenges addressed by incipient systems, such as multimodal sensori-interaction, visual representation, and scenario generation, have yet to be fully resolved. At the same time, our understanding of the potential such tools have to offer has advanced considerably. No longer simple amusements, these powerful machines can provide educational value, assist in treating physical and cognitive maladies, and even help design better VE systems. As the uses for which VEs are ideally suited continue to be defined and refined, one can anticipate that current development challenges will be resolved, allowing for a greater reach and more beneficial impact from applications of VE technology.

ACKNOWLEDGMENTS

The authors would like to thank Hesham Fouad for his contribution to the audio technology review in Section 2.1. This material is based upon work supported in part by the National Science Foundation (NSF) under Grant No. IRI-9624968, the Office of Naval Research (ONR) under Grant No. N00014-98-1-0642, the Naval Air Warfare Center Training Systems Division (NAWC TSD) under Contract No. N61339-99-C-0098, and the National Aeronautics and Space Administration (NASA) under Grant No. NAS9-19453. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views or the endorsement of the NSF, ONR, NAWC TSD, or NASA.

REFERENCES

Acosta, E., and Liu, A. (2007), "Real-Time Volumetric Haptic and Visual Burrhole Simulation," in Proceedings of IEEE Virtual Reality Conference, VR 2007, Charlotte, NC, March 10–14, IEEE Press, Los Alamitos, CA, pp. 247–250.

Adams, M. G. (2009), "Engaging 21st-Century Adolescents: Video Games in the Reading Classroom," English Journal, Vol. 98, No. 6, pp. 56–59.

Akamatsu, M. (1994, July), "Touch with a Mouse: A Mouse Type Interface Device with Tactile and Force Display," in Proceedings of 3rd IEEE International Workshop on Robot and Human Communication, Nagoya, Japan, July 18–20, pp. 140–144.

Aldrich, C. (2009), "Virtual Worlds, Simulations, and Games for Education: A Unifying View," Innovate: Journal of Online Education, Vol. 5, No. 5, available: http://www.innovateonline.info/index.php?view=article&id=727, accessed October 28, 2011.

Algazi, V. R., Duda, R. O., Thompson, D. M., and Avendano, C. (2001, October), "The CIPIC HRTF Database," in Proceedings of the 2001 IEEE Workshop on Applications of Signal Processing to Audio and Electro-Acoustics, New Paltz, NY, pp. 99–102.

Allbeck, J. M., and Badler, N. I. (2002), "Embodied Autonomous Agents," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 313–332.

Anderson, J. R., Greeno, J. G., Reder, L. M., and Simon, H. A. (2000), "Perspectives on Learning, Thinking, and Activity," Educational Researcher, Vol. 29, No. 4, pp. 11–13.

Anderson, P., Rothbaum, B. O., and Hodges, L. F. (2003), "Virtual Reality Exposure in the Treatment of Social Anxiety," Cognitive and Behavioral Practice, Vol. 10, No. 3, pp. 240–247.

Andrews, D. H., Carroll, L. A., and Bell, H. H. (1995), "The Future of Selective Fidelity in Training Devices," Educational Technology, Vol. 35, No. 6, pp. 32–36.

Axelsson, P., and Johnston, M. R. (2010), "Results from the Empirical CRE Context Relevant Testing of the System for Tactile Reception of Advanced Patterns (STRAP)," in Proceedings of 54th Human Factors and Ergonomics Society Annual Meeting, San Francisco, CA, September 27–October 1, 2010.

Badique, E., Cavazza, M., Klinker, G., Mair, G., Sweeney, T., Thalmann, D., and Thalmann, N. M. (2002), "Entertainment Applications of Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 1143–1166.

Ballas, J. A., Brock, D., Stroup, J., and Fouad, H. (2001), "The Effect of Auditory Rendering on Perceived Movement: Loudspeaker Density and HRTF," in Proceedings of the 2001 International Conference on Auditory Display, Espoo, Finland, July 29–August 1, pp. 235–238.

Barba, C., Deaton, J. E., Santarelli, T., Knerr, B., Singer, M., and Belanich, J. (2006), "Virtual Environment Composable Training for Operational Readiness (VECTOR)," in Proceedings of the 25th Army Science Conference, "Transformational Army Science and Technology—Charting the Future of S&T for the Soldier," Orlando, FL, November 27–30.

Bartle, R. (2003), Designing Virtual Worlds, New Riders, Berkeley, CA.

Barton, M. (2007), "The History of Computer Role-Playing Games Part III: The Platinum and Modern Ages (1994–2004)," Gamasutra, available: http://www.gamasutra.com/view/feature/1571/the_history_of_computer_.php, accessed October 31, 2011.

Basdogan, C., and Loftin, B. (2008), "Multimodal Display Systems: Haptic, Olfactory, Gustatory, and Vestibular," in The Handbook of Virtual Environment Training: Understanding, Predicting and Implementing Effective Training Solutions for Accelerated and Experiential Learning, Vol. 2, D. Schmorrow, J. Cohn, and D. Nicholson, Eds., Praeger Security International, Westport, CT, pp. 116–135.

Basdogan, C., Laycock, S. D., Day, A. M., Patoglu, V., and Gillespie, R. B. (2008), "3-DoF Haptic Rendering," in Haptic Rendering, M. C. Lin and M. Otaduy, Eds., A. K. Peters, Wellesley, MA, pp. 311–331.

Begault, D. (1994), 3-D Sound for Virtual Reality and Multimedia, Academic, Boston.


Biggs, S. J., and Srinivasan, M. A. (2002), "Haptic Interfaces," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 93–116.

Bordnick, P. S., Traylor, A. C., Graap, K. M., Copp, H. L., and Brooks, J. (2005), "Virtual Reality Cue Reactivity Assessment: A Case Study in a Teen Smoker," Applied Psychophysiology and Biofeedback, Vol. 30, No. 3, pp. 187–193.

Bordnick, P. S., Traylor, A., Copp, H. L., Graap, K. M., Carter, B., Ferrer, M., and Walton, A. P. (2008), "Assessing Reactivity to Virtual Reality Alcohol Based Cues," Addictive Behaviors, Vol. 33, pp. 743–756.

Boreham, N. C. (1985), "Transfer of Training in the Generation of Diagnostic Hypotheses: The Effect of Lowering Fidelity of Simulation," British Journal of Educational Psychology, Vol. 55, pp. 213–223.

Botella, C., et al. (2007), "Virtual Reality Exposure in the Treatment of Panic Disorder and Agoraphobia: A Controlled Study," Clinical Psychology and Psychotherapy, Vol. 14, pp. 164–175.

Bowman, D., Gabbard, J., and Hix, D. (2002), "A Survey of Usability Evaluation in Virtual Environments: Classification and Comparison of Methods," Presence: Teleoperators and Virtual Environments, Vol. 11, No. 4, pp. 404–424.

Brashers-Krug, T., Shadmehr, R., and Bizzi, E. (1996), "Consolidation in Human Motor Memory," Nature, Vol. 382, No. 6588, pp. 252–255.

Brooks, B. M., and Rose, F. D. (2003), "The Use of Virtual Reality in Memory Rehabilitation: Current Findings and Future Directions," NeuroRehabilitation, Vol. 18, No. 2, pp. 147–157.

Brooks, B. M., Rose, F. D., Potter, E. A., Jayawardena, S., and Morling, A. (2004), "Assessing Stroke Patients' Prospective Memory Using Virtual Reality," Brain Injury, Vol. 18, No. 4, pp. 391–401.

Bruder, G., Steinicke, F., and Hinrichs, K. H. (2009), "Arch-Explore: A Natural User Interface for Immersive Architectural Walkthroughs," in Proceedings of IEEE Symposium on 3D User Interfaces (3DUI), March 14–15, Lafayette, LA, K. Kiyokawa, Ed., IEEE Press, Los Alamitos, CA, pp. 75–82.

Bryanton, C., Bosse, J., Brien, M., McLean, J., McCormick, A., and Sveistrup, H. (2006), "Feasibility, Motivation, and Selective Motor Control: Virtual Reality Compared to Conventional Home Exercise in Children with Cerebral Palsy," CyberPsychology and Behavior, Vol. 9, pp. 123–128.

Bullinger, H.-J., Breining, R., and Braun, M. (2001), "Virtual Reality for Industrial Engineering: Applications for Immersive Virtual Environments," in Handbook of Industrial Engineering: Technology and Operations Management, 3rd ed., G. Salvendy, Ed., Wiley, New York, pp. 2496–2520.

Bungert, C. (2007), "HMD/Headset/VR-Helmet Comparison Chart," available: http://www.stereo3d.com/hmd.htm, accessed May 22, 2010.

Burdea, G. C., and Coiffet, P. (2003), Virtual Reality Technology, 2nd ed., John Wiley & Sons, Hoboken, NJ.

Burigat, S., and Chittaro, L. (2007), "Navigation in 3D Virtual Environments: Effects of User Experience and Location-Pointing Navigation Aids," International Journal of Human-Computer Studies, Vol. 65, No. 11, pp. 945–958.

Butler, R. A. (1987), "An Analysis of the Monaural Displacement of Sound in Space," Perception and Psychophysics, Vol. 41, pp. 1–7.

Calvert, S. L. (2002), "The Social Impact of Virtual Environment Technology," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 663–680.

Calvert, S. L., and Tan, S. L. (1994), "Impact of Virtual Reality on Young Adults' Physiological Arousal and Aggressive Thoughts: Interaction Versus Observation," Journal of Applied Developmental Psychology, Vol. 15, pp. 125–139.

Carbonaro, M., et al. (2008), "Interactive Story Authoring: A Viable Form of Creative Expression for the Classroom," Computers and Education, Vol. 51, No. 2, pp. 687–707.

Carpenter, A., Johnston, M., Kokini, C., and Hale, K. (2010), "CogGauge: A Game-Based Cognitive Assessment Tool," in Proceedings of the International Conference on Human-Computer Interaction in Aerospace (HCI-Aero 2010), Cape Canaveral, FL, November 3–5, ACM Press, New York.

Carretta, T. R., and Ree, M. J. (1993), "Basic Attributes Test (BAT): Psychometric Equating of a Computer-Based Test," International Journal of Aviation Psychology, Vol. 3, pp. 189–201.

Carretta, T. R., and Ree, M. J. (1995), "Air Force Officer Qualifying Test Validity for Predicting Pilot Training Performance," Journal of Business and Psychology, Vol. 9, pp. 379–388.

Carroll, M., Fuchs, S., Carpenter, A., Hale, K., Abbott, R. G., and Bolton, A. (2010a), "Development of an Autodiagnostic Adaptive Precision Trainer for Decision Making (ADAPT-DM)," International Test and Evaluation Journal, Vol. 31, No. 2, pp. 247–263.

Carroll, M., Fuchs, S., Hale, K., Dargue, B., and Buck, B. (2010b), "Advanced Training Evaluation System: Leveraging Neuro-Physiological Measurement to Individualize Training," in Proceedings of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) Annual Meeting, Orlando, FL.

Carroll, M. B., Kokini, C., and Moss, J. (2010c), Multi-Axis Performance Interpretation Tool (MAPIT) Field Training Effectiveness Evaluation, Program Interim Report, Contract No. W911QY-07-C-0084, Office of Naval Research, Arlington, VA.

Carson, D. (2000a), "Environmental Storytelling, Part 1: Creating Immersive 3D Worlds Using Lessons Learned from the Theme Park Industry," available: http://www.gamasutra.com/features/20000301/carson_pfv.htm, accessed May 24, 2010.

Carson, D. (2000b), "Environmental Storytelling, Part 2: Bringing Theme Park Environment Design Techniques to the Virtual World," available: http://www.gamasutra.com/features/20000405/carson_pfv.htm, accessed May 24, 2010.

Centers for Disease Control and Prevention (CDC) (2010, January 11), "CDC Virtual World Requirements and Best Practices," available: http://www.cdc.gov/SocialMedia/Tools/guidelines/pdf/virtualworld.pdf, accessed June 1, 2010.

Chatfield, T. (2010), Fun Inc.: Why Games Are the 21st Century's Most Serious Business, Virgin Books.

Chen, J. L., and Stanney, K. M. (1999), "A Theoretical Model of Wayfinding in Virtual Environments: Proposed Strategies for Navigational Aiding," Presence: Teleoperators and Virtual Environments, Vol. 8, No. 6, pp. 671–685.

Chertoff, D. B., Schatz, S. L., McDaniel, R., and Bowers, C. A. (2008), "Improving Presence Theory through Experiential Design," Presence: Teleoperators and Virtual Environments, Vol. 17, No. 4, pp. 405–413.

Childress, M. F., and Braswell, R. (2006), "Using Massively Multiplayer Online Role-Playing Games for Online Learning," Distance Education, Vol. 27, No. 2, pp. 187–196.

Chua, P. T., Crivella, R., Daly, B., Hu, N., Schaaf, R., Ventura, D., et al. (2003), "Training for Physical Tasks in Virtual Environments: Tai Chi," in Proceedings of IEEE Virtual Reality Conference 2003, Los Angeles, CA, March 22–26, IEEE Press, Los Alamitos, CA, pp. 87–97.

Coelho, C. M., Santos, J. A., Silverio, J., and Silva, C. F. (2006), "Virtual Reality and Acrophobia: One-Year Follow-Up and Case Study," CyberPsychology and Behavior, Vol. 9, pp. 336–341.

Cohen, M. (1992), "Integrating Graphic and Audio Windows," Presence: Teleoperators and Virtual Environments, Vol. 1, No. 4, pp. 468–481.

Cohn, J., DiZio, P., and Lackner, J. R. (2000), "Reaching During Virtual Rotation: Context-Specific Compensation for Expected Coriolis Forces," Journal of Neurophysiology, Vol. 83, No. 6, pp. 3230–3240.

Cohn, J. V., Stanney, K. M., Milham, L. M., Jones, D. L., Hale, K. S., Darken, R. P., and Sullivan, J. A. (2007), "Training Evaluation of Virtual Environments," in Assessment of Problem Solving Using Simulations, E. L. Baker, J. Dickieson, W. Wulfeck, and H. O'Neil, Eds., Lawrence Erlbaum, Mahwah, NJ, pp. 81–105.

Cote, S., and Bouchard, S. (2005), "Documenting the Efficacy of Virtual Reality Exposure with Psychophysiological and Information Processing Measures," Applied Psychophysiology and Biofeedback, Vol. 30, pp. 217–232.

Cruz-Neira, C., Sandin, D. J., and DeFanti, T. A. (1993), "Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE," ACM Computer Graphics, Vol. 27, No. 2, pp. 135–142.

Dahlquist, L. M., Weiss, K. E., Law, E. F., Soumitri, S., Herbert, L. J., Horn, S. B., Wohlheiter, K., and Ackerman, C. S. (2010), "Effects of Videogame Distraction and a Virtual Reality Type Head-Mounted Display Helmet on Cold Pressor Pain in Young Elementary School-Aged Children," Journal of Pediatric Psychology, Vol. 35, No. 6, pp. 617–625.

Darken, R. P., and Peterson, B. (2002), "Spatial Orientation, Wayfinding, and Representation," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 493–518.

Deaton, J. E., et al. (2005), "Virtual Environment Cultural Training for Operational Readiness (VECTOR)," Virtual Reality, Vol. 8, pp. 156–167.

Deutsch, J. E., and Mirelman, A. (2007), "Virtual Reality-Based Approaches to Enable Walking for People Poststroke," Topics in Stroke Rehabilitation, Vol. 14, pp. 45–53.

Dickey, M. D. (2005), "Three-Dimensional Virtual Worlds and Distance Learning: Two Case Studies of Active Worlds as a Medium for Distance Education," British Journal of Educational Technology, Vol. 36, No. 3, pp. 439–451.

DiZio, P., and Lackner, J. R. (1995), "Motor Adaptation to Coriolis Force Perturbations of Reaching Movements: Endpoint but Not Trajectory Adaptation Transfers to the Non-Exposed Arm," Journal of Neurophysiology, Vol. 74, No. 4, pp. 1787–1792.

Draper, M. H., Prothero, J. D., and Viirre, E. S. (1997), "Physiological Adaptations to Virtual Interfaces: Results of Initial Explorations," in Proceedings of the Human Factors and Ergonomics Society 41st Annual Meeting, Human Factors and Ergonomics Society, Santa Monica, CA, p. 1393.

Driskell, J. E., and Johnston, J. H. (1998), "Stress Exposure Training," in Making Decisions under Stress: Implications for Individual and Team Training, J. A. Cannon-Bowers and E. Salas, Eds., APA Press, Washington, DC, pp. 191–217.

Durlach, N. I., and Mavor, A. S. (1995), Virtual Reality: Scientific and Technological Challenges, National Academy Press, Washington, DC.

Eastgate, R. M., et al. (2006), "Modified Virtual Reality for Treatment of Amblyopia," Eye, Vol. 20, pp. 370–374.

Egner, J. (2009, May 29), "One Story with 1700 Authors," Current Magazine, Current.org, available: http://current.org/tech/tech0709itvsgame.shtml, accessed October 28, 2011.

Emmelkamp, P. M. G., Bruynzeel, M., Drost, L., and van der Mast, C. A. P. G. (2001), "Virtual Reality Treatment in Acrophobia: A Comparison with Exposure in Vivo," CyberPsychology and Behavior, Vol. 4, No. 3, pp. 335–339.

Erol, A., Bebis, G., Nicolescu, M., Boyle, D., and Twombly, X. (2007), "Vision-Based Hand Pose Estimation: A Review," Computer Vision and Image Understanding, Vol. 108, pp. 52–73.

Ersner-Hershfield, H., Bailenson, J., and Carstensen, L. L. (2008), "Feeling More Connected to Your Future Self: Using Immersive Virtual Reality to Increase Retirement Saving," poster presented at the Association for Psychological Science Annual Convention, Chicago, IL.

Eustace, K., Lee, M., Fellows, G., Bytheway, A., and Irving, L. (2004), "The Application of Massively Multiplayer Online Role Playing Games to Collaborative Learning and Teaching Practice in Schools," in Beyond the Comfort Zone: Proceedings of the 21st ASCILITE Conference, R. Atkinson, C. McBeath, D. Jonas-Dwyer, and R. Phillips, Eds., Perth, December 5–8, p. 263, available: http://www.ascilite.org.au/conferences/perth04/procs/eustace-poster.html, accessed October 21, 2011.

Favalora, G. E. (2005), "Volumetric 3D Displays and Application Infrastructure," Computer, Vol. 38, No. 8, pp. 37–44.

Feltovich, P. J., Eccles, D. W., and Hoffman, R. R. (2004), "Implications of the Reductive Tendency for the Redesign of Complex Sociotechnical Systems," Report, Institute for Human and Machine Cognition, to the Advanced Decision Architectures Collaborative Technology Alliance, sponsored by the U.S. Army Research Laboratory under cooperative agreement DAAD19-01-2-0009.

Forrest, F. C., Taylor, M. C., Postlethwaite, K. C., and Aspinall, R. (2002), "Use of a High Fidelity Simulator to Develop Testing of the Technical Performance of Novice Anaesthetists," British Journal of Anaesthesia, Vol. 88, No. 3, pp. 338–344.

Fouad, H. (2004), "Ambient Synthesis with Random Sound Fields," in Audio Anecdotes: Tools, Tips, and Techniques for Digital Audio, K. Greenebaum, Ed., A. K. Peters, Natick, MA.

Fouad, H., and Ballas, J. (2000, April), "An Extensible Toolkit for Creating Virtual Sonic Environments," paper presented at the International Conference on Auditory Displays (ICAD 2000), Atlanta, GA.

Fox, J., and Bailenson, J. N. (2009), "Virtual Self-Modeling: The Effects of Vicarious Reinforcement and Identification on Exercise Behaviors," Media Psychology, Vol. 12, pp. 1–25.

Fox, J., Arena, D., and Bailenson, J. N. (2009a), "Virtual Reality: A Survival Guide for the Social Scientist," Journal of Media Psychology, Vol. 21, No. 3, pp. 95–113.

Fox, J., Bailenson, J. N., and Binney, J. (2009b), "Virtual Experiences, Physical Behaviors: The Effect of Presence on Imitation of an Eating Avatar," Presence: Teleoperators and Virtual Environments, Vol. 18, No. 4, pp. 294–303.

Foxlin, E. (2002), "Motion Tracking Requirements and Technologies," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 163–210.

Fried, M. P., et al. (2010), "From Virtual Reality to the Operating Room: The Endoscopic Sinus Surgery Simulator Experiment," Otolaryngology—Head and Neck Surgery, Vol. 142, No. 2, pp. 202–207.

Fuchs, S., Johnston, M., Hale, K. S., and Axelsson, P. (2008), "Results from Pilot Testing of a System for Tactile Reception of Advanced Patterns (STRAP)," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, New York, NY, September 22–26.

Gabbard, J. L., Hix, D., and Swan, J. E., II (1999), "User-Centered Design and Evaluation of Virtual Environments," IEEE Computer Graphics and Applications, Vol. 19, No. 6, pp. 51–59.

Gee, J. (2008), Getting Over the Slump: Innovation Strategies to Promote Children's Learning, Joan Ganz Cooney Center, New York, NY, available: http://www.joanganzcooneycenter.org/upload_kits/1_1.pdf, accessed October 21, 2011.

Gold, J. I., Belmont, K. A., and Thomas, D. A. (2007), "The Neurobiology of Virtual Reality Pain Attenuation," CyberPsychology and Behavior, Vol. 10, No. 4, pp. 536–544.

Grantcharov, T. P., Kristiansen, V. B., Bendix, J., Bardram, L., Rosenberg, J., and Funch-Jensen, P. (2004), "Randomized Clinical Trial of Virtual Reality Simulation for Laparoscopic Skills Training," British Journal of Surgery, Vol. 91, No. 2, pp. 146–150.

Gregg, L., and Tarrier, N. (2007), "Virtual Reality in Mental Health: A Review of the Literature," Social Psychiatry and Psychiatric Epidemiology, Vol. 42, pp. 343–354.

Gress, W., and Willkomm, B. (1996), "Simulator-Based Test Systems as a Measure to Improve the Prognostic Value of Aircrew Selection," in Selection and Training Advances in Aviation: Advisory Group for Aerospace Research and Development Conference Proceedings, Vol. 588, Prague, Czech Republic, pp. 15-1–15-4.

Gross, M., Wurmlin, S., Naef, M., Lamboray, E., Spagno, C., Kunz, A., Koller-Meier, E., Svoboda, T., Gool, L. V., Lang, S., Strehlke, K., Moere, A. V., and Staadt, O. (2003), "Blue-C: A Spatially Immersive Display and 3D Video Portal for Telepresence," ACM Transactions on Graphics, Vol. 22, No. 3, pp. 819–827.

Guest, T. (2008), Second Lives: A Journey through Virtual Worlds, Random House, New York.

Gunter, G. A., Kenny, R. F., and Vick, E. H. (2008), "Taking Educational Games Seriously: Using the RETAIN Model to Design Endogenous Fantasy into Standalone Educational Games," Educational Technology Research and Development, Vol. 56, No. 5–6, pp. 511–537.

Gutierrez-Maldonado, J., Ferrer-Garcia, M., Caqueo-Urizar, A., and Letosa-Porta, A. (2006), "Assessment of Emotional Reactivity Produced by Exposure to Virtual Environments in Patients with Eating Disorders," CyberPsychology and Behavior, Vol. 9, No. 5, pp. 507–513.

Gutierrez-Osuna, R. (2004), "Olfactory Interaction," in Berkshire Encyclopedia of Human-Computer Interaction, W. S. Bainbridge, Ed., Berkshire Publishing, Great Barrington, MA, pp. 507–511.

Hale, K. S., Fuchs, S., Axelsson, P., Baskin, A., and Jones, D. (2007), "Determining Gaze Parameters to Guide EEG/ERP Evaluation of Imagery Analysis," in Foundations of Augmented Cognition, 4th ed., D. D. Schmorrow, D. M. Nicholson, J. M. Drexler, and L. M. Reeves, Eds., Strategic Analysis Inc., Arlington, VA, pp. 33–40.

Hale, K. S., and Stanney, K. M. (2004), "Deriving Haptic Design Guidelines from Human Physiological, Psychophysical, and Neurological Foundations," IEEE Computer Graphics and Applications, Vol. 24, No. 2, pp. 33–39.

Hale, K. S., Stanney, K. M., Milham, L. M., Bell-Carroll, M. A., and Jones, D. L. (2009), "Multimodal Sensory Information Requirements for Enhancing Situation Awareness and Training Effectiveness," Theoretical Issues in Ergonomics Science, Vol. 10, No. 3, pp. 245–266.

Harris, S. R., Kemmerling, R. L., and North, M. M. (2002), "Brief Virtual Reality Therapy for Public Speaking Anxiety," CyberPsychology and Behavior, Vol. 5, pp. 543–550.

Hassinger, J. P., Dozois, E. J., Holubar, J. D., Pawlina, W., Pendlimari, R., Fidler, J. L., Holmes, D. R., and Robb, R. A. (2010), "Virtual Pelvic Anatomy Simulator Improved Medical Student Comprehension of Pelvic Anatomy," FASEB Journal, Vol. 24, 825.3.

Hayes, E. R. (2006), "Situated Learning in Virtual Worlds: The Learning Ecology of Second Life," in Proceedings of the 47th Annual Adult Education Research Conference, University of Minnesota, Minneapolis, MN, available: http://www.adulterc.org/Proceedings/2006/Proceedings/Hayes.pdf, accessed October 21, 2011.

Hayward, V. (2008), "A Brief Taxonomy of Tactile Illusions and Demonstrations That Can Be Done in a Hardware Store," Brain Research Bulletin, Vol. 75, No. 6, pp. 742–752.

Hetherington, I. L. (2007), "PocketSUMMIT: Small Footprint Continuous Speech Recognition," in Proceedings of the International Conference on Spoken Language Processing (Interspeech 2007–ICSLP), Antwerp, Belgium, August 27–31, IEEE Press, Los Alamitos, CA, pp. 1465–1468.

Hettinger, L. J. (2002), "Illusory Self-Motion in Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 471–492.

Heylen, D., Van Es, I., Nijholt, A., and Van Dijk, B. (2008), "Chapter 1: Controlling the Gaze of Conversational Agents," available: http://en.scientificcommons.org/43376669, accessed June 2, 2010.

Hill, R. W., Jr., Gratch, J., Marsella, S., Rickel, J., Swartout, W., and Traum, D. (2003), "Virtual Humans in the Mission Rehearsal Exercise System," Künstliche Intelligenz, Vol. 17, pp. 5–12.

Hix, D., and Gabbard, J. L. (2002), "Usability Engineering of Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 681–699.

Hoffman, H. G., Patterson, D. R., Seibel, E., Soltani, M., Jewett-Leahy, L., and Sharar, S. R. (2008), "Virtual Reality Pain Control during Burn Wound Debridement in the Hydrotank," Clinical Journal of Pain, Vol. 24, No. 4, pp. 299–304.

Holden, M. (2005), "Virtual Environments for Motor Rehabilitation: A Review," CyberPsychology and Behavior, Vol. 8, No. 3, pp. 187–211.

Hollerbach, J. (2002), "Locomotion Interfaces," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 239–254.

Holmes, D. W., Williams, J. R., and Tilke, P. (2010), "An Events Based Algorithm for Distributing Concurrent Tasks on Multi-Core Architectures," Computer Physics Communications, Vol. 181, No. 2, pp. 341–354.

Huggins-Daines, D., Kumar, M., Chan, A., Black, A. W., Ravishankar, M., and Rudnicky, A. I. (2006), "PocketSphinx: A Free Real-Time Continuous Speech Recognition System for Hand-Held Devices," in Proceedings of the 31st International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2006), Toulouse, France, May 14–19, IEEE Press, Los Alamitos, CA, pp. 185–188.

Isdale, J. (2000, April), "Motion Platform Systems," VR News: April Tech Review, available: http://vr.isdale.com/vrTechReviews/MotionLinks_2000.html, accessed May 21, 2010.

Isdale, J., Fencott, C., Heim, M., and Daly, L. (2002), "Content Design for Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 519–532.

Jedrzejewski, M. (2004), "Computation of Room Acoustics on Programmable Video Hardware," Master's Thesis, Polish-Japanese Institute of Information Technology, Warsaw, Poland.

Johnsen, K., et al. (2005), "Experiences in Using Immersive Virtual Characters to Educate Medical Communication Skills," in Proceedings of the 2005 IEEE Conference on Virtual Reality, IEEE CS Press, pp. 179–186.

Johnston, M. R., Carpenter, A. C., and Hale, K. (2011), "Test-Retest Reliability of CogGauge: A Cognitive Assessment Tool for Spaceflight," in Engineering Psychology and Cognitive Ergonomics, Lecture Notes in Computer Science, Vol. 6781, D. Harris, Ed., Springer, Heidelberg, pp. 565–571.

Johnston, M., Stanney, K., Hale, K., and Kennedy, R. S. (2010), "A Framework for Improving Situation Awareness of the UAS Operator through Integration of Tactile Cues," in Proceedings of the 3rd Applied Human Factors and Ergonomics (AHFE) International Conference 2010, Miami, FL, July 17–20.

Jones, A., McDowall, I., Yamada, H., Bolas, M., and Debevec, P. (2007, July), "Rendering an Interactive 360° Light Field Display," ACM Transactions on Graphics, Vol. 26, No. 3, Article 40.

Jones, D. L., Del Guidice, K., Moss, J., and Hale, K. (2010), Medic Vehicle Environment/Task Trainer (M-VETT): Phase I Process and Results, Final Technical Report submitted to the Office of the Secretary of Defense under Contract No. W81XWH-10-C-0198.

Jones, D., Stanney, K., and Fouad, H. (2005), "An Optimized Spatial Audio System for Virtual Training Simulations: Design and Evaluation," in Proceedings of the International Conference on Auditory Display, Limerick, Ireland, July 6–9.

Jones, K. (1990), "General Activity for Management Education–A Deliberate Ambivalence," Simulation/Games for Learning, Vol. 20, No. 2, pp. 142–151.

Jones, L., Bowers, C. A., Washburn, D., Cortes, A., and Vijaya Satya, R. (2004), "The Effect of Olfaction on Immersion into Virtual Environments," in Human Performance, Situation Awareness and Automation: Current Research and Trends, Vol. 2, D. A. Vincenzi, M. Mouloua, and P. A. Hancock, Eds., Lawrence Erlbaum, Mahwah, NJ, pp. 282–285.

Kalawsky, R. S. (1993), The Science of Virtual Reality and Virtual Environments, Addison-Wesley, Wokingham, England.

Kallman, E. A. (1993), "Ethical Evaluation: A Necessary Element in Virtual Environment Research," Presence: Teleoperators and Virtual Environments, Vol. 2, No. 2, pp. 143–146.

Karim, M. S., Karim, A. M. L., Ahmed, E., and Rokonuzzaman, M. (2003), "Scene Graph Management for OpenGL Based 3D Graphics Engine," in Proceedings of the International Conference on Computer and Information Technology (ICCIT 2003), Vol. 1, pp. 395–400.

Katz, N., Ring, H., Naveh, Y., Kizony, R., Feintuch, U., and Weiss, P. L. (2005), "Interactive Virtual Environment Training for Safe Street Crossing of Right Hemisphere Stroke Patients with Unilateral Spatial Neglect," Disability and Rehabilitation, Vol. 29, No. 2, pp. 177–181.

Kaur, K. (1999), "Designing Virtual Environments for Usability," unpublished doctoral dissertation, City University, London.

Kelley, A. I., Orgel, R. F., and Baer, D. M. (1985), "Seven Strategies That Guarantee Training Transfer," Training and Development Journal, Vol. 39, No. 11, pp. 78–82.

Kennedy, R. S., and Fowlkes, J. E. (1992), "Simulator Sickness Is Polygenic and Polysymptomatic: Implications for Research," International Journal of Aviation Psychology, Vol. 2, No. 1, pp. 23–38.

Kennedy, R. S., and Graybiel, A. (1965), "The Dial Test: A Standardized Procedure for the Experimental Production of Canal Sickness Symptomatology in a Rotating Environment," Report No. 113, NSAM 930, Naval School of Aerospace Medicine, Pensacola, FL.

Kennedy, R. S., and Stanney, K. M. (1996), "Virtual Reality Systems and Products Liability," Journal of Medicine and Virtual Reality, Vol. 1, No. 2, pp. 60–64.

Kennedy, R. S., Lane, N. E., Berbaum, K. S., and Lilienthal, M. G. (1993), "Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness," International Journal of Aviation Psychology, Vol. 3, No. 3, pp. 203–220.

Kennedy, R. S., Stanney, K. M., Ordy, J. M., and Dunlap, W. P. (1997), "Virtual Reality Effects Produced by Head-Mounted Display (HMD) on Human Eye-Hand Coordination, Postural Equilibrium, and Symptoms of Cybersickness," Society for Neuroscience Abstracts, Vol. 23, 772.

Kennedy, R. S., Stanney, K. M., and Dunlap, W. P. (2000), "Duration and Exposure to Virtual Environments: Sickness Curves during and across Sessions," Presence: Teleoperators and Virtual Environments, Vol. 9, No. 5, pp. 466–475.

Kennedy, R. S., Kennedy, K. E., and Bartlett, K. M. (2002), "Virtual Environments and Products Liability," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 543–554.

Kennedy, R. S., Drexler, J. M., Jones, M. B., Compton, D. E., and Ordy, J. M. (2005), "Quantifying Human Information Processing (QHIP): Can Practice Effects Alleviate Bottlenecks?," in Quantifying Human Information Processing, D. K. McBride and D. Schmorrow, Eds., Lexington Books, Lanham, MD, pp. 63–122.

Kessler, G. D. (2002), "Virtual Environment Models," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 255–276.

Khatib, F., et al. (2011), "Crystal Structure of a Monomeric Retroviral Protease Solved by Protein Folding Game Players," Nature Structural & Molecular Biology, Vol. 18, pp. 1175–1177.

Kim, J. Y., Allen, J. P., and Lee, E. (2008), "Alternate Reality Gaming," Communications of the ACM, Vol. 51, No. 2, pp. 36–42.

King, M. (2009), Military Simulation and Virtual Training Market 2009–2019, Companiesandmarkets.com, London, England.

Kitazaki, M., Onimaru, S., and Sato, T. (2010), "Vection and Action Are Incompatible," in Proceedings of the 2nd IEEE VR 2010 Workshop on Perceptual Illusions in Virtual Environments (PIVE 2010), Waltham, MA, March 21, 2010, F. Steinicke and P. Willemsen, Eds., pp. 22–23, available: http://pive.uni-muenster.de/paper/PIVE_proceedings2010.pdf, accessed May 31, 2010.

Knerr, B. W., Breaux, R., Goldberg, S. L., and Thurman, R. A. (2002), "National Defense," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 857–872.

Koltko-Rivera, M. E. (2005), "The Potential Societal Impact of Virtual Reality," in Advances in Virtual Environments Technology: Musings on Design, Evaluation, and Applications, K. M. Stanney and M. Zyda, Eds., HCI International 2005: 11th International Conference on Human-Computer Interaction, CD-ROM, Vol. 9, Lawrence Erlbaum Associates, Mahwah, NJ.

Konrad, J., and Halle, M. (2007, November), "3-D Displays and Signal Processing—An Answer to 3-D Ills?," IEEE Signal Processing Magazine, Vol. 24, No. 6, pp. 97–111.

Kuntze, M. F., Stoermer, R., Mager, R., Roessler, A., Mueller-Spahn, F., and Bullinger, A. H. (2001), "Immersive Virtual Environments in Cue Exposure," CyberPsychology and Behavior, Vol. 4, No. 4, pp. 497–501.

Lackner, J. R., and DiZio, P. (1994), "Rapid Adaptation to Coriolis Force Perturbations of Arm Trajectory," Journal of Neurophysiology, Vol. 72, No. 1, pp. 299–313.

Laughlin, D., Roper, M., and Howell, K. (2007, April), "NASA eEducation Roadmap: Research Challenges in the Design of Persistent Immersive Synthetic Environments for Education and Training," Federation of American Scientists, Washington, DC, available: http://www.fas.org/programs/ltp/publications/NASA%20eEducation%20Roadmap.pdf, accessed July 28, 2010.

Lawson, B. D., Sides, S. A., and Hickinbotham, K. A. (2002), "User Requirements for Perceiving Body Acceleration," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 135–161.

Leonetti, C., and Bailenson, J. N. (2010), "High-Tech View: The Use of Immersive Virtual Environments in Jury Trials," Marquette Law Review, Vol. 93, No. 3, pp. 1073–1120.

Maekawa, T., Itoh, Y., Takamoto, K., Tamada, K., Maeda, T., Kitamura, Y., and Kishino, F. (2009), "Tearable: Haptic Display That Presents a Sense of Tearing Real Paper," in Proceedings of the 16th ACM Symposium on Virtual Reality Software and Technology, Association for Computing Machinery, New York, pp. 27–30.

Majumder, A. (2003), "A Practical Framework to Achieve Perceptually Seamless Multi-Projector Displays," Ph.D. Dissertation, University of North Carolina at Chapel Hill, available: http://www.cs.unc.edu/~welch/media/pdf/dissertation_majumder.pdf, accessed May 24, 2010.

Mantovani, F., and Castelnuovo, G. (2003), "Sense of Presence in Virtual Training: Enhancing Skills Acquisition and Transfer of Knowledge through Learning Experience in Virtual Environments," in Being There: Concepts, Effects and Measurement, G. Riva, F. Davide, and W. A. IJsselsteijn, Eds., IOS Press, Amsterdam, The Netherlands, pp. 167–181.

Maran, N. J., and Glavin, R. J. (2003), "Low- to High-Fidelity Simulation–A Continuum of Medical Education?," Medical Education, Vol. 37, No. S1, pp. 22–28.

Martinussen, M. (1996), "Psychological Measures as Predictors of Pilot Performance: A Meta-Analysis," International Journal of Aviation Psychology, Vol. 6, pp. 1–20.

Massimino, M., and Sheridan, T. (1993), "Sensory Substitution for Force Feedback in Teleoperation," Presence: Teleoperators and Virtual Environments, Vol. 2, No. 4, pp. 344–352.

McCauley, M. E., and Sharkey, T. J. (1992), "Cybersickness: Perception of Self-Motion in Virtual Environments," Presence: Teleoperators and Virtual Environments, Vol. 1, No. 3, pp. 311–318.

McCauley-Bell, P. R. (2002), "Ergonomics in Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 807–826.

McQuaide, S. C., Seibel, E. J., Kelly, J. P., Schowengerdt, B. T., and Furness, T. A., III (2003), "A Retinal Scanning Display System That Produces Multiple Focal Planes with a Deformable Membrane Mirror," Displays, Vol. 24, No. 2, pp. 65–72.

Melzer, J. E., and Rash, C. E. (2009), "The Potential of an Interactive HMD," in Helmet-Mounted Displays: Sensation, Perception, and Cognition Issues, C. E. Rash, M. Russo, T. Letowski, and E. Schmeisser, Eds., U.S. Army Aeromedical Research Laboratory, Fort Rucker, AL, pp. 877–898.

Melzer, J. E., Brozoski, F. T., Letowski, T. R., Harding, T. H., and Rash, C. E. (2009), "Guidelines for HMD Design," in Helmet-Mounted Displays: Sensation, Perception, and Cognition Issues, C. E. Rash, M. Russo, T. Letowski, and E. Schmeisser, Eds., U.S. Army Aeromedical Research Laboratory, Fort Rucker, AL, pp. 805–848.

Menzies, D. (2002, July), "Scene Management for Modelled Audio Objects in Interactive Worlds," in Proceedings of the 8th International Conference on Auditory Displays, Kyoto, Japan.

Minamizawa, K., Kamuro, S., Kawakami, N., and Tachi, S. (2008), "A Palm-Worn Haptic Display for Bimanual Operations in Virtual Environments," in EuroHaptics 2008, M. Ferre, Ed., Springer-Verlag, Berlin, pp. 458–463.

Moline, J. (1995), Virtual Environments for Health Care, White Paper for the Advanced Technology Program (ATP), National Institute of Standards and Technology, available: http://www.itl.nist.gov/iaui/ovrt/projects/health/vr-envir.htm, accessed October 20, 2011.

Mouba, J. (2009), "Performance of Source Spatialization and Source Localization Algorithms Using Conjoint Models of Interaural Level and Time Cues," in Proceedings of the Twelfth International Conference on Digital Audio Effects (DAFx-09), Como, Italy, September 1–4.

Mulgund, S., Stokes, J., Turieo, M., and Devine, M. (2002), "Human/Machine Interface Modalities for Soldier Systems Technologies," Final Report No. 71950-00, TIAX, LLC, Cambridge, MA.

Murray, J. H. (1997), Hamlet on the Holodeck: The Future of Narrative in Cyberspace, Free Press, New York.

Nakatsu, R., Rauterberg, M., and Vorderer, P. (2005), "A New Framework for Entertainment Computing: From Passive to Active Experience," in Proceedings of the 4th International Conference on Entertainment Computing (ICEC 2005), Lecture Notes in Computer Science, Vol. 3711, F. Kishino, Y. Kitamura, H. Kato, and N. Nagata, Eds., Springer-Verlag, Berlin and Heidelberg, pp. 1–12.

North, M. M., North, S. M., and Coble, J. R. (2002), "Virtual Reality Therapy: An Effective Treatment for Psychological Disorders," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 1065–1078.

Optale, G., Capodieci, S., Pinelli, P., Zara, D., Gamberini, L., and Riva, G. (2001), "Music-Enhanced Immersive Virtual Reality in the Rehabilitation of Memory-Related Cognitive Processes and Functional Abilities: A Case Report," Presence: Teleoperators and Virtual Environments, Vol. 10, No. 4, pp. 450–462.

Osgood, C. E. (1949), "The Similarity Paradox in Human Learning: A Resolution," Psychological Review, Vol. 56, pp. 132–143.

Pappo, H. A. (1998), Simulations for Skills Training, Educational Technology Publications, Englewood Cliffs, NJ.

Parsons, T. D., and Rizzo, A. A. (2008), "Affective Outcomes of Virtual Reality Exposure Therapy for Anxiety and Specific Phobias: A Meta-Analysis," Journal of Behavior Therapy and Experimental Psychiatry, Vol. 39, pp. 250–261.

Parsons, T. D., Bowerly, T., Buckwalter, J. G., and Rizzo, A. A. (2007), "A Controlled Clinical Comparison of Attention Performance in Children with ADHD in a Virtual Reality Classroom Compared to Standard Neuropsychological Methods," Child Neuropsychology, Vol. 13, No. 4, pp. 363–381.

Patrick, J. (1992), Training: Research and Practice, Academic Press, San Diego, CA.

Popescu, G. V., Burdea, G. C., and Trefftz, H. (2002), "Multimodal Interaction Modeling," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 435–454.

Portillo-Rodriguez, O., Avizzano, C. A., Bergamasco, M., and Robles-De-La-Torre, G. (2006), "Haptic Rendering of Sharp Objects Using Lateral Forces," in Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (ROMAN06), Hatfield, United Kingdom, September 6–8, pp. 431–436.

Powers, M. B., and Emmelkamp, P. M. G. (2008), "Virtual Reality Exposure Therapy for Anxiety Disorders: A Meta-Analysis," Journal of Anxiety Disorders, Vol. 22, pp. 561–569.

Pulkki, V. (1997), "Virtual Sound Source Positioning Using Vector Base Amplitude Panning," Journal of the Audio Engineering Society, Vol. 45, No. 6, pp. 456–466.

Radoff, J. (2011), Game On: Energize Your Business with Social Media Games, Wiley Publishing, Indianapolis, IN.

Rauterberg, M. (2009), "Entertainment Computing, Social Transformation and the Quantum Field," in Proceedings of Intelligent Technologies for Interactive Entertainment (INTETAIN 2009), Lecture Notes of the Institute for Computer Sciences, Vol. 9, A. Nijholt, D. Reidsma, and H. Hondorp, Eds., Springer-Verlag, Berlin and Heidelberg, pp. 1–8.

Reeves, L. M., Lai, J. C., Larson, J. A., Oviatt, S. L., Balaji, T. S., Buisine, S., Collings, P., Cohen, P. R., Kraal, B., Martin, J. C., McTear, M. F., Raman, T. V., Stanney, K. M., Su, H., and Wang, Q. Y. (2004), "Guidelines for Multimodal User Interface Design," Communications of the ACM, Vol. 47, No. 1, pp. 57–59.

Reger, G. M., and Gahm, G. A. (2008), "Virtual Reality Exposure Therapy for Active Duty Soldiers," Journal of Clinical Psychology, Vol. 64, pp. 940–946.

Regian, J. W., Shebilske, W. L., and Monk, J. M. (1992), "Virtual Reality: An Instructional Medium for Visual-Spatial Tasks," Journal of Communication, Vol. 42, pp. 136–149.

Rideout, V. J., Foehr, U. G., and Roberts, D. F. (2010), Generation M2: Media in the Lives of 8–18 Year-Olds, Kaiser Family Foundation, Menlo Park, CA, available: http://www.kff.org/entmedia/upload/8010.pdf, accessed October 31, 2011.

Riecke, B. E., Valjamae, A., and Schulte-Pelkum, J. (2009), "Moving Sounds Enhance the Visually-Induced Self-Motion Illusion (Circular Vection) in Virtual Reality," ACM Transactions on Applied Perception, Vol. 6, No. 2, Article 7, pp. 7:1–7:27.

Rizzo, A., Bowerly, T., Buckwalter, J., Klimchuk, D., Mitura, R., and Parsons, T. D. (2006), "A Virtual Reality Scenario for All Seasons: The Virtual Classroom," CNS Spectrums, Vol. 11, No. 1, pp. 35–44.

Rizzo, S., Rothbaum, B., and Graap, K. (2006), "Virtual Reality Applications for the Treatment of Combat-Related PTSD," in Combat Stress Injury: Theory, Research, and Management, C. R. Figley and W. P. Nash, Eds., Routledge, New York, NY, pp. 295–329.

Robles-De-La-Torre, G., and Hayward, V. (2001), "Force Can Overcome Object Geometry in the Perception of Shape through Active Touch," Nature, Vol. 412, No. 6845, pp. 445–448.

Rolland, J. P., Biocca, F. A., Barlow, T., and Kancherla, A. (1995), "Quantification of Adaptation to Virtual-Eye Location in See-Thru Head-Mounted Displays," in Proceedings of the IEEE Virtual Reality Annual International Symposium '95, IEEE Computer Society Press, Los Alamitos, CA, pp. 56–66.

Rolston, M. (2010), "Your Computer in 2020," in Your Life in 2020, N. Perlroth, Ed., available: http://www.forbes.com/2010/04/08/3d-computers-2020-technology-data-companies-10-frog.html, accessed May 25, 2010.

Romoser, M. R. E., and Fisher, D. L. (2009), "The Effect of Active Versus Passive Training Strategies on Improving Older Drivers' Scanning in Intersections," Human Factors, Vol. 51, No. 5, pp. 652–668.

Roscoe, S. N. (1982), Aviation Psychology, Iowa State University Press, Ames, IA.

Rothbaum, B. O., Hodges, L., Smith, S., Lee, J. H., and Price, L. (2000), "A Controlled Study of Virtual Reality Exposure Therapy for the Fear of Flying," Journal of Consulting and Clinical Psychology, Vol. 68, pp. 1020–1026.

Rothbaum, B. O., Hodges, L., Watson, B. A., Kessler, G. D., and Opdyke, D. (1996), "Virtual Reality Exposure Therapy in the Treatment of Fear of Flying: A Case Report," Behaviour Research and Therapy, Vol. 34, pp. 477–481.

Rouse, W. B. (1982), "A Mixed-Fidelity Approach to Technical Training," Journal of Educational Technology Systems, Vol. 11, pp. 103–115.

Roy, S., Klinger, E., Legeron, P., Lauer, F., Chemin, I., and Nugues, P. (2003), "Definition of a VR-Based Protocol to Treat Social Phobia," CyberPsychology and Behavior, Vol. 6, pp. 411–420.

Sadowski, W., and Stanney, K. (2002), "Presence in Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 791–806.

Satava, R., and Jones, S. B. (2002), "Medical Applications of Virtual Environments," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 93–116.

Seeber, B. U., and Fastl, H. (2003), "Subjective Selection of Non-Individual Head-Related Transfer Functions," in Proceedings of the 2003 International Conference on Auditory Display, Boston University, Boston, MA, July 6–9, pp. 259–262.

Segal, D., and Fernandez, R. L. (2009, August), "Teaching Physician Decision Making in a Technical Age," Virtual Mentor, Vol. 11, No. 8, pp. 607–610.

Serenko, A., and Detlor, B. (2004), "Intelligent Agents as Innovations," Artificial Intelligence and Society, Vol. 18, No. 4, pp. 364–381.

Sheridan, T. B. (1993), "My Anxieties About Virtual Environments," Presence: Teleoperators and Virtual Environments, Vol. 2, No. 2, pp. 141–142.

Shilling, R. D., and Shinn-Cunningham, B. (2002), "Virtual Auditory Displays," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 65–92.

Siem, F. M., Carretta, T. R., and Mercatante, T. A. (1988), Personality, Attitudes, and Pilot Training Performance: Preliminary Analysis, Tech. Report No. AFHRL-TP-87-62, AFHRL, Manpower and Personnel Division, Brooks Air Force Base, TX.

Slater, M., and Steed, A. (2000), "A Virtual Presence Counter," Presence: Teleoperators and Virtual Environments, Vol. 9, No. 5, pp. 413–434.

Spors, S., and Ahrens, J. (2010), "Analysis and Improvement of Pre-Equalization in 2.5-Dimensional Wave Field Synthesis," in Proceedings of the 128th Audio Engineering Society (AES) Convention, P21—Multichannel and Spatial Audio: Part 2 Session (P21-3), London, May 25.

Squire, K. (2005), "Changing the Game: What Happens When Video Games Enter the Classroom," Innovate: Journal of Online Education, Vol. 1, No. 6, available: http://innovateonline.info/pdf/vol1_issue6/Changing_the_Game_What_Happens_When_Video_Games_Enter_the_Classroom_.pdf, accessed October 28, 2011.

Stanney, K. M., and Cohn, J. V. (2012), "Virtual Environments," in Handbook of Human–Computer Interaction, 3rd ed., J. Jacko, Ed., Taylor & Francis, Hampshire, United Kingdom.

Stanney, K. M., Graeber, D. A., and Kennedy, R. S. (2005), "Virtual Environment Usage Protocols," in Handbook of Standards and Guidelines in Ergonomics and Human Factors, W. Karwowski, Ed., Lawrence Erlbaum, Mahwah, NJ, pp. 381–398.

Stanney, K. M., and Kennedy, R. S. (1998, October), "Aftereffects from Virtual Environment Exposure: How Long Do They Last?," in Proceedings of the 42nd Annual Human Factors and Ergonomics Society Meeting, Chicago, IL, pp. 1476–1480.

Stanney, K. M., and Kennedy, R. S. (2008), "Simulator Sickness," in Human Factors in Simulation and Training, D. Vincenzi, J. A. Wise, M. Mouloua, and P. A. Hancock, Eds., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 117–128.

Stanney, K. M., Kingdon, K., Nahmens, I., and Kennedy, R. S. (2003), "What to Expect from Immersive Virtual Environment Exposure: Influences of Gender, Body Mass Index, and Past Experience," Human Factors, Vol. 45, No. 3, pp. 504–522.

Stanney, K., Kokini, C., Fuchs, S., Axelsson, P., and Phillips, C. (2010), "Auto-Diagnostic Adaptive Precision Training—Human Terrain (ADAPT-HT): A Conceptual Framework for Cross-Cultural Skills Training," in Proceedings of the 3rd Applied Human Factors and Ergonomics (AHFE) International Conference 2010, Miami, FL, July 17–20.

Stanney, K. M., Mollaghasemi, M., and Reeves, L. (2000), "Development of MAUVE, the Multi-Criteria Assessment of Usability for Virtual Environments System," Final Report, Contract No. N61339-99-C-0098, Naval Air Warfare Center, Training Systems Division, Orlando, FL.

Stanney, K. M., Mourant, R., and Kennedy, R. S. (1998a), "Human Factors Issues in Virtual Environments: A Review of the Literature," Presence: Teleoperators and Virtual Environments, Vol. 7, No. 4, pp. 327–351.

Stanney, K. M., et al. (1998b), "Aftereffects and Sense of Presence in Virtual Environments: Formulation of a Research and Development Agenda (Report sponsored by the Life Sciences Division at NASA Headquarters)," International Journal of Human-Computer Interaction, Vol. 10, No. 2, pp. 135–187.

Stanney, K. M., Schmorrow, D. D., Johnston, M., Fuchs, S., Jones, D., Hale, K., Ahmad, A., and Young, P. (2009), "Augmented Cognition: An Overview," in Reviews of Human Factors and Ergonomics, Vol. 5, F. T. Durso, Ed., Human Factors and Ergonomics Society, Santa Monica, CA, pp. 195–224.

Stanney, K., et al. (2004), "A Paradigm Shift in Interactive Computing: Deriving Multimodal Design Principles from Behavioral and Neurological Foundations," International Journal of Human-Computer Interaction, Vol. 17, No. 2, pp. 229–257.

Stanney, K. M., and Zyda, M. (2002), "Virtual Environments in the 21st Century," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 1–14.

Steinicke, F., and Willemsen, P., Eds. (2010), Proceedings of the 2nd IEEE VR 2010 Workshop on Perceptual Illusions in Virtual Environments (PIVE 2010), Waltham, MA, March 21, 2010, available: http://pive.uni-muenster.de/paper/PIVE_proceedings2010.pdf, accessed May 31, 2010.

Sterling, G. C., Magee, L. E., and Wallace, P. (2000, March), "Virtual Reality Training—A Consideration for Australian Helicopter Training Needs?," paper presented at Simulation Technology and Training (SimTecT 2000), Sydney, Australia.

Stoffregen, T., Bardy, B. G., Smart, L. J., and Pagulayan, R. (2003), "On the Nature and Evaluation of Fidelity in Virtual Environments," in Virtual and Adaptive Environments: Applications, Implications, and Human Performance Issues, L. J. Hettinger and M. W. Haas, Eds., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 111–128.

Storms, R. L. (2002), "Auditory-Visual Cross-Modality Interaction and Illusions," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 455–470.

Strickland, D., Hodges, L., North, M., and Weghorst, S. (1997), "Overcoming Phobias by Virtual Exposure," Communications of the ACM, Vol. 40, No. 8, pp. 34–39.

Sullivan, A. (2004), "DepthCube Solid-State 3D Volumetric Display," in Proceedings of the SPIE Stereoscopic Displays and Virtual Reality Systems, Vol. 5291, pp. 279–284.

Suma, E. A., Clark, S., Finkelstein, S. L., and Wartell, Z. (2010), "Leveraging Change Blindness for Walking in Virtual Environments," in Proceedings of the 2nd IEEE VR 2010 Workshop on Perceptual Illusions in Virtual Environments (PIVE 2010), Waltham, MA, March 21, 2010, F. Steinicke and P. Willemsen, Eds., p. 10, available: http://pive.uni-muenster.de/paper/PIVE_proceedings2010.pdf, accessed May 31, 2010.

Swartz, K. O. (2003), "Virtual Environment Usability Assessment Methods Based on a Framework of Usability Characteristics," Master's Thesis, Virginia Polytechnic Institute and State University, Blacksburg, VA.

Thorndike, E. L., and Woodworth, R. S. (1901), "The Influence of Improvement in One Mental Function upon the Efficiency of Other Functions," Psychological Review, Vol. 8, No. 3, pp. 247–261.

Tippett, W. J., Lee, J.-H., Zakzanis, K. K., Black, S. E., Mraz, R., and Graham, S. J. (2009), "Visually Navigating a Virtual World with Real-World Impairments: A Study of Visually and Spatially Guided Performance in Individuals with Mild Cognitive Impairments," Journal of Clinical and Experimental Neuropsychology, Vol. 31, No. 4, pp. 447–454.

Turk, M. (2002), "Gesture Recognition," in Handbook of Virtual Environments: Design, Implementation, and Applications, K. M. Stanney, Ed., Lawrence Erlbaum Associates, Mahwah, NJ, pp. 223–238.

Ultimate 3D Links (2010), "Commercial 3D Software," available: http://www.3dlinks.com/links.cfm?categoryid=1&subcategoryid=1#, accessed May 25, 2010.

Van Merrienboer, J. J. G., and Kester, L. (2005), "The Four-Component Instructional Design Model: Multimedia Principles in Environments for Complex Learning," in The Cambridge Handbook of Multimedia Learning, R. E. Mayer, Ed., Cambridge University Press, New York, pp. 71–93.

Vertanen, K., and Kristensson, P. O. (2009), "Parakeet: A Continuous Speech Recognition System for Mobile Touch-Screen Devices," in Proceedings of the 13th International Conference on Intelligent User Interfaces, Sanibel Island, FL, February 9–11, pp. 237–246.

Vince, J. (2004), Introduction to Virtual Reality, 2nd ed., Springer-Verlag, Berlin.

Vozenilek, J., Huff, J. S., Reznek, M., and Gordon, J. A. (2004), "See One, Do One, Teach One: Advanced Technology in Medical Education," Academic Emergency Medicine, Vol. 11, No. 11, pp. 1149–1154.

Washburn, D. A., and Jones, L. M. (2004), "Could Olfactory Displays Improve Data Visualization?," Computing in Science and Engineering, Vol. 6, No. 6, pp. 80–83.

Washburn, D. A., Jones, L. M., Satya, R. V., Bowers, C. A., and Cortes, A. (2003), "Olfactory Use in Virtual Environment Training," Modeling and Simulation, Vol. 2, No. 3, pp. 19–25.

Welch, R. B. (1978), Perceptual Modification: Adapting to Altered Sensory Environments, Academic Press, New York.

Welch, R. B. (1997), "The Presence of Aftereffects," in Design of Computing Systems: Cognitive Considerations, G. Salvendy, M. Smith, and R. Koubek, Eds., Elsevier Science Publishers, Amsterdam, The Netherlands, pp. 273–276.

Wiecha, J., Heyden, R., Sternthal, E., and Merialdi, M. (2010), "Learning in a Virtual World: Experience with Using Second Life for Medical Education," Journal of Medical Internet Research, Vol. 12, No. 1, p. e1, available: http://www.jmir.org/2010/1/e1/, accessed May 20, 2010.

Witmer, B., and Singer, M. (1998), "Measuring Presence in Virtual Environments: A Presence Questionnaire," Presence: Teleoperators and Virtual Environments, Vol. 7, No. 3, pp. 225–240.

Yee, N., Bailenson, J. N., and Ducheneaut, N. (2009), "The Proteus Effect: Implications of Transformed Digital Self-Representation on Online and Offline Behavior," Communication Research, Vol. 36, No. 2, pp. 285–312.

Yeh, S.-C., Parsons, T. D., McLaughlin, M., and Rizzo, A. A. (2007), "Virtual Reality Upper Extremity Motor Training for Post-Stroke Rehabilitation," Journal of the International Neuropsychological Society, Vol. 13, Suppl. S1, p. 58.

Zotkin, D. N., Duraiswami, R., Grassi, E., and Gumerov, N. A. (2006), "Fast Head-Related Transfer Function Measurement via Reciprocity," Journal of the Acoustical Society of America, Vol. 120, No. 4, pp. 2202–2215.