
Artificial Intelligence in Medicine 6 (1994) 289-299

Augmenting reality in rehabilitation medicine

Walter J. Greenleaf a,*, Maria A. Tovar b

a Greenleaf Medical Systems, 2248 Park Boulevard, Palo Alto, CA 94306, USA

b Section on Medical Informatics, Stanford University, Stanford, CA 94305, USA

Received August 1993; revised November 1993

Abstract

Virtual reality comprises the computer-based generation of realistic three-dimensional visual, auditory, and tactile environments in which a user can explore and interact with virtual objects. Although traditionally used as input devices to virtual worlds, the instrumented glove and sensor technologies of virtual reality may provide a dramatic new method for the measurement and amplification of human motion. This paper discusses some potential uses of virtual reality technology to support and augment routine activities for people who have physical disabilities. We present two different but related applications of this technology.

Key words: Virtual reality; Disability; Adaptive aids; Rehabilitation medicine

1. Virtual reality technology

Although various interface styles have emerged, including workstation-based local environments [6] and non-head-mounted circumambient projections [7,16], virtual reality characteristically comprises the computer-based generation of interactive three-dimensional worlds in which a suitably equipped user can explore and interact with virtual objects [18]. Presentation of these virtual environments is often multi-sensory, encompassing visual, auditory, and tactile displays, and user input is mediated by behavior transducers which afford more intuitive interaction, using natural human skills.

Virtual reality has the potential to revolutionize the way in which users interact with computers and manage complex information.

* Corresponding author.

0933-3657/94/$07.00 © 1994 Elsevier Science B.V. All rights reserved. SSDI 0933-3657(94)00008-G


These technologies may transform computer users from remote observers communicating through narrow-bandwidth devices into active participants in inclusive information environments that more fully engage our senses and natural spatial abilities. Virtual reality potentially endows users with the ability not only to create their own imaginary worlds, but also to manipulate and change these worlds with ease. Inside virtual reality, a user can walk through a virtual house, drive a virtual car, or run a marathon in a park that does not exist. Unlike watching television, in virtual reality the user is not a voyeur but an actor: the technology demands that the user interact with her surroundings.

The components of a typical immersive virtual reality system are:
(1) a head-mounted visual and auditory display;
(2) six-dimensional tracking sensors for position and orientation of the head and various body parts and instruments;
(3) control devices, such as wands with embedded switches or gloves with flexion sensors for decoding gestural commands;
(4) computer workstations that can render complex graphics at real-time interactive rates (typically at least 15 frames per second); and
(5) software that maintains the behavior of the virtual world and mediates the user's interaction with it [10].

Some systems also include innovative input devices, such as biopotential transducers [22,23], and incorporate other sensory display modalities, such as olfaction [4].

Head-mounted displays can provide the user with wide-field-of-view three-dimensional information about the virtual environment. Although currently limited by relatively poor spatial resolution and image rendering speeds that do not yet support interactive photorealism, current displays can make use of other visual cues, such as the kinetic depth effect, stereoscopy, and lighting models, to help immerse the user in a compelling environment. While images are currently displayed using miniature conventional image sources, such as two liquid-crystal television screens mounted within a set of goggles [19], future possibilities include the projection of visual images directly onto the retina [13]. This would greatly reduce the bulk and cost of these components and enable more realistic see-through 'head-up' image displays than are achievable with current resolution-limited displays.

As this new interface paradigm evolves, we are witnessing numerous experiments in experiential education [1-3,14,17], immersive design [12], and medical procedure augmentation [11], and may soon see the emergence of useful applications in day-to-day work and home life.

2. New uses of VR technology: Measurement and magnification of motion

Among the most popular and widely available input devices for virtual reality is the DataGlove™ (Fig. 1). The DataGlove is a thin-cloth glove with sheathed optical fibers running along its surface over each digit and looping back to a light source/sensor box.


Fig. 1. The VPL DataGlove™ (circa 1988).

The fibers crossing over each joint are abraded slightly, increasing the refractive surface area of that segment of the fiber. Light-emitting diodes send photons along the fibers, and photodetectors measure the signal at the end of the return path. When the joints of the hand bend, the fibers bend and photons refract out of the fiber, attenuating the signal returning to the sensors. The returned signal thus reflects the amount of flexion of a single joint.
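To make the sensing principle concrete, consider a minimal sketch (in Python, which of course postdates the original system) of how a raw light reading might be converted into a flexion estimate. It assumes a roughly linear relationship between attenuation and joint angle, calibrated per joint and per user; the paper does not specify the actual DataGlove calibration procedure, so all names and values here are illustrative.

    # Hypothetical sketch: raw fiber-optic light reading -> flexion in [0, 1].
    # Assumes a linear response between calibrated open-hand and closed-fist
    # readings; the real DataGlove driver is not described in the paper.

    def calibrate(open_reading: float, closed_reading: float):
        """Return a function mapping a raw sensor reading to flexion in [0, 1]."""
        span = open_reading - closed_reading   # light attenuates as the joint bends
        def flexion(raw: float) -> float:
            f = (open_reading - raw) / span
            return max(0.0, min(1.0, f))       # clamp to the calibrated range
        return flexion

    # Example: a joint that reads 0.95 (arbitrary units) when straight
    # and 0.40 when fully bent.
    index_pip = calibrate(open_reading=0.95, closed_reading=0.40)
    print(index_pip(0.67))   # ~0.51: the joint is about half flexed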

The digital interpretation of the returned light signal can then be translated into joint flexion data, and the set of joint flexions can define a hand gesture, with specified windows of error tolerance for precise flexion of each joint. Typically, a graphical representation of the hand is presented on a computer monitor (or in a head-mounted display), and the image of the hand moves in real time, shadowing the movements of the hand in the DataGlove and immediately replicating even the most subtle actions.
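The notion of a gesture as a set of joint flexions with error-tolerance windows can be sketched as follows; the representation, joint names, and window sizes are our illustrative assumptions, not the system's actual data structures.

    # Hypothetical sketch: a gesture as per-joint target flexions, each with
    # its own error-tolerance window (all values illustrative).

    Gesture = dict[str, tuple[float, float]]   # joint -> (target flexion, tolerance)

    CLOSED_FIST: Gesture = {
        "thumb_ip":   (0.80, 0.15),
        "index_pip":  (0.90, 0.10),
        "middle_pip": (0.90, 0.10),
        "ring_pip":   (0.90, 0.10),
        "little_pip": (0.90, 0.10),
    }

    def matches(gesture: Gesture, flexions: dict[str, float]) -> bool:
        """True if every joint lies within its tolerance window."""
        return all(abs(flexions[joint] - target) <= tol
                   for joint, (target, tol) in gesture.items())

    print(matches(CLOSED_FIST, {"thumb_ip": 0.75, "index_pip": 0.95,
                                "middle_pip": 0.92, "ring_pip": 0.88,
                                "little_pip": 0.85}))   # True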

The medical/rehabilitation DataGlove instruments flexion and extension of the metacarpophalangeal, proximal interphalangeal, and distal interphalangeal joints of the minor digits, and of the metacarpophalangeal and interphalangeal joints of the thumb. Depending on the application, the glove also incorporates fiber-optic sensors that measure extension, flexion, and radial and ulnar deviation of the wrist.


To determine the position and orientation of the hand in space, the glove relies on a spatial tracking system. One such system, the Polhemus electromagnetic tracker, uses a pulsed low-frequency magnetic field to determine the position and orientation of a sensor in relation to the field source. The source can be placed at any desired point of reference (within its range of effectiveness), while the sensor is usually mounted on the dorsum of the glove.
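The frame arithmetic implied here is simple: the tracker reports the sensor's pose relative to the source, so composing that report with the source's known placement yields the hand's pose in room coordinates. A minimal sketch using 4x4 homogeneous transforms follows (NumPy is used for brevity; the poses are made up):

    import numpy as np

    # Hypothetical sketch: recovering the hand's pose in room coordinates by
    # composing the source's placement with the tracker's sensor-relative report.

    def pose(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
        """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
        T = np.eye(4)
        T[:3, :3] = rotation
        T[:3, 3] = translation
        return T

    room_T_source = pose(np.eye(3), np.array([1.0, 0.0, 0.8]))    # source on a desk
    source_T_sensor = pose(np.eye(3), np.array([0.2, 0.1, 0.3]))  # tracker report

    room_T_sensor = room_T_source @ source_T_sensor   # hand pose in the room frame
    print(room_T_sensor[:3, 3])                       # hand position: [1.2 0.1 1.1]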

Similarly, the DataSuit™ is a custom-tailored body suit fitted with the same flexion-sensitive fiber-optic sensors found in the DataGlove. The DataSuit collects data dynamically in three-dimensional space, usually relative to a 6D tracking sensor mounted on the body. Its sensors can track the full range of motion of the person wearing the DataSuit as she bends, walks, waves, etc.

Since the functional relationship between the flexion sensor data and the displayed graphical interpretation is under programmer control, we may elect to create either a one-to-one spatial mapping or some other arbitrary relationship. Slight flexions might be amplified and graphically displayed as robust movements, or flexion of one physical joint might be graphically presented as flexion of an entirely different 'virtual' joint. Thus, mapping one's finger movements onto movement of one's virtual legs might be one instantiation of 'letting your fingers do the walking'. The extension of this mapping flexibility to other alternative sensing devices and to other virtual events should be obvious.
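A sketch of such a programmable mapping follows; the gain, the joint names, and the finger-to-leg pairing are illustrative assumptions rather than the system's actual configuration.

    # Hypothetical sketch of the programmable mapping described above: the same
    # sensor stream can drive a one-to-one rendering, an amplified rendering, or
    # an entirely different virtual joint.

    def amplify(flexion: float, gain: float = 4.0) -> float:
        """Display a slight physical flexion as a robust virtual movement."""
        return min(1.0, flexion * gain)

    def fingers_to_legs(flexions: dict[str, float]) -> dict[str, float]:
        """'Letting your fingers do the walking': fingers drive virtual knees."""
        return {
            "left_knee":  amplify(flexions["index_pip"]),
            "right_knee": amplify(flexions["middle_pip"]),
        }

    print(fingers_to_legs({"index_pip": 0.12, "middle_pip": 0.20}))
    # {'left_knee': 0.48, 'right_knee': 0.8}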

Part of the utility of virtual reality for physically impaired individuals lies in its unique ability to enable these people to accomplish tasks and have experiences that would otherwise be denied them because of physical limitations [15]. Within a virtual environment, a user can maneuver and affect objects without the limitations he would normally experience in the real world. For example, within a virtual environment, an individual with cerebral palsy, confined to a wheelchair but equipped with appropriate voluntary behavior transducers, can run a telephone switchboard, play handball, or dance (virtually).

3. Virtual interface aids for people with disabilities

There are approximately 35 million people in the United States who have physical disabilities severe enough to affect their ability to work in certain types of jobs. Due to the relative aging of our population, the number of Americans with disabilities is also growing [20]. Since computers are becoming an integral part of our living, educational, and working environments, the Americans with Disabilities Act mandates that we make them accessible to all people, including people who have disabilities. Computers in general, and specifically now virtual interface technologies, can also provide increased access to daily activities for people who have disabilities.

The field of virtual reality is still emerging as a viable industry, and yet there are a growing number of potential applications which will significantly benefit the disabled population. Virtual reality may allow physically impaired people to have experiences that they might not otherwise have [15] and to perform tasks which


would otherwise be difficult at best. Potential uses of virtual interface technology in assisting the disabled include the following:

(1) Customized interfaces to telemanipulators can provide physical capabilities that otherwise would not exist because of limitations due to a physical disability. Physically disabled people will be able to engage in real-world activities via robotic devices that they control from within seamless virtual environments. Software which permits arbitrary mapping of user behavior onto actions of one's virtual body in a teleoperations interface will open up new avenues of activity and employment (Fig. 2).

(2) Virtual reality will enable people who have disabilities to experience situations and sensations which would not be accessible to them in the physical world. For example, a person might be able to experience the sensation of free movement through video and computer-generated representations of physical locations while seated in a wheelchair or confined to a bed. Trimble and his collaborators [19] have developed a similar prototype system, based on six-dimensional head and hand location sensors, a head-mounted display, and a DataGlove, that facilitates the design and evaluation of barrier-free environments for elderly and disabled people.

(3) Virtual reality can allow a person to be in one location while experiencing a sense of presence in a totally different location. Phone conferencing and networked 'talk' facilities are prime examples of this phenomenon already in daily use. The emergence of broader-bandwidth shared telepresent environments will allow individuals to participate in an even greater variety of day-to-day activities without leaving home. This will provide greatly increased access to social, political, and economic activities for the disabled and the aged.

(4) Virtual reality may facilitate control of work processes that would otherwise be too complicated for cognitively impaired people, by mapping a complex task into a simpler one within a virtual environment. Greatly simplified head-up displays of complex avionics data in the modern cockpit are an example of this concept already in place for able-bodied persons [8]. Systems which present customized versions of textual information for persons with reading disabilities, for example, are envisioned in the not too distant future.

(5) Virtual reality can provide monitoring and training environments in which users are able to quantify their progress after disease or injury and during rehabilitation [9,23]. For example, using a glove-like device and commercially available three-dimensional modeling software, a physical therapist can create functional tests to measure a person's ability to grasp certain objects. Traditionally, when this sort of functional testing is performed without the assistance of a computer, the therapist can make only a qualitative assessment of hand function. Using instrumented glove technology, the therapist can now record and archive a quantitative assessment of function by measuring joint angles in real time, and can custom-tailor the functional test to meet a predetermined rehabilitation protocol (a sketch of such a recording loop follows this list).
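As a concrete illustration of item (5), the following sketch records time-stamped joint angles while a patient performs a grasp and compares the achieved range of motion against a protocol target. The read_joint_angles() stub, the sampling rate, and the protocol values are all hypothetical.

    import time

    # Hypothetical sketch of a quantitative functional test: sample joint angles
    # in real time, then check the peak flexion against a protocol target.

    def read_joint_angles() -> dict[str, float]:
        """Stub for the glove driver; would return joint angles in degrees."""
        return {"index_mp": 45.0, "index_pip": 60.0}

    def record_trial(duration_s: float = 2.0, hz: float = 30.0):
        samples = []
        for _ in range(int(duration_s * hz)):
            samples.append((time.time(), read_joint_angles()))
            time.sleep(1.0 / hz)
        return samples

    PROTOCOL_TARGET = {"index_mp": 40.0, "index_pip": 55.0}   # degrees of flexion

    trial = record_trial()
    peak = {j: max(angles[j] for _, angles in trial) for j in PROTOCOL_TARGET}
    print({j: peak[j] >= PROTOCOL_TARGET[j] for j in PROTOCOL_TARGET})
    # {'index_mp': True, 'index_pip': True}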

Most important to people with disabilities, virtual interface technology provides a highly adaptable mechanism for interacting with computers, one which harnesses a person's strongest ability - what she can do and control best - rather than constraining the disabled user to the current interface paradigm.


Fig. 2.

Even the most primitive virtual reality systems have allowed users to record personalized gestures and to map these gestures to actions. Extended to the wider range of interface metaphors, these actions can range from a simple command, such as a mouse click on a computer screen, to more complex functions, such as controlling a robotic arm.

We present two applications which illustrate the potential of virtual interface technology in assisting people who have disabilities: the Virtual Receptionist and the Movement Analysis System.

3.1. The Virtual Receptionist

The Virtual Receptionist is a prototype system developed by Greenleaf Medical Systems (GMS) that allows people with speech or motor impairments to perform the job of receptionist: answering and making phone calls, taking messages, and other secretarial tasks. Motions made by the user are mapped to input events; thus, we harness the user's strongest ability, allowing him to interact with the computer program.


This is accomplished by integrating various virtual interface and augmentative communication technologies around a specific, well-defined work task.

For example, a person who cannot easily move his entire arm to manipulate a cursor control device such as a mouse may have good control of some other part of the body, such as specific fingers or the wrist. Similarly, a user with a speech impairment may use the sound digitizer to speak prerecorded canned phrases commonly used in routine telephone conversations. These phrases can be recorded by a person with good speech control or can be practiced by the impaired person until he has recorded a version that is satisfactory to him.

The Virtual Receptionist requires fiber-optic flexion sensors for detecting movement, a sound digitizer for recording and playing back voice, a PBX for managing the telephone, and an Apple Macintosh™ computer for controlling the various inputs and outputs and for maintaining records. The principal software components of the Virtual Receptionist are the Gesture-Control System and the GloveTalker.

The Gesture-Control System expands the functional capabilities of users who have motor disabilities. The user wears a DataGlove to perform complicated tasks through simple hand gestures, with user-specific gestures mapped onto specific actions. By interpreting signals from the DataGlove's fiber-optic sensors, the system translates the user's simple gestures into a programmed set of equivalent instructions.

The GloveTalker, which 'speaks' for the user, is an extension of the Gesture-Control System. In this application, the user is able to generate speech by signaling the computer with her personalized set of gestures. The program recognizes joint flexion configurations (gestures) and passes this information to the system's voice-synthesis procedures, which 'speak' for the DataGlove wearer. For example, a user may map a closed fist onto the phrase 'Good morning, Sue speaking.' The voice output can be transmitted over a computer network or over a telephone system, thus enabling vocally impaired individuals to communicate verbally over distance.
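In outline, the GloveTalker's gesture-to-speech path might look like the following sketch; speak() stands in for whatever voice-synthesis procedure the system provides, and the phrase table is illustrative.

    # Hypothetical sketch of the GloveTalker mapping: a recognized gesture is
    # looked up in the user's personalized phrase table and handed to a
    # voice-synthesis routine.

    PHRASES = {
        "closed_fist": "Good morning, Sue speaking.",
        "flat_hand":   "Please hold while I transfer you.",
    }

    def speak(text: str) -> None:
        print(f"[synthesized voice] {text}")   # stand-in for voice synthesis

    def on_gesture(gesture_name: str) -> None:
        phrase = PHRASES.get(gesture_name)
        if phrase is not None:
            speak(phrase)

    on_gesture("closed_fist")   # -> [synthesized voice] Good morning, Sue speaking.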

The Gesture-Control System allows a great deal of freedom in calibrating and mapping gestures, and in specifying flexion tolerance windows, so that people capable of only relatively gross muscle control can still use the system. A typical configuration of the current interface to the Virtual Receptionist is shown in Fig. 3. The prototype allows users to custom-tailor the display interface and program functions to best match their needs; it is quite simple to rearrange the look and locations of the Virtual Receptionist functions via the HyperCard interface.
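One plausible way to derive such per-user tolerance windows is to record several repetitions of each gesture and widen each joint's window in proportion to the observed spread, so that users with only gross muscle control automatically get more forgiving windows. This is our illustrative interpretation, not the system's documented calibration method.

    from statistics import mean, stdev

    # Hypothetical per-user calibration: each joint's target and tolerance are
    # derived from repeated samples of the same gesture (margins illustrative).

    def calibrate_gesture(repetitions: list[dict[str, float]],
                          min_window: float = 0.05, k: float = 3.0):
        """Return {joint: (target, tolerance)} from repeated flexion samples."""
        gesture = {}
        for joint in repetitions[0]:
            vals = [rep[joint] for rep in repetitions]
            spread = stdev(vals) if len(vals) > 1 else 0.0
            gesture[joint] = (mean(vals), max(min_window, k * spread))
        return gesture

    reps = [{"index_pip": 0.82}, {"index_pip": 0.91}, {"index_pip": 0.76}]
    print(calibrate_gesture(reps))   # a wide window for this variable joint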

3.2. Assessment of motor skills

Precise measurement of motor skills is an important aspect of determining the degree of certain physical disabilities. Functional evaluation is of great value in assessing a person's condition after disease or during the process of recovery.


Fig. 3. User interface to the Virtual Receptionist. The column labeled 'Speak' contains a set of phrases previously specified by the user. The numbered buttons in the upper-right corner depict the number of telephone lines incoming to the PBX. The buttons labeled 'Dial', 'Record', 'Playback', 'Notes', and 'Messages' support ancillary functions such as initiating calls, taking messages, and recording phrases. To operate the system, the user needs only to make a hand gesture, which is mapped to the movement of the mouse; there is a gesture for double-clicking, a gesture for selection, and so on.

This type of motor analysis is usually accomplished by subjective observation of patient performance on standardized tests [21]. The subjective nature of the test reduces its accuracy and reliability: performance standards may differ across individual therapists and may be inconsistent within the same therapist. It is therefore often difficult to evaluate a patient's progress over time, particularly if different evaluators have assessed the patient's performance.

Current approaches to quantification of upper-extremity motion fall into two categories: visual and effective. Quantitative visual methods involve digitizing a visual record of the motion - for example, digitizing movements within a sequence of videotape frames. The main limitation of this technique is that a single camera can view motion in only two dimensions. To assess movement accurately in the camera's visual plane, the third dimension must be held constant; the person must move along a known line parallel to the plane of the film in the camera. Although this is a useful technique in gait assessment, in most cases of upper-limb and finger movement the correct orientation cannot be maintained even for short periods, making the technique difficult and cumbersome.

Effective methods measure the motion's effect, rather than the motion itself. A work simulator is one example of an effective assessment tool.


Instrumented work simulators measure the force exerted by a person on a variety of attachments that simulate tools used in the workplace. A major limitation of this approach is that no data are collected on how the person effects the force.

To overcome the limitations of these current methods for quantifying upper-extremity motion, we have adapted the DataGlove technology to create a real-time measurement device for the hand. The Movement Analysis System (MAS) links the DataGlove's fiber-optic sensor data to a software tool for quantitative assessment of upper-extremity function. In its current implementation, MAS consists of a DataGlove with additional specially fitted sensors to collect wrist motion data. The data are collected in real time and are stored in a small recording device. The recording device, which can be clipped to the wearer's belt, stores up to 8 hours of data. Collected data can then be downloaded to a computer via a serial port for further analysis.
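The paper does not specify the recorder's data format, but a sketch of the kind of time-stamped log the MAS might produce, ready for offline analysis after download, could look like the following (the fields and the read_sample() stub are illustrative):

    import csv
    import time

    # Hypothetical sketch of an MAS recording session: time-stamped wrist and
    # finger angles appended to a compact log for later offline analysis.

    FIELDS = ["t", "wrist_flex", "wrist_dev", "index_mp", "index_pip"]

    def read_sample() -> dict:
        """Stub for one glove/wrist sensor reading (angles in degrees)."""
        return {"t": time.time(), "wrist_flex": 12.0, "wrist_dev": -3.0,
                "index_mp": 40.0, "index_pip": 55.0}

    def log_session(path: str, n_samples: int = 10) -> None:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            writer.writeheader()
            for _ in range(n_samples):
                writer.writerow(read_sample())

    log_session("mas_session.csv")   # analyze with standard statistics tools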

A future development goal is to integrate a fully configured glove, MAS, and virtual environment modeling software to provide a system that will better facilitate upper-extremity functional assessment. Such a system will allow a therapist to design a virtual world composed of objects traditionally used in functional assessment tests, such as balls, cubes, keys, and pencils. The therapist will be able to record motions, the location and trajectory of the user's hand required to accomplish the motions, and any associated hand tremors or spasms. Once the data are collected, the therapist can use standard statistical analysis software to interpret them. She can also elect to play back the motions using animation, and, since the world and the patient's virtual body are modeled spatially, she can change her viewpoint on the animation arbitrarily to study the motion from another angle. None of these procedures would be possible using traditional observational assessment methods.

4. Discussion

While we have identified some immediate applications of virtual interface component technologies, fully immersive virtual reality presents many technological challenges which must be met before it will be useful and available to people with disabilities. For example, although advances in hardware technology continue to improve the performance of graphics workstations, rendering speed limitations still hinder the interactive performance of all but very simple immersive virtual environments. As a result, interaction with virtual objects can be awkward, if not aggravating. Add to this the cognitive load of remapping one's physical and virtual bodies, and the systems remain nearly unusable. Still, the promise of the technology holds out hope.

For people who have disabilities, technology can be a great equalizer. The widespread availability and multitude of applications for personal computers, for example, have had a profound effect on the scope of opportunities for physically and mentally challenged people. Although many new adaptive aid applications are expensive and many others are still under development, a few remarkable products


are readily available, such as voice recognition and speech-synthesis devices. These enabling interface technologies, along with keyboard remapping facilities, are even standard features of some current personal computer systems.

Virtual reality is an emerging technology that radically alters how people interact with computers. The utility of virtual reality for physically impaired people lies in the technology’s unique ability to enable these people to accomplish tasks and to have experiences that would otherwise be denied them because of physical limitations.

There is great variation in disability among people who have physical impairments, including paralysis, severe weakness, sensory deficits, and missing limbs. Among the problems faced by people with physical impairments are poor muscle control; difficulty in seeing, sensing, or grasping; difficulty in reaching objects; and difficulty in performing complex or compound manipulations. Since the sensor technology has not yet been used extensively in this field, it is difficult to envision potential problems and to estimate how much more work will be required to develop assistive interface devices that accommodate this wide range of needs.

The incorporation of virtual reality technology to provide increased access poses additional challenges in the design of applications that are accessible to people who have disabilities, but meeting these challenges will likely have other, unanticipated benefits. Software that is designed to be easy to use by people who have performance limitations may also be easier for other users. For example, MouseKeys is a feature that was added to personal computer operating systems to allow people who cannot use a mouse to move the cursor via the keyboard. This feature is commonly used by people doing graphics layout to make fine adjustments in graphics positioning, because it allows precise, pixel-by-pixel movement from the keyboard that is not possible with the standard mouse. In effect, making systems more accessible to the physically impaired can provide new insights into better human interface design in general [20].

Acknowledgments

We thank Lyn Dupré for her comments on and editing of this manuscript, Joan Walton for her design and implementation of the Virtual Receptionist, Joseph Vick Roy for his insights into functional assessment of the upper extremity, John Reed for his design and implementation of the Motion Analysis System, and Harry Murphy for thought-provoking discussions about the potential of virtual reality to aid people who have disabilities.

References

[1] J.W. Brelsford, Physics education in a virtual environment, in: Proc. Human Factors and Ergonomics Soc. 37th Ann. Meeting, Santa Monica, CA (1993) 1286-1290.

[2] M. Bricken, Virtual reality learning environments: Potential and challenges, Proc. SIGGRAPH '91, Comput. Graphics 25 (1991) 178-184.

[3] W. Bricken and W. Winn, Designing virtual worlds for use in mathematics education: The example of experiential algebra, Educational Technol. 32 (1992) 12-19.

[4] C. Carlsson and O. Hagsand, DIVE - a multi-user virtual reality system, in: Proc. IEEE 1993 Virtual Reality Annual Int. Symp., Seattle, WA (1993) 394-400.

[5] J.C. Chung, M.R. Harris, F.P. Brooks, H. Fuchs, M.T. Kelley, J. Hughes, M. Ouh-Young, C. Cheung, R.L. Holloway and M. Pique, Exploring virtual worlds with head-mounted displays, in: Proc. SPIE Conf. on Three-Dimensional Visualization and Display Technologies, Los Angeles, CA (1989) 42-52.

[6] M.F. Deering, High resolution virtual reality, Proc. SIGGRAPH '92, Comput. Graphics 26 (1992) 195-201.

[7] M.F. Deering, Making virtual reality more real: Experience with the virtual portal, in: Proc. Graphics Interface '93, Toronto, Ont., Canada (1993) 195-202.

[8] T.A. Furness, Super Cockpit amplifies pilot's senses and actions, Government Computer News (Aug. 15, 1988) 76-77.

[9] W.J. Greenleaf, DataGlove and DataSuit: Virtual reality technology applied to the measurement of human movement, in: Medicine Meets Virtual Reality II: Interactive Technology & Healthcare - Visionary Applications for Simulation, Visualization, Robotics, San Diego, CA (1994) 63-69.

[10] U.G. Gupta, Virtual reality: A computerized illusionary world, J. Comput. Informat. Syst. 33 (1992-1993) 46-50.

[11] I. Hunter, Teleoperated microsurgical robot and associated virtual environment, in: Medicine Meets Virtual Reality II: Interactive Technology & Healthcare - Visionary Applications for Simulation, Visualization, Robotics, San Diego, CA (1994) 85-89.

[12] K.L. Kaplan, Project description: Surgical room of the future, in: Medicine Meets Virtual Reality II: Interactive Technology & Healthcare - Visionary Applications for Simulation, Visualization, Robotics, San Diego, CA (1994) 95-98.

[13] J. Kollin, A retinal display for virtual environment applications, in: Proc. Soc. for Information Display, 1993 Int. Symp., Digest of Technical Papers XXV, Playa del Rey, CA (1993) 827.

[14] J.R. Merril, Surgery on the cutting edge: Virtual reality applications in medical education, Virtual Reality World 1 (Nov./Dec. 1993) 34-38.

[15] H. Murphy, Personal communication, Sep. 1992.

[16] C. Cruz-Neira, D.J. Sandin and T.A. DeFanti, Surround-screen projection-based virtual reality: The design and implementation of the CAVE, in: Proc. SIGGRAPH '93, Comput. Graphics 27 (1993) 135-142.

[17] V.S. Pantelidis, Virtual reality in the classroom, Educational Technol. 33 (April 1993) 23-27.

[18] R.J. Stone, Virtual reality: Interfaces for the 21st century, in: Sixth Int. Expert Systems Conf., London, UK (March 19-21, 1991).

[19] J. Trimble, T. Morris and R. Crandall, Virtual reality: Designing accessible environments, Team Rehab (1992) 33-36.

[20] G.C. Vanderheiden, Design of software application programs to increase their accessibility for people with disability, Technical Report, Trace R&D Center, Department of Industrial Engineering, University of Wisconsin-Madison, Madison, WI, 1991.

[21] J. Vick Roy, Personal communication, Oct. 1989.

[22] D. Warner, T. Anderson and J. Johanson, Bio-Cybernetics: A biologically responsive interactive interface, in: Medicine Meets Virtual Reality II: Interactive Technology & Healthcare - Visionary Applications for Simulation, Visualization, Robotics, San Diego, CA (1994) 237-241.

[23] D. Warner, J. Sale and S. Price, The neurorehabilitation workstation: A clinical application for machine-resident intelligence, in: Proc. 13th Annual Int. Conf. IEEE Engineering in Medicine and Biology Soc., Los Alamitos, CA (1991) 1266-1267.