
Autonomous Robots 11, 49–58, 2001. © 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.

Advanced Operator Interface Design for Complex Space Telerobots

J. CORDE LANE, CRAIG R. CARIGNAN AND DAVID L. AKIN
Space Systems Laboratory, University of Maryland, College Park, MD 20742

Abstract. With technology advancements in computers and displays, computer interfaces can be used to alleviate the operator workload while controlling a complex robot. A graphical simulation of the robotic system can be used to improve development, train operators, and enhance their performance during actual operations. This paper summarizes the advantages realized using a graphical simulation to visually display telemetry from a multiple arm space telerobot. By displaying the commanded position of a manipulator graphically along with the actual position, the operator becomes more effective in diagnosing anomalies in the system. The negative impact of communication time delay can also be alleviated using this commanded display. The above advantages coupled with the simulation's ability to display multiple synthetic views, to move each view to any virtual location, and to highlight functions to emphasize important information, can ease the operator's workload, making him or her more effective in controlling a complex system.

Keywords: telerobotics, graphical simulation, predictive display, time delay

1. Ranger TSX Real-Time Visualization

The interest in the use of robotics has increased through the years. The most common commercial application is the use of manufacturing robots on an assembly line performing preprogrammed routines to fabricate products quickly and reliably. Teleoperated robots permit humans to perform a wide variety of indeterminate tasks from a remote location. For example, many police departments now use remotely controlled robots to locate and dispose of bombs.

The ability to extend human capability, using robotics, can benefit the space program. A robot with the proper sensing and manipulation abilities, which can be controlled from the ground, could improve safety and astronaut effectiveness. A robot could assist tasks that previously required an astronaut to perform extravehicular activity (EVA). Building on past experience in space human factors and space telerobotics, the Space Systems Laboratory is currently developing the Ranger Telerobotic Shuttle Experiment (RTSX), an advanced testbed telerobot designed to demonstrate that robots can perform maintenance tasks on space stations and satellites.

The Ranger TSX vehicle has four manipulator arms, shown in Fig. 1, to perform its required tasks. A six degree-of-freedom (DOF) positioning leg connects the main body of the vehicle to the pallet, providing mobility for the vehicle to optimize its working position. The two 8-DOF dexterous arms are then used to perform the maintenance task. A 7-DOF video arm is used to provide the remote operator with the desired view of the work area.

The plan is to control Ranger either from a local control station on the Shuttle or remotely from the ground. Figure 2 shows the layout of an operator console for the ground control station. The operator uses the two hand controllers to directly control the robot. Video from different cameras on the robot can be viewed from three video monitors. A Silicon Graphics computer and monitor displays a user interface containing a graphical simulation of the robot. The flight control station has a functional equivalent of hand controllers, monitors, and computer, but it is packaged differently on the Space Shuttle Aft Flight Deck. To successfully control a vehicle with the complexity of Ranger from a remote ground station, virtual reality techniques are used to augment the human-computer interface. For example,



Figure 1. Ranger TSX arm configuration.

Figure 2. Ground control station.

computer graphics are used to provide alternative views that could not be achieved with live video.

A graphical simulation was developed to allow an operator to visualize the telerobot and worksite in a three-dimensional environment. Several windows are provided to allow simultaneous viewing of multiple views. One stereo and three monoscopic views from over 50 predefined virtual cameras may be selected; views attached to each arm, to the vehicle body, and to the worksite improve situational awareness by providing several frames of reference from which the world may be observed. Every view is completely reconfigurable, allowing the operator to freely move about the virtual environment and adjust the field of view.
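
The view arrangement described above can be sketched as a small registry of predefined cameras feeding a fixed set of view slots. This is a minimal illustration, not Ranger's software; all class and field names here are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class VirtualCamera:
    name: str
    attached_to: str        # parent frame, e.g. "left_arm", "body", "worksite"
    position: tuple         # look-from point in the parent frame
    target: tuple           # look-at point
    fov_deg: float = 45.0   # adjustable field of view

class ViewManager:
    """Registry of predefined cameras plus the view slots on screen."""
    def __init__(self, slots=4):
        self.cameras = {}
        self.active = [None] * slots   # one stereo + three monoscopic views

    def register(self, cam):
        self.cameras[cam.name] = cam

    def select(self, slot, name):
        # any predefined camera can be assigned to any on-screen slot
        self.active[slot] = self.cameras[name]

views = ViewManager()
views.register(VirtualCamera("worksite_overview", "worksite", (2, 2, 2), (0, 0, 0)))
views.register(VirtualCamera("left_arm_wrist", "left_arm", (0, 0, 0.1), (0, 0, 1)))
views.select(0, "worksite_overview")
```

Because each camera carries its own position, target, and field of view, "reconfiguring" a view amounts to editing those fields at runtime.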

Figure 3. Control station with virtual environment.

Telemetry data, either from a training simulation or from sensors on the vehicle, is used to update the simulation, moving the robotic system and highlighting changed states.

The screenshot in Fig. 3 shows the control station for Ranger TSX. Using the keyboard and mouse, the operator uses the windows on the perimeter to change control parameters and monitor the vehicle's state. The operator can use a variety of input devices to directly interact with the graphical simulation and the vehicle, moving each of the manipulators to perform the specified maintenance task. The graphical simulation can not only be displayed on a standard computer monitor; stereo is also provided using a head-mounted display or LCD shutter glasses.

2. Testing with Simulations

Developing and testing Ranger TSX prior to the Shuttle mission requires simulation of the space environment here on earth. The Space Systems Laboratory uses a 25-foot deep, 50-foot diameter tank of water to simulate the microgravity environment of space. The Ranger Neutral Buoyancy Vehicle (NBV), shown in Fig. 4, was designed to be a functionally equivalent telerobot tested in the neutral buoyancy tank. The same control station and graphical simulation software is used to fly either Ranger TSX or NBV. This allows for extensive testing of the software using NBV to ensure all control algorithms perform as expected.

Neutral buoyancy provides a high-fidelity simulation for developing and training. However, there are



Figure 4. Ranger NBV in the Space Systems Laboratory.

times when Ranger NBV is either not functioning or not available. The use of a computer simulation fills the void, allowing for some level of training and development without the actual vehicle. Even with a relatively low-fidelity computer simulation, much can be accomplished with the system.

The same input devices, control station, and graphical simulation are used during the computer simulations as during TSX and NBV robotic operations. Figure 5 shows how each of the software processes is linked together. The Arm Control Simulation is a functional equivalent of the control software used to command the actual manipulators on the robot. The Arm Interaction Simulation is a mathematical process that runs on a computer to mimic the behavior of the actual robot. This includes addition of manipulator dynamics, collision with the worksite, and manipulation of maintenance task elements. Data from the arm simulations are streamed to update the status on the control station panels. The graphical simulation is used in place of live video coming back from the robot.

The simulations are used to train new operators in the fundamentals of controlling the robot.

Figure 5. Training simulation software processes.

Lessons are learned on how to properly use the different input devices, how each of the control station functions is utilized, and what the procedure steps are for performing specific tasks. Using these simulations, novices with no prior experience controlling robots, including young children, have learned enough to pilot neutral buoyancy vehicles within a few minutes. Figure 6 shows another project, SCAMP, which is designed to fly within the tank and take video. Every year an open house event allows the general public to control a telerobot. People of all ages and backgrounds learn how to use the ground control station to adequately fly the training simulation. After only a few minutes of experimentation with the simulation, they more confidently take control of the actual underwater vehicle.

Although the training simulations have proved successful in quickly reducing the learning times of operators, the greater advantage these simulations have provided is the capability to develop the robotic system. Over five hundred hours of human factors testing has occurred using the simulations to determine the best strategies for controlling Ranger remotely under different amounts of time delay. The arm control software has been tested extensively using many hours of computer simulations, debugging code, adjusting control parameters, and developing unique control methods before testing with hardware. Much analysis has also used the graphical simulations to test ideas before considering major design steps. Before changing the size and length of the manipulators, the computer simulations were updated and tested to determine the effects of such changes.

Although the graphical simulation replaces actual video during training, the graphical simulation can be used to augment live video during robotic operations. The graphical simulation is updated by telemetry data; this data can either come from a simulation or from sensors on the vehicle. During vehicle operations the graphical simulation can provide additional virtual



Figure 6. SCAMP in a computer training simulation (left) and performing operations in the tank (right).

views of the robot. Operators can move to an infinite number of locations using the virtual environment and use that vantage point to assist them with their task.

The virtual environment can also be used to perform simple functions when actual video is not provided. For example, during one test, the live video satellite feed was lost. However, the data feedback was still streaming, allowing the operators to continue performing basic manipulator checkouts without video.

The capability to display the actual position of the robot within the graphical simulation has proven helpful in many circumstances. The ability to augment, and even replace, live video has improved the operator's situational awareness. However, a graphical simulation has the advantage of displaying things that could never be observed from live video images. The graphical simulation can easily highlight items to draw the operator's attention to important information. Telemetry data can warn of impending problems, which can be displayed within the virtual environment. An error condition can cause a manipulator to flash until the operator addresses the condition. Virtual graphics filters can be used to observe power consumption, temperature values, and global status for each manipulator joint by providing a gradient of colors to indicate various levels.
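
The color-gradient filter described above can be sketched as a simple linear blend from a "nominal" to a "limit" color. This is an illustrative assumption about the mapping; the thresholds and green-to-red scheme are not taken from the Ranger software.

```python
def telemetry_color(value, low, high):
    """Linearly blend from green (nominal) to red (limit) as value rises."""
    t = (value - low) / (high - low)
    t = max(0.0, min(1.0, t))          # clamp to [0, 1]
    return (t, 1.0 - t, 0.0)           # (red, green, blue)

# Example: a joint temperature rendered against a 20-80 degree range.
nominal = telemetry_color(20.0, low=20.0, high=80.0)      # pure green
overheating = telemetry_color(80.0, low=20.0, high=80.0)  # pure red
```

The same function could color any scalar telemetry channel (power draw, joint torque) by choosing appropriate `low`/`high` bounds.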

There are many applications using the virtual environment that can assist the operator in viewing the status of the system in ways that would not be possible with video images. The rest of this paper will focus on one specific visualization: a graphical display of the commands sent to the vehicle.

3. Reducing Time Delay Effects

Although sensor data provides a fine method for visualizing the system's current state, much can be learned from being able to view the desired state of the system and how the two differ. This has proved most effective in reducing the effects of time delay. Ranger TSX is to be controlled not only on orbit, but also from the ground. Depending on the communications technique used to control Ranger, a round-trip time delay could range anywhere from less than one second to ten seconds. Research has shown how time delay can significantly degrade operator performance. Ferrell (1965) showed that as time delay increases, the completion time for a two-degree-of-freedom (DOF) manipulator positioning task increased proportionally. At a time delay of approximately 1 sec., Ferrell found that operators began to switch their control strategy from continually commanding and compensating for the time delay to a move-and-wait strategy. With a several-second time delay, operators would input a series of commands, then wait out the time delay to observe the results before sending a new series of commands. This "move and wait" strategy significantly increased completion times of tasks. Black (1971) had similar results illustrating the increase in task time with higher levels of delay for a 6-DOF manipulator task. Thompson (1977) found, at different amounts of time delay, that as the task's difficulty increased, from touching a simple plate to inserting a square peg into a hole, the time to complete the task increased proportionally. Building off of existing research, further testing was performed to determine how time delay would affect a specific space robot under development.
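
A back-of-the-envelope sketch shows why the move-and-wait strategy makes completion time grow linearly with delay: each open-loop move pays the full round trip before its result can be observed. The move counts and times below are illustrative, not Ferrell's data.

```python
def move_and_wait_time(n_moves, move_time, round_trip_delay):
    """Total task time when every move waits out the full round-trip delay."""
    return n_moves * (move_time + round_trip_delay)

baseline = move_and_wait_time(10, 1.0, 0.0)   # ten 1-second moves, no delay
delayed = move_and_wait_time(10, 1.0, 3.0)    # same task over a 3 s round trip
```

Since the delay term is multiplied by a roughly fixed number of moves, doubling the round trip adds a fixed increment to total time, giving the proportional relationship the studies report.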



In the past, operators could only send raw commands such as "move forward". Kelley (1962) used predictive displays to help control dynamic lags of submarines and time delays of spacecraft. These older systems had open loop control: a force command was given and the system would then act without knowing the outcome on its actual position. The pilot would close the loop by making small adjustments to keep the system within operating parameters, such as keeping a submarine at a level depth.

To address time delay, many researchers use a predictive display which forecasts where the manipulator will be after the delay. Noyes (1982) developed one of the first telemanipulator predictive displays. He had operators control a 6-DOF arm to perform a manipulation (block moving) task and a path tracing task. A predictor display showing a wireframe of the manipulator was overlaid on the delayed actual video. A performance increase from 50–150% was found with subjects using the predictor display compared to the delayed video alone. ROTEX used a stereographic predictive simulation to help cope with up to 7 seconds of delay (Hirzinger et al., 1994). Mar (1985) overlaid a computer generated predictive display on top of live video, finding a 15–25% improvement when working with 1.5, 3, and 5-second time delays. During large motions, the predictive display performed well. But during final movements, like final placement of an object, the predictive display would obscure the live video, lessening its advantage. Error between the predictive and actual display reduces the effectiveness of a predictive display; however, it can be a powerful tool in ameliorating the effects of time delay.

For a robotic system, a computer generated predictive display can be overlaid on live video of the actual system (Black, 1971). Figure 7 shows an example of a predictive display, which is translucent, overlaid on the opaque actual display. In this case, the actual display is also computer generated from the telemetry data from the vehicle. The operator can then use the predictive display to observe what will happen to the system as he or she inputs commands. Mid-course corrections can even be made with the predictive display, which can prevent the actual system from impacting or taking the wrong course of action.

With the advancement of computer technology, systems could be developed with more on-board processing. Advanced control algorithms allow the system to close the control loop autonomously, instead of depending on a human pilot to continuously make small

Figure 7. Translucent predictive display overlaid on the actual display.

Figure 8. Commanded display block diagram.

adjustments to close the loop. A diagram of a system with closed loop control is shown in Fig. 8. Using input devices, the control station calculates and sends desired positions, x_d, to the system. As the left hand column of Fig. 8 shows, those commands are delayed and then received by the on-board Controller, which drives the system to the desired location. The Controller uses the error between the desired and actual position,



x_d − x, to determine the appropriate force to generate, u. That force is then sent to the Actual System, which then moves in response to the commanded and external forces. The position of the actual system, x, is read by the sensors and sent back to the Controller. This loop continues, so that without further input from the pilot, the system will attempt to go to the desired position and then maintain that position. Any new desired position is merely used to generate the appropriate force to move the system to the new location. While this internal loop is going on, the actual system's position is continuously streaming to the Actual Display.
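
The on-board loop just described can be sketched in a few lines, assuming a simple proportional-derivative controller and unit-mass dynamics; the gains and time step are illustrative, not Ranger's actual control law.

```python
def step(x, v, x_d, kp=4.0, kd=3.0, dt=0.01):
    """One control cycle: force from the error x_d - x, then integrate."""
    u = kp * (x_d - x) - kd * v     # Controller: force u from position error
    v = v + u * dt                  # Actual System: unit mass responds to u
    x = x + v * dt
    return x, v

# With no further pilot input, the loop drives x to x_d and holds it there.
x, v = 0.0, 0.0
for _ in range(5000):
    x, v = step(x, v, x_d=1.0)
```

The key property for the displays discussed below is that the operator supplies only x_d; the force computation and error correction happen on board, every cycle.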

A predictive display can be effectively used with the closed loop controller. As the middle path in the block diagram in Fig. 8 illustrates, the desired position command, x_d, is also sent to the Modeled Controller. The Modeled Controller may be the exact same process as the Controller, just located on the local processor, or it may be a simplified controller. As with the Controller, it uses a modeled actual position, x̂, and the desired position, x_d, to generate a modeled force, û. That modeled force is then used by the Predictive Simulation to calculate the predicted actual position, x̂, which is fed back to the Modeled Controller; this feedback attempts to close the loop, making the projected position, x̂, match the commanded position, x_d. The predicted actual position, x̂, is also illustrated on the Predictive Display. Since the Modeled Controller and Predictive Simulation are generally fed by only the desired position, error between the modeled mathematics and the actual system will make the Predictive and Actual Displays differ. The typical solution is to occasionally use the actual position, x, to calibrate the Predictive Simulation.
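
The middle path of Fig. 8 can be sketched as a local copy of the control loop plus an occasional calibration step that blends telemetry back into the model. The blend factor and the stand-in "actual" value are illustrative assumptions.

```python
def predict(x_hat, v_hat, x_d, kp=4.0, kd=3.0, dt=0.01):
    """Modeled Controller + Predictive Simulation: one local cycle."""
    u_hat = kp * (x_d - x_hat) - kd * v_hat   # modeled force from modeled error
    v_hat = v_hat + u_hat * dt
    x_hat = x_hat + v_hat * dt
    return x_hat, v_hat

def calibrate(x_hat, x_actual, alpha=0.5):
    """Occasionally blend the actual position into the model to cancel drift."""
    return x_hat + alpha * (x_actual - x_hat)

# Run the local prediction toward x_d = 1.0, then calibrate against telemetry
# that (hypothetically) reports the real arm stopped slightly short at 0.95.
x_hat, v_hat = 0.0, 0.0
for _ in range(5000):
    x_hat, v_hat = predict(x_hat, v_hat, x_d=1.0)
x_hat = calibrate(x_hat, x_actual=0.95)
```

Without the `calibrate` step, any mismatch between model and hardware accumulates and the Predictive and Actual Displays drift apart, which is exactly the failure mode the text describes.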

Since a predictive simulation is based on a model, it is only as good as its model. This can prove useful when detailed dynamics are programmed into the model. If the robotic system is flexible and oscillates with every command, and those dynamics are properly modeled, then the resulting oscillations will be incorporated in the predictive display. When modeled correctly, the operator can use the predictive display to compensate for those dynamics, perhaps preventing the robotic system from impacting due to dynamic oscillations. The result can prove to be invaluable to the operator; however, a simpler technique may prove as effective for many tasks.

The right hand column of Fig. 8 shows the simplicity of the commanded display, which merely displays the commanded desired position, x_d. A commanded

display is similar to a predictive display in that it shows where the system will go in the future. The Controller and Actual System already have the responsibility, with their closed loop control, to move to that commanded location. A commanded display does not require the use of any mathematical modeling or error calibration algorithms. Instead, the actual commands sent to the system are displayed. The onboard control system then has the responsibility of moving to the commanded position.

The operator is able to control the commanded display in real-time, in effect controlling a "video game" which in turn commands the actual system to mimic the output from that video game. Since no dynamics are modeled, the commanded display gives no clues on oscillations or transient movements of the actual system. However, the commanded display shows the steady state position that the actual system should reach. Assuming the actual system is capable of executing the task and nothing has failed, given time it will achieve the same location as the commanded display.
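
The contrast with the predictive display can be made concrete with a minimal sketch: the commanded display renders x_d instantly with no model at all, while the actual display trails by the round-trip delay. Here the "actual" system is idealized as exactly tracking x_d after the delay.

```python
from collections import deque

def commanded_vs_actual(inputs, delay_steps):
    """Return (commanded, delayed-actual) pairs for each time step."""
    pipe = deque([0.0] * delay_steps)  # commands in flight to the vehicle
    frames = []
    for x_d in inputs:
        pipe.append(x_d)
        x_actual = pipe.popleft()      # actual display lags by the delay
        frames.append((x_d, x_actual)) # commanded display shows x_d at once
    return frames

frames = commanded_vs_actual([0.2, 0.4, 0.6], delay_steps=1)
```

Each frame pairs what the operator's "video game" shows now with what the delayed telemetry shows; the gap between the two columns is exactly what the overlaid display makes visible.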

In some situations, the lack of model dynamics is an advantage of the commanded display. With the use of adaptive control techniques, robotic systems can learn and compensate for certain detrimental dynamic effects: for example, drag, friction, and inertia. The operator can then be insulated from these effects by only controlling a commanded display, allowing the actual system to adapt and thereby making the robot easier to control. Commanded displays are useful in situations where the system is capable of following commands and where the intermediate status of the system is neither warranted nor desired.

Five subjects controlled a simulation of a 7-DOF manipulator to perform a peg-in-hole task (Lane, 2000). The time to insert the peg successfully was determined at different amounts of time delay. Figure 9 shows the linear correlation between completion time and amount of delay; this linear relationship matches Ferrell's earlier experiments (Ferrell, 1965). By overlaying a commanded display, completion times dropped by 22%–62% as time delay increased. The effects of time delay were not as strong with the commanded display; the increase in time with the commanded display was one-fifth that of the unmitigated display. The overlaid commanded display seemed to lower times even with no delay. This 22% decrease in time was believed to be caused by the ability to observe a manipulator impact; the commanded display would literally leave



Figure 9. Comparison between unmitigated time delay (noCmd) and commanded display (Cmd*).

the ‘stuck’ actual arm behind. This additional feedback allowed subjects to compensate faster when the manipulator was ‘caught up’ on the worksite. This dramatic reduction in completion time was due to the perfect correlation between the commanded display and the actual delayed display. Subjects relied heavily on the expectation that the arm would actually move to the proper location. Any deviation errors between the commanded and actual display would lower performance.

The commanded display may not need to adjust for errors due to dynamics, but calibration between any overlay graphics can still be difficult. Errors due to live video aberrations, changing attributes due to temperature changes or mechanical discrepancies, or even errors in telemetry data may need to be compensated. Mar (1985) found problems trying to calibrate the predictor to the live video images. Although aligned at the neutral position, misalignment up to the size of the gripper was apparent within the workspace of the experiment. However, even with small errors, a predictive or commanded display can still be very useful (Kelley, 1962; Lane, 2000).

In the 7-DOF manipulator peg-in-hole experiment described above, not only was time delay varied, but a randomized error in each joint was also introduced (Lane, 2000). This error would cause the arm to drift as it was commanded. If the manipulator was stationary, no drifting occurred. But as it moved faster, it would oscillate and typically drift away from the nominal commanded position. While the task was performed, the error in each joint was tracked, and upon completion the overall joint error was averaged for that trial. Figure 10 shows the linear curve fit relations between completion time and average joint error. As the error increased, the task grew more difficult, causing completion times to increase. The effect of time delay can also be observed as the

Figure 10. Completion time for increasing error.

increased delay shifts the performance line up. Also, there may be an interaction between error and time delay; combined time delay and error caused a sharper rise in completion times. This experiment concurred with previous studies (Kelley, 1962; Lane, 2000); although errors between the commanded/predicted display and the actual display did lower performance, the erroneous commanded/predicted display outperformed unmitigated time delay.

Any additional display, predictive or commanded, can clutter the screen and perhaps even occlude critical task information. Mar (1985) claimed that during large motions, the predictive display performed well. But during final movements, such as final placement of an object, the predictive display would obscure the live video, lessening its advantage. Bejczy (1990) used a wire-frame predictive display, which reduces occlusion, during testing. During the peg-in-hole test, even though the commanded display was translucent, some subjects complained about occlusion (Lane, 2000). When the error and time delay were low, the commanded and actual displays would only slightly deviate, causing difficulty in distinguishing between the two.

4. Identifying System Anomalies

During robotic operations without time delay, a negligible displacement exists between where a manipulator is commanded to go and where it currently resides. When both the commanded and actual position are overlaid, a small difference can be seen due to the oscillating dynamics of the actual arm. Under nominal conditions, therefore, little deviation is observed between the two displays. In many instances, however, anomalies have occurred during operations causing the two displays to diverge greatly.



Figure 11. Deviation between commanded and actual display due to contact.

This ability of the graphical simulation has been used to quickly diagnose and help the operators determine what has transpired.

The most common deviation between the commanded and actual position is due to contact with an object. Figure 11 shows the translucent commanded display offset from the actual, which has contacted the grid surface. This not only indicates to the operator that contact has occurred, it can be used to give rudimentary force information. To close the loop and move the manipulator to the commanded state, the controller increases the torque of each joint proportionally to the required deviation. Therefore, as the commanded and actual displays move apart, greater force is applied to the arm. The graphical simulation could be used during contact operations to monitor the force build-up of the manipulator. When performing a peg-in-hole task, some force is required to complete the task. However, the graphical simulation would help identify when the manipulator was jammed. The operator could then back off the forces and try the insertion again. Having the ability to detect impacts improved performance, as shown in Fig. 9, in a simulated peg-in-hole task.
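
The rudimentary force cue described above follows directly from proportional control: torque grows with the commanded-actual gap, so the gap itself can flag contact. A minimal sketch, with illustrative gains and thresholds (not Ranger's values):

```python
def joint_torques(q_cmd, q_act, kp=50.0):
    """Proportional controller: torque in each joint grows with its deviation."""
    return [kp * (c - a) for c, a in zip(q_cmd, q_act)]

def contact_suspected(q_cmd, q_act, threshold=0.05):
    """Flag contact when any joint deviates beyond a tolerance (rad)."""
    return any(abs(c - a) > threshold for c, a in zip(q_cmd, q_act))

# Hypothetical three-joint arm: the middle joint is blocked by the worksite,
# so its commanded and actual angles have diverged.
q_cmd = [0.50, 1.20, -0.30]
q_act = [0.50, 1.05, -0.30]
```

Watching `joint_torques` grow as the displays separate is exactly the "force build-up" monitoring the paragraph describes; `contact_suspected` is the cue to back off and retry the insertion.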

Certain anomalies can be hard for an operator to detect, especially when focusing on controlling the robot, not diagnosing malfunctions. In the early development of the manipulators, a joint occasionally no longer responded to commands. All information coming back from the affected joint looked normal; it merely ignored further commands. This resulted in the manipulator

Figure 12. Wrist failure causing discrepancy.

appearing to be working, just difficult to control. If the operator commanded a forward motion, the manipulator moved forward and slightly to one side. Many operators dismissed the malfunction as operator error or some other dynamic within the system. Using the graphical simulation, this anomaly was properly identified in less than one minute by observing that only one joint did not match the commanded display properly, as shown in Fig. 12.

On one occasion, the robot's on-board controller exhibited a joint error. Due to a programming inconsistency, the computer offset the shoulder joint by 180°. Although the actual arm was forward in live video, the computer believed it was turned around backwards. This caused the operator's controls to be reversed. Commanding the arm forward resulted in a backward motion. The operators assumed a control station error occurred, but with the graphical simulation, the actual anomaly was identified immediately, as shown in Fig. 13. Although the commanded (translucent) manipulator was forward, the actual (opaque) arm was turned around.

The ability to view both the commands and telemetry can also be used to monitor the communication status. Several times an operator used the input devices to control the robot with no result. The graphical simulation can be used to determine whether the commands were sent. If the commanded display does not move, no command was sent to the vehicle, and the operator would check the communication status. When the commanded display did move, the operator would diagnose the on-board system processes to confirm that the current controllers were loaded.

Figure 13. Discrepancy between the commanded and actual display.

All of the above scenarios occurred during the development of Ranger. Quick and correct identification of the anomalies would not have been possible using live video only, and would have taken much longer using a traditional graphical user interface.
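The communication checks described above amount to a simple triage on the two displays. The following sketch uses hypothetical names and messages to illustrate the decision procedure; it is not code from the actual control station:

```python
def diagnose_no_motion(commanded_display_moved, actual_display_moved):
    """Triage when the operator commands motion and nothing happens.
    Inputs: whether each graphical display has changed recently."""
    if not commanded_display_moved:
        # The commands never left the control station.
        return "check communication link"
    if not actual_display_moved:
        # Commands were sent, but the vehicle did not respond.
        return "check on-board controller processes"
    return "system responding"
```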

5. Virtual Environment Control Station

Figure 14 shows the conceptual design of a control station that emphasizes the virtual environment as the primary means of control. All of the above concepts of multiple movable views, commanded displays, enhanced telemetry visualization, and anomaly resolution would be included. Alternate views can be displayed on top of the primary view. Important control modes will be shown, reminding the operator of important parameters. Telemetry from the robot will be visually presented to warn the operator of potential errors or internal anomalies that need to be addressed. All of this information can be filtered by user preferences so that the operator may focus on the current task.

Figure 14. Conceptual design for the Ranger TSX virtual environment control system.

The largest difference would be that the graphical simulation would no longer be only a method to visualize data; it would also become an input device. The operator will interact with the virtual environment directly. Selecting an arm would bring up status menus to change control modes; selecting a joint might allow that joint to be controlled directly. Graphical control windows can be transparently overlaid and manipulated. Objects can be displayed, highlighted, and removed from the virtual environment without the operator ever taking his or her hands off the hand controllers. This design reduces operator workload so that one operator could effectively control a complex robotic system.

6. Conclusion

Computer simulations provide a means to design and test the system prior to fabrication, to train effective operators faster, and to assist during actual operations through real-time information visualization. For the Ranger TSX project, most simulation time was devoted to assisting development. Whether testing a new video arm configuration, checking the stability of a control algorithm, or determining the optimal operational procedure, the simulation was a way to develop ideas before implementing them on the actual system. Training with computer simulations was very effective for teaching novices the basics of telerobotic control and the procedures for performing a task. Effective practice in controlling the robot, however, is better performed in the neutral buoyancy tank, where complicated motion dynamics, friction, and collisions are simulated better than in a computer simulation.

Displaying the robot's telemetry in a graphical simulation allows visualization of information that could not be illustrated on live video. Multiple views can be displayed simultaneously and moved independently. Highlighting objects within the simulation allows easy identification of certain system anomalies. When controlling the robot under time delay, the ability to overlay the commanded location of the manipulator can significantly assist the operators. When the commanded display accurately predicted the motion of the manipulator, up to a 62% reduction in completion time occurred during a simulated peg-in-hole task. As the prediction became more inaccurate, the effectiveness of the commanded display decreased. However, even under moderate errors between the commanded and actual displays, the erroneous commanded display still outperformed having no mitigation at all.

The capability to view both the commands and the actual telemetry enhances operator situational awareness, allowing the operator to understand what the robot is attempting to do or why it is incapable of doing so. With the additional enhancement of highlighting, the graphical simulation augments the capability of a traditional graphical user interface. This study has shown that commanded displays are just the beginning of offloading the operator during complex robotic tasks. As more telemetry from the robot is displayed graphically, the operator will be able to perform his or her tasks more effectively.


Corde Lane received his B.S. (1993) and M.S. (1997) in Aerospace Engineering at the University of Maryland, College Park. He is currently pursuing a Ph.D. in Aerospace Engineering in the University of Maryland Space Systems Laboratory. His thesis research centers on control station software ranging from traditional graphical user interfaces to virtual reality as part of the Ranger Telerobotic Flight Experiment.

Craig Carignan is a Research Associate in the Department of Aerospace Engineering at the University of Maryland. He received his Doctor of Science degree from the Massachusetts Institute of Technology in 1987, where he also received S.B. and S.M. degrees in Aeronautics and Astronautics. Prior to joining the University of Maryland, he worked at the NASA Goddard Robotics Laboratory on prototype controls development for the Space Station Flight Telerobotic Servicer Project. He was a member of the critical design review board for the Shuttle Demonstration Test Flight and was an attitude control system consultant on the Small Explorer Project. His main research interests are in robot impedance control, dynamical modeling, and haptic simulation. He currently leads the robotics effort on the Ranger Telerobotic Shuttle Experiment.

David Akin is an Associate Professor at the University of Maryland, holding joint appointments in the Department of Aerospace Engineering and the Institute for Systems Research. He received a Doctor of Science degree in Aeronautics and Astronautics at the Massachusetts Institute of Technology in 1981. He is the Director of the University of Maryland Space Systems Laboratory, where he conducts research into advanced technologies for human and robotic space operations at the Neutral Buoyancy Research Facility. He was the Principal Investigator on the Experimental Assembly of Structures in EVA, flown on STS-61B, and is the Principal Investigator for the Ranger Telerobotic Shuttle Experiment.