
CHAPTER 34 HUMAN SUPERVISORY CONTROL

Thomas B. Sheridan
Massachusetts Institute of Technology
Cambridge, Massachusetts

1 DEFINITIONS OF HUMAN SUPERVISORY CONTROL
2 SOME HISTORY
3 EXAMPLES OF HUMAN SUPERVISORY CONTROL IN CURRENT TECHNOLOGICAL SYSTEMS
4 SUPERVISORY ROLES AND HIERARCHY
5 SUPERVISORY LEVELS AND STAGES
6 PLANNING AND LEARNING: COMPUTER REPRESENTATION OF RELEVANT KNOWLEDGE
7 TEACHING THE COMPUTER
8 MONITORING OF DISPLAYS AND DETECTION OF FAILURES
9 INTERVENING AND HUMAN RELIABILITY
10 MODELING SUPERVISORY CONTROL
11 POLICY ALTERNATIVES FOR HUMAN SUPERVISORY CONTROL
12 SOCIAL IMPLICATIONS AND THE FUTURE OF HUMAN SUPERVISORY CONTROL
13 CONCLUSIONS
REFERENCES

1 DEFINITIONS OF HUMAN SUPERVISORY CONTROL

Human supervisory control is a construct, formally something constructed by the mind: a theoretical entity, a working hypothesis or concept pertaining to a relationship between a human and a machine or physical system. It is not by itself a normative or predictive model, though it is descriptive of relationships between system elements where both human and computer actively interact. The word “human” is added because the term “supervisory control” is sometimes used by control engineers to describe software agents that aid in system measurement.

This chapter is not a comprehensive or even-handed review of the literature in human–robot interaction, monitoring, diagnosis of failures, human error, mental workload, or other closely related topics. Sheridan (1992, 2002), Sarter and Amalberti (2000), Degani (2004), and Sheridan and Parasuraman (2006) cover these aspects more fully.

The term human supervisory control is derived from the close analogy between the characteristics of a human supervisor’s interaction with subordinate human staff members and a person’s interaction with “intelligent” automated subsystems. A supervisor of people gives directives that are understood and translated into detailed actions by staff members. In turn, staff members aggregate and transform detailed information about process results into summary form for the supervisor. The degree of intelligence of staff members determines the supervisor’s willingness to delegate. Automated subsystems permit the same sort of interaction to occur between a human supervisor and the process (Ferrell and Sheridan, 2010). Supervisory control behavior is interpreted to apply broadly to include vehicle control (aircraft and spacecraft, ships, highway and undersea vehicles), continuous process control (oil, chemicals, power generation), robots and discrete tasks (manufacturing, space, undersea, mining), and medical and other human–machine systems.

In the strictest definition, the term human supervisory control (or just supervisory control, as often used in the present context) indicates that one or more human operators are setting initial conditions for, intermittently adjusting, and receiving information from a computer that itself closes an inner control loop through electromechanical sensors, effectors, and the task environment. In a broader sense, supervisory control means interaction with a computer to transform data or to produce control actions. Figure 1 compares supervisory control with direct manual control (Figure 1a) and full automatic control (Figure 1e). Figures 1c and 1d characterize supervisory control in the strict formal sense; Figure 1b characterizes supervisory control in the latter (broader) sense.

The essential difference between these two characterizations of supervisory control is that in the first and stricter definition the computer can act on new information independent of and with only blanket authorization and adjustment from the supervisor; that is, the computer implements discrete sets of instructions by itself, closing the loop through the environment.


Figure 1 Supervisory control as related to direct manual control and full automation.

In the second definition the computer’s detailed implementations are open loop; that is, feedback from the task has no effect on computer control of the task except through the human operator. The two situations may appear similar to the supervisor, since he or she always sees and acts through the computer (analogous to a staff) and therefore may not know whether it is acting open loop or closed loop in its fine behavior. In either case the computer may function principally on the efferent or motor side to implement the supervisor’s commands (e.g., do some part of the task entirely and leave other parts to the human or provide some control compensation to ease the task for the human). Alternatively, the computer may function principally on the display side (e.g., to integrate and interpret incoming information from below or to give advice to the supervisor as to what to do next, as with an “expert system”). Or it may work on both the efferent and afferent sides.
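To make the strict-sense arrangement concrete, the following minimal sketch (all numbers invented for illustration) shows a computer closing the inner loop continuously through its own sensor and actuator while the human supervisor only resets the goal occasionally and otherwise stays out of the loop.

```python
# Sketch (illustrative only): a first-order process whose inner loop is closed by a
# "task-interactive" controller, while the human supervisor intermittently re-teaches
# the goal -- supervisory control in the strict sense described above.

def simulate(duration_s=300.0, dt=0.1):
    x = 20.0                  # process state, e.g., tank temperature (deg C)
    setpoint = 20.0           # goal currently delegated to the automation
    gain, leak = 0.5, 0.05    # proportional control gain and natural decay (invented)
    log = []
    for step in range(round(duration_s / dt)):
        t = step * dt
        # Outer, intermittent supervisory loop: the human changes the goal only
        # occasionally (hypothetical times chosen for illustration).
        if step == round(60 / dt):
            setpoint = 45.0
        elif step == round(180 / dt):
            setpoint = 30.0
        # Inner, continuous automatic loop closed by the computer through its own
        # sensor (x) and actuator (u), independent of the human.
        u = gain * (setpoint - x)
        x += (u - leak * (x - 20.0)) * dt    # drives the state toward the goal
        log.append((t, setpoint, x))
    return log

if __name__ == "__main__":
    for t, goal, state in simulate()[::300]:       # print roughly every 30 s
        print(f"t={t:5.0f} s  goal={goal:4.1f}  state={state:5.1f}")
```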

2 SOME HISTORY

The 1940s saw human factors engineering come into being to ensure that soldiers could operate machines in World War II. In the 1950s human factors emerged as a professional field, first in essentially empirical “knobs and dials” form, concentrating on the human–machine interface, accompanied by ergonomics, which focused on the physical properties of the workplace. This was supported over the next decade by the theoretical underpinnings of human–machine systems theory and modeling (Sheridan and Ferrell, 1974). Such theories included control, information, signal detection, and decision theories originally developed for application to physical systems but now applied explicitly to the human operator. As contrasted with human factors engineering at the interface, human–machine systems analysis considers characteristics of the entire causal “loop” of decision, communication, control, and feedback, through the operator’s physical environment and back again to the human.

From the late 1950s the computer began to intervene in the causal loop: electronic compensation and stability augmentation for control of aircraft and similar systems, electronic filtering of signal patterns in noise, and electronic generation of simple displays. It was obvious that if vehicular or industrial systems were equipped with sensors that could be read by computers and with motors that could be driven by computers, then, even though the overall system was still very much human controlled, control loops between those sensors and motors could be closed automatically. Thus, the chemical plant operator was relieved of keeping the tank at a given level or the temperature at a reference; he or she needed only to set in that desired level or temperature signal from time to time. So, too, after the autopilot was developed for the aircraft, the human pilot needed only to set in the desired altitude or heading; an automatic system would strive to achieve this reference, with the pilot monitoring to ensure that the aircraft did in fact go where desired.


The automatic building elevator, of course, has been in place for many years and is certainly one of the first implementations of supervisory control. Recently, developers of new systems for word processing and handling of business information (i.e., without the need to control any mechanical processes) have begun thinking along supervisory control lines.

The full generality of the idea of supervisory control came to the author and his colleagues (Sheridan, 1960; Ferrell and Sheridan, 1967) as part of research on how people on Earth might control vehicles on the moon through round-trip communication time delays (imposed by the speed of light). Under such a constraint, remote control of lunar roving vehicles or manipulators was shown to be possible only by performing in “move-and-wait” fashion. This means that the operator commits only to a small incremental movement open loop, that is, without feedback (actually as large a movement as is reasonable without risking collision or other error), then stops and waits one delay period for feedback to “catch up,” then repeats the process in steps until the task is completed.
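A hedged sketch of the move-and-wait strategy just described, with invented numbers (a 3-s round-trip delay and a maximum safe increment): the operator commits one open-loop increment, then waits a full delay period for confirming feedback before the next.

```python
# Sketch of the "move-and-wait" strategy (illustrative numbers only): one open-loop
# increment per cycle, followed by a wait of one full round-trip delay for feedback.

def move_and_wait(start, goal, max_step, round_trip_delay):
    """Return (total_waiting_time, positions) for reaching `goal` from `start`."""
    position, elapsed, history = start, 0.0, [start]
    while abs(goal - position) > 1e-6:
        # Commit only as large an open-loop move as is judged safe.
        step = max(-max_step, min(max_step, goal - position))
        position += step                    # increment executed remotely, open loop
        elapsed += round_trip_delay         # wait for feedback to "catch up"
        history.append(position)
    return elapsed, history

if __name__ == "__main__":
    waited, path = move_and_wait(start=0.0, goal=1.0, max_step=0.2, round_trip_delay=3.0)
    print(f"{len(path) - 1} move-and-wait cycles, {waited:.0f} s spent waiting")  # 5 cycles, 15 s
```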

Experimental attempts to drive or manipulate continuously in this way produced only instability, as simple control theory predicts (i.e., where loop gains exceed unity at a frequency such that the loop time delay is one half-cycle, errors are not nulled out but only reinforced). Performing remote manipulation with delayed force feedback was later shown by Ferrell (1967) to be essentially impossible, since forces at unexpected times act as significant disturbances that produce instability. Delayed visual feedback, at least, can be ignored by the operator; delayed force feedback cannot.
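The stability argument in the parenthesis can be unpacked with a standard result for a loop containing a pure transport delay (stated here for clarity; it is not taken verbatim from the chapter):

```latex
% Unity-feedback loop with gain K and pure transport delay T:
\[
  L(j\omega) \;=\; K\,e^{-j\omega T},
\]
% so the delay contributes a phase lag of $\omega T$ radians. At the frequency where the
% delay equals one half-cycle,
\[
  \omega T = \pi \quad\Longrightarrow\quad \omega = \pi/T ,
\]
% the fed-back correction arrives inverted; if $|L(j\omega)| = K \ge 1$ at that frequency,
% each correction enlarges rather than nulls the error, so continuous closed-loop control
% through the delay is unstable unless $K < 1$ at $\omega = \pi/T$.
```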

It was shown that if, instead of the human operator remaining within the control loop, he or she communicates a goal state relative to the remote environment, and if the remote system incorporates the capability to measure proximity to this goal state, the achievement of this goal state can be turned over to the remote subordinate control system for implementation. In this case there is no delay in the control loop implementing the task, and thus there is no instability.

There necessarily remains, of course, a delay in the supervisory loop. This delay in the supervisor’s confirmation of desired results is acceptable as long as (1) the subgoal is a sufficiently large “bite” of the task, (2) the unpredictable aspects of the remote environment are not changing too rapidly (i.e., disturbance bandwidth is low), and (3) the subordinate automatic system is trustworthy.

Under these conditions, and as computers gradually become more capable in both hardware and software (and as “machine intelligence” finally makes its real if modest appearance), it is evident that telemetry transmission delay is in no way a prerequisite to the usefulness of supervisory control. The incremental goal specified by the human operator need not be simply a new steady-state reference for a servomechanism (as in resetting a thermostat) in one or even several dimensions (e.g., resetting both temperature and humidity or commanding a manipulator endpoint to move to a new position, including three translations and three rotations relative to its initial position). Each new goal statement can be the specification of an entire trajectory of movements (as in the performance of a dance or a symphony) together with programmed branching conditions (what to do in case of a fall or other unexpected event).

In other words, the incremental goal statement can be a program of instructions in the full sense of a computer program, which makes the human supervisor an intermittent real-time computer programmer, acting relative to the subordinate computer much as a teacher or parent or boss behaves relative to a student or a child or a subordinate worker. The size and complexity of each new program are necessarily a function of how much the computer can (be trusted to) cope with in one bite, which in turn depends on the computer’s own sophistication (knowledge base) and the complexity (uncertainty) of the task.
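As an illustration of such a goal statement expressed as a program, the following sketch (structure and names are hypothetical, not from the chapter) packages a trajectory of subgoals together with programmed branching conditions for unexpected events, the dance-and-fall idea of the preceding paragraph.

```python
# Sketch (hypothetical structure): an incremental goal statement as a small program --
# a trajectory of subgoals plus programmed branches -- rather than a single new setpoint.

from dataclasses import dataclass, field

@dataclass
class GoalProgram:
    subgoals: list                                   # ordered trajectory of goal states
    on_event: dict = field(default_factory=dict)     # event name -> recovery subgoal

    def run(self, execute, sense_event):
        """Hand the whole program to the subordinate automation.

        `execute(subgoal)` closes its own loop on one subgoal; `sense_event()`
        reports any unexpected condition the program was told to branch on.
        """
        for subgoal in self.subgoals:
            execute(subgoal)
            event = sense_event()
            if event in self.on_event:
                execute(self.on_event[event])        # programmed contingency branch

# Example: a walking routine with a contingency branch for a detected fall.
program = GoalProgram(
    subgoals=["step_forward", "turn_left", "step_forward"],
    on_event={"fall_detected": "stand_up_and_hold"},
)
program.run(execute=lambda g: print("executing:", g),
            sense_event=lambda: None)
```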

3 EXAMPLES OF HUMAN SUPERVISORY CONTROL IN CURRENT TECHNOLOGICAL SYSTEMS

While supervisory control first evolved in delayed-feedback situations such as controlling robots on the moon from Earth, it has grown to encompass a wide variety of other systems, and for different reasons that have mostly to do with what humans do best (setting goals) and what computers do best (routine execution of control actions based on sensed feedback).

Supervisory control is now found in various forms in many industrial, military, medical, and other contexts. However, this form of human interaction with technology is still relatively little recognized or understood in a formal way by system designers who want to take maximum advantage of automation yet want to benefit from the intelligence of the human agent.

Aircraft autopilots are now “layered,” meaning that the pilot can select among various forms and levels of control. At the lowest level the pilot can set in a new heading or rate of climb. Or he or she can program a sequence of heading changes at various waypoints or a sequence of climb rates initiated at various altitudes or program the inertial guidance system to take the aircraft to a given runway at a distant city. Given the existence of certain ground-based equipment, the pilot can program an automatic landing on a given runway, and so on. The pilot not only can set commands for different control modes but can also modify different modes of display: how information is presented. Sheridan (2002) reviews how such automation is creeping into the aircraft flight deck. Sarter and Amalberti (2000) describe the modern flight management system in some detail.

Efforts now underway by governments in both the United States and Europe are major technological upgrades of the air traffic control systems. In the United States it is called NextGen (for Next Generation Air Transportation System), and in the European Community it is called Single European Sky, or SESAR. The two efforts are being coordinated, and both involve the introduction of much new automation and supervisory control, for example in the flight operations listed in Table 1 (Sheridan, 2010).


Table 1 Some NextGen Flight Operations Using Supervisory Control

Negotiating four-dimensional (4D) (three in space, one in time) flight trajectories shortly before pushback
Dealing with off-nominal aircraft on the airport surface
Controller/pilot use of digital data communication (Datalink)
Traffic flow manager use of new capacity/flow/weather models
Aircraft operation conflict and resolution responsibilities
Responding to aircraft deviation from their assigned 4D trajectories
Weather conflict and decision to reroute around weather patterns
Effecting a new “best equipped–best served” policy
Dynamic reconfiguration of en route or terminal airspace
Merging and spacing in terminal airspace
Setting up for continuous curved rather than step-down descent
Pairing for descent to parallel runways


The unmanned aeronautical vehicle (UAV) is now assuming an ever greater role in military operations and soon will do the same in domestic airspace to monitor national borders, inspect crops, and possibly eventually carry freight. UAVs are typically flown by setting successive waypoints in 3D space, a supervisory function.

Supervisory control of a simpler sort is now evident in the cruise control system of current automobiles and trucks and is being upgraded in the form of “advanced” or “intelligent” cruise control, wherein a radar detector controls speed to maintain a safe distance behind a leading vehicle.
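A minimal sketch of such an intelligent cruise control law, assuming a constant-time-gap policy with invented gains (not any manufacturer's algorithm): hold the driver's set speed on an open road, but track a safe gap when the radar reports a slower leading vehicle.

```python
# Sketch of "intelligent" cruise control (illustrative gains, not a production controller).

def acc_command(own_speed, set_speed, gap=None, lead_speed=None,
                time_headway=1.8, k_speed=0.4, k_gap=0.25, k_rel=0.5):
    """Return a longitudinal acceleration command (m/s^2). Speeds in m/s, gap in m."""
    if gap is None:                         # no vehicle detected: ordinary cruise control
        return k_speed * (set_speed - own_speed)
    desired_gap = time_headway * own_speed  # constant-time-gap spacing policy
    accel = k_gap * (gap - desired_gap) + k_rel * (lead_speed - own_speed)
    # Never command more acceleration than is needed to reach the driver's set speed.
    return min(accel, k_speed * (set_speed - own_speed))

print(acc_command(own_speed=30.0, set_speed=33.0))                              # free road: speed up
print(acc_command(own_speed=30.0, set_speed=33.0, gap=40.0, lead_speed=27.0))   # gap too small: brake
```

The supervisory element is that the driver sets only the goal parameters (set speed, perhaps the time headway); the automation closes the speed and spacing loops continuously.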

In modern hospital operating rooms, intensive care units, and ordinary patient wards there are numerous supervisory control systems at work. The modern anesthesiology workstation is a good example. Drugs in liquid or gaseous form are pumped into the patient at rates programmed by the anesthesiologist and by sensors monitoring patient respiration, heart rate, and other variables.

Modern chemical and nuclear plants can be programmed to perform heating, mixing, and various other processes according to a time line, including various sensor-based conditions for shutting down or otherwise aborting the operation. Nandi and Ruhe (2002) describe the use of supervisory control in sintering furnaces. Seiji et al. (2001) provide an extensive review of modern supervisory control in nuclear power plants.

Robots of all kinds are being developed: for industrial manufacturing (e.g., for both inspection and assembly of products on assembly lines); for space (e.g., planetary rovers); for undersea applications (e.g., in the British Petroleum oil spill and oceanographic research); for security applications (e.g., inspecting threatening packages in airports and other public places); for military applications (e.g., detonating improvised explosive devices); for home cleaning (e.g., cleaning swimming pools, vacuuming carpets); for offices (e.g., delivering mail); and for hospitals (e.g., minimally invasive surgery). Most of these robots have mobility capability; some have arms for manipulation. Almost all embody supervisory control, at least in primitive form.

Many of the examples cited above characterize the first or stricter definition of supervisory control previously given (Figures 1c and d), where the computer, once programmed, makes use of its own artificial sensors to ensure completion of the tasks assigned. Many familiar systems, such as automatic washing machines, dryers, dishwashers, or stoves, once programmed, perform their operations open loop; that is, there is no measurement or knowledge of results. If the task can be performed in such open-loop fashion, and if the human supervisor can anticipate the task conditions and is good at selecting the right open-loop program, there is no reason not to employ this approach. To the human supervisor, whether the lower level implementation is open or closed loop is often opaque and/or of no concern; the only concern is whether the goal is achieved satisfactorily. For example, a programmable microwave oven without the temperature sensor in place operates open loop, whereas the same oven with the temperature sensor operates closed loop. To the human supervisor or programmer, they look the same.
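The microwave example can be sketched as follows (toy thermal model, invented numbers): the supervisor's program looks the same either way; only the presence of a sensor in the inner loop distinguishes open- from closed-loop execution.

```python
# Sketch of the open- vs. closed-loop distinction (crude heating model, numbers invented).

def run_oven(power_w, temp_sensor=None, target_c=None, time_s=None, dt=1.0):
    """Open loop if no sensor is supplied (cook for `time_s` seconds);
    closed loop if a sensor is supplied (cook until `target_c` is reached)."""
    temp, elapsed = 20.0, 0.0
    while True:
        temp += (0.0015 * power_w - 0.02 * (temp - 20.0)) * dt   # toy heating dynamics
        elapsed += dt
        if temp_sensor is None:
            if elapsed >= time_s:                 # open loop: no knowledge of results
                return temp, elapsed
        elif temp_sensor(temp) >= target_c:       # closed loop: sensor closes the loop
            return temp, elapsed

# To the supervisor both calls are simply "heat the food"; only the inner loop differs.
print(run_oven(power_w=800, time_s=120))                              # open loop
print(run_oven(power_w=800, temp_sensor=lambda t: t, target_c=70.0))  # closed loop
```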

A very important aspect of supervisory control is the ability of the computer to “package” information for visual display to the human supervisor, including data from many sources; from the past, present, or even predicted future; and presented in words, graphs, symbols, pictures, or some combination. Ubiquitous examples of such integrated displays are so-called decision support tools in aircraft and air traffic control, chemical and power plants, and various other industrial or military settings too numerous to review here. General interest in supervisory displays became evident in the mid-1970s (Edwards and Lees, 1981; Sheridan and Johannsen, 1976; Wiener and Curry, 1980; Sheridan and Hennessy, 1984).

4 SUPERVISORY ROLES AND HIERARCHY

The human supervisor’s roles are (1) planning off-line what task to do and how to do it; (2) teaching (or programming) the computer what was planned; (3) monitoring the automatic action online to make sure that all is going as planned and to detect failures; (4) intervening, which means the supervisor takes over control after the desired goal state has been reached satisfactorily or interrupts the automatic control in emergencies to specify a new goal state and reprogram a new procedure; and (5) learning from experience so as to do better in the future. These are usually time-sequential steps in task performance.

We may view these steps as being within three nested loops, as shown in Figure 2. The innermost loop, monitoring, closes on itself; that is, evidence of something interesting or completion of one part of the cycle of the monitoring strategy leads to more investigation and monitoring.


SUPERVISORY STEP | ASSOCIATED MENTAL MODEL | ASSOCIATED COMPUTER AID

1. PLAN
   a) Understand controlled process | Physical variables: transfer relations | Physical process training aid
   b) Satisfice objectives | Aspirations: preferences and indifferences | Satisficing aid
   c) Set general strategy | General operating procedures and guidelines | Procedures training and optimization aid
2. TEACH
   a) Decide and test control actions | Decision options: state-procedure-action implications; expected results of control actions | Procedures library; action decision aid (in-situ simulation)
   b) Decide, test, and communicate commands | Command language (symbols, syntax, semantics) | Aid for editing commands
3. MONITOR AUTOMATION
   a) Acquire, calibrate, and combine measures of process state | State information sources and their relevance | Aid for calibration and combination of measures
   b) Estimate process state from current measures and past control actions | Expected results of past actions | Estimation aid
   c) Evaluate process state: detect and diagnose failure or halt | Likely modes and causes of failure or halt | Detection and diagnosis aid for failure or halt
4. INTERVENE
   a) If failure: execute planned abort | Criteria and options for abort | Abort execution aid
   b) If error benign: act to rectify | Criteria for error and options to rectify | Error rectification aid
   c) If normal end of task: complete | Options and criteria for task completion | Normal completion execution aid
5. LEARN
   a) Record immediate events | Immediate memory of salient events | Immediate record and memory jogger
   b) Analyze cumulative experience; update model | Cumulative memory of salient events | Cumulative record and analysis

Figure 2 Functional and temporal nesting of supervisory roles.


We might include minor online tuning of the process as part of monitoring. The middle loop closes from intervening back to teaching; that is, human intervention usually leads to programming of a new goal state in the process. The outer loop closes from learning back to planning; intelligent planning for the next subtask is usually not possible without learning from the last one.

The three supervisory loops operate at different time scales relative to one another. Revisions in fine-scale monitoring behavior take place at brief intervals. New programs are generated at somewhat longer intervals. Revisions in significant task planning occur only at still longer intervals. These differences in time scale further justify Figure 2.
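A rough sketch of the three nested loops and their differing time scales (the intervals are arbitrary placeholders, not values from the chapter):

```python
# Sketch of the nested supervisory loops of Figure 2: monitoring revises at short
# intervals, teaching/intervening at longer ones, planning/learning at the longest.

MONITOR_EVERY_S, TEACH_EVERY_S, PLAN_EVERY_S = 1, 30, 600   # placeholder cadences

def supervisory_cycle(clock_s, state):
    actions = []
    if clock_s % MONITOR_EVERY_S == 0:
        actions.append("monitor: estimate state, allocate attention")
        if state.get("abnormal"):
            actions.append("intervene: abort or rectify, then re-teach a new goal")
    if clock_s % TEACH_EVERY_S == 0:
        actions.append("teach: send the next goal program to the automation")
    if clock_s % PLAN_EVERY_S == 0:
        actions.append("plan/learn: revise the task plan from accumulated experience")
    return actions

for t in (0, 1, 30, 600):
    print(t, supervisory_cycle(t, state={"abnormal": False}))
```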

More and more, a multiplicity of computers is used in a supervisory control system, as shown in Figure 3. Typically one large computer is in the control room to generate displays and interpret commands. This can be called a human-interactive computer (HIC), part of a human-interactive system (HIS). It in turn forwards these commands to various microprocessors that actually close individual control loops through their own associated sensors and effectors. The latter can be called task-interactive computers (TICs), each part of its own task-interactive system (TIS).

The HIC is conceived to be a large enough computer to communicate in a human-friendly way using near-natural language, good graphics, and so on. This includes being able to accept and interpret commands and to give the supervisor useful feedback. The HIC should be able to recognize patterns in data sent up to it from below and decide on appropriate algorithms for response, which it sends down as instructions. Eventually, the HIC should be able to run “what would happen if . . . ” simulations and be able to give useful advice from a knowledge base, that is, include an expert system.

The HIC, located near the supervisor in a control room or cockpit, may communicate across a barrier of time or space with a multiplicity of TICs, which probably are microprocessors distributed throughout the plant or vehicle. The latter are usually coupled intimately with artificial sensors and actuators in order to deal in low-level language and to close relatively tight control loops with objects and events in the physical world.

The human supervisor can be expected to communicate with the HIC intermittently in information “chunks” (alphanumeric sentences, icons, etc.), while the task communicates with the TIC continuously in computer language at the highest possible bit rates. The availability of these computer aids means that the human supervisor, while retaining the knowledge-based behavior function, is likely to download some of the rule-based programs and almost all of the skill-based programs into the HIC. The HIC, in turn, should download a few of the rule-based programs, and most of the skill-based programs, to the appropriate TICs.
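The hierarchy of Figure 3 can be sketched as follows (class and message names are hypothetical): the HIC accepts a chunked command from the supervisor, dispatches goals to the TICs, each of which closes its own tight loop, and returns feedback in summary form.

```python
# Sketch of the HIC/TIC hierarchy (names and message formats invented for illustration).

class TaskInteractiveComputer:
    """Closes a tight local loop through its own sensor and actuator."""
    def __init__(self, name, sensor, actuator):
        self.name, self.sensor, self.actuator = name, sensor, actuator

    def execute(self, goal):
        error = goal - self.sensor()
        self.actuator(error)                       # low-level, high-rate control action
        return {"tic": self.name, "residual_error": abs(error)}

class HumanInteractiveComputer:
    """Interprets supervisor commands and aggregates feedback into summary form."""
    def __init__(self, tics):
        self.tics = tics

    def command(self, goals):                      # goals: {tic_name: goal_value}
        reports = [self.tics[name].execute(goal) for name, goal in goals.items()]
        worst = max(reports, key=lambda r: r["residual_error"])
        return f"all {len(reports)} subsystems commanded; largest residual at {worst['tic']}"

# Illustrative wiring: two TICs with stub sensors/actuators acting on a toy plant.
plant = {"level": 2.0, "temp": 60.0}
tics = {
    "level": TaskInteractiveComputer("level", lambda: plant["level"],
                                     lambda e: plant.update(level=plant["level"] + 0.5 * e)),
    "temp": TaskInteractiveComputer("temp", lambda: plant["temp"],
                                    lambda e: plant.update(temp=plant["temp"] + 0.5 * e)),
}
hic = HumanInteractiveComputer(tics)
print(hic.command({"level": 3.0, "temp": 80.0}))   # one chunked command, summary feedback
```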

Figure 4 presents the functions of Figure 2 in the form of a flowchart. Each supervisory function is shown above, and the (usually multiple) automated subsystems of the TIC are shown below. Normally, for any given task, the planning and learning roles are performed off-line relative to the online human-mediated and automatic operations of the other parts of the system and therefore are shown at the top with light lines connecting them to the rest of the system. Teaching precedes monitoring on the first cycle but thereafter follows monitoring and intervening (as necessary) within the intermediate loop. The inner loop monitoring role is carried out within the “estimate state” and “allocate attention” boxes.

Allocation of functions between the human and the machine need not be fixed. There have been numerous papers discussing the potential for dynamic allocation, where the allocation changes as a function of the flow of demands and the workload of the two entities (see, e.g., Sheridan, 1997). In the sections that follow the various supervisory roles are discussed in more detail, bringing in examples of research problems and prototype systems to aid the supervisor in these roles.

[Figure 3 shows the human supervisor exchanging control instructions and requests for advice, and receiving high-level feedback and advice, with a human-interactive computer (combining high-level control and an expert advisory system) in the control room or cockpit; signal transmission, possibly subject to bandwidth constraints or time delays, links this human-interactive system to a remote task-interactive system in which task-interactive computers 1 through n each control their own task of the controlled process (continuous process, vehicle, robot, etc.).]

Figure 3 Hierarchical nature of supervisory control.


[Figure 4 arranges the supervisory functions as a flowchart: (1) plan (model the physical system, satisfice tradeoffs among objectives, formulate strategy); (2) teach or (4) intervene (select desired control action, select/execute commands, or apply direct manual intervention); and (3) monitor (process information, estimate state, allocate attention, detect/diagnose any abnormality), with learning feeding back to planning; normal computer commands flow down to the task-interactive computers Comp1 through CompN, which control Tasks 1 through N.]

Figure 4 Flowchart of supervisor functions (including both mental models and decision aids). (From Sheridan, 1992.)


5 SUPERVISORY LEVELS AND STAGES

Supervisory control may involve varying degrees of computer aiding in acquiring information and executing control, as shown in Table 2. This “level of automation” idea, originally presented in Sheridan and Verplank (1978) with 10 rather than 8 levels, has been picked up and used by others in various ways. Parasuraman et al. (2000) added the idea that the successive stages of information acquisition, information analysis, action decision, and action implementation are usually automated to different degrees. The best degree of automation is seldom the same at the various stages.

Figure 5 is an example of how, in the writer’s opinion, the Federal Aviation Administration’s NextGen project will automate operations at the four stages in the midterm and far term as compared to current levels.

Table 2 Scale of Degrees of Automation

1. The computer offers no assistance; the human must do it all.
2. The computer suggests alternative ways to do the task.
3. The computer selects one way to do the task and
4. Executes that suggestion if the human approves or
5. Allows the human a restricted time to veto before automatic execution or
6. Executes the suggestion automatically, then necessarily informs the human, or
7. Executes the suggestion automatically, then informs the human only if asked.
8. The computer selects the method, executes the task, and ignores the human.
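A machine-readable rendering of Table 2 (wording abridged; the veto window value is an arbitrary example, not from the table) makes explicit what kind of human interaction each degree of automation implies:

```python
# Sketch keyed to Table 2: a degree-of-automation scale and the interaction it implies.

from enum import IntEnum

class DOA(IntEnum):
    NO_ASSISTANCE = 1            # human does it all
    SUGGESTS_ALTERNATIVES = 2
    SELECTS_ONE_WAY = 3
    EXECUTES_IF_APPROVED = 4     # "management by consent"
    EXECUTES_UNLESS_VETOED = 5   # "management by exception"
    EXECUTES_THEN_INFORMS = 6
    INFORMS_ONLY_IF_ASKED = 7
    IGNORES_HUMAN = 8

def interaction_required(level: DOA, veto_window_s: float = 15.0) -> str:
    if level <= DOA.SELECTS_ONE_WAY:
        return "human decides and executes"
    if level == DOA.EXECUTES_IF_APPROVED:
        return "wait for explicit human approval before executing"
    if level == DOA.EXECUTES_UNLESS_VETOED:
        return f"execute automatically unless vetoed within {veto_window_s:.0f} s"
    return "execute automatically; human informed afterwards (or not at all)"

print(interaction_required(DOA.EXECUTES_UNLESS_VETOED))
```

The distinction between levels 4 and 5 is exactly the one at issue in the Patriot example that follows.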



Experience with military systems calls attention to the appropriateness of various levels of automation (Cummings, 2005):

The Patriot missile system has a history of friendly fire incidents that can at least be partially attributed to a lack of understanding of human limitations in supervisory control. . . .

The Patriot missile has two modes: semiautomatic (management by consent, level 4 above: an operator must approve a launch) and automatic (management by exception: the operator is given a period of time to veto the computer’s decision, level 5 above). However, in practice the Patriot is typically left in the automatic mode, and the friendly fire incidents are believed to be a result of problems in the automatic mode. There are known “ghosting” problems with the Patriot radar: because operations are in close proximity to other Patriot missile batteries, false targets will appear on a Patriot operator’s screen. Under the automatic mode (management by exception), operators are given approximately 15 seconds to reject the computer’s decision, which is insufficient both to solve the false targeting problems and to adequately address friend-or-foe concerns through any other means of communication. After the accident investigations, the US Army admitted that there is no standard for Patriot training, that autonomous operations procedures (automatic mode) are not clear, and that operators commonly lose situational awareness of air tracks.

6 PLANNING AND LEARNING: COMPUTER REPRESENTATION OF RELEVANT KNOWLEDGE

The first and fifth supervisory roles described previously, planning and learning, may be considered together since they are similar activities in many ways. Essentially, in the planning role the supervisor asks “What would happen if . . . ?” questions of the accumulated knowledge base and considers what the implications are for hypothetical control decisions. In learning, the supervisor asks “What did happen?” questions of the database for the more recent subtasks and considers whether the initial assumptions and final control decisions were appropriate.

The designer of an automatic control system or manual control system must ask: “What variables do I wish to make do what, subject to what constraints and what criteria?” The planning role in supervisory control requires that the same kinds of questions be answered because, in a sense, the supervisor is redesigning an automatic control system each time that he or she programs a new task and goal state. Absolute constraints on time, tools, and other resources available need to be clear, as do the criteria of trade-off among time, dollars and resources spent, accuracy, and risk of failure.

Just as computer simulation figures into planning, it also figures into supervisory control, the difference being that such simulation is more likely to be subjected to time stress in supervisory control. Simulation requires acquiring some idea of how the process (the system to be controlled) works, that is, a set of equations relating the various controllable variables, the various uncontrollable but measurable variables (disturbances), and the degree of unpredictability (noise) on measured system response variables. This is a common representation of knowledge. Given measured inputs and outputs, there are well-established means to infer the equations if the processes are approximately linear and differentiable.

Once such a model is in place, the supervisor can posit hypothetical inputs and observe what the outputs would be. Also, one may use such a process model as an “observer” (in the sense of modern control theory). Namely, when control signals are put into both the model and the actual process, and the model parameters are then trimmed to force certain model outputs to conform to corresponding actual process outputs that can be measured (Figure 6), other process outputs that are inconvenient to measure may be estimated (“observed”) from the model. Just as this is a theoretical prerequisite to optimal automatic control of physical systems, so it is likely to be a useful practice to aid humans in supervisory control (Sheridan, 1984).
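A minimal sketch of the observer idea in Figure 6, assuming a first-order toy process and invented gains: the same control input drives the real process and its model, the model parameter is trimmed to shrink the measurable discrepancy, and an inconvenient-to-measure quantity is then read from the model.

```python
# Observer sketch in the spirit of Figure 6 (first-order toy process, invented gains).

import random

def plant_step(x, u, a_true=0.30, dt=0.1):
    x = x + (-a_true * x + u) * dt
    return x, x + random.gauss(0.0, 0.02)          # true state, noisy measurement

def model_step(xm, u, a_hat, dt=0.1):
    return xm + (-a_hat * xm + u) * dt

x, xm, a_hat = 0.0, 0.0, 0.10                      # start with a wrong model parameter
for _ in range(2000):
    u = 1.0                                        # common control input to both
    x, y = plant_step(x, u)
    xm = model_step(xm, u, a_hat)
    discrepancy = y - xm
    a_hat -= 0.05 * discrepancy * xm * 0.1         # trim parameter to shrink discrepancy
    xm += 0.5 * discrepancy                        # observer correction of model state

print(f"estimated decay parameter a = {a_hat:.2f} (true value 0.30); "
      f"estimate of unmeasured loss flow = {a_hat * xm:.2f}")
```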

A different type of knowledge representation is that used by the artificial intelligence (AI) community. Here knowledge is usually couched in the form of if–then logical statements called production rules, semantic association networks, and similar forms. The input to a simulated program usually represents in cardinal numbers a hypothetical physical input to a simulated physical system. In contrast, the input to the AI knowledge base can be a question about relationships for given data or a question about data for given relationships. This can be in less restrictive ordinal form (e.g., networks of dyadic relations) or in nominal form (e.g., lists).
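A toy forward-chaining example of the production-rule form of knowledge representation (the rules and facts are invented for illustration):

```python
# Toy production-rule (if-then) knowledge base with forward chaining.

RULES = [
    ({"valve_A_open", "pump_on"}, "flow_in_line_1"),
    ({"flow_in_line_1", "heater_on"}, "temperature_rising"),
    ({"temperature_rising", "relief_valve_stuck"}, "overpressure_risk"),
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:                          # keep firing rules until nothing new is inferred
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"valve_A_open", "pump_on", "heater_on", "relief_valve_stuck"}, RULES))
```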

Currently, there is great interest in how best to transfer expertise from the human brain (knowledge representation, mental model) into the corresponding representation or model within the computer, how best to transfer it back, and when to depend on each of those sources of information. This research on mental models has a lively life of its own (Falzon, 1982; Gentner and Stevens, 1983; Rouse and Morris, 1984; Sheridan, 1984; Moray, 1997) quite independent of supervisory control.

An important aspect of planning is visualization. The now rather sophisticated tool of computer simulation, when augmented by computer graphics, enables remarkable visualization possibilities. When further augmented by human-interactive devices such as head-mounted visual and auditory displays and high-bandwidth force-reflecting haptics (mechanical arms), the operator can be made to feel present in a virtual world, as has been popularized by the oxymoron virtual reality. Of course, the idea of virtual reality is not new.


[Figure 5 plots estimated levels of automation (1–8) at each of the four stages (information acquisition, information analysis and display, response decision, response implementation) for current ATM, NextGen midterm, and NextGen far term, with a range of equipage indicated; current information acquisition spans voice, radar, GPS, and data com.]

Figure 5 Estimated levels of automation of NextGen automation.

[Figure 6 shows the human supplying a test input to both the actual process and an “observer” model of the process, with the discrepancy between their outputs fed back to adjust the model.]

Figure 6 Use of computer-based observer as an aid to supervisor.

The original idea of Edwin Link’s first flight simulators (developed early in the 1940s) was to make the pilot trainee feel as if he or she were flying a real aircraft. First they were instrument panels only, then a realistic out-the-window view was created by flying a servo-driven video camera over a scale model, and finally, computer graphics were used to create the out-the-window images. Now all commercial airlines and military services routinely train with computer-display, full-instrument, moving-platform flight simulators. Similar technology has been applied to ship, automobile, and spacecraft control. The salient point for the present discussion is that the new simulation capabilities now permit visualization of alternative plans as well as better understanding of complex state information in situ, during monitoring. That same technology, of course, can be used to convey a sense of presence in an environment that is not simulated but is quite real and merely remote, communicated via closed-circuit video with cameras slaved to the observer’s head.

Supervisory aiding in planning the moves of a telerobot is illustrated by the work of Park (1991). His computer graphic simulation let a supervisor try out moves of a telerobot arm before committing to the actual move. He assumed that for some obstacles the positions and orientations were already known and represented in a computer model. The user commanded each straight-line move to a subgoal point in three-dimensional space by designating a point on the floor or the lowest horizontal surface (such as a tabletop) by moving a cursor to that point (say, a in Figure 7a) and clicking, then lifting the cursor by an amount corresponding to the desired height of the subgoal point (say, A) above that floor point and observing on the graphic model a blue vertical line being generated from the floor point to the subgoal point in space. This process was repeated for successive subgoal points (say, B and C).


[Figure 7a labels floor points a, b, and c beneath subgoal points A, B, and C along the commanded path from the start position; Figure 7b shows the projected (virtual) object generated behind the original object for a single viewing (camera) position, and the reduced virtual object obtained when a second viewing position is added.]

Figure 7 Park’s display of computer aid for obstacle avoidance: (a) human specification of subgoal points on graphic model; (b) generation of virtual obstacles for a single viewing position (above) and a pair of viewing positions (below). (From Park, 1991.)


Using the computer display, the user could view the resulting trajectory model from any desired perspective (although the “real” environment could be viewed only from the perspective provided by the video camera’s location). Either of two collision avoidance algorithms could be invoked: a detection algorithm that indicated where on some object a collision occurred as the arm was moved from one point to another or an automatic avoidance algorithm that found (and drew on the computer screen) a minimum-length, no-collision trajectory from the starting point to the new subgoal point. Park’s aiding scheme also allowed new observed objects to be added to the model by graphically “flying” them into geometric correspondence with the model display. Another aid was to generate virtual objects for any portion of the environment in the umbral region (not visible) after two video views (Figure 7b). In this case the virtual objects were treated in the same way in the model and in the collision avoidance algorithms as the visible objects. Experiments with this technique showed that it was easy to use and that it avoided collisions.

At the extreme of time desynchronization is recording an entire task on a simulator, then sending it to the telerobot for reproduction. This might be workable when one is confident that the simulation matches the reality of the telerobot and its environment or when small differences would not matter (e.g., in programming telerobots for entertainment). Doing this would certainly make it possible to edit the robot’s maneuvers until one was satisfied before committing them to the actual operation. Machida et al. (1988) demonstrated such a technique by which commands from a master–slave manipulator could be edited much as one edits material on a videotape recorder or a word processor. Once a continuous sequence of movements had been recorded, it could be played back either forward or in reverse at any time rate. It could be interrupted for overwrite or insert operations. Their experimental system also incorporated computer-based checks for mechanical interference between the robot arm and the environment.

A number of planning aids are manifest in modern air traffic control. Computers are used to show the expected arrival of aircraft at airports and the gaps between them. This helps the human controller to command minor changes in aircraft speed or flight path to smooth the flow. The Center TRACON Automation System (CTAS) assists in providing an optimal schedule and three-dimensional spacing. Other systems use radar data to project ahead and alert the controller to potential conflicts (violations of aircraft separation standards) (Wickens et al., 1997). NextGen promises a number of additional decision-aiding displays, such as those listed in Table 1.

One aspect of supervisory control that is often not planned and is taken for granted (and where learning can be painful) is team coordination in distributed decision making. NextGen has thankfully recognized the problem and has established research efforts into what in that context is called “cooperative air traffic management.” In the military operations context Cummings (2005) provides an example:

On April 14, 1994, two US Army Black Hawk helicopters were transporting U.S., French, British, and Turkish commanders, as well as Kurdish paramilitary personnel, across the no-fly zone when two US F-15 fighters shot them down, killing all 26 on board. The Black Hawks had previously contacted and received permission from the AWACS to enter the no-fly zone. Yet despite this, AWACS confirmed that there should be no flights in the area when the F-15s misidentified the US helicopters as Iraqi Hind helicopters. The teamwork displayed in this situation was a significant contributing factor to the friendly fire incident, as the F-15s never learned from AWACS that a friendly mission was supposed to be in the area. It was later determined that the F-15 wingman backed up the other F-15’s decision that the targets were Iraqi forces despite being unsure, which was yet another breakdown in communication. Each team member did not share information effectively, resulting in the distributed decision making of the AWACS and F-15 pilots coming to incorrect and fatal conclusions.

7 TEACHING THE COMPUTER

Teaching or programming a task, including a goal state and a procedure for achieving it and including constraints and criteria, can be formidable or quite easy, depending on the command hardware and software. By command hardware is meant the way in which human response (hand, foot, or voice) is converted to physical signals to the computer. Command hardware can be either analogic or symbolic. Analogic means that there is a spatial or temporal isomorphism among human response, semantic meaning, and/or feedback display. For example, moving a control up rapidly to increase the magnitude of a variable quickly, which causes a display indicator to move up quickly, would be a proper analogic correspondence.

Symbolic command, by contrast, is accomplished by depressing one or a unique series of keys (as in typing words on a typewriter) or uttering one or a series of sounds (as in speaking a sentence), each of which has a distinguishable meaning. For symbolic commands a particular series or concatenation of such responses has a different meaning from other concatenations. Spatial or temporal correspondence to the meaning or desired result is not a requisite. Sometimes analogic and symbolic can be combined: for example, where up–down keys are both labeled and positioned accordingly.

It is natural for people to intermix analogic and symbolic commands or even to use them simultaneously, as when a person talks and points at the same time or plays the piano and conducts a choir with his or her head or free hand. Typical industrial robots are taught by a combination of grabbing hold of and leading the endpoint of the manipulator around in space relative to the workpiece, at the same time using a switch box on a cable (a teach pendant) to key in codes for start, stop, speed, and so on, between various reference positions.
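The distinction, and the mixing, can be sketched as follows (interfaces and message formats are hypothetical): an analogic command arrives as a stream of displacements isomorphic to the desired motion, a symbolic command as a keyed concatenation whose meaning is not spatial.

```python
# Sketch contrasting analogic and symbolic command forms (invented representations).

def analogic_command(displacement_stream):
    """e.g., samples from a master arm or joystick, isomorphic to the desired motion."""
    return [("MOVE_INCREMENT", dx, dy, dz) for dx, dy, dz in displacement_stream]

def symbolic_command(text):
    """e.g., a teach-pendant or keyboard entry; meaning comes from the concatenation."""
    verb, *args = text.split()
    return [(verb.upper(), *args)]

# The two can be intermixed, as when an operator leads the arm by hand and keys in codes:
program = (analogic_command([(0.0, 0.0, 0.05), (0.0, 0.0, 0.05)])
           + symbolic_command("grasp part_7")
           + symbolic_command("goto station_B speed=slow"))
print(program)
```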


In regard to teaching the computer, Ferris et al. (2010) use the term directability, which they define as the ability to direct efficiently and safely the activities of the automation. They point out that the interface must be designed to avoid what Norman (1986) has called the gulf of execution, “where an operator struggles with identifying and operating the proper controls and commands to translate an intended action into the machine’s language.” They point out that problems occur when, for example, different aircraft employ automation controls with similar shape, feel, and/or location that activate different systems or require different manipulations (Abbott et al., 1996). Such inconsistencies can leave pilots who transition between aircraft or airlines highly vulnerable to errors, especially when under stress.

Supervisory command systems have been developed for mechanical manipulators that utilize both analogic and symbolic interfaces with the supervisor and that enable teaching to be both rapid and available in terms of high-level language. Brooks (1979) developed such a system, which he called SUPERMAN; it allows the supervisor to use a master arm to identify objects and demonstrate elemental motions. He showed that, even without time delay, for certain commands that refer to predefined locations, supervisory control that included both teaching and execution took less time and had fewer errors than manual control.

Yoerger (1982) developed a more extensive and robust supervisory command system that enables a variety of arm–hand motions to be demonstrated, defined, called on, and combined under other commands. In one set of experiments, Yoerger compared three different procedures for teaching a robot arm to perform a continuous seam weld along a complex curved workpiece. The end effector (welding tool) had to be kept 1 in. away from and oriented perpendicular to the curved surface to be welded and had to move at constant speed. Yoerger tested his subjects in three command (teaching) modes. The first mode was for the human teacher to move the master (with the slave following in master–slave correspondence) relative to the workpiece in the desired trajectory. The computer would memorize the trajectory and then cause the slave end effector to repeat the trajectory exactly. The second mode was for the human teacher to move the master (and slave) to each of a series of positions, pressing a key to identify each. The human would then key in additional information specifying the parameters of a curve to be fit through these points and the speed at which it was to be executed, and the computer would then be called upon for execution. The third mode was to use the master–slave manipulator to contact and trace along the workpiece itself, to provide the computer with knowledge of the location and orientation of the surfaces to be welded. Then, using the typewriter keyboard, the human teacher would specify the positions and orientations of the end effector relative to the workpiece. The computer could then execute the task instructions relative to the geometric references given.

Identifying the geometry of the workpiece analogically and then giving symbolic instructions relative to it proved the constant winner. The reasons for this advantage apparently are the same as for Brooks’s results described previously, provided of course that the time spent in the teaching loop is sufficiently short.

There are many programming languages for industrial robots, but the lack of standardization of programming methods for robots poses challenges. For example, there are over 30 different manufacturers of industrial robots, and roughly as many different robot programming languages in use.

Some robot programming languages are essentially visual. The software system for the Lego Mindstorms NXT robots is worthy of mention. It is based on and written in LabVIEW. The approach is to start with the program rather than the data. The program is constructed by dragging icons into the program area and adding or inserting them into a sequence. For each icon you then specify the parameters (data). For example, for the motor drive icon you specify which motors move and by how much.

A scripting language is a high-level programming language that is used to control the software application and is interpreted in real time, or “translated on the fly,” instead of being compiled in advance. A scripting language may be a general-purpose programming language or it may be limited to specific functions used to augment the running of an application or system program. Some scripting languages, such as RoboLogix, have data objects residing in registers, and the program flow represents the list of instructions, or instruction set, that is used to program the robot. The RoboLogix instruction set is shown in Figure 8.

Programming languages are generally designed for building data structures and algorithms from scratch, while scripting languages are intended more for connecting, or “gluing,” components and instructions together. Consequently, the scripting language instruction set is usually a streamlined list of program commands that are used to simplify the programming process and provide rapid application development.
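A sketch of the “gluing” point, deliberately not in RoboLogix syntax: an invented, streamlined instruction list that merely sequences pre-built operations over stored position registers and is interpreted in order.

```python
# Sketch of a scripting-style robot program (invented opcodes, not RoboLogix).

registers = {"R1": (350.0, 0.0, 120.0), "R2": (350.0, 200.0, 120.0)}   # stored positions

script = [
    ("MOVE_TO", "R1"),
    ("CLOSE_GRIPPER",),
    ("MOVE_TO", "R2"),
    ("OPEN_GRIPPER",),
]

def run(script, primitives):
    for opcode, *args in script:           # interpreted in order, "translated on the fly"
        primitives[opcode](*args)

run(script, primitives={
    "MOVE_TO": lambda reg: print("moving to", registers[reg]),
    "CLOSE_GRIPPER": lambda: print("gripper closed"),
    "OPEN_GRIPPER": lambda: print("gripper opened"),
})
```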

Teaching airplane autopilots is a good example of the teaching role in supervisory control. Modern airplanes can now adjust their throttle, pitch, and yaw damping characteristics automatically. They can take off and climb to altitude autonomously or fly to a given latitude and longitude and can maintain altitude and direction despite wind disturbances. They can approach and land automatically in zero-visibility conditions. To do these tasks, airplanes make use of artificial sensors, motors, and computers programmed in supervisory fashion by pilots and ground controllers. In this sense airplanes are telerobots in the hands of their pilot teachers. In the aviation world the supervising pilot is called a flight manager.

The flight management system (FMS) is the aircraft embodiment of the HIC discussed previously and currently is where the supervisory teaching is done. The typical FMS has a cathode ray tube (CRT) display and both generic and dedicated keysets. More than 1000 modules provide maps for terrain and navigational aids, procedures, and synoptic diagrams of various electrical and hydraulic subsystems. Proposed electronic maps show planned flight route, weather, and other navigational aids.


Figure 8 RoboLogix instruction set (from Wikipedia).

When the pilot enters a certain flight plan, the FMS can visualize the trajectory automatically and call attention to any waypoints that appear to be erroneous on the basis of a set of reasonable assumptions. Conflict probe displays call the ground controller’s attention to incipient separation violations, while cockpit traffic displays do the same for the pilot.
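The waypoint sanity check can be sketched as follows; the "reasonable assumptions" here (maximum leg length and climb gradient) are invented thresholds for illustration, not those of any actual FMS.

```python
# Sketch of a waypoint sanity check against invented "reasonable assumptions."

from math import radians, sin, cos, asin, sqrt

def great_circle_nm(a, b):
    (lat1, lon1), (lat2, lon2) = map(lambda p: (radians(p[0]), radians(p[1])), (a, b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3440.1 * asin(sqrt(h))               # Earth radius in nautical miles

def check_flight_plan(waypoints, max_leg_nm=500.0, max_climb_ft_per_nm=1000.0):
    """waypoints: list of (lat, lon, altitude_ft). Returns human-readable warnings."""
    warnings = []
    for i in range(1, len(waypoints)):
        (lat1, lon1, alt1), (lat2, lon2, alt2) = waypoints[i - 1], waypoints[i]
        leg = great_circle_nm((lat1, lon1), (lat2, lon2))
        if leg > max_leg_nm:
            warnings.append(f"leg {i}: {leg:.0f} nm is unusually long -- mistyped lat/lon?")
        if leg > 0 and abs(alt2 - alt1) / leg > max_climb_ft_per_nm:
            warnings.append(f"leg {i}: altitude change of {alt2 - alt1:+.0f} ft looks too steep")
    return warnings

plan = [(42.36, -71.01, 0), (42.80, -71.00, 18000), (52.80, -70.00, 20000)]  # last latitude mistyped
print(check_flight_plan(plan))
```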

The problem of authority is one of the most difficult (Boehm-Davis et al., 1983). Popular mythology is that the pilot is (or should be) in charge at all times. But when a human turns control over to an automatic system, it is the exception that she or he can do something else for a while (as in the case of setting one’s alarm clock and going to sleep). It is also recognized that there are limited windows of opportunity for escaping from the automation (once you get on an elevator you can get off only at discrete floor levels). People are seldom inclined to “pull the plug” unless they receive clear signals indicating that such action must be taken and unless circumstances make it convenient for them to do so. Examples of some current debates follow:

1. Should there be certain states or a certain envelope of conditions for which the automation will simply seize control from the pilot?

2. Should the computer deviate from a programmed flight plan automatically if critical unanticipated circumstances arise?

3. If the pilot programs certain maneuvers ahead of time, should the aircraft execute these automatically at the designated time or location, or should the pilot be called upon to provide further concurrence or approval?

4. In the case of a subsystem abnormality, should the affected subsystem be reconfigured automatically, with after-the-fact display of what has failed and what has been done about it? Or should the automation wait to reconfigure until after the pilot has learned about the abnormality, perhaps been given some advice on the options, and had a chance to take the initiative?

It is important to emphasize that simple and ideal command-and-feedback patterns are not to be expected as systems get more complex. In interactions between a human supervisor and his or her subordinates, or a teacher and the students, it can be expected that the teaching process will not be a one-way communication. Some feedback will be necessary to indicate whether the message is understood or to convey a request for clarification on some aspect of the instructions. Further, when the subordinate or student does finally act on the instruction, the supervisor may not understand from the immediate feedback what the subordinate has done and may ask for further details. This is illustrated in Figure 9 by the light arrows, where the bold arrows characterize the conventional direction of information in feedback control.

Teaching a computer for supervisory control actually goes beyond what can be thought of as providing if–then–else instructions for mechanical actions (as with a robot or vehicle), usually called the control law. It also includes setting or changing parameters of how properties of the system (states) are measured, how such information is displayed, how the interaction of the system with its environment is modeled or simulated for planning future actions, as well as properties of the control interface. These many options for system parameter change are illustrated in Figure 10.
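The following sketch renders the categories of Figure 10 as a simple data structure; the field names and values are invented for illustration and imply nothing about any particular system:

# Hypothetical grouping of the parameter categories suggested by Figure 10.
supervisory_parameters = {
    "control_law":       {"gain": 1.0, "setpoint": 0.0},
    "measurement":       {"sample_period_s": 0.1, "filter_cutoff_hz": 2.0},
    "display_interface": {"units": "SI", "trend_window_s": 60},
    "control_interface": {"input_device": "keyset", "confirm_required": True},
    "simulation":        {"model_order": 2, "fast_time_ratio": 10},
    "externalities":     {"wind_model": "steady", "disturbance_sigma": 0.05},
}

def reteach(category, **changes):
    """The supervisor 'teaches' by changing parameters, not by re-coding."""
    supervisory_parameters[category].update(changes)

reteach("control_law", setpoint=5.0)               # change the reference input
reteach("display_interface", trend_window_s=300)   # ask for a longer trend plot
print(supervisory_parameters["control_law"])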

8 MONITORING OF DISPLAYS AND DETECTION OF FAILURES

The human supervisor monitors the automated execution of the task to ensure proper control (Parasuraman, 1987). This includes intermittent adjustment or trimming if the process performance remains within satisfactory limits. It also includes detection of if and when it goes outside limits and the ability to diagnose failures or other abnormalities. The subject of failure detection in human–machine systems has received considerable attention (Rasmussen and Rouse, 1981).


Figure 9 Intermediate feedback in command and display. Heavy arrows indicate the conventional understanding of functions. Light arrows indicate critical additional functions that tend to be neglected. (From Sheridan, 1984.) The figure pairs functions of the human with functions of the computer through analogic or symbolic communication: on the command side, (1) generation of command by the supervisor, (2) understanding of command, (3) display of understanding, and (4) clarification of command; on the display side, (1) display of system state, (2) understanding of state, (3) query about state, and (4) display of clarification.

Figure 10 Set of options for changing parameters in supervisory control. The human supervisor, using knowledge of externalities, can change parameters of the control interface, the control law, the measurement of the bounded problem space, the display interface, and the simulation of the problem space.

Moray (1986) regards such failure detection and diagnosis as the most important human supervisory role. I prefer the view that all five supervisory roles are essential and that no one can be placed above the others.
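One common way to mechanize the detection part of this role, offered only as a minimal sketch (the dynamics, threshold, and persistence count are invented), is to compare each measurement against a simple model's prediction and declare an abnormality when the residual stays persistently large:

# Hypothetical residual-based failure detector: compare measurements with a
# crude model's predictions and flag persistent disagreement.
def detect_failure(measurements, gain=0.9, threshold=1.0, persistence=3):
    predicted, run = measurements[0], 0
    for t, y in enumerate(measurements[1:], start=1):
        residual = y - predicted
        run = run + 1 if abs(residual) > threshold else 0
        if run >= persistence:
            return t  # time step at which a failure is declared
        predicted = gain * y  # one-step-ahead prediction of a decaying process
    return None

healthy = [10.0, 9.0, 8.1, 7.3, 6.6, 5.9, 5.3]
failed  = [10.0, 9.0, 8.1, 12.0, 13.5, 15.0, 16.5]  # process stops decaying at the fourth sample
print(detect_failure(healthy), detect_failure(failed))  # -> None 5 (declared after three large residuals)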

The supervisory controller tends to be removed from full and immediate knowledge about the controlled process. The physical processes that he or she must monitor tend to be large in number and distributed widely in space (e.g., around a ship or plant). The physical variables may not be immediately sensible by him or her (e.g., steam flow and pressure) and may be computed from remote measurements on other variables. Sitting in the control room or cockpit, the supervisor is dependent on various artificial displays to give feedback of results as well as knowledge of new reference inputs or disturbances. These factors greatly affect how he or she detects and diagnoses abnormalities in the process, but whether removal from active participation in the control loop makes it harder (Ephrath and Young, 1981) or easier (Curry and Ephrath, 1977) remains an open question. Gai and Curry (1978) and Wickens and Kessell (1979, 1981) have studied various psychophysical aspects of this problem.

Ferris et al. (2010) call attention to what they call observability (i.e., lack of adequate feedback about targets, actions, decision logic, or operational limits). The concept comports with the same term in control theory, which has to do with how well the internal states of a system can be inferred by knowledge of its external outputs:

Low observability has been shown to lead to a lack or loss of mode awareness, that is, a lack of knowledge and understanding of the current and future automation configuration and behavior . . . One manifestation of reduced mode awareness are automation surprises, in which pilots detect a discrepancy between actual and expected or assumed automation behavior . . . often resulting from uncommanded or indirect (i.e., without an explicit instruction by the pilot) mode transitions.
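To make the connection with the control-theoretic term concrete, a linear system x' = Ax, y = Cx is observable exactly when the matrix formed by stacking C, CA, ..., CA^(n-1) has full rank; the brief numerical check below uses an arbitrary example system:

import numpy as np

def is_observable(A, C):
    """Check observability of x' = Ax, y = Cx via the rank of [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    O = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(O) == n

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
print(is_observable(A, np.array([[1.0, 0.0]])))          # True: the output reveals both states
print(is_observable(np.eye(2), np.array([[1.0, 0.0]])))  # False: the decoupled second state is hidden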

These authors also discuss the serious problem of mode awareness and errors. Mode errors of commission occur when a pilot executes an action that is appropriate for the assumed but not the actual current mode of the system. Mode errors of omission take place when the pilot fails to take an action that would be required given the currently active automation configuration and behavior (Abbott et al., 1996).

Sarter et al. (2007) describe a study in which airline pilots participated in a full-mission 747-400 simulation that included a variety of challenging automation events. Using eye motion instrumentation, they found that pilots monitor basic flight parameters to a much greater extent than visual indications of the automation configuration. More specifically, pilots frequently fail to verify manual mode selections or notice automatic mode changes. In other cases, they do not process mode annunciations in sufficient depth to understand their implications for aircraft behavior.

In traditional control rooms and cockpits the tendency has been to provide the human supervisor with an individual and independent display of each variable and for a large fraction of these to provide a separate additional alarm display that lights up when the corresponding variable reaches or exceeds some value. Thus, modern aircraft may easily have over 1000 displays and modern chemical or power plants 5000 displays. In the writer's experience, in one nuclear plant training simulator, during the first minute of a “loss of coolant accident,” 500 displays were shown to have changed in a significant way, with 800 more in the second minute.

Clearly, no human being can cope with so much information coming simultaneously from so many seemingly disconnected sources. Just as clearly, such signals in any real operating system actually are highly correlated. In real-life situations in which we move among people, animals, plants, or buildings, our eyes, ears, and other senses easily take in and comprehend vast amounts of information just as much as in the power plant. Our genetic makeup and experience enable us to integrate the bits of information from different parts of the retina and from different senses from one instant to the next, presumably because the information is correlated. We say we “perceive patterns” but do not pretend to understand how. In any case the challenge is to design displays in technological systems to somehow integrate the information to enable the human operator to perceive patterns in time and space and across the senses. As with teaching (command), the forms of display may be either analogic (e.g., diagrams, plots) or symbolic (e.g., alphanumerics) or some combination.

In the nuclear power industry the safety parameter display system (SPDS) is now required of all plants in some form. The idea of the SPDS is to select a small number (e.g., 6–10) of variables that tell the most about plant safety status and to display them in integrated fashion such that by a glance the human operator can see whether something is abnormal and, if so, what and to what relative degree. Figure 11 shows an example of an SPDS. It gives the high-level or overview display (a single computer “page”). If the operator wishes more detailed information about one variable or subsystem, he or she can page down (select lower levels). These can be diagrams having lines or symbols that change color or flash to indicate changed status and alphanumerics to give quantitative or more detailed status. These can also be bar graphs or cross plots or integrated in other forms. One novel technique is the Chernoff face (Figure 11c), in which the shapes of eyes, ears, nose, and mouth differ systematically to indicate different values of variables, the idea being that facial patterns are easily perceived. Allegedly, the Nuclear Regulatory Commission, fearful that some enterprising designer might employ this technique before it was proven, formally forbade it as an acceptable SPDS.
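A minimal sketch of the integration idea behind an SPDS (the variable-to-function mapping and the limits are invented; they do not describe any actual plant): many raw variables are reduced to a handful of safety-function statuses that can be taken in at a glance:

# Hypothetical SPDS-style aggregation: raw plant variables are rolled up into
# per-safety-function statuses instead of hundreds of separate alarms.
SAFETY_FUNCTIONS = {
    "reactivity control":    ["neutron_flux"],
    "core cooling":          ["core_exit_temp", "coolant_flow"],
    "RCS integrity":         ["rcs_pressure"],
    "containment integrity": ["containment_pressure"],
}
BANDS = {"neutron_flux": (0.0, 1.05), "core_exit_temp": (0.0, 620.0),
         "coolant_flow": (90.0, 110.0), "rcs_pressure": (1800.0, 2300.0),
         "containment_pressure": (0.0, 5.0)}

def spds_overview(measurements):
    """Return NORMAL/ABNORMAL per safety function."""
    overview = {}
    for function, variables in SAFETY_FUNCTIONS.items():
        ok = all(BANDS[v][0] <= measurements[v] <= BANDS[v][1] for v in variables)
        overview[function] = "NORMAL" if ok else "ABNORMAL"
    return overview

print(spds_overview({"neutron_flux": 0.98, "core_exit_temp": 640.0,
                     "coolant_flow": 100.0, "rcs_pressure": 2200.0,
                     "containment_pressure": 1.2}))  # only core cooling shows ABNORMAL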

As noted previously (Figure 3), an important potential of the HIC is for modeling the controlled process. Such a model may then be used to generate a display of observed state variables that cannot be seen or measured directly. Another use is to run the model in fast time to predict the future, given of course that the model is calibrated to reality at the beginning of each such predictive run. A third use, now being developed for application to remote control of manipulators and vehicles in space, helps the human operator cope with telemetry time delays (as shown in Figure 12, wherein video feedback is necessarily delayed by at least several seconds). By sending control signals to a computer model as a basis for superposing the corresponding graphic model on the video, the graphic model will “lead” the video picture and indicate what the video will do several seconds hence. This has been shown to speed up the execution of simple manipulation tasks by 70–80% (Noyes and Sheridan, 1984).
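The predictor idea of Figure 12 can be caricatured in a few lines: commands go both to the delayed remote arm and to a local model, and the model's state is displayed immediately, ahead of the delayed video. The sketch below assumes a trivial one-dimensional arm and a fixed delay; it is not the Noyes–Sheridan implementation:

from collections import deque

DELAY_STEPS = 3  # assumed round-trip telemetry delay, in control steps

def simulate(commands):
    """Show what the local predictor displays versus what the delayed video shows."""
    pipeline = deque([0.0] * DELAY_STEPS, maxlen=DELAY_STEPS)  # commanded positions in transit
    for u in commands:
        predicted = u                # the local model responds immediately
        delayed_video = pipeline[0]  # video shows the arm DELAY_STEPS ago
        pipeline.append(u)           # the actual arm will reach u after the delay
        print(f"command={u:4.1f}  predictor shows {predicted:4.1f}  video shows {delayed_video:4.1f}")

simulate([1.0, 2.0, 3.0, 3.0, 3.0, 3.0])  # the predictor "leads" the video by three steps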

Advances in computer graphics, as driven by the computer game industry, film animation and special effects, and other simulations (virtual reality), have meant that computer displays are theoretically limitless in what they can display: dynamically at high resolution, in color, and on a head-mounted display if that is called for. The challenge for the display designer is then: What will provide the most effective interaction with the human supervisor?


Figure 11 Safety parameter display system for a nuclear power plant. The figure shows a top-level overview of six safety functions (radioactivity control, RCS integrity, containment integrity, secondary heat removal, core cooling and heat removal, and reactivity control) with the status of associated actuation signals, and a lower-level diagram of the coolant injection and blowdown subsystems distinguishing available components, unavailable components, and the selected success path.

A final aspect of supervisory monitoring and display concerns format adaptivity—the ability to change the format and/or the logic of the displays as a function of the situation. Displays in aerospace and industrial systems now have fixed formats (e.g., the labels, scales, and ranges are designed into the display). Alarms have fixed set points. However, future computer-generated displays even for the same variables may be different at various mission stages or in various conditions. Thus, formats may differ for aircraft takeoff, landing, and en-route travel and be different for plant startup, full-capacity operation, and emergency shutdown. Some alarms have no meaning or may be expected to go off when certain equipment is being tested or taken out of service. In such a case adaptively formatted alarms may be suppressed or the set points changed automatically to correspond to the operating mode. Future displays and alarms could also be formatted or adjusted to the personal desires of the supervisor to provide any time scale, degree of resolution, and so on, necessary at the time. Ideally, some future displays could adapt based on a running model of how the human supervisor's perception was being enhanced.
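A minimal sketch of mode-dependent alarm adaptation (the modes, set points, and suppression rules are invented): the same raw variable is alarmed differently, or not at all, depending on the operating mode:

# Hypothetical mode-adaptive alarm logic: set points and suppressions change
# with the operating mode. All modes, variables, and numbers are invented.
ALARM_RULES = {
    "full_power":  {"coolant_temp_high": 320.0, "test_circuit_fault": None},  # None = suppressed
    "shutdown":    {"coolant_temp_high": 120.0, "test_circuit_fault": 1.0},
    "maintenance": {"coolant_temp_high": None,  "test_circuit_fault": None},
}

def active_alarms(mode, measurements):
    """Return the alarms that should actually be annunciated in this mode."""
    rules = ALARM_RULES[mode]
    return [name for name, setpoint in rules.items()
            if setpoint is not None and measurements.get(name, 0.0) > setpoint]

readings = {"coolant_temp_high": 150.0, "test_circuit_fault": 2.0}
print(active_alarms("full_power", readings))  # []: 150 is normal at power, test alarm suppressed
print(active_alarms("shutdown", readings))    # ['coolant_temp_high', 'test_circuit_fault']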

A currently popular research challenge is to measure highway vehicle driver task workload and to determine whether the driver's use of potential in-vehicle information distracters, such as the cell phone, radio, navigation system, and so on, should be prohibited during busy demands of traffic (Boer, 2000; Llaneras, 2000; Lee et al., 2002). This would make the driver interfaces adaptive.

There are hazards, of course, in allowing emergency displays to be too flexible, to the point where they cause errors rather than preventing them. Mode errors, where the operator believes that he or she is operating in one mode but actually is operating in a different mode, can be dangerous. An example of where flexibility in monitoring displays went awry was in an aircraft accident that occurred in Europe several years ago. In this instance the pilot could ask to have either descent rate (thousands of feet per minute) or descent angle (degrees) presented, and depending on how the mode control panel had been set, the number was indicated by two digits displayed at the same location. In this case the pilot forgot which mode he had requested (although that information was also displayed but at a different location). The result was a misreading and a tragic crash.

9 INTERVENING AND HUMAN RELIABILITY

Sarter and Woods (2000) and Wiener (1988) write about automation surprises, the tendency of automatic systems to catch the human supervisor off-guard such that the human thinks: What is the automation doing now? What will it do next? How did I get into this mode? Why did it do this? How do I stop the machine from doing this? Why won't it do what I want?

The challenge of surprise is a great one, and there are no easy answers.


Figure 12 Predictor display for delayed telemanipulator. Position commands go both to the transmitter, receiver, spacecraft, and video camera and to a local computer model whose graphics are superposed on the returned video. The computer-generated graphics show the arm position commanded at time t (implemented at t + Δt, with feedback at t + 2Δt), while the video shows the arm position at t − Δt, which was commanded at t − 2Δt.

Computers do what they have been programmed to do, which is not always what the user intended. User education—toward better understanding of how the system works—is one remedy. Another is to provide error messages that are couched in a language understandable to the operator (not in the jargon of the computer programmer, a problem so familiar to all users of computers). Generally, the solution lies in some form of feedback—to lead the human in making a mild or radical intervention, as appropriate.

The supervisor decides to intervene when the computer has completed its task and must be retaught for the next task, when the computer has run into difficulty and requests of the supervisor a decision as to which way to go, or when the supervisor decides to stop automatic action because he or she judges that system performance is not satisfactory. Intervention is a problem that really has not received as much attention as teaching and monitoring. Yet systems are being planned in which the supervisory operator is expected to receive advice from a computer-based system about remote events and within seconds decide whether to accept the computer's advice (in which case the response is commanded automatically) or reject the advice and generate his or her own commands (in effect, intervene in an otherwise automatic chain of events).

It is at the intervention stage that human error most reveals itself. Errors in learning from past experience, planning, teaching, and monitoring will surely exist. Many of these are likely to be corrected as the supervisor notes them “during the doing.” It is after the automatic system is functioning and the supervisor is monitoring intermittently that those human errors make a difference and where it is therefore critical that the human supervisor intervene in time and take appropriate action when something goes wrong. Thus, the intervention stage is where human error is most manifest.

If human error is not caught by the supervisor, it is perpetuated slavishly by the computer, much as happened to the Sorcerer's Apprentice. For this reason supervisory control may be said to be especially sensitive to human error. Several factors affect the supervisor's decision to intervene and/or his or her success in doing so.

1. Trade-Off between Collecting More Data and Taking Action in Time. The more data collected from the more sources, the more reliable is the decision of what, if anything, is wrong and what to do about it. Weighed against this is that if the supervisor waits too long, the situation will probably get worse, and corrective action may be too late. Formally, the optimization of this decision is called the optional stopping problem.

2. Risk Taking. The supervisor may operate from either risk-averse criteria such as minimax (minimize the worst outcome that could happen) or more risk-neutral criteria such as expected value (maximize the subjectively expected gain). Depending on the criterion, the design of a supervisory control system may be very different in complexity and cost. (A small numerical illustration of these criteria appears after this list.)

3. Mental Workload. This problem is aggravated by supervisory control. When a supervisory control system is operating well in the automatic mode, the supervisor may have little concern. When there is a failure and sudden intervention is required, the mental workload may be considerably higher than in direct manual control, where in the latter case the operator is already participating actively in the control loop. In the former case the supervisor may have to undergo a sudden change from initial inattention, moving physically and mentally to acquire information and learn what is going on, then making a decision on how to cope. Quite likely this will be a rapid transient from very little to very high mental workload.
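A small numerical illustration of the criteria in item 2, with payoffs and probabilities invented for the example (and with the understanding that, per item 1, waiting for more data would revise the probability before choosing): an expected-value chooser and a minimax chooser can prefer different interventions:

# Hypothetical payoff table (invented numbers) for three interventions under
# two possible plant states, illustrating the risk criteria of item 2.
payoffs = {                       # state: "minor"  "major"
    "wait and watch":       {"minor": 10,  "major": -100},
    "partial shutdown":     {"minor": -5,  "major": -20},
    "full emergency trip":  {"minor": -30, "major": -30},
}
p_major = 0.1  # supervisor's current belief that the fault is major

def expected_value_choice():
    return max(payoffs, key=lambda a: (1 - p_major) * payoffs[a]["minor"]
                                      + p_major * payoffs[a]["major"])

def minimax_choice():
    return max(payoffs, key=lambda a: min(payoffs[a].values()))

print(expected_value_choice())  # 'wait and watch': best on average when major faults are rare
print(minimax_choice())         # 'partial shutdown': best worst case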

Although the subject of human error is currently of great interest, there is no consensus on either a taxonomy or a theory of causality of errors. One common error taxonomy relates to locus of behavior: sensory, memory, decision, or motor. Another useful distinction is between errors of omission and those of commission. A third is between slips (correct intentions that inadvertently are not executed) and mistakes (intentions that are executed but that lead to failure).

In supervisory control there are several problems of human error worth particular mention. One is the type of slip called capture. This occurs when the intended task requires a deviation from a well-rehearsed (behaviorally) and well-programmed (in the computer) procedure. Somehow habit, augmented by other cues from the computer, seems to capture behavior and drive it on to the next (unintended) step in the well-rehearsed and computer-reinforced routine.

A second supervisory error, important in both planning and failure diagnosis, results from the human tendency to seek confirmatory evidence for a single hypothesis currently being entertained (Gaines, 1976). It would be better if the supervisor could keep in mind a number of alternative hypotheses and let both positive and negative evidence contribute symmetrically in accordance with the theory of Bayesian updating (Sheridan and Ferrell, 1974). Norman (1981), Reason and Mycielska (1982), Rasmussen (1982), and Rouse and Rouse (1983) provide reviews of human error research from their different perspectives.
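A minimal sketch of such symmetric Bayesian updating (the hypotheses, priors, and likelihoods are invented): each piece of evidence, whether confirming or disconfirming, reweights every hypothesis rather than only the favored one:

# Hypothetical Bayesian update over three competing failure hypotheses.
priors = {"sensor drift": 0.5, "valve stuck": 0.3, "pump degraded": 0.2}
# P(observation | hypothesis) for two observations, one confirming, one disconfirming.
likelihoods = {
    "pressure reading erratic": {"sensor drift": 0.8, "valve stuck": 0.3, "pump degraded": 0.4},
    "flow rate still normal":   {"sensor drift": 0.7, "valve stuck": 0.1, "pump degraded": 0.2},
}

def bayes_update(belief, observation):
    """Reweight every hypothesis by its likelihood, then renormalize."""
    unnormalized = {h: belief[h] * likelihoods[observation][h] for h in belief}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

belief = priors
for obs in ["pressure reading erratic", "flow rate still normal"]:
    belief = bayes_update(belief, obs)
print({h: round(p, 2) for h, p in belief.items()})  # 'sensor drift' now dominates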

Theoretically, anything that can be specified in an algorithm can be given over to the computer. However, the reason the human supervisor is present is to add novelty and creativity: precisely those ingredients that cannot be prespecified. This means, in effect, that the best or most correct human behavior cannot be prespecified and that variation from precise procedure must not always be viewed as errant noise. The human supervisor, by the nature of his or her function, must be allowed room by the system design for what may be called trial and error (Sheridan, 1983).

What training should the human supervisory controller receive to do a good job at detecting failures and intervening to avoid errors? As the supervisor's task becomes more cognitive, is the answer to provide training in theory and general principles? Curiously, the literature seems to provide a negative answer (Duncan, 1981). In fact, Moray (1986), in his review, concludes that there seems to be no case in the literature where training in the theory underlying a complex system has produced a dramatic change in fault detection or diagnosis. Rouse (1985) similarly concludes “that the evidence [e.g., Morris and Rouse (1985)] does not support a conclusion . . . that diagnosis of the unfamiliar requires theory and understanding of system principles.” Apparently, frequent hands-on experience in a simulator (i.e., with simulated failures) is the best way to enable a supervisor to retain an accurate mental model of a process.

A final issue to be mentioned in conjunction with human reliability is that of trust. It comes in two forms: overtrust, also called automation bias, and undertrust. Ferris et al. (2010) provide illustrations of both. An example of overtrust occurred in the fatal 1995 accident over Cali, Colombia, where pilots got confused over waypoint indications and were far off course but nevertheless trusted the FMS to take care of them as they flew into a mountain. An example of undertrust was a survey of fighter pilots who opined that UAVs could never replace human-piloted aircraft in various search and other missions.

10 MODELING SUPERVISORY CONTROL

For 35 years various models of supervisory control have been proposed. Most of these have been models of particular aspects of supervisory control, not apparently claiming to model all or even very many aspects of it. The simplest model of supervisory control might be that of nested control loops (Figure 13), where one or more inner loops are automatic and the outer one is manual.

Figure 13 Nested control loops of aerospace vehicle. The human supervisor supplies waypoints to the navigation computer, which supplies course and altitude to the guidance computer, which supplies attitude commands to the control computer.


In aerospace vehicles the innermost of four nested loops is typically called “control,” the next “guidance,” and the next “navigation,” each having a set point determined by the next outer loop. Hess and McNally (1997) have shown how conventional manual control models can be extended to such multiloop situations. The outer loop in this generic aerospace vehicle includes the human operator, who, given mission goals, programs in the destination. In driving a car the functions of navigation, guidance, and control are all done by a person and can be seen to correspond roughly to knowledge-based, rule-based, and skill-based behavior, respectively.
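A minimal sketch of the nesting of Figure 13, with dynamics and gains invented for illustration: the human supplies a waypoint, the navigation loop turns position error into a course set point, the guidance loop turns course error into an attitude set point, and the innermost loop tracks attitude:

# Hypothetical three-loop nesting (navigation -> guidance -> control).
# Each loop only sets the reference of the next; gains and dynamics are invented.
def navigation(waypoint, position):
    return 0.5 * (waypoint - position)      # desired course (outer, slow loop)

def guidance(course_ref, course):
    return 2.0 * (course_ref - course)      # desired attitude (middle loop)

def control(attitude_ref, attitude):
    return 4.0 * (attitude_ref - attitude)  # actuator command (inner, fast loop)

position = course = attitude = 0.0
waypoint = 10.0                             # set by the human supervisor
for _ in range(200):
    attitude += 0.1 * control(guidance(navigation(waypoint, position), course), attitude)
    course   += 0.1 * attitude
    position += 0.1 * course
print(round(position, 2))  # close to 10.0: the vehicle converges to the supervisor's waypoint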

Figure 14 is a qualitative functional model of supervisory control, showing the various cause–effect loops or relationships among elements of the system and emphasizing the symmetry of the system as viewed from top and bottom (human, task) of the hierarchy.

Figure 15 extends Rasmussen's model of skill-based, rule-based, and knowledge-based behavior to show various interactions with computer aids having comparable levels of intelligence.

One problem the supervisor faces is allocating attention between different tasks.

Figure 14 Multiloop model of supervisory control. (From Sheridan, 1984.) The human operator interacts through displays and controls with the human-interactive subsystem (HIS) computer, which communicates with the task-interactive subsystem (TIS) computer, sensors, actuators, and the task. The numbered loops are: (1) the task is observed directly by the human operator's own senses; (2) the task is observed indirectly through artificial sensors, computers, and displays, and this TIS feedback interacts with that from within the HIS and is filtered or modified; (3) the task is controlled within the TIS automatic mode; (4) the task is affected by the process of being sensed; (5) the task affects actuators and in turn is affected; (6) the human operator directly affects the task by manipulation; (7) the human operator affects the task indirectly through a controls interface, the HIS/TIS computers, and actuators, and this control interacts with that from within the TIS and is filtered or modified; (8) the human operator gets feedback from within the HIS, in editing a program, running a planning model, etc.; (9) the human operator orients himself or herself relative to the controls or adjusts control parameters; (10) the human operator orients himself or herself relative to the displays or adjusts display parameters.


Figure 15 Supervisor interactions with computer decision aids at knowledge, rule, and skill levels. The human supervisor's knowledge-based behavior exchanges high-level goals and requests for advice with knowledge-based aiding, which returns advice; rule-based behavior exchanges if–then commands and requests for rules with rule-based aiding, which returns rules; and skill-based behavior exchanges detailed control commands and requests for demonstrations with skill-based aiding, which returns demonstrations. The computer aiding exercises automatic control over the controlled process and the supervisor retains manual control, while signals, patterns and signs, and high-level symbolic information flow back to the skill-based, rule-based, and knowledge-based levels, respectively.

Each time that he or she switches tasks there is a time penalty in transfer, typically different for different tasks and possibly involving uses of different software procedures, different equipment, and even bodily transportation of himself or herself to different locations. Given relative worths for time spent attending to various tasks, it has been shown (Sheridan, 1970) that dynamic programming enables the optimal allocation strategy to be established. Moray et al. (1982) applied this model to deciding whether human or computer should control various variables at each succeeding moment. For simpler experimental conditions, the model fit the experimental data (subjects acted like utility maximizers), but as task conditions became complex, apparently it did not. Wood and Sheridan (1982) did a similar study where supervisors could select among alternative automatic machines (differing in both rental cost and productivity) to do assigned tasks or do the tasks themselves. Results showed the supervisors to be suboptimal, paying too much attention to costs and too little to productivity, and in some cases using the automation when they could have done the tasks more efficiently manually. Govindaraj and Rouse (1981) modeled the supervisor's decisions to divert attention from a continuous task to perform or monitor a discrete task.
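A minimal sketch of the allocation question (task values and the switching penalty are invented, and a one-step greedy rule stands in for the dynamic programming of the cited work): at each moment the value of staying on the current task is weighed against the value of switching, net of the transfer penalty:

# Hypothetical attention-allocation rule with a switching penalty.
TASKS = {"monitor reactor": 5.0, "update flight plan": 3.0, "answer radio call": 8.0}
SWITCH_PENALTY = 4.0  # value lost in transferring attention to a different task

def next_task(current, remaining_time=1.0):
    """Greedy one-step-ahead choice: stay unless another task pays for the switch."""
    def net_value(task):
        penalty = 0.0 if task == current else SWITCH_PENALTY
        return TASKS[task] * remaining_time - penalty
    return max(TASKS, key=net_value)

print(next_task("monitor reactor"))                      # stays: 5 > 8 - 4
print(next_task("monitor reactor", remaining_time=3.0))  # switches to the radio call: 24 - 4 > 15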

Rouse (1977) utilized a queueing theory approach to model whether from moment to moment a task should be assigned to a computer or to the operator. The allocation criterion was to minimize service time under cost constraints. Results suggested that human–computer “misunderstanding” of one another degraded efficiency more than limited computer speed. In a related flight simulation study, Chu and Rouse (1979) had a computer perform those tasks that had waited in the queue beyond a certain time. Chu et al. (1980) extended this idea to have the computer learn the pilot's priorities and later make suggestions when the pilot was under stress.

Tulga and Sheridan (1980) and later Pattipatti et al. (1983) utilized a model of allocation of attention among multiple task demands, a task displayed on the computer screen to the subject as is represented in Figure 16. Instead of being stationary, these demands appear at random times (not being known until they appear), exist for given periods of time, then disappear at the end of that time with no more opportunity to gain anything by attending to them. While available, they take differing amounts of time to complete and have differing rewards for completion, which information may be available after they appear and before they are “worked on.” The human decision maker in this task need not allocate attention in the same temporal order in which the task demands become known, nor in the same order in which their deadlines will occur. Instead, he or she may attend first to that task which has the highest payoff or takes the least time and/or may plan ahead a few moves so as to maximize gains. The Tulga–Sheridan experimental results suggest that subjects approach optimal behavior, which, when heavily loaded (i.e., there are more opportunities than he or she can possibly cope with), simply amounts to selecting the task with the highest payoff regardless of time to deadline. These subjects also reported that their sense of subjective workload was greatest when by arduous planning they could barely keep up with all tasks presented.


Figure 16 Multitask computer display used in the Tulga–Sheridan experiment. Each task is shown with its duration, the value of doing it, and its speed toward the deadline (the time available).

When still more tasks came at them and they had to select which they could do and which they had to off-load, subjective workload decreased.
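A minimal sketch of the heavily loaded case just described (task attributes invented): when there are more tasks than can possibly be completed, the near-optimal policy reduces to picking the highest-payoff task that can still be finished, regardless of deadline order:

# Hypothetical multitask queue in the spirit of the Tulga-Sheridan paradigm.
# Each task: (name, reward, time_required, time_until_deadline). Numbers invented.
tasks = [("A", 10.0, 4.0, 5.0), ("B", 25.0, 6.0, 12.0),
         ("C", 15.0, 2.0, 3.0), ("D", 40.0, 8.0, 7.0)]

def pick_under_heavy_load(tasks):
    """Select the feasible task with the highest reward, ignoring deadline order."""
    feasible = [t for t in tasks if t[2] <= t[3]]  # enough time remains to finish
    return max(feasible, key=lambda t: t[1])[0] if feasible else None

print(pick_under_heavy_load(tasks))  # 'B': highest reward among tasks that can still be completed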

Researchers and designers of supervisory control systems must cope with a number of questions. Among these are (1) how much autonomy is appropriate for the TIC, (2) how much the TIC and the HIC should tell the human supervisor, and (3) how responsibilities should be allocated among the TIC, HIC, and supervisor (Johannsen, 1981).

The famous Yogi Berra allegedly counseled: “Never make predictions, especially about the future!” Nevertheless, it is ethically mandatory that we predict as best we can. However, recent decades have seen a shift away from monolithic, computationally predictive models toward frameworks or categorizations of models, each of which may be quite simple—involving elementary control laws, a few heuristics, or pattern recognition rules. Thus, as knowledge and understanding of supervisory control have grown, along with its complexity, researchers have come to realize that they need not and cannot be held to comprehensive predictive models, desirable as they may be.

The most difficult, and it might even be said impossible, aspect of supervisory control to model is that of setting in goals, conditions, and values. Even though overall goals may be given to an actual system (or given in an experiment), how those are translated into subgoals and conditional statements remains elusive. The same is true for communicating values (criteria, coefficients of utility, etc.). Although this act of evaluation remains the sine qua non of why human participation in system control must remain, there is little prospect for mathematical modeling of this aspect in the near future.

11 POLICY ALTERNATIVES FOR HUMAN SUPERVISORY CONTROL

This section confronts the question of what policies might be adopted in dealing with the human–automation interaction challenges that are unavoidable in the systems discussed here. Dilemmas will surely arise with regard to (a) when not to follow the recommendation of a decision support tool or when to bypass automation when either is believed to have failed or not be appropriate to the current situation and (b) how long to wait for automation to act before intervening manually.

The public will still demand both safety and efficiency and may continue to place stringent expectations on the human to provide both unless and until automation can prove itself sufficiently robust and reliable. New technology permits closer surveillance of human behavior. These facts could even exacerbate the pressures to maintain the “blame game” of punishing what may be seen as errant behavior, even though it is fully recognized that all human beings are inclined to err from time to time (Kohn et al., 2000). Institutional cultures change slowly, even given that large human–machine system developments such as NextGen have explicitly embodied efforts to work toward a “just culture” (Dekker, 2007) in dealing with human frailties. That puts a new emphasis on learning from mistakes rather than on meting out punishment.

It seems that five alternative policy approaches with regard to human operator roles and responsibilities can be distinguished (Sheridan, 2010). These are offered as contrasting approaches. Most likely some amalgam of these will be adopted by management in different system contexts.

1. Maintain the typical status quo of full human operator responsibility. Operators would undergo extensive training in decision support tools and automation so they could understand their use and limitations. Accordingly, they would be expected to use them wisely and continue to be responsible for safety and operations. The advantage of this approach would be to minimize the need for policy and training changes in an evolving system. The disadvantage would be in leaving open dilemmas faced by the human operators as to what to do when automation seems to be inadequate to handle a situation or the controller is unsure of whether the automation will act before it is too late.

2. Define explicit behavior thresholds and criteria to determine when controllers would be held responsible. For example, in NextGen the automation will assume certain control functions previously performed by ground controllers communicating with pilots and vectoring aircraft, so controllers might be instructed not to bypass or override the automation unless and until certain explicit criteria (with respect to time, distance, etc.) are met. The ground controllers would be trained accordingly. The advantage of this approach would be that the controller would have very specific rules as to his or her responsibility. The disadvantage would be that such rules might be difficult to agree on and in any case would seem to limit the controller's discretion. Further, the more detailed the rules are that the controller is asked to commit to memory, the more likely it is that some details will be forgotten or confused.

3. Define and emphasize in both training and operation the ideal behavior and rationale to be used for each operation. The advantage would be that with operators understanding and appreciating the basic operational concepts they could make best use of their professional expertise and experience. The disadvantage would be that there may not be uniformity in their response to events, particularly off-nominal events.

4. Expect operators to “always do their best” in deciding when and how to employ automation or to bypass or override. The (refined) record-keeping would determine whether they would be exonerated in any mishap. The advantage of this approach would be that the operator would be somewhat protected if the evidence showed he or she was really trying but the constraints of the situation were just too much to handle. The disadvantage would be that operators might be motivated toward laxity, hoping in any case to claim they were doing their best based on the evidence.

5. Expect operators to “always do their best” but allow them to signal in real time when they feel they must intervene in an automated process. Encourage them to announce when they have a decision dilemma or regard the situation as untenable. The advantage would be to add evidence to the record of what happened. The disadvantage would be the same as that under 4 above.

12 SOCIAL IMPLICATIONS AND THE FUTURE OF HUMAN SUPERVISORY CONTROL

One near certainty is that, as technology of computers, sensors, and displays improves, supervisory control will become more prevalent. This should occur in two ways: (1) a greater number of semiautomated tasks will be controlled by a single supervisor (a greater number of TICs will be connected to a single HIC) and (2) the sophistication of cognitive aids, including expert systems for planning, teaching, monitoring, failure detection, and learning, will increase and include more of what we now call knowledge-based behavior in the HIC.

The World Wide Web has enabled easy worldwide communication (for those properly equipped). One aspect of that communication that up to now has hardly become manifest is the ability to exercise remote control. A number of experimental demonstrations have been performed on controlling robots between continents, and in military operations UAVs are being controlled this way, but delayed feedback still poses a difficulty for continuous control, so supervisory control clearly has an advantage here. In the future we should see many more applications of moderate- and long-distance remote control.

Concurrently, the layperson (including those of both corporate and government bureaucracies) should come to understand the potential of supervisory control much better. At the present time the layperson tends to see automation as “all or none,” where a system is controlled either manually or automatically, with nothing in between. In robotized factories the media tend to focus on the robots, with little mention of design, installation, programming, monitoring, fault detection and diagnosis, maintenance, and various learning functions that are performed by people. In the space program the same is true; options are seen to be either “automated,” “astronaut in extravehicular activity (EVA),” or “astronaut or ground controlling telemanipulator,” without much appreciation for the potential of supervisory control.

In considering the future of supervisory control relative to various degrees of automation and to the complexity or unpredictability of task situations to be dealt with, a representation such as Figure 17 comes to mind. The meanings of the four extremes of this rectangle are quite identifiable. Supervisory control may be considered to be a frontier (line) advancing gradually toward the upper right-hand corner.

For obvious reasons, the tendency has been to automate what is easiest and to leave the rest to the human. This has sometimes been called the technological imperative. From one perspective this dignifies the human contribution; from another it may lead to a hodgepodge of partial automation, making the remaining human tasks less coherent and more complex than need be, resulting in overall degradation of system performance (Bainbridge, 1983; Parsons, 1985).

“Human-centered automation” has become a popular phrase (Billings, 1991) and is often used in relation to human supervisory control. Therefore, to end this chapter, we might consider its alternative meanings.


Figure 17 Combinations of human and computer control to achieve tasks at various levels of difficulty. The axes are degree of automation and task entropy (from entirely prespecified to entirely unpredictable). Labeled regions include the human doing dignified work, surgery, the human slave, clockwork, present robots, and the perfect robot, with supervisory control shown as an advancing frontier between them.

Below are 10 alternative meanings (stated in italics) that the author has gleaned from current literature. In every case the meaning must be qualified, as is done by the one or two sentences following each particular meaning of the phrase.

1. Allocate to the human the tasks best suited to the human, allocate to the automation the tasks best suited to it. Yes, but for some tasks it really is easier to do them manually than to initialize the automation to do them. And at the other end of the spectrum are tasks that require so much skill or art or creativity that it simply is not possible to program a computer to do them.

2. Keep the human operator in the decision and control loop. That is a good idea provided that the control tasks are of appropriate bandwidth, attentional demand, and so on.

3. Maintain the human operator as the final authority over the automation. Realistically, this is not always the safest solution. It depends on the task context. In nuclear plants, for example, there are safety functions that cannot be entrusted to the human operator and cannot be overridden by him or her. Examples have been given previously in the case of aircraft automation.

4. Make the human operator's job easier, more enjoyable, or more satisfying through friendly automation. That is fine if operator ease and enjoyment are the primary considerations and if ease and enjoyment necessarily correlate with operator responsibility and system performance, but often these conditions are not the case.

5. Empower the human operator to the greatest extent possible through automation. Again one must remember that operator empowerment is not the same as system performance. Maybe the designer knows best. Don't encourage megalomaniacal operators.

6. Support trust by the human operator. Trust of the automation by the operator is often a good thing, but not always. Too much trust is just as bad as not enough trust.

7. Give the operator computer-based information about everything that he or she should want to know. We now have many examples of where too much information can overwhelm the operator to the point where performance breaks down, even when the operator originally wanted “all” the information.

8. Engineer the automation to reduce human error and keep response variability to the minimum. This, unfortunately, is a simplistic view of human error. Taken literally it reduces the operator to an automaton, a robot. Modest levels of error and response variability enhance learning (Darwin's requisite variety).

9. Make the operator a supervisor of subordinate automatic control system(s). Although this is a chapter on supervisory control, it must be noted that for some tasks direct manual control may be best.

10. Achieve the best combination of human and automatic control, where best is defined by explicit system objectives. Again, in some ideal case, where objectives can reliably be reduced to mathematics, this would be just fine. Unfortunately, automatic judgment of what is good and bad in a particular situation is seldom possible, even for a machine programmed with the best available algorithms or heuristics. Fortunately, judgment of what is good and bad in a particular situation is almost the essence of what it is to be human.


The bottom line is that proper use of automation depends upon context, which in turn depends upon designer and operator judgment.

I have written elsewhere about the long-term social implications of supervisory control (Sheridan, 1980; Sheridan et al., 1983). My concerns are reviewed here very briefly:

1. Unemployment. This is the factor most often considered. More supervisory control means more efficiency, less direct control, and fewer jobs.

2. Desocialization. Although cockpits and control rooms now require two- to three-person teams, the trend is toward fewer people per team, and eventually one person will be adequate in most installations. Thus, cognitive interaction with computers will replace that with other people. As supervisory control systems are interconnected, the computer will mediate more and more interpersonal contact.

3. Remoteness from the Product. Supervisory control removes people from hands-on interaction with the workpiece or other product. They become not only separated in space but also desynchronized in time. Their functions or actions no longer correspond to how the product itself is being handled or processed mechanically.

4. Deskilling. Skilled workers “promoted” to supervisory controller may resent the transition because of fear that when and if called on to take over and do the job manually they may not be able to. They also feel loss of professional identity built up over an entire working life.

5. Intimidation by Higher Stakes. Supervisory control will encourage larger aggregations of equipment, higher speeds, greater complexity, higher costs of capital, and probably greater economic risk if something goes wrong and the supervisor does not take the appropriate action.

6. Discomfort in the Assumption of Power. The human supervisor will be forced to assume more and more ultimate responsibilities. Depending on one's personality, this could lead to insensitivity to detail, anxiety about being up to the job requirements, or arrogance.

7. Technological Illiteracy. Supervisory controllers may lack the technological understanding of how the computer does what it does. They may come to resent this and resent the elite class who do understand.

8. Mystification. Human supervisors of computer-based systems could become mystified about the power of the computer, even seeing it as a kind of magic or “big brother” authority figure.

9. Sense of Not Being Productive. Although the efficiency and mechanical productivity of a new supervisory control system may far exceed that of an earlier manually controlled system that a given person has experienced, that person may come to feel no longer productive as a human being.

10. Eventual Abandonment of Responsibility. As a result of the factors described previously, supervisors may eventually feel that they are no longer responsible for what happens; the computers are.

These 10 potential negatives may be summarized with a single word: alienation. In short, if human supervisors of the new breed of computer-based systems are not given sufficient familiarization with and feedback from the task, sufficient sense of retaining their old skills, or ways of finding identity in new ones, they may well come to feel alienated. They must be trained to feel comfortable with their new responsibility, must come to understand what the computer does and not be mystified, and must realize that they are ultimately in charge of setting the goals and criteria by which the system operates. If these principles of human factors are incorporated into the design, selection, training, and management, supervisory control has a positive future.

13 CONCLUSIONS

Computer technology, both hard and soft, is driving the human operator to become a supervisor (planner, teacher, monitor, and learner) of automation and an intervener within the automated control loop for abnormal situations. A number of definitions, models, and problems have been discussed. There is little or no present consensus that any one of these models characterizes in a satisfactory way all or even very much of supervisory control with sufficient predictive capability to entrust to the designer of such systems. It seems that for the immediate future we are destined to run breathless behind the lead of technology, trying our best to catch up.

REFERENCES

Abbott, K., Slotte, S., Stimson, D., Amalberti, R. R., Bollin, G., Fabre, F., Hecht, S., Imrich, T., Lalley, R., Lydanne, G., Newman, T., and Thiel, G. (1996), The Interfaces between Flightcrews and Modern Flight Deck Systems, report of the FAA Human Factors Team, Federal Aviation Administration, Washington, DC.
Bainbridge, L. (1983), “Ironies of Automation,” Automatica, Vol. 19, pp. 775–779.
Billings, C. S. (1991), Human-Centered Aircraft Automation: A Concept and Guideline, NASA TM-103885, NASA Ames Research Center, Moffett Field, CA.


Boehm-Davis, D., Curry, R., Wiener, E., and Harrison, R. (1983), “Human Factors of Flight Deck Automation: Report on a NASA-Industry Workshop,” Ergonomics, Vol. 26, pp. 953–961.
Boer, E. R. (2000), “Behavioral Entropy as an Index of Workload,” in Proceedings of the 44th Annual Meeting of the Human Factors and Ergonomics Society, San Diego, CA, July 30–August 4.
Brooks, T. L. (1979), “SUPERMAN: A System for Supervisory Manipulation and the Study of Human-Computer Interactions,” S.M. Thesis, Massachusetts Institute of Technology, Cambridge, MA.
Chu, Y., and Rouse, W. B. (1979), “Adaptive Allocation of Decision Making Responsibility between Human and Computer in Multi-task Situations,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 9, pp. 769–778.
Chu, Y. Y., Steeb, R., and Freedy, A. (1980), Analysis and Modeling of Information Handling Tasks in Supervisory Control of Advanced Aircraft, PATR-1080-80-6, Perceptronics, Woodlands, CA.
Cummings, M. L. (2005), “Human Supervisory Control Challenges in Network Centric Operations,” in Proceedings of the Unmanned Vehicle Systems Canada Conference, Banff, CA.
Curry, R. E., and Ephrath, A. R. (1977), “Monitoring and Control of Unreliable Systems,” in Monitoring Behavior and Supervisory Control, T. B. Sheridan and G. Johannsen, Eds., Plenum, New York, pp. 193–203.
Degani, A. (2004), Taming HAL: Designing Interfaces Beyond 2001, Palgrave, New York.
Dekker, S. (2007), Just Culture: Balancing Safety and Accountability, Ashgate, Surrey, United Kingdom.
Duncan, K. D. (1981), “Training for Fault Diagnosis in Industrial Process Plants,” in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. B. Rouse, Eds., Plenum, New York, pp. 553–524.
Edwards, E., and Lees, F. (1974), The Human Operator in Process Control, Taylor & Francis, London.
Ephrath, A. R., and Young, L. R. (1981), “Monitoring vs. Man-in-the-Loop Detection of Aircraft Control Failures,” in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. B. Rouse, Eds., Plenum, New York, pp. 143–154.
Falzon, P. (1982), “Display Structures: Compatibility with the Operator's Mental Representation and Reasoning Processes,” in Proceedings of the 2nd Annual Conference on Human Decision Making and Manual Control, pp. 297–305.
Ferrell, W. R. (1966, October), “Delayed Force Feedback,” Human Factors, pp. 449–455.
Ferrell, W. R., and Sheridan, T. B. (1967), “Supervisory Control of Remote Manipulation,” IEEE Spectrum, Vol. 4, No. 10, pp. 81–88.
Ferris, T., Sarter, N., and Wickens, C. D. (2010), “Cockpit Automation: Still Struggling to Catch Up,” in Human Factors in Aviation, E. Salas and D. Maurino, Eds., Academic, San Diego, CA, pp. 479–504.
Gai, E. G., and Curry, R. E. (1978), “Preservation Effects in Detection Tasks with Correlated Decision Intervals,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 8, pp. 93–110.
Gaines, B. R. (1976), “On the Complexity of Causal Models,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 6, pp. 56–59.
Gentner, D., and Stevens, A. L., Eds. (1983), Mental Models, Lawrence Erlbaum Associates, Mahwah, NJ.
Govindaraj, T., and Rouse, W. B. (1981), “Modeling the Human Controller in Environments That Include Continuous and Discrete Tasks,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 11, pp. 411–417.
Hess, R. A., and McNally, B. D. (1997), “Automation Effects in a Multi-Loop Manual Control System,” IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-16, No. 1, pp. 111–121.
Johannsen, G. (1981), “Fault Management and Supervisory Control of Decentralized Systems,” in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. B. Rouse, Eds., Plenum, New York.
Kohn, L. T., Corrigan, J. M., and Donaldson, M. S., Eds. (2000), To Err Is Human: Building a Safer Health System, National Academies Press, Washington, DC.
Lee, J. D., McGehee, D., Brown, T. L., and Reyes, M. (2002), “Collision Warning Timing, Driver Distraction, and Driver Response to Imminent Rear End Collision in a High Fidelity Driving Simulator,” Human Factors, Vol. 44, No. 2, pp. 314–334.
Llaneras, R. E. (2000), “NHTSA Driver Distraction Internet Forum,” available: www.nrd.nhtsa.dot.gov/departments/nrd-13/DriverDistraction.html.
Machida, K., Toda, Y., Iwata, T., Kawachi, M., and Nakamura, T. (1988), “Development of a Graphic Simulator Augmented Teleoperator System for Space Applications,” in Proceedings of the 1988 AIAA Conference on Guidance, Navigation, and Control, Part I, pp. 358–364.
Moray, N. (1982), “Subjective Mental Workload,” Human Factors, Vol. 24, pp. 25–40.
Moray, N. (1986), “Monitoring Behavior and Supervisory Control,” in Handbook of Perception and Human Performance, Vol. 2, K. Boff, L. Kaufman, and J. P. Thomas, Eds., Wiley, New York.
Moray, N. (1997), “Models of Models of . . . Mental Models,” in Perspectives on the Human Controller, T. Sheridan and T. van Lunteren, Eds., Lawrence Erlbaum Associates, Mahwah, NJ.
Morris, N. M., and Rouse, W. B. (1985), “The Effects of Type of Knowledge upon Human Problem Solving in a Process Control Task,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 15, pp. 698–707.
Nandi, H., and Ruhe, W. (2002), “On Line Modeling and New Generation of Supervisory Control System for Sintering Furnaces,” Comp US Controls, Indiana, PA.
Norman, D. A. (1981), “Categorization of Action Slips,” Psychological Review, Vol. 88, pp. 1–15.
Norman, D. A. (1986), “Cognitive Engineering,” in User Centered System Design: New Perspectives in Human-Computer Interaction, D. A. Norman and S. Draper, Eds., Lawrence Erlbaum Associates, Hillsdale, NJ.
Noyes, M. V., and Sheridan, T. B. (1984), “A Novel Predictor for Telemanipulation through a Time Delay,” in Proceedings of the Annual Conference on Manual Control, NASA Ames Research Center, Moffett Field, CA, Human Factors and Ergonomics Society, Santa Monica, CA.
Parasuraman, R. (1987), “Human-Computer Monitoring,” Human Factors, Vol. 29, pp. 695–706.
Parasuraman, R., Sheridan, T. B., and Wickens, C. D. (2000), “A Model for Types and Levels of Human Interaction with Automation,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 30, No. 3, pp. 286–297.


Park, J. H. (1991, August), “Supervisory Control of RobotManipulators for Gross Motions,” Ph.D. Dissertation,Massachusetts Institute of Technology, Cambridge, MA.

Parsons, H. M. (1985), “Automation and the Individual: Com-prehensive and Comparative Views,” Human Factors ,Vol. 27, No. 1, pp. 99–111.

Pattipatti, K. R., Kleinman, D. L., and Ephrath, A. R. (1983),“A Dynamic Decision Model of Human Task SelectionPerformance,” IEEE Transactions on Systems, Man andCybernetics , Vol. 13, pp. 145–166.

Rasmussen, J. (1982), “Human Errors: A Taxonomy forDescribing Human Malfunction in Industrial Instal-lations,” Journal of Occupational Accidents , Vol. 4,pp. 311–335.

Rassmussen, J., and Rouse, W. B., Eds. (1981), HumanDetection and Diagnosis of System Failures , Plenum,New York.

Reason, J. T., and Mycielska, K. (1982), Absent Minded?The Psychology of Mental Lapses and Everyday Errors ,Prentice-Hall, Englewood Cliffs, NJ.

Rouse, W. B. (1977), “Human-Computer Interaction in Multi-Task Situations,” IEEE Transactions on System, Man andCybernetics , Vol. 7, No. 5, pp. 384–392.

Rouse, W. B. (1985), “Supervisory Control and Display Sys-tems,” in Human Productivity Enhancement , J. Zeidner,Ed., Praeger, New York.

Rouse, W. B., and Morris, N. M. (1984), On Looking into theBlack Box: Prospects and Limits in the Search for MentalModels , Search Technology, Norcross, GA.

Rouse, W. B., and Rouse, S. H. (1983), “Analysis andClassification of Human Error,” IEEE Transactionson System, Man and Cybernetics , Vol. 13, No. 4,pp. 539–599.

Sarter, N. B., and Amalberti, R., Eds. (2000), CognitiveEngineering in the Aviation Domain , Lawrence ErlbaumAssociates, Mahwah, NJ.

Sarter, N. B., and Woods, D. D. (2000), “Learning fromAutomation: Surprises and ‘Going Sour’ Accidents,”in Cognitive Engineering in the Aviation Domain ,N. B. Sarter and R. Amalberti, Eds., Lawrence ErlbaumAssociates, Mahwah, NJ, p. 327.

Sarter, N. B., Mumaw, R. J., and Wickens, C. D. (2007), “Pilots’Monitoring Strategies and Performance on AutomatedFlight Decks: An Empirical Study Combining Behav-ioral and Eye-Tracking Data,” Human Factors , Vol. 49,pp. 347–357.

Seiji, T., Ohga, Y., and Koyama, M. (2001), “AdvancedSupervisory Control Systems for Nuclear Power Plants,”Hitachi Review , Vol. 50, No. 3.

Sheridan, T. (2010), Controller-Automation Issues in the Next Generation Air Transportation System (NextGen), DOT-VNTSC-FAA-10-11, Volpe Center, Cambridge, MA.

Sheridan, T., and Parasuraman, R. (2006), “Human-Automation Interaction,” Reviews of Human Factors and Ergonomics, Vol. 1, pp. 89–129.

Sheridan, T. B. (1960), “Human Metacontrol,” in Proceedings of the Annual Conference on Manual Control, Wright-Patterson Air Force Base, OH, Institute of Electrical and Electronics Engineers, New York.

Sheridan, T. B. (1970), “Optimum Allocation of Personal Presence,” IEEE Transactions on Human Factors in Electronics, Vol. 10, pp. 242–249.

Sheridan, T. B. (1976), “Toward a General Model of Supervisory Control,” in Monitoring Behavior and Supervisory Control, T. B. Sheridan and G. Johannsen, Eds., Plenum, New York.

Sheridan, T. B. (1980, October), “Computer Control and Human Alienation,” Technology Review, Vol. 83, pp. 60–73.

Sheridan, T. B. (1983), “Measuring, Modeling and Augmenting Reliability of Man-Machine Systems,” Automatica, Vol. 19, pp. 637–645.

Sheridan, T. B. (1984), Interaction of Human Cognitive Models and Computer-Based Models in Supervisory Control, Man-Machine Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA.

Sheridan, T. B. (1992), Telerobotics, Automation, and Human Supervisory Control, MIT Press, Cambridge, MA.

Sheridan, T. B. (1997, October), “Function Allocation: Algorithm, Alchemy or Apostasy?” paper presented at the Conference on Function Allocation, ALLFN 97, Galway, Ireland.

Sheridan, T. B. (2002), Humans and Automation, Wiley, Hoboken, NJ.

Sheridan, T. B., and Ferrell, W. R. (1974), Man-Machine Systems, MIT Press, Cambridge, MA.

Sheridan, T. B., and Hennessy, R. T., Eds. (1984), Research and Modeling of Supervisory Control Behavior, National Research Council, Committee on Human Factors, National Academy Press, Washington, DC.

Sheridan, T. B., and Johannsen, G., Eds. (1976), Monitoring Behavior and Supervisory Control, Plenum, New York.

Sheridan, T. B., and Verplank, W. L. (1978), Human and Computer Control of Undersea Teleoperators, Man-Machine Systems Laboratory, Massachusetts Institute of Technology, Cambridge, MA.

Tulga, M. K., and Sheridan, T. B. (1980), “Dynamic Decisions and Workload in Multi-Task Supervisory Control,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 10, No. 5, pp. 217–231.

Wickens, C. D., and Kessell, C. (1979), “The Effects of Participatory Model and Task Workload on the Detection of Dynamic System Failures,” IEEE Transactions on Systems, Man and Cybernetics, Vol. 9, pp. 24–34.

Wickens, C. D., and Kessell, C. (1981), “Failure Detection in Dynamic Systems,” in Human Detection and Diagnosis of System Failures, J. Rasmussen and W. B. Rouse, Eds., Plenum, New York.

Wickens, C., Mavor, A., and McGee, J. (1997), Flight to the Future, National Academy Press, Washington, DC.

Wiener, E. L. (1988), “Cockpit Automation,” in Human Factors in Aviation, E. L. Wiener and D. Nagel, Eds., Academic Press, San Diego, CA, pp. 433–461.

Wiener, E. L., and Curry, R. E. (1980), “Flight-Deck Automation: Promises and Problems,” Ergonomics, Vol. 23, pp. 995–1011.

Wood, W., and Sheridan, T. B. (1982), “The Use of Machine Aids in Dynamic Multi-Task Environments: A Comparison of an Optimal Model to Human Behavior,” in Proceedings of the IEEE International Conference on Cybernetics and Society, Seattle, WA, pp. 668–672, Institute of Electrical and Electronics Engineers, New York.

Yoerger, D. (1982), “Supervisory Control of Underwater Telemanipulators: Design and Experiment,” Ph.D. Dissertation, Massachusetts Institute of Technology, Cambridge, MA.