
Integrating Cognitive Task Analysis into Instructional Systems Development

Joan M. Ryder and Richard E. Redding

Traditional methods for task analysis and training design, such as those embodied in Instructional Systems Development (ISD), decompose jobs into discrete tasks composed of specific action sequences and identify prerequisite knowledge and skills for each task. Although these methods have been effective for designing training for simple procedural skills, they offer little insight for analysis or training of jobs involving complex cognitive skills, which increasingly require training today. Because of this, cognitive considerations need to be incorporated into ISD, particularly in the task analysis phase. Recently, cognitive methods have begun to be used to conduct task analysis for training program development and human-computer system development. In this article, recent developments in cognitive task analysis are reviewed, and the Integrated Task Analysis Model (ITAM), a framework for integrating cognitive and behavioral task analysis methods within the ISD model, is presented. Discussed in detail are ITAM's three analysis stages--progressive cycles of data collection, analysis, and decision making--in three components of expertise: skills, knowledge, and mental models.

Substantial changes have taken place in the last twenty years in the nature of jobs, due primarily to the proliferation of computer-based technology. The former prevalence of mechanical systems has given way to the dominance of electronic systems and the increasing automation of previously manual functions (Van Cott, 1984). These changes have shifted the demands on human performance from primarily physical to primarily cognitive. Some examples of jobs with strong cognitive components are situation assessment and intelligence analysis, aviation and air traffic control, process control (including nuclear power plant operation), sensor data interpretation, and equipment maintenance and troubleshooting.

Instructional Systems Development (ISD) methodologies, which guide most military and industrial training design today, offer few insights or guidelines for the analysis or training of jobs involving complex cognitive skills. Jobs that are especially problematic are those that involve a high degree of decision making; require large amounts of knowledge to be assimilated during training; demand high performance skills (see Schneider, 1985); or take place in high workload, multiple-task environments. The inadequacies of current training design procedures, coupled with the trend toward use of computer-based part-task trainers, training embedded in operational systems, and intelligent tutoring systems, point to the need for new cognitive-based approaches to task analysis and instructional design.

INSTRUCTIONAL SYSTEMS DEVELOPMENT (ISD)

ISD is a systematic approach for the design, development, and management of training materials and programs. A variety of ISD models exists, to support both instructional development in the military (the most common of which is the Interservice Procedures for Instructional Systems Development [Branson et al., 1975]) and in industry (e.g., Dick & Carey, 1990). These models all follow a generic sequence of steps representing a systems approach to instructional design (Andrews & Goodson, 1980). The ISD approach itself is atheoretic; however, the procedures specified to complete each step in training development are intended to be based on current theory and research in applicable fields.

As ISD models came into wide use in the early 1970s, the procedures developed for task analysis were based on the experimental learning psychology of the time, which focused on observable behaviors. The procedures for implementing ISD steps have not been updated to incorporate findings from subsequent research on cognition and information processing. The traditional methods have proved adequate for designing training for tasks with fixed procedural sequences, but are not easily applied to the analysis of complex cognitive skills (Halff, Hollan, & Hutchins, 1986). In fact, R. Branson, one of the primary contributors to the military ISD model, states that

it addresses training primarily in single and multiple procedural tasks. That model does not take into account . . . transfer tasks. The learning of single and multiple procedural tasks involves practicing the tasks themselves, whereas transfer tasks occur under such a variety of circumstances that only attributes of tasks are learned (principles, rules, concepts) rather than specific tasks. (Branson & Grow, 1987, p. 403)

Complex cognitive tasks usually have significant "transfer task" components in addition to procedural components.

DEVELOPMENTS IN COGNITIVE SCIENCE

Recent advances in cognitive science provide new ways of characterizing learning and skill development that are more appropriate for complex cognitive tasks than the behavioral constructs on which ISD procedures were based. Pertinent research deals with memory structure (Bobrow & Norman, 1975; Rumelhart & Ortony, 1977); attention and automaticity (Fisk, Ackerman, & Schneider, 1985; Lintern & Wickens, 1987); skill acquisition (Anderson, 1982; Rasmussen, 1986); expertise (Chi, Glaser, & Farr, 1988; Ericsson & Smith, 1991); and mental models in problem solving (Gentner & Stevens, 1983; Johnson-Laird, 1983), among others (cf. Gagné & Glaser, 1987). This research provides the theoretical underpinnings for developing methods to analyze and train complex cognitive skills more efficiently.

Methods derived from cognitive science have begun to be used to conduct cognitive task analysis (CTA) for training research programs (Fisk & Eggemeier, 1988; Lesgold et al., 1986); curriculum redesign (Redding, Cannon, & Seamster, 1992; Redding, Ryder, Seamster, Purcell, & Cannon, 1991; Seamster, Redding, Cannon, Ryder, & Purcell, in press); and computer-based training development (Blackman, 1988; Redding & Lierman, 1990). Others have used cognitive methods in classroom settings, inferring students' mental models for a course and their development throughout the course (Naveh-Benjamin, McKeachie, Lin, & Tucker, 1986). Cognitive task analysis has also been used in cognitive engineering of human-machine systems (Rasmussen, 1986; Woods & Roth, 1988); intelligent tutoring system development (Means & Gott, 1988); decision support system design (Zachary, 1988); and knowledge elicitation for expert systems (Cooke & McDonald, 1987; Gammack, 1987).

TASK ANALYSIS REQUIREMENTS

Thus far, CTA has shown promising results. Most researchers, however, recognize that cognitive methods supplement rather than replace traditional methods. Johnson (1988), for example, analyzed the performance of troubleshooters using both cognitive and behavioral methods, demonstrating the utility of using cognitive methods in conjunction with traditional ones. He used traditional methods to identify tasks used during troubleshooting and then collected verbal protocols as technicians went about the job. Research on real-time, high-performance jobs, such as air traffic control, has shown that both behavioral and cognitive analyses are required to understand performance (Schlager, Means, & Roth, 1990). ISD has proved useful for analyzing psychomotor and procedural tasks, and ensures good quality control by requiring objective performance criteria (McCombs, 1986).

What is needed, therefore, is the revision of ISD task analysis to include methods for analyzing complex cognitive skills (Ryder, Redding, & Beckschi, 1987). Task analysis is the most crucial and resource-intensive phase in the development of any training program; nevertheless, some have observed that this phase is often neglected in ISD (McCombs, 1986; Wileman & Gambill, 1983). Task analysis provides the basis for all subsequent design decisions, including instructional sequencing and structure, and media selection.

ISD methods have been evolving continually. Indeed, ISD was developed with the notion that the model would be responsive to new research developments by incorporating new analytic methods (McCombs, 1986). As some have observed, ISD is "long on what to do but short on how to do it" (Montague, Ellis, & Wulfeck, 1983). Hence, incorporating cognitive analysis methods should improve ISD. Many aspects of the ISD process would benefit from an integrated cognitive and behavioral approach. In this article, however, we concentrate on the specification of methods for examining cognitive structures and processes within the task analysis component of ISD. We review recent developments in CTA and present a framework for the integration of both behavioral and cognitive considerations and methods for task analysis.

COGNITIVE TASK ANALYSIS

In traditional ISD task analysis, tasks are evaluated in terms of the behavioral responses that must be made to each stimulus encountered during the process of completing the task. In contrast, the goal of CTA is to delineate the mental processes and skills needed to perform a task at high proficiency levels, and the changes in knowledge structure and processing as the skill develops over time. Cognitive task analysis differs from traditional methods in a number of ways, as outlined in Table 1.

While the traditional approach emphasizes the target performance desired, CTA addresses expertise: the knowledge structure and information processing strategies involved in task performance. Traditional methods (see Meister, 1989) focus on identifying the knowledge required for each individual task element. In contrast, the cognitive approach emphasizes the knowledge base for the whole job--its organization and the interrelations among concepts. This approach provides useful information for structuring training to facilitate initial learning as well as progression to the knowledge organization used by experts or good performers. Similarly, within the cognitive approach, skills are identified for the job as a whole, not just for each separate task. Skills may or may not be temporally distinct, as tasks, by definition, are. Another difference is that CTA includes determination of mental models used in task performance; nothing analogous is contained in traditional methods. (See also the recent critique by Merrill, Li, & Jones, 1990, of First Generation Instructional Design, which is roughly equivalent to ISD.)

TABLE 1. Task Analysis Comparison

Traditional Task Analysis                       Cognitive Task Analysis
Emphasizes behavior                             Emphasizes cognition
Analyzes target performance                     Analyzes expertise
Evaluates knowledge for each task separately    Evaluates knowledge for whole job
Behaviorally based task segmentation            Skill-based task segmentation
Mental models not addressed                     Mental models addressed



Traditional approaches also fail to characterize variability in performance, both within and between individuals (Vicente, 1990). Because a single individual can perform a task in many different ways, using different sequences and methods (Rasmussen, 1986), some commentators argue that "adopting a traditional task analysis methodology based on a single sequence of behaviors . . . will support [only] one way of performing the task" (Vicente, 1990, p. 3). In contrast, CTA methods attempt to identify problem-solving strategies that may be manifest in variable action sequences depending on the environmental dynamics in each task, as well as important individual differences.

Cognitive task analysis draws upon laboratory and field research in cognitive science to obtain methods and techniques for eliciting and analyzing the knowledge and performance requirements for jobs that involve complex cognitive skills. In the following discussion, some of the most promising CTA methods currently in use are described.

Protocol Analysis

Protocol analysis involves having persons think aloud while performing or describing a task, and then using the verbalizations to infer the participants' cognitive processing (see Ericsson & Simon, 1984). The usefulness of verbal reports in eliciting certain types of knowledge, particularly problem-solving methods, has been demonstrated (Kuipers & Kassirer, 1987; Lesgold et al., 1986). Although the validity of verbal reports of thought processes has been questioned (Nisbett & Wilson, 1977), recent research has found that concurrent verbalizations accurately reflect concurrent thought (Rhenius & Deffner, 1990). The accuracy of verbal reports depends on the procedures used to elicit them (e.g., the instructions given, whether reports are retrospective or concurrent). Also, as job skills become more automatic, as they do with experts, the intermediate mental processes become unavailable for verbal report. Thus, while protocol analysis is a valuable analytical tool, it must, in some cases, be used in conjunction with other methods.

Many variations in the types of problems to pose in protocol analysis have been suggested. Hoffman (1987) suggests: familiar or typical problems, to generate general tactics and procedures, basic conceptual categories used, and a general feel for the knowledge and skills involved in the domain; limited-information problems, to provide information about strategies and heuristics in particular subdomains; constrained-processing problems (i.e., those that involve limited time or information), to provide further information about reasoning strategies in particular subdomains; and "tough cases," to get evidence about subtle or refined aspects of reasoning. Several variations to pure think-aloud protocols have proved useful. Specific probe questions can be asked at various points in the problem-solving process (Lesgold et al., 1986; Means & Gott, 1988) in order to test specific hypotheses about knowledge or strategies. Question/answer protocols have also been used successfully instead of think-aloud protocols (Graesser & Murray, 1990). In one promising variation proposed by Means and Gott (1988), problems are developed by an expert and a researcher and then given to another expert to solve. The advantages of this technique are that the experts enjoy the game-like environment and stimulate each other to provide a richer description of the domain than a single expert would provide.

Protocol analysis is used primarily to derive information about reasoning processes, decision-making strategies, and mental models. Possible outputs include, for example, sets of production rules, decision trees, heuristics, algorithms, systematic grammar networks (e.g., Johnson & Johnson, 1987), and GOMS means-ends hierarchies (discussed later). Specific comparisons (e.g., types of hypotheses generated, errors made, procedures used) can be made between protocols obtained from individuals of varying experience levels (see Lesgold et al., 1986), which is useful for determining training progressions.
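To make the production-rule output concrete, the following minimal sketch shows one way heuristics inferred from think-aloud protocols might be encoded as condition-action rules. The troubleshooting domain, rule contents, and state fields are hypothetical illustrations, not results from any study cited above.

```python
# Sketch (hypothetical rules): encoding heuristics inferred from think-aloud
# protocols as condition-action production rules for a troubleshooting task.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ProductionRule:
    name: str
    condition: Callable[[Dict], bool]   # tests the current problem state
    action: str                         # the step the expert would take next

RULES: List[ProductionRule] = [
    ProductionRule(
        "check-power-first",
        lambda s: s.get("symptom") == "no_output" and not s.get("power_verified"),
        "Verify supply voltage before testing the signal path",
    ),
    ProductionRule(
        "isolate-by-halving",
        lambda s: s.get("power_verified") and s.get("fault_located") is None,
        "Split the signal chain at its midpoint and test each half",
    ),
]

def next_action(state: Dict) -> str:
    """Return the action of the first rule whose condition matches the state."""
    for rule in RULES:
        if rule.condition(state):
            return rule.action
    return "No matching heuristic; gather more data"

if __name__ == "__main__":
    print(next_action({"symptom": "no_output"}))
```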


Cognitive Interviewing Techniques

Cognitive task analysis does not necessarily require the use of complicated, time-consuming techniques; much useful information can be obtained through interviews. For instance, some researchers (e.g., Leinhardt & Greeno, 1987) have used interviewing to study expertise in teaching. Cognitive interviewing techniques involve a structured or semi-structured process in which the selection of questions is based on a cognitive theory of expertise and a desired framework for representing or using the results, such as a specific modeling technique or instructional design framework. The characteristic that makes an interview "cognitive" is its focus on the knowledge and cognitive processes underlying performance: concepts and their interrelationships, decision-making strategies, and so on. For example, requesting drawings of physical or functional relationships among the components of a system can yield information about the mental models governing the person's reasoning (Tenney & Kurland, 1988).

Many interviewing techniques include retrospective protocols, that is, discussions of past task performance. The critical decision method (Klein, Calderwood, & MacGregor, 1989) is a technique that uses recollection of actual nonroutine incidents requiring expert judgment or decision making as its starting point. Once the incident has been described, probe questions are used to elicit information about goals that guided performance, options evaluated, perceptual cues used, and relevant situational factors. Zachary (1986) uses a detailed interviewing protocol for analyzing decision-making tasks that allows identification of the decision maker's problem representation, goal structure, and information-processing strategies.

In a recent effort to apply methods from expert system development and ethnography, as well as cognitive science, Ford and Wood (in press) developed a four-phase approach for structuring interviews with domain experts. Their method uses the sequencing of types of questions to obtain expert knowledge and problem-solving strategies that are often hard for a subject matter expert (SME) to describe directly. In the first two phases, concepts and entities in the domain are identified and the relationships among concepts are elaborated. In the last two phases, procedural knowledge is derived by interpreting case protocols using the conceptual knowledge obtained previously. These are but a few examples of interviewing techniques that provide information for CTA.

Psychological Scaling Techniques

These methods involve statistical techniques which derive structure from, or impose organization on, subjects' judgments about concepts in a domain. The judgments are obtained from tasks such as sorting, recalling, ranking, rating, or comparing domain concepts. Proximity estimates for all pairs of concepts are then derived by measuring interresponse times, output order, confusion probabilities, similarity ratings, etc., and are used as input to the chosen analysis technique. The assumption is that concepts that are more closely related psychologically will have closer proximity estimates. Some of the data analysis techniques include: multidimensional scaling (e.g., Kruskal & Wish, 1978); hierarchical cluster analysis (e.g., Johnson, 1967); network analysis (e.g., Cooke & McDonald, 1986; Schvaneveldt, 1990); repertory grid (e.g., Shaw & Gaines, 1987); and drawing ordered trees (e.g., Naveh-Benjamin et al., 1986).

Discussions of the strengths and weaknesses of various techniques, and guidance in selecting the appropriate one for the application, can be found in a number of critical reviews (e.g., Cooke & McDonald, 1987; Gammack, 1987; Olson & Biolsi, 1991; Schvaneveldt, 1990). The techniques vary in the type of representation they produce, the statistical assumptions that must be met, the sample size required, and how the results can be interpreted. Possible outputs include tree-type diagrams showing hierarchical organization, produced from hierarchical cluster analysis or ordered trees; n-dimensional spatial representations showing object clusters or salient distinctions, produced from multidimensional scaling; and networks showing the interrelationships among concepts, produced from network analysis.
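As a minimal sketch of the general proximity-to-structure approach just described, the following example derives a hierarchical organization of domain concepts from hypothetical SME card sorts. The concept names, sort data, and clustering settings are invented for illustration and are not taken from any of the studies cited above.

```python
# Sketch (hypothetical data): card-sort co-occurrences -> proximity estimates
# -> hierarchical cluster analysis of domain concepts.

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

concepts = ["altitude", "heading", "speed", "conflict", "handoff", "clearance"]

# Each sort groups the concepts an SME judges as related.
sorts = [
    [{"altitude", "heading", "speed"}, {"conflict", "handoff", "clearance"}],
    [{"altitude", "speed"}, {"heading", "clearance"}, {"conflict", "handoff"}],
]

n = len(concepts)
co_occurrence = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for i, a in enumerate(concepts):
            for j, b in enumerate(concepts):
                if a in group and b in group:
                    co_occurrence[i, j] += 1

# Convert similarity (co-occurrence proportion) to distance, then cluster.
similarity = co_occurrence / len(sorts)
distance = 1.0 - similarity
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance), method="average")

# The linkage matrix Z can be drawn as a tree diagram showing the inferred
# hierarchical organization of the concepts.
print(Z)
```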


These techniques are particularly valuable for determining the organization of conceptual knowledge. Also, since much expert skill is automatic and not available for conscious introspection, experts cannot always describe the knowledge or cognitive processing underlying skilled performance. By inferring structure from experts' judgments, this implicit knowledge is made explicit.

Skill Automaticity and Consistent Component Identification

Skill acquisition is postulated to comprise three stages (Anderson, 1982; Rasmussen, 1986), beginning with the learning of verbal knowledge and elemental rules. As the skill develops, larger task components are composed and gradually automated. At high expertise levels, automaticity develops; that is, conscious attention is no longer required to perform routine tasks. The benefits of achieving skill automaticity can be substantial and include ease and reliability of performance and the freeing of mental resources to allow for skill refinement and the simultaneous performance of other tasks. One key difference between novices and experts is that experts can perform many aspects of a job automatically (Shiffrin & Schneider, 1977). The potential for training of high performance skills toward automaticity has been demonstrated (Myers & Fisk, 1987), and the importance of automaticity training is now well recognized in the instructional design community (e.g., Gagné, 1982).

Automatic and controlled processing theory (see Schneider, 1985) provides the basis for designing training to develop skill automaticity, and CTA methods to support automaticity training have recently been developed (Fisk & Eggemeier, 1988; Myers & Fisk, 1987). The methodology involves identifying the consistent components of tasks. Automaticity requires a consistency in responses made to a stimulus (local consistency), or consistency in the rules, context, and relationships among stimuli (global consistency) (Fisk & Gallini, 1989; Fisk, Oransky, & Skedsvold, 1988). Thus, either global or local consistencies (i.e., "consistent task components") must be identified and distinguished from task components that require conscious attention even for experts. This analysis does not necessarily correspond to the task elements and activities identified in ISD task analysis. A variety of techniques is available for determining consistent task components. These techniques include measuring reaction times (Fisk & Gallini, 1989; Fisk et al., 1988); measuring the degree of interference between tasks performed simultaneously (Fisk & Gallini, 1989); and interviewing subject-matter experts (SMEs) about task components, decision points, time pressures, etc. (Fisk & Eggemeier, 1988).
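A rough illustration of the reaction-time screening mentioned above follows. The data, component names, and threshold are made up; the sketch simply shows the general logic of flagging components whose performance speeds up sharply with practice (candidates for automaticity training) versus components that remain slow and attention-demanding.

```python
# Sketch (made-up data): screening task components for consistency by
# comparing reaction-time improvement across practice blocks.

import numpy as np

def practice_speedup(rts_by_block):
    """Fit log(RT) = a + b*log(block); a more negative b means faster speedup."""
    blocks = np.arange(1, len(rts_by_block) + 1)
    b, a = np.polyfit(np.log(blocks), np.log(rts_by_block), 1)
    return b

# Mean RT (ms) per practice block for two hypothetical task components.
components = {
    "symbol_classification": [950, 720, 610, 540, 500, 480],        # steep decline
    "novel_route_planning":  [2100, 2050, 2040, 2010, 2000, 1990],  # little change
}

for name, rts in components.items():
    slope = practice_speedup(rts)
    # Illustrative cutoff only; a real analysis would use the cited procedures.
    label = "consistent component (train toward automaticity)" if slope < -0.15 \
            else "controlled component (remains attention-demanding)"
    print(f"{name}: practice exponent {slope:.2f} -> {label}")
```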

Cognitive and Performance Modeling

Another recent development in CTA involves constructing cognitive and/or performance models for complex domains. Cognitive modeling attempts to capture the cognitive processes that accurately describe or predict human performance. The models, which are either computational (computer simulation) or conceptual (process flow), usually include characteristics of the physical environment, the operator's knowledge, and the operator's methods and strategies for task performance. Performance models usually have no components representing the operator's knowledge; they have only sequences of task elements, although they may include both behavioral and cognitive task elements. The models may be exercised by varying inputs, operator characteristics, or task characteristics in order to evaluate their effect on simulated performance. This process allows for systematic investigation of variability in behavior, performance, and cognitive processes as conditions change.

To the extent that a model accurately describes performance, the knowledge representations, reasoning strategies, or task decompositions incorporated can be assumed to model human cognitive structures and processes. Thus, such models provide a specification of the knowledge and skill requirements for training. Jones and Mitchell (1987) provide a good discussion of characteristics of different modeling techniques.


One methodology that has proved useful is GOMS, developed by Card, Moran, and Newell (1983). GOMS models represent tasks as means-end hierarchies consisting of:

1. Goals: desired end states

2. Operators: elementary perceptual, cognitive, and motor actions

3. Methods: sequences of actions that constitute procedures for accomplishing goals

4. Selection rules: criteria for selecting among competing methods

High-level task goals are decomposed into lower-level subgoals, and then into the "methods" and "operators." Such models can be used to represent procedural skills for training design (Elkerton & Palmiter, 1991).
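The following minimal sketch represents the four GOMS constructs as a simple data structure. The example goal (deleting a file in a text-based interface), its methods, and the selection rule are invented illustrations, not an analysis from the literature cited here.

```python
# Sketch (hypothetical example): the GOMS constructs--goals, operators,
# methods, selection rules--expressed as a small data structure.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Method:
    name: str
    operators: List[str]          # elementary perceptual/cognitive/motor actions

@dataclass
class Goal:
    name: str
    subgoals: List["Goal"]
    methods: List[Method]
    selection_rule: Callable[[Dict], Method]  # picks a method given the context

shortcut = Method("keyboard-shortcut", ["locate file", "press delete key", "confirm"])
menu = Method("menu-navigation", ["locate file", "open context menu",
                                  "move cursor to 'Delete'", "click", "confirm"])

def select_delete_method(context: Dict) -> Method:
    # Selection rule: use the keyboard shortcut if the user knows it.
    return shortcut if context.get("knows_shortcut") else menu

delete_file = Goal(
    name="delete-file",
    subgoals=[],                  # a fuller model would decompose further
    methods=[shortcut, menu],
    selection_rule=select_delete_method,
)

if __name__ == "__main__":
    chosen = delete_file.selection_rule({"knows_shortcut": True})
    print(chosen.name, "->", chosen.operators)
```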

The COGNET (COGnitive NETwork of tasks) modeling framework (Zachar36 Ryder, Ross, & Weiland, 1992) builds on the GOMS methodology and extends it to allow modeling of jobs involving real-time multi-tasking, that is, jobs in which the operator has many com- peting tasks to perform and must shift atten- tion among them based on moment- to- moment situation changes. In addition to a set of GOMS-like task models, COGNET includes a global knowledge representation (equivalent to the operator's mental model of the domain and task situation) that is used in performance of all tasks, and a mechanism for changes in the problem context to be added to the prob- lem representation. Because it models the role of knowledge, experience, and situational changes in task performance, COGNET pro- vides significant additions to a purely goal- based modeling methodology such as GOMS. Each of the COGNET components, the prob- lem representation and the task models (which include subgoals and conditions for initiation) can be used to explicitly train the cognitive aspects of a domain. (See Redding et al., 1991, for an example of the use of COGNET to derive training requirements for air traffic control.)
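To illustrate the kind of real-time attention switching described above, the sketch below shows competing tasks whose initiation conditions and urgency are evaluated against a shared problem representation. This is not the COGNET implementation; the task names, trigger conditions, and priority values are invented for illustration.

```python
# Sketch (hypothetical tasks and priorities): attention switching among
# competing task models driven by a shared, changing problem representation.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TaskModel:
    name: str
    trigger: Callable[[Dict], bool]     # condition for the task to compete for attention
    priority: Callable[[Dict], float]   # situation-dependent urgency

tasks: List[TaskModel] = [
    TaskModel("resolve-conflict",
              trigger=lambda s: s["separation_nm"] < 5.0,
              priority=lambda s: 10.0 - s["separation_nm"]),
    TaskModel("perform-handoff",
              trigger=lambda s: s["pending_handoffs"] > 0,
              priority=lambda s: 1.0 * s["pending_handoffs"]),
    TaskModel("monitor-sector",
              trigger=lambda s: True,          # always available as a default
              priority=lambda s: 0.1),
]

def attend(situation: Dict) -> str:
    """Shift attention to the triggered task with the highest current priority."""
    active = [t for t in tasks if t.trigger(situation)]
    return max(active, key=lambda t: t.priority(situation)).name

# Situation changes cause attention to shift from moment to moment.
problem_representation: Dict = {}
for update in [{"separation_nm": 12.0, "pending_handoffs": 0},
               {"separation_nm": 12.0, "pending_handoffs": 2},
               {"separation_nm": 4.0,  "pending_handoffs": 2}]:
    problem_representation.update(update)
    print(attend(problem_representation))
```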

Other cognitive modeling methodologies used in CTA include Rasmussen's decision ladder and Woods' goals-means framework. Rasmussen's (1986) decision ladder is a method for conceptually modeling human behavior in controlling a complex dynamic system. The decision process is analyzed by representing a small number of typical decision sequences. Each representation shows information-processing activities, resultant changes in knowledge states, and ways in which intermediate stages in the sequence can be bypassed as the skill develops. Goals-means analysis (Woods & Roth, 1988) involves constructing a representation of the domain task(s) in terms of the goals that need to be accomplished, the relationships between goals, the means to achieve goals, and the information that is either needed or available to carry out the task. This goal-directed representation models the cognitive demands of the problem-solving environment.

Computational models have been constructed based on COGNET (Zachary, Ross, & Weiland, 1991) and goals-means analysis (Woods, Pople, & Roth, 1990). The specificity required in these cases is much greater than for purely conceptual models, requiring more effort for development but providing greater opportunity for model exercise and validation.

Although models are difficult and time-consuming to construct, performance modeling may prove useful for analyzing perceptual-motor and complex procedural tasks, which are not easily analyzed by other methods, and cognitive modeling is useful for deriving the knowledge and skill requirements for complex cognitive tasks. Furthermore, the derived models (if computational) may be useful as components of computer-based training systems.

AN INTEGRATED TASK ANALYSIS MODEL

The preceding sections of this article have summarized evidence that behavioral methods are not sufficient for analysis of complex cognitive tasks. Similarly, the CTA methods described cannot provide all the information necessary for training development, nor are they applicable to all types of jobs. Thus, cognitive methods should supplement behavioral methods rather than replace them.

In a number of recent cognitive training research efforts, the cognitive analysis followed a traditional behavioral task analysis that was found to be inadequate for all aspects of the job (Knerr et al., 1985; Redding et al., 1991). Also, some cognitive task analysis methods (e.g., Lesgold et al., 1986) propose that a behavioral (or rational) task analysis precede the cognitive task analysis, allowing the cognitive analysis to be targeted to the "tough cases" that behavioral analysis cannot handle. Rather than conduct two sequential analyses, it seems preferable to develop an integrated approach. The benefits of an integrated approach are that it would shorten the total amount of time required prior to beginning training design, and it would conserve analytical effort by allowing the appropriate analyses to be targeted to each aspect of the job. Furthermore, cognitive methods should be integrated with behavioral methods within the context of the systems approach to training development that is embodied in ISD, rather than conducted independently.

Additionally, behavioral and cognitive analyses may yield mutually useful data. For instance, a study of air traffic control team communications collected behavioral data on controller speech and correlated the communication data with the cognitive and/or behavioral tasks being performed at the time, providing useful information about the cognitive aspects of controller team communications (Seamster, Cannon, Pierce, & Redding, 1992). Moreover, the data provided further validation for the task decomposition of air traffic control that was derived from the cognitive analysis (Seamster et al., in press).

We thus present a framework for integrating cognitive concepts and methods with traditional methods currently used in ISD procedures. The Integrated Task Analysis Model (ITAM) deals specifically with the task analysis component of ISD; however, task analysis is defined more broadly than in ISD,* incorporating analysis of learning and skill development that is considered part of "developing objectives" in some variants of ISD (e.g., Branson et al., 1975). ITAM addresses traditional ISD task analysis concerns such as identification of the observable components of task performance, but also incorporates cognitive concepts such as knowledge organization, mental models, and skill development, as well as CTA methods.

*This broader definition of task analysis is also subscribed to in a recent review of task analysis procedures (Jonassen, Hannum, & Tessmer, 1989).

Framework of the Model

Research in cognitive analysis has shown that determining knowledge structure and decision-making procedures is not a one-pass process (Breuker & Wielinga, 1987; Gammack, 1987); rather, it involves progressive refinement (Redding, 1992). The need for multiple analysis stages is analogous to the iterative nature of any system design process, in which design begins at the top level and becomes progressively more specific, and in which understanding of one area leads to ideas for, and constraints on, other areas. We have successfully used this progressive refinement approach for determining the skill, knowledge, and mental model underlying expertise in air traffic control (Redding et al., 1991; Seamster et al., in press).

The organizing principle for ITAM is that there are three analysis stages--progressive cycles of data collection, analysis, and decision making--conducted for three components of expertise: knowledge, skills, and mental models. The initial analysis stage, orientation, is devoted largely to developing an overall understanding of the job and the components of expertise that comprise the job, and to determining the methods for analysis of each component of expertise in subsequent stages. The intermediate stage, basic analysis, is devoted to analysis of competent performance. The final stage, skill acquisition and refinement analysis, is devoted to analysis of the progression of skill acquisition from novice to expert.*

*Some researchers (e.g., Bonar et al., 1986) have proposed two types of subject groups for use in CTA, depending upon project constraints and the analytic goals: comparisons between good and poor performers, and comparisons among expert, intermediate, and novice performers. The ITAM framework deals primarily with the latter, because a review of the literature on novice-expert differences and theoretical and computational models of skill development indicates that three stages are an appropriate number to characterize skill development (Ryder, Zachary, Zaklad, & Purcell, in press). Furthermore, we believe it to be of greater value in determining learning progressions. However, ITAM can readily accommodate analysis of good versus poor performers.


The three-stage progressive analysis of ITAM allows analysis to be carried only to the depth necessary to support instructional design. It also allows selection of analytic techniques that are appropriate to the components of expertise of the job being analyzed. The application of CTA techniques, because of their resource-intensive nature, must be targeted to those job tasks that cannot be analyzed with behavioral methods. Job tasks that are particularly suitable for cognitive analysis include, among others:

• tasks that involve a high degree of problem solving and decision making;

• tasks that place high workload or attention-switching requirements on the individual;

• tasks that involve high performance skills (i.e., require massive amounts of practice to obtain proficiency, with many trainees never acquiring proficient performance; see Schneider, 1985);

• tasks that require large amounts of information to be assimilated during training (often referred to as "knowledge-rich" tasks);

• tasks that experts have considerable difficulty verbalizing or demonstrating through overt actions; or

• tasks that lead to considerable variability among individuals due to the number of cognitive performance strategies available.

Thus, in order to conserve analytic effort and to fine-tune the analysis to address likely distinctions between experts and less experienced personnel, as well as important cognitive aspects of job performance, ITAM is an iterative process.

Components of Expertise

The three components of expertise to be analyzed are: skills, knowledge, and mental models. Skills and knowledge are typically analyzed in most ISD methods; mental models have been added as a separate category for reasons which are discussed later.

The cognitive literature makes a distinction between declarative and procedural knowledge (Anderson, 1982). Declarative knowledge is knowledge about the job domain, while procedural knowledge is knowing how to do the job tasks. A further distinction is that declarative knowledge is encoded and retrieved directly, while procedural knowledge is executable and, in fact, must be executed to produce the outcome. In ITAM, the "knowledge" category corresponds to declarative knowledge and the "skill" category corresponds to procedural knowledge.

Skills

The skills component of expertise includes all types of procedural knowledge as defined above. Analysis of skills involves determining and analyzing the skills required for the job to be trained and determining the associated learning and performance strategies. Traditional task analysis segments a job into behaviorally distinct tasks and their component activities, and then determines the skills needed for each task activity. In ITAM, component skills are analyzed for the job as a whole; they may or may not correspond with distinct tasks or task activities. For example, the ability to recognize and classify symbols might be necessary for many different tasks in a particular job; ITAM considers symbol recognition as a perceptual skill and analyzes it as one skill component. To illustrate this point, consider a military tactician who views an evolving situation on a display containing a variety of symbols, each of which represents a type of aircraft. The tactician's tasks include, among other things, determining hostile actions, planning responses to different types of hostile actions, and maintaining a complete picture of the situation. Each of these tasks requires the individual to be able to recognize and classify aircraft symbols correctly.

Component skills that are primarily behavioral ("procedural" and "gross motor" skills in ITAM's taxonomy) are analyzed by decomposing the tasks involved into a sequence of task steps, as is done in traditional ISD procedures. However, skills that are primarily cognitive are analyzed using CTA methods. This change in emphasis on skill components allows concentration on mental processes, behavioral activities, or both--whatever is important for a particular job.

The ITAM framework uses a skill taxonomy (shown in Table 2) derived from current research and theory in human information processing. The ITAM skill taxonomy is designed for the classification of job skills into categories that are relevant to the analysis of cognitive as well as behavioral tasks. Different skill types require different conditions for promoting acquisition (Gagné, 1985), different methods for testing (Kyllonen & Shute, 1989), different techniques for automaticity training (see Fisk & Eggemeier, 1988), and different analysis techniques. Some analytical techniques, however, particularly interviewing and modeling, can yield results that are applicable to multiple skill components. The COGNET modeling technique, for example, includes derivation of a goal hierarchy for the job (decision-making skill), both behavioral and cognitive operations comprising each task (decision-making, procedural, gross motor skills, and in some cases perceptual-motor or interactive skills), and a global knowledge representation of the domain and task situation (with indications of how it is updated during task execution).

In the seven-category ITAM taxonomy, all skills have perceptual (input), cognitive (central processing), and motor (output) components. The skill taxonomy is based on the degree of importance of each component in combination with other information-processing characteristics, including demands on working memory, knowledge requirements (long-term memory), internal code (verbal or spatial), stimulus complexity and predictability, and overall mental workload. Descriptions of each skill category follow.

Perceptual skills deal with interpretation of sensory input (most commonly visual).

Decision-making skills are those involving central processing and deal primarily with verbal data and/or unpredictable stimuli, including problem solving, planning, and decision making. (The term "decision making" rather than "cognitive" is used to refer to this skill category so as not to imply that other skills have no cognitive components.)

Gross motor skills involve physical movement which requires little decision making, usually executed in response to a relatively static stimulus situation.

Perceptual-motor skills are placed in a separate category from gross motor skills because these skills have integrated perceptual and motor components.

Procedural skills form another category because they involve rote motor output in response to predictable initiating cues and have relatively low cognitive demands.

Interactive skills form a category that is suggested by Romiszowski (1981) as missing from most taxonomies. It involves skills that would otherwise be omitted in a purely cognitive or behavioral approach, such as the supervisory, communication, and persuasion skills that are important components of many jobs.

TABLE 2. ITAM Skill Taxonomy

Skill Category                        Definition
Perceptual                            Identification and classification of sensory information
Decision Making                       Decision making and problem solving
Gross Motor                           Overt movements with standard components in which performance is guided primarily by kinesthetic cues
Perceptual-Motor                      Continuous tracking or control in which the movements required depend on dynamic perceptual input
Procedural                            Constrained sequence of actions in predictable situations
Interactive                           Interpersonal skills including communication, persuasion, supervision
Skill Integration and Time-Sharing    Integration of various skills into a single task performance and attention-switching strategies in complex multi-task environments



Skill integration and time-sharing is a category suggested by work in the area of automaticity and multiple resource theory (Lintern & Wickens, 1987; Schneider, 1985), based on transfer studies from single to dual task conditions. Schneider (1985) pointed out the need to train toward integrating and coordinating skills in addition to training skills independently. This is particularly important under high workload conditions, when there is a need to coordinate tasks and skills.

The substitution of skill analysis for task decomposition does not eliminate part-task training, nor does it disregard the use of task activity lists when constructing job performance measures. Rather, training techniques and performance requirements should be constructed around the cognitive and/or behavioral operations and skills which contribute to the job. This approach has been used successfully in a CTA of air traffic control (see Redding et al., 1991; Seamster et al., in press). Using the COGNET modeling procedure, the job was decomposed into tasks involving groups of related decisions or activities that exist independently across a wide spectrum of situations or scenarios. Some of the tasks involved primarily cognitive skills; others involved both cognitive and behavioral skills; still others involved mainly behavioral skills.

Knowledge

In ITAM, the "knowledge" component is equivalent to declarative knowledge, and equates to Gagn6's (1985) "verbal informa- tion," or to what is sometimes referred to as "content." The knowledge component in- cludes domain concepts and their interrela- tionships as well as rules and procedures for job accomplishment in verbal form. It does not indude the ability to apply the rules or execute the procedures: those skills fall under the ap- propriate skill area. The knowledge compo- nent also does not deal with how knowledge

is organized into a deductive framework; that is the mental model component. Knowledge about learning strategies or reasoning pro- cesses is also not included within this category, because such knowledge will vary depending on the type of skill involved (see discussion above).

As with the skill analysis, ITAM analyzes the knowledge requirements for the whole job rather than for each task step. This approach allows analysis of the interrelationships among concepts and of changes in organization of the knowledge base as the trainee progresses from novice to expert. Derivation of knowledge organization and novice-to-expert progression is valuable for determining instructional sequencing. An expert's knowledge organization usually differs from that of a novice, in that it is more hierarchical, more abstract, and more elaborated but less complex (Bransford, Sherwood, Vye, & Rieser, 1986). Behavioral approaches design instruction to match the task performance sequence. However, with some cognitive tasks, particularly "knowledge-rich" tasks, it is more appropriate for instructional design to match the structure of the knowledge base, with sequencing based upon knowledge progression.

Mental Models

Mental models are defined as functional abstractions about a job or job task which provide a deductive framework for problem solving. A mental model can be described as a domain-specific problem representation that contains and integrates:

• conceptual knowledge about components of a system or situation;

• procedural knowledge about how to use the system or act in the situation;

• decision-making skills for reasoning about the system or situation; and

• strategic knowledge about when and why different procedures and decision-making skills should be used and how task components interact or are related.

Mental models are important in maintaining awareness about an evolving job situation (Sarter & Woods, 1991) and in being able to draw inferences in the task domain; consequently, they are important for most complex cognitive tasks.

Since mental models often contain declarative, procedural, and strategic knowledge, the mental models component of expertise may overlap significantly with the knowledge and skill components. Considerable confusion exists in the literature as to the definition of a mental model (see Wilson & Rutherford, 1989, for a review). One area of confusion has to do with whether conceptual knowledge structures (sometimes called "schemata") are mental models. Although this issue is not readily resolvable, one useful distinction may be that schemata provide the building blocks for functional mental models (Johnson-Laird, 1983; Kyllonen & Shute, 1989). According to Wilson and Rutherford (1989), "it is the dynamic computational ability of a mental model beyond that . . . background knowledge that provides the notion with its theoretical utility" (p. 625). Mental models are treated separately in ITAM because of research showing their unique importance. For example, the majority of errors in problem solving occur prior to actually attempting the solution and can be traced to faulty mental models (Rumelhart & Norman, 1981). Expert mental models are typically more abstract and easier to apply than novice mental models (Bransford et al., 1986).

What should be determined during task analysis is not the exact form of any one individual's mental model, but characteristic mental models that can be used as a framework for teaching the domain. Whenever a model is presented as part of instruction, it should clearly and succinctly represent the task characteristics and constraints. The models will often represent various levels of abstraction (Rasmussen, 1986), making explicit important principles and conceptual relations that are otherwise difficult to understand. The format of the model will vary depending on the domain, but will make use of graphic representations (maps, Venn diagrams, flow charts, etc.) whenever possible to aid comprehension and conceptualization. There is a growing body of research on the relative effectiveness of different types of mental models in training (see Hsu & Chen, 1990, for an example).

Example of Components of Expertise

One of the principles differentiating ITAM from traditional ISD procedures is the substitution of skill analysis for the detailed task/subtask/task step analysis. In the first stage of ITAM, the types of skills, knowledge, and mental models used in the job are determined, in addition to a list of job tasks. Then, the subsequent analysis procedures are geared toward the specific components of expertise involved. Those tasks that involve primarily gross motor or procedural skills are analyzed using behavioral techniques, including determination of subtasks and task steps. However, those tasks that primarily involve decision making are analyzed using cognitive techniques. Thus, behavioral and cognitive techniques are used within the same analysis but are targeted to the appropriate aspects of the job.

Suppose a job involves monitoring and troubleshooting equipment in a process control environment. The analysis would most likely reveal that four components of expertise are involved:

Procedural skills: step-by-step procedures for equipment operation and repair;

Decision-making skills: heuristics and strategies for troubleshooting;

Knowledge: facts and principles about process control and the equipment involved; and

Mental model: a model of equipment operation that links physical controls and displays of the equipment to underlying principles, including causal relationships, thus supporting decision making.

Each of these areas would then be analyzed using techniques targeted to them, and training design would provide for the separate and integrated learning and practice of the skills.

Analysis Stages

Figure 1 provides an overview of ITAM, indicating the information that is derived for each component of expertise at each stage of analysis. Each component of expertise is analyzed to a deeper level at successive stages. The information derived at one stage is used to determine the specific data gathering and analysis that should be done at the subsequent stage.

FIGURE 1. Overview of the Integrated Task Analysis Model (ITAM)

Stage 1: Orientation (overall characterization of the job and expertise components; selection of skills/knowledge/mental models for analysis; selection of analytic technique(s) for each)
  Skills (S): major duties and tasks; tasks requiring training; skills involved in each task
  Knowledge (K): domain concepts; domain rules and procedures
  Mental Models (MM): applicability of the mental model component; likely model format and organizing principles

Stage 2: Basic Analysis (characterization of competent performance)
  S: analysis of each skill area
  K: expert declarative knowledge structure
  MM: expert model components and constraints

Stage 3: Skill Acquisition and Refinement Analysis (characterization of novice-to-expert progression)
  S: refinement of skill analysis; novice-to-expert skill progression
  K: refinements to expert knowledge organization; novice and intermediate knowledge organization
  MM: refined expert model; novice-to-expert model progression


In the following sections we describe the general purpose of each stage of the ITAM model and then describe in detail the analytical procedures used and resulting products for each component of expertise within each stage (i.e., for each of the sequential blocks shown in Figure 1).

Stage One: Orientation

The goal of the first stage of analysis is to identify and gain an overall familiarity with the job tasks to be analyzed and trained and the major components of expertise involved. The environment and constraints within which job tasks are performed should also be defined at this stage (Vicente, 1990). In addition, the analyst should acquire the vocabulary and the background needed to communicate with SMEs and to conduct more detailed analyses in subsequent stages. The method for performing Stage One analysis should not vary much from job to job because it involves initial data gathering and is not based on any preceding analysis. It usually includes reviewing existing course materials, technical manuals, operation manuals, occupational surveys, previous research, and, most importantly, talking with a variety of SMEs. Pilot data for some Stage Two analyses may also be gathered.

As a preliminary step in Stage One, it is necessary to determine which tasks compose the job and which tasks require training. Toward this goal the job as a whole must be described and the major duties and tasks determined. Each task should be evaluated to determine its frequency of performance, its importance to the overall job, and its learning difficulty. As in current ISD procedures, job tasks or functions selected for detailed analysis should be those that are difficult, critical, and/or frequently performed. This analysis corresponds to what is commonly called job analysis/description, except that no description of task activities is performed at this point. Rather than expending analytical effort describing the behavioral steps of all tasks, subsequent effort is concentrated on an analysis of skills, knowledge, and mental models. As part of the Stage Two skill analysis, behavioral steps are defined for gross motor and procedural skills--the two skill categories that are primarily behavioral. However, cognitive techniques would be used for tasks that involve decision making.

Because analysis methods used in later stages will vary greatly according to the types of skills required for the job, the amount and types of knowledge, and the type of mental model utilized, an important part of Stage One involves selecting the most appropriate analytic methods. Due to the resource-intensive nature of CTA, it is generally feasible to use cognitive methods to analyze only selected job components or functions. Thus, Stage One also includes selection of candidate tasks appropriate for cognitive analysis. Tasks selected for cognitive analysis should be:

• tasks that represent major training problems

• tasks that tend to create job bottlenecks

• tasks on which operators make frequent errors.

The selection of cognitive versus behavioral analysis techniques may also be dictated by practical considerations such as the level of training of the analyst(s), availability of computer resources, and time and budgetary constraints (see Redding, 1990b). However, use of multiple techniques is strongly advised in order to obtain convergent validation for the findings. Each analytic method has unique advantages, disadvantages, and appropriate uses. Use of only one method may yield limited and potentially misleading results.

Skills

Skill analysis involves identification of the major skills involved in the job. The job tasks are used to identify the required skills from the skill taxonomy discussed above (see Table 2). Some skills may contribute to more than one task, some may only be a part of the task, and others may have one-to-one correspondence with a task.


Knowledge

Stage One involves identification of domain concepts, rules, and procedures. This serves three purposes: first, it provides the analyst with some background on the domain, including a vocabulary for discussions with SMEs; second, it provides the building blocks for developing the knowledge organization during Stage Two. For example, preparations for a psychological scaling procedure involving card-sorting would begin in Stage One by reviewing training manuals, interviewing, etc., to determine the relevant domain concepts which should be included as stimuli. Third, it can provide useful information for later determination of the organizing principles of the mental model.

Mental Models

Stage One involves determining whether a mental model is an important component of expertise for the job in question, and, if so, the likely format of the model and its main organizing principles. Once the analyst has an understanding of both the job as a whole and the major skills and domain knowledge involved, it should be possible to determine the model format: physical, functional, conceptual, etc. For example, a troubleshooting job would most likely require a mental model of equipment functioning, in which case the model must represent the functional knowledge of the equipment at various levels and how those functions are carried out by the equipment components. A navigation job would most likely require several mental models: a visual-spatial model of the relevant cartography and a mental model of how the navigation equipment functions.
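For the troubleshooting example, the functional knowledge could be recorded at several levels of abstraction. The sketch below is one possible, purely illustrative encoding; the equipment, functions, and component names are hypothetical and are not drawn from the article.

```python
# Illustrative sketch only: one way an analyst might record a functional
# mental model of equipment at several levels of abstraction. The equipment,
# functions, and component names are hypothetical.
functional_model = {
    "function": "provide regulated DC power",
    "carried_out_by": ["power supply unit"],
    "subfunctions": [
        {"function": "convert AC to DC",
         "carried_out_by": ["rectifier"],
         "subfunctions": []},
        {"function": "hold output at the set voltage",
         "carried_out_by": ["voltage regulator", "feedback loop"],
         "subfunctions": []},
    ],
}

def walk(node, level=0):
    """Yield each function with its abstraction level and implementing components."""
    yield level, node["function"], node["carried_out_by"]
    for child in node["subfunctions"]:
        yield from walk(child, level + 1)

for level, function, components in walk(functional_model):
    print(f"level {level}: {function} <- {', '.join(components)}")
```

The nesting makes explicit both the levels of function and the mapping from each function to the components that carry it out, which is the information such a model must represent.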

Stage Two: Basic Analysis

Stage Two involves the first pass at characterizing competent performance of the job: identifying the skills, knowledge, and mental models that contribute to job performance. This analysis is targeted at those areas determined during Stage One to be components of the job. Each component of expertise--skills, knowledge, and mental models--is analyzed only to the extent that it is important to the job in question; not all components of expertise will be important for every job.

Skills

Stage Two involves analysis of each skill area identified during Stage One across the job tasks selected for analysis. This is a significant aspect of the ITAM analysis, because this detailed analysis of the skills composing the job tasks replaces the task-step breakdown of traditional ISD task analysis.

Different types of analysis techniques are used for each skill area (see Table 3). For instance, perceptual skills involving pattern recognition might be examined by psychological scaling analysis of the level of confusion between different patterns to determine which features are most difficult to identify. Decision-making tasks might be analyzed using protocol analysis of representative or difficult problems to determine the heuristics used. A perceptual-motor skill might be decomposed into consistent skill components that need to be trained to automaticity. The analysis of gross motor skills might involve using observational techniques, as in traditional task analysis. Various procedures could be used in an analysis of skill integration and time-sharing to identify consistent task components and those task components that experts can perform automatically.
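As one illustration of the pattern-confusion idea (our sketch, not a method prescribed by ITAM), hypothetical identification trials can be tallied into confusion counts so the analyst can see which pattern pairs are mistaken for one another most often; the pattern names and trial data below are invented.

```python
# Illustrative sketch only: tallying hypothetical pattern-identification
# trials into confusion counts so the analyst can see which pattern pairs
# are mistaken for one another most often.
from collections import Counter

# (pattern presented, pattern reported) -- invented trial data
trials = [
    ("contact-A", "contact-A"), ("contact-A", "contact-B"),
    ("contact-B", "contact-A"), ("contact-B", "contact-B"),
    ("contact-C", "contact-C"), ("contact-A", "contact-B"),
]

confusions = Counter(tuple(sorted((shown, reported)))
                     for shown, reported in trials if shown != reported)

# Pairs confused most often are candidates for focused perceptual training.
for (p1, p2), count in confusions.most_common():
    print(f"{p1} <-> {p2}: confused {count} time(s)")
```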

Knowledge

Stage Two analysis is concerned with understanding how the domain concepts are related and structured. Of the key analytical methods shown in Table 3, psychological scaling techniques are particularly valuable. An important part of this stage involves visually representing the conceptual structure, primarily for training communication and validation purposes. The primary types of structures serving this purpose are tree- or network-type diagrams or multidimensional spatial representations.


TABLE 3 [ ] Techniques for Analysis of Components of Expertise

Component of Expertise                 Key Analysis Techniques

SKILLS
  Perceptual                           Psychological Scaling; Observation; Interviews
  Decision Making                      Protocol Analysis; Cognitive Interviewing; Cognitive Modeling
  Gross Motor                          Observation; Performance Modeling
  Perceptual-Motor                     Observation; Performance Modeling
  Procedural                           Observation; Interviews
  Interactive                          Observation; Interviews
  Skill Integration and Time-Sharing   Cognitive/Performance Modeling; Automaticity Analysis; Protocol Analysis; Interviews

KNOWLEDGE                              Psychological Scaling; Cognitive Interviewing

MENTAL MODELS                          Psychological Scaling; Protocol Analysis; Cognitive Interviewing; Cognitive Modeling

Subordinate levels (i.e., lower-level characteristics) of the expert knowledge organization should be defined to the extent possible. Here, the primary level of organization is considered as being representative of what Rosch, Mervis, Gray, Johnson, and Boyes-Braem (1976) call "basic-level" concepts. The human information-processing system organizes concepts hierarchically in terms of subordinate, basic-level, and superordinate concepts (e.g., "collie," then "dog," then "animal"). Basic-level concepts ("dog" in this example), being neither too specific nor too general, provide the most relevant information in the most concise manner (Rosch et al., 1976). Thus, the basic level of knowledge organization is generally the most useful level for training sequencing and organization.

A variety of methods are available for identifying basic-level concepts and organizers. Research shows that individuals naturally tend to name objects and concepts at the basic level of generality (Rosch et al., 1976), which facilitates structuring the knowledge base.
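As an illustration of how the tree-type representations mentioned above might be derived in practice, the sketch below aggregates hypothetical card-sort data into a similarity matrix and applies hierarchical clustering, in the spirit of Johnson (1967); the concepts, the data, and the use of SciPy are assumptions of ours, and multidimensional scaling (Kruskal & Wish, 1978) could be substituted for the clustering step.

```python
# Illustrative sketch only: aggregating hypothetical card-sort data into a
# similarity matrix and deriving a tree-type (hierarchically clustered)
# representation of the concept structure. SciPy is assumed as tooling.
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import squareform

concepts = ["climb", "descend", "altitude", "heading", "handoff"]

# Each SME's sort is a list of groups of concepts (invented data).
sorts = [
    [["climb", "descend", "altitude"], ["heading"], ["handoff"]],
    [["climb", "descend"], ["altitude", "heading"], ["handoff"]],
]

n = len(concepts)
co_occurrence = np.zeros((n, n))
for sort in sorts:
    for group in sort:
        for a in group:
            for b in group:
                co_occurrence[concepts.index(a), concepts.index(b)] += 1

# Concepts that are sorted together often are treated as close together.
distance = 1.0 - co_occurrence / len(sorts)
np.fill_diagonal(distance, 0.0)

tree = linkage(squareform(distance), method="average")
leaf_order = dendrogram(tree, labels=concepts, no_plot=True)["ivl"]
print(leaf_order)   # leaves of the tree, ordered by the clustering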

Mental Models

During Stage Two, the primary components and constraints of the expert model are defined and the most useful level(s) of abstraction for training are determined (see Rasmussen, 1986). The main organizing principles of the expert model should be fully defined. The key analytical methods for deriving mental model structure and content are shown in Table 3. Mental model analysis should include decision-making skill and knowledge organization, since mental models encompass aspects of both these components of expertise.


Stage Three: Skill Acquisition and Refinement Analysis

Stage Three is concerned with determining the way skills, knowledge, and mental models differ among levels of expertise. This involves understanding how less experienced personnel (novice and intermediate levels) perform job tasks; how expertise develops; and the specific heuristics, strategies, and skill refinements that differentiate true experts from competent performers. The same methods that were used in Stage Two are used in Stage Three. By obtaining this information subsequent to the basic analysis, data collection can be finely targeted to those aspects that appear to differentiate experts from less experienced or less capable performers. Furthermore, it allows refining the basic understanding of expertise that was obtained during Stage Two.

Skills

Stage Three analysis builds on Stage Two analysis to refine the understanding of each skill and to determine how skill performance differs along the expertise continuum from novice to expert. As expertise develops, specific skill components become more automatic, task procedures become "chunked" into larger components, and performance typically becomes faster and more finely tuned (Anderson, 1982; Chi et al., 1988). Analysis of skill progression is critical for providing the conditions for skill development during training.
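One way to quantify the speedup component of this progression, offered here only as an illustration rather than as part of ITAM, is to fit practice data to the power function T = a * N^(-b) commonly reported in the skill-acquisition literature; the trial times below are hypothetical.

```python
# Illustrative sketch only: fitting hypothetical per-trial completion times
# to the power function T = a * N**(-b) commonly reported in the
# skill-acquisition literature, as one crude index of speedup with practice.
import numpy as np

trial = np.arange(1, 11)                                            # practice trial number
seconds = np.array([42, 30, 25, 22, 20, 19, 18, 17.5, 17, 16.5])    # invented times

# Linear fit in log-log space: log T = log a - b * log N
slope, intercept = np.polyfit(np.log(trial), np.log(seconds), 1)
a, b = np.exp(intercept), -slope
print(f"estimated a = {a:.1f} s, speedup exponent b = {b:.2f}")
```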

Knowledge

During the Stage Three analysis, the knowledge organization developed during Stage Two is refined and evaluated. In addition, the knowledge organization for novices and, where feasible, those at intermediate skill levels is determined. Differences between novices and experts provide the basis for determining instructional sequencing and indicate potential misconceptions that commonly occur in novices so that they can be remediated. Novices' errors might include overextending or underextending concepts or conceptual relations, overlooking concepts, using unnecessary concepts, making inappropriate interrelations, or overlooking interrelations among concepts.
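A simple, purely illustrative way to surface such differences is to compare the sets of interrelations in the expert and novice representations; the concept links below are hypothetical.

```python
# Illustrative sketch only: comparing hypothetical expert and novice concept
# interrelations (edges in a network representation) to flag overlooked or
# possibly inappropriate relations of the kinds listed above.
expert_links = {("climb", "altitude"), ("descend", "altitude"), ("handoff", "sector boundary")}
novice_links = {("climb", "altitude"), ("climb", "heading"), ("handoff", "sector boundary")}

overlooked = expert_links - novice_links      # relations the novice is missing
questionable = novice_links - expert_links    # relations absent from the expert model

print("overlooked relations:", overlooked)
print("possibly inappropriate relations:", questionable)
```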

Mental Models

Stage Three analysis should concentrate on refining and validating the expert mental model and determining progressions from novice to expert. Validation of the expert model can be accomplished in several ways, including obtaining convergent validity through use of multiple methods. For example, a mental model derived from a cognitive modeling procedure during Stage Two could be cross-validated through use of psychological scaling during Stage Three. Additionally, alternative models could be correlated with objective performance measures to determine which model is most predictive of the best performance. Likely errors or bugs within novice mental models should be identified in order to provide feedback to students by explicitly comparing the student's model to the expert model.
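The correlational check mentioned above could be carried out along the following lines; the candidate model names, predicted difficulty ranks, and performance data are hypothetical, and Pearson correlation is simply one convenient index.

```python
# Illustrative sketch only: comparing alternative candidate mental models by
# correlating each model's predicted problem difficulty with observed
# performance. Model names, predictions, and timing data are invented.
import numpy as np

observed_solution_times = np.array([35, 60, 20, 80, 45])   # seconds per problem

model_predictions = {
    "functional model":  np.array([2, 4, 1, 5, 3]),        # predicted difficulty ranks
    "topographic model": np.array([3, 2, 4, 5, 1]),
}

for name, predicted in model_predictions.items():
    r = np.corrcoef(predicted, observed_solution_times)[0, 1]
    print(f"{name}: r = {r:.2f}")
# The model whose predictions track observed performance most closely would
# be retained as the most predictive candidate.
```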

Mental Models/Knowledge

During Stage Three, some effort should be directed toward identifying antecedents leading to expert task performance (diSessa, 1982). This entails specification of both developmental and objective prerequisites. It implies that, during the course of skill acquisition, certain aspects of knowledge or mental models developed by or represented to learners are approximations that will likely contain inaccuracies. Such approximations are useful for initially learning the skill, but will need to be refined as the individual gains expertise (Redding, 1990a).

This approach represents a developmental framework for the analysis of mental models and knowledge organization. Several investigators (Lucas, 1987; Whiteside & Wixon, 1984) have proposed using Piaget's structuralist approach to psychological development as a guide for understanding the development of mental models. Piagetian and neo-Piagetian theories view knowledge development as proceeding through a fixed sequence of relatively invariant stages; thus, it is necessary to master the preceding stage or skill level before progressing to the next stage. Further evidence for stages in the development of problem-solving expertise comes from the work of artificial intelligence practitioners in modeling cognitive processes. Amarel's (1969, 1982) work demonstrates how problem-solving performance evolves through a series of stages in which increasingly more efficient algorithms are used. He has further shown that the increasing efficiency in problem-solving performance is driven by development of the problem representation, or mental model.

The research of Elio and Scharf (1990) leads to the same conclusions. Each stage of knowledge has unique qualitative characteristics which govern the organization of the knowledge base. For instance, the mental models of novices tend to be organized primarily in terms of what Rasmussen (1986) calls rule-based characteristics, whereas mental models of experts tend to be organized according to theoretical or conceptual principles (see Lucas, 1987). Chi, Feltovich, and Glaser (1981) found evidence that the basic level for novices is equivalent to the subordinate level of experts within a domain, suggesting that the qualitative aspects of knowledge organization change as expertise develops.

If knowledge and mental models undergo qualitative transformations in sequential stages, the implications for training could be significant. It would mean that expertise cannot be taught by using the expert mental model and knowledge organization as the starting point; rather, instruction should follow a series of progressive refinements. Successful computer-based training systems and instructional strategies have been developed based on this view of sequential stages in skill acquisition (e.g., Fischer, 1988; White & Frederiksen, 1987). This research, which goes by the rubric "increasingly complex microworlds," indicates the practical application of this aspect of cognitive theory to training development. While we recognize that this issue will continue to be debated in the literature, mental model and knowledge development is an important area for investigation in training program development.

Modified ITAM Approaches

A fourth stage was not included in ITAM because of practical constraints upon the number of iterations possible in any field analysis. Where feasible, however, analytic effort subsequent to Stage Three should be devoted to further elaboration of a model of skill progression (i.e., progressions from novice to expert).

Alternatively, due to the relatively costly and time-consuming nature of CTA, in some cases it may be desirable to limit the scope of the analysis of skills, knowledge, and mental models. Such an analysis would include Stage One and Stage Two, but the methods used for data collection and analysis during Stage Two, such as interviewing, observation, and limited protocol analysis, would be less complex. This should result in a first-cut model of expertise for the task and a delineation of performance objectives and requisite abilities, skills, and knowledge. A limited cognitive analysis has been shown to produce valuable data and training recommendations, even after Stage One (see Redding & Lierman, 1990).

Another possible modification of ITAM is to do the first two stages, but to collect data about novices in parallel with derivation of the expert model. This would allow some analysis of skill progression without undergoing a third analysis stage. The drawback to this approach is that there is no way to target data collection to novices, meaning that time might be wasted collecting irrelevant data and that relevant data might be overlooked.

The tradeoff in such decisions is between the reduced number of data collection trips in the one case and the more closely targeted data collection in the other. The decision depends on the total project schedule and accessibility of subjects.

CONCLUSIONS

Many of the job skills required today are primarily cognitive in nature. Consequently, an approach to task analysis and training development that includes cognitive concepts and methods would be applicable to many problem areas in which a traditional approach would fall short. An integrated approach could support development of training programs that build a flexible knowledge base, automated skill components for high performance tasks, and efficient mental models for task understanding and decision making. Trainees would be provided with better tools for mastering the complex tasks that are increasingly required of workers today.

Advances in CTA technology represent substantial progress toward these goals. However, a number of practical problems associated with some of these methods remain to be solved (see Redding, 1989). Because of the complexity of obtaining and analyzing data, CTA is often not considered cost-effective for applied purposes. Highly specialized operations are required to design measurement instruments, to obtain data from personnel of varying experience or at different performance levels, and to analyze and interpret that data. Interpretation of results from psychological scaling procedures requires trained analysts. Other approaches to analysis, such as performance modeling, are labor intensive at their present stage of development. Substantial future research will be required to advance these procedures to levels of practical efficiency. Future development also needs to address incorporating methods for evaluating cognitive skills within the testing and evaluation phases of ISD. This is important for determining the acquisition of complex cognitive skills.

The Integrated Task Analysis Model proposed here provides a framework for integrating cognitive and behavioral task analysis methods within the ISD process. In the proposed model, the overall job is evaluated to determine the expertise components involved and the analysis methods to be used in the subsequent stages. Component skills that are primarily behavioral are analyzed according to traditional ISD procedures, while skills that are primarily cognitive are analyzed according to CTA methods. In addition, both cognitive and behavioral techniques can be included in a single data gathering cycle (Stage Two of ITAM), with some analytical techniques applicable to both behavioral and cognitive components.

A number of CTA techniques that may be beneficial for analyzing complex cognitive skills have been summarized. Further research is necessary to verify the utility of specific techniques for analyzing specific skills, to expand the ITAM framework into a completely integrated methodology, and to validate the framework as a whole. We hope the approach is of value in achieving the transition of cognitive theory, research, and methods from the research and development arena into the mainstream of training development technology.

This research was supported in part by the U.S. Air Force Armstrong Laboratory, Human Resources Directorate, Operational Training Division. The views contained herein are solely those of the authors and do not necessarily reflect the views of the U.S. Air Force or the Department of Defense.

Both authors contributed equally to this article. Joan M. Ryder is with CHI Systems, Inc. Richard E. Redding is with Human Technology, Inc. Drs. Bernell Edwards and Thomas Killion were the technical monitors and provided valuable guidance. The authors thank John Cannon, Sharon Fisher, Wayne Zachary, and the participants of an AL/HR Workshop on Cognitive Skills Acquisition for helpful comments.

Correspondence should be directed to Joan Ryder at CHI Systems, Gwynedd Plaza III, Spring House, PA 19477.

REFERENCES

Amarel, S. (1969). On the representation of problems and goal-directed procedures for computers. Communications of the American Society for Cybernetics, 1.

Amarel, S. (1982). Expert behavior and problem representations (Tech. Report CBM-TR-126). New Brunswick, NJ: Rutgers University, Laboratory for Computer Science Research.

Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 396-406.

Andrews, D. H., & Goodson, L. A. (1980). A comparative analysis of models of instructional design. Journal of Instructional Development, 3(4), 2-16.

Blackman, H. S. (1988). The use of think-aloud verbal protocols for the identification of mental models. In 32nd Annual Proceedings of the Human Factors Society (pp. 872-874). Santa Monica, CA: Human Factors Society.

Bobrow, D. G., & Norman, D. A. (1975). Some principles of memory schemata. In D. G. Bobrow & A. M. Collins (Eds.), Representation and understanding: Studies in cognitive science. New York: Academic Press.


Bonar, J., Collins, J., Curran, K., Eastman, R., Gitomer, D., Glaser, R., Greenberg, L., Lajoie, S., Logan, D., Magone, M., Shalin, V., Weiner, A., Wolf, R., & Yengo, L. (1986). Guide to cognitive task analysis. Pittsburgh: University of Pittsburgh, Learning Research and Development Center.

Bransford, J., Sherwood, R., Vye, N., & Rieser, J. (1986). Teaching thinking and problem solving. American Psychologist, 41, 1078-1089.

Branson, R. K., & Grow, G. (1987). Instructional systems development. In R. Gagné (Ed.), Instructional technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum.

Branson, R. K., Rayner, G. T., Cox, J. L., Furman, J. R., King, F. J., & Hannum, W. H. (1975). Interservice procedures for instructional systems development (NAVEDTRA 106A, NTIS No. ADA-019 486 through ADA-019 490). Fort Monroe, VA: U.S. Army Training and Doctrine Command.

Breuker, J., & Wielinga, B. (1987). Uses of models in the interpretation of verbal data. In A. L. Kidd (Ed.), Knowledge acquisition for expert systems. New York: Plenum Press.

Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum.

Chi, M. T. H., Feltovich, P. J., & Glaser, R. (1981). Categorization and representation of physics problems by experts and novices. Cognitive Science, 5, 121-152.

Chi, M. T. H., Glaser, R., & Farr, M. J. (Eds.). (1988). The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum.

Cooke, N. M., & McDonald, J. E. (1986). A formal methodology for acquiring and representing expert knowledge. Proceedings of the IEEE, 74, 1422-1430.

Cooke, N. M., & McDonald, J. E. (1987). The application of psychological scaling techniques to knowledge elicitation for knowledge-based systems. International Journal of Man-Machine Studies, 26, 533-550.

Dick, W., & Carey, L. (1990). The systematic design of instruction (3rd ed.). Glenview, IL: Scott, Foresman.

diSessa, A. A. (1982). Unlearning Aristotelian physics: A study of knowledge-based learning. Cognitive Science, 6, 37-75.

Elio, R., & Scharf, P. B. (1990). Modeling novice-to-expert shifts in problem-solving strategy and knowledge organization. Cognitive Science, 14, 579-639.

Elkerton, J., & Palmiter, S. (1991). Designing help systems using the GOMS model: An information retrieval evaluation. Human Factors, 33, 185-204.

Ericsson, K. A., & Simon, H. A. (1984). Protocol analysis: Verbal reports as data. Cambridge, MA: MIT Press.

Ericsson, K. A., & Smith, J. (1991). Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.

Fischer, G. (1988). Enhancing incremental learning processes with knowledge-based systems. In H. Mandl & A. Lesgold (Eds.), Learning issues for intelligent tutoring systems. New York: Springer-Verlag.

Fisk, A. D., Ackerman, P., & Schneider, W. (1985). Automatic and controlled information processing in human factors. In P. S. Hancock (Ed.), Human factors psychology. New York: North Holland.

Fisk, A. D., & Eggemeier, R. T. (1988). Application of automatic/controlled processing theory to training of tactical command and control skills: I. Background and task analytic methodology. In 33rd Annual Proceedings of the Human Factors Society (pp. 281-285). Santa Monica, CA: Human Factors Society.

Fisk, A. D., & Gallini, J. K. (1989). Training consistent components of tasks: Developing an instructional system based on automatic/controlled processing principles. Human Factors, 31, 453-463.

Fisk, A. D., Oransky, N. A., & Skedsvold, P. R. (1988). Examination of the role of "higher-order" consistency in skill development. Human Factors, 30, 567-581.

Ford, J. M., & Wood, L. E. (in press). Structuring and documenting interactions with subject-matter experts. Performance Improvement Quarterly.

Gagné, R. M. (1982). Developments in learning psychology: Implications for instructional design and effects of computer technology on instructional design and development. Educational Technology, 22(6), 11-15.

Gagné, R. M. (1985). The conditions of learning and theory of instruction. New York: Holt, Rinehart & Winston.

Gagné, R. M., & Glaser, R. (1987). Foundations in learning research. In R. M. Gagné (Ed.), Instructional technology: Foundations. Hillsdale, NJ: Lawrence Erlbaum.

Gammack, J. G. (1987). Different techniques and different aspects on declarative knowledge. In A. L. Kidd (Ed.), Knowledge acquisition for expert systems. New York: Plenum Press.

Gentner, D., & Stevens, A. L. (Eds.). (1983). Mental models. Hillsdale, NJ: Lawrence Erlbaum.

Graesser, A. C., & Murray, K. (1990). A question answering methodology for exploring a user's acquisition and knowledge of a computer environment. In S. Robertson, W. Zachary, & J. Black (Eds.), Cognition, computing, and cooperation (pp. 237-267). Norwood, NJ: Ablex.

Halff, H. M., Hollan, J. D., & Hutchins, E. L. (1986). Cognitive science and military training. American Psychologist, 41, 1131-1139.

Hoffman, R. R. (1987). The problem of extracting the knowledge of experts from the perspective of experimental psychology. AI Magazine, 26, 53-67.

Hsu, S. H., & Chen, J. C. (1990). Effects of mental models of a computer-aided instruction system on the acquisition of cognitive skills. In 34th Annual Proceedings of the Human Factors Society (pp. 244-248). Santa Monica, CA: Human Factors Society.

Johnson, L., & Johnson, N. E. (1987). Knowledge elicitation involving teachback interviewing. In A. L. Kidd (Ed.), Knowledge acquisition for expert systems: A practical handbook. New York: Plenum Press.


Johnson, S. C. (1967). Hierarchical clustering schemes. Psychometrika, 32, 241-254.

Johnson, S. D. (1988). Cognitive analysis of expert and novice troubleshooting performance. Performance Improvement Quarterly, 1(3), 38-54.

Johnson-Laird, P. N. (1983). Mental models. Cambridge, U.K.: Cambridge University Press.

Jonassen, D. J., Hannum, W. H., & Tessmer, M. (1989). Handbook of task analysis procedures. New York: Praeger.

Jones, P. M., & Mitchell, C. M. (1987). Operator modeling: Conceptual and methodological distinctions. In 31st Annual Proceedings of the Human Factors Society (pp. 31-35). Santa Monica, CA: Human Factors Society.

Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics, 19(3), 462-472.

Knerr, C. M., Morrison, J. E., Mumaw, R. J., Stein, D. J., Sticha, P. J., Hoffman, R. G., Buede, D. M., & Holding, D. H. (1985). Simulation-based research in part-task training (FR-PRD-85-11). Alexandria, VA: HumRRO.

Kruskal, J. B., & Wish, M. (1978). Multidimensional scaling. Beverly Hills, CA: Sage University Press.

Kuipers, B., & Kassirer, J. P. (1987). Knowledge acquisition by analysis of verbatim protocols. In A. L. Kidd (Ed.), Knowledge acquisition for expert systems. New York: Plenum Press.

Kyllonen, P. C., & Shute, V. J. (1989). Taxonomy of learning skills. In P. L. Ackerman, R. J. Sternberg, & R. Glaser (Eds.), Learning and individual differences. San Francisco: Freeman.

Lesgold, A., Lajoie, S., Eastman, R., Eggan, G., Gitomer, D., Glaser, R., Greenberg, L., Logan, D., Magone, M., Weiner, A., Wolf, R., & Yengo, L. (1986). Cognitive task analysis to enhance technical skills training and assessment. Pittsburgh: University of Pittsburgh, Learning Research and Development Center.

Leinhardt, G., & Greeno, J. G. (1987). The cognitive skill of teaching. Journal of Educational Psychology, 78, 75-95.

Lintern, G., & Wickens, C. D. (1987). Attention theory as a basis for training research (Report No. ARL-87-2/NASA-87-3, prepared for NASA Ames Research Center). Urbana-Champaign, IL: University of Illinois, Institute of Aviation.

Lucas, D. A. (1987). Mental models and new technology. In J. Rasmussen, K. Duncan, & J. Leplat (Eds.), New technology and human error. New York: Wiley.

McCombs, B. L. (1986). The instructional systems development (ISD) model: A review of those factors critical to its successful implementation. Educational Communications and Technology Journal, 34(2), 67-81.

Means, B., & Gott, S. P. (1988). Cognitive task analysis as a basis for tutor development: Articulating abstract knowledge representations. In J. Psotka, L. D. Massey, & S. A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned. Hillsdale, NJ: Lawrence Erlbaum.

Meister, D. (1989). Behavioral analysis and measurement methods. New York: Wiley.

Merrill, M. D., Li, Z., & Jones, M. K. (1990). Limitations of first generation instructional design. Educational Technology, 38(1), 7-11.

Montague, W. E., Ellis, J. A., & Wulfeck, W. H. (1983). The instructional quality inventory (IQI): A formative evaluation tool for instructional systems development (NPRDC-TR-83-21). San Diego, CA: Naval Personnel Research and Development Center.

Myers, G. L., & Fisk, A. D. (1987). Application of automatic and controlled processing theory to industrial training: The value of consistent component training. Human Factors, 29, 255-268.

Naveh-Benjamin, J., McKeachie, W. J., Lin, Y., & Tucker, D. G. (1986). Inferring students' cognitive structures and their development using the "ordered tree technique." Journal of Educational Psychology, 78, 130-140.

Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231-259.

Olson, J. R., & Biolsi, K. J. (1991). Techniques for representing expert knowledge. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits. New York: Cambridge University Press.

Rasmussen, J. (1986). Information processing and human-machine interaction: An approach to cognitive engineering. New York: North-Holland.

Redding, R. E. (1989). Perspectives on cognitive task analysis: The state of the state of the art. In 33rd Annual Proceedings of the Human Factors Society (pp. 1348-1352). Santa Monica, CA: Human Factors Society.

Redding, R. E. (1990a). Metacognitive instruction: Trainers teaching thinking skills. Performance Improvement Quarterly, 3(1), 27-41.

Redding, R. E. (1990b). Taking cognitive task analysis into the field: Bridging the gap from research to application. In 34th Annual Proceedings of the Human Factors Society (pp. 1304-1308). Santa Monica, CA: Human Factors Society.

Redding, R. E. (1992). A standard procedure for conducting cognitive task analysis (ERIC Document Reproduction Service No. ED 340-847).

Redding, R. E., Cannon, J. R., & Seamster, T. L. (1992). Expertise in air traffic control (ATC): What is it, and how can we train for it? In 36th Annual Proceedings of the Human Factors Society (pp. 1326-1330). Santa Monica, CA: Human Factors Society.

Redding, R. E., & Lierman, B. C. (1990). Development of a part-task CBI trainer based upon a cognitive task analysis. In 34th Annual Proceedings of the Human Factors Society (pp. 1337-1341). Santa Monica, CA: Human Factors Society.

Redding, R. E., Ryder, J. M., Seamster, T. L., Purcell, J. A., & Cannon, J. R. (1991). Cognitive task analysis of en route air traffic control: Model extension and validation (Report to the Federal Aviation Administration). McLean, VA: Human Technology, Inc. (ERIC Document Reproduction Service No. ED 340-848, 1992).


Rhenius, D., & Deffner, G. (1990). Evaluation of concurrent thinking aloud using eye-tracking data. In 34th Annual Proceedings of the Human Factors Society (pp. 1265-1269). Santa Monica, CA: Human Factors Society.

Romiszowski, A. J. (1981). Designing instructional systems. London: Kogan Page.

Rosch, E., Mervis, C., Gray, W., Johnson, D., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382-439.

Rumelhart, D. E., & Norman, D. A. (1981). Analogical processes in learning. In J. R. Anderson (Ed.), Cognitive skills and their acquisition. Hillsdale, NJ: Lawrence Erlbaum.

Rumelhart, D. E., & Ortony, A. (1977). The representation of knowledge in memory. In R. C. Anderson (Ed.), Schooling and the acquisition of knowledge. Hillsdale, NJ: Lawrence Erlbaum.

Ryder, J. M., Redding, R. E., & Beckschi, P. E. (1987). Training development for complex cognitive tasks. In 31st Annual Proceedings of the Human Factors Society (pp. 1261-1265). Santa Monica, CA: Human Factors Society.

Ryder, J. M., Zachary, W. W., Zaklad, A. L., & Purcell, J. A. (in press). A cognitive model for integrated decision aiding/training embedded systems (IDATES) (Technical Report 92-XXX). Orlando, FL: Naval Training Systems Center. (Also available as CHI Systems Technical Report 910625.9006)

Sarter, N. B., & Woods, D. D. (1991). Situation awareness: A critical but ill-defined phenomenon. International Journal of Aviation Psychology, 1(1), 45-57.

Schlager, M. S., Means, B., & Roth, C. (1990). Cognitive task analysis for the real(-time) world. In 34th Annual Proceedings of the Human Factors Society (pp. 1309-1313). Santa Monica, CA: Human Factors Society.

Schneider, W. (1985). Training high-performance skills: Fallacies and guidelines. Human Factors, 27, 285-300.

Schvaneveldt, R. W. (Ed.). (1990). Pathfinder associative networks: Studies in knowledge organization. Norwood, NJ: Ablex.

Seamster, T. L., Cannon, J. R., Pierce, R. M., & Redding, R. E. (1992). Analysis of en route air traffic controller team communication and controller resource management (CRM). In 36th Annual Proceedings of the Human Factors Society (pp. 66-70). Santa Monica, CA: Human Factors Society.

Seamster, T. L., Redding, R. E., Cannon, J. R., Ryder, J. M., & Purcell, J. A. (in press). Cognitive task analysis of expertise in air traffic control. International Journal of Aviation Psychology.

Shaw, M. L. G., & Gaines, B. R. (1987). An interactive knowledge-elicitation technique using personal construct technology. In A. L. Kidd (Ed.), Knowledge acquisition for expert systems. New York: Plenum Press.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84, 127-190.

Tenney, Y. J., & Kurland, L. J. (1988). The development of troubleshooting expertise in radar mechanics. In J. Psotka, L. D. Massey, & S. A. Mutter (Eds.), Intelligent tutoring systems: Lessons learned. Hillsdale, NJ: Lawrence Erlbaum.

Van Cott, H. P. (1984). From control systems to knowledge systems. Human Factors, 26, 115-122.

Vicente, K. J. (1990). A few implications of an ecological approach to human factors. Human Factors Society Bulletin, 33(11), 1-4.

White, B., & Frederiksen, J. (1987). Qualitative models and intelligent learning environments. In R. W. Lawler & M. Yazdani (Eds.), Artificial intelligence and education: Vol. 1. Learning environments and tutoring systems. Norwood, NJ: Ablex.

Whiteside, J., & Wixon, D. (1984). Developmental theory as a framework for studying human-computer interaction. In H. Hartson (Ed.), Advances in human-computer interaction. Norwood, NJ: Ablex.

Wileman, R. E., & Gambill, T. G. (1983). The neglected phase of instructional design. Educational Technology, 23(11), 25-32.

Wilson, J. R., & Rutherford, A. (1989). Mental models: Theory and application in human factors. Human Factors, 31, 617-634.

Woods, D. D., & Roth, E. M. (1988). Cognitive systems engineering. In M. Helander (Ed.), Handbook of human-computer interface design. Amsterdam: North-Holland.

Woods, D. D., Pople, H., & Roth, E. M. (1990). The cognitive environment simulation as a tool for modeling human performance and reliability (NUREG/CR-5213). Washington, DC: U.S. Nuclear Regulatory Commission.

Zachary, W. W. (1986). A cognitively based functional taxonomy of decision support techniques. Human-Computer Interaction, 2, 25-63.

Zachary, W. W. (1988). Decision support systems: Designing to extend the cognitive limits. In M. Helander (Ed.), Handbook of human-computer interface design. Amsterdam: North-Holland.

Zachary, W. W., Ross, L., & Weiland, M. Z. (1991). COGNET and BATON: An integrated approach for embedded user models in complex systems. In Proceedings of the 1991 International Conference on Systems, Man, and Cybernetics (pp. 689-694). New York: IEEE.

Zachary, W. W., Ryder, J. M., Ross, L., & Weiland, M. Z. (1992). Intelligent computer-human interaction in real-time, multi-tasking process control and monitoring systems. In M. Helander & M. Nagamachi (Eds.), Human factors in design for manufacturability. New York: Taylor & Francis.