Journal of Safety Research, Vol. 16, pp. 105-126, 1985. 0022-4375/85 $3.00 + .00
© 1985 National Safety Council and Pergamon Press Ltd. Printed in the USA. (Reproduced with permission)

Rating Accident Models and Investigation Methodologies

Ludwig Benner, Jr.

This is a report of research to identify, rate, and rank accident models and accident investigation methodologies. Models and methodologies used in 17 selected government agencies were examined. The examination disclosed 14 accident models and 17 different accident investigation methodologies in those agencies. To determine their relative merit, evaluation criteria and a rating scheme were developed from user data, statutes, applications, and work products, and each model and methodology was rated. The ratings indicated significant differences in their relative merit. The highest rated model and methodology were tested to determine if the estimated ratings were supported by observable differences in actual performance and to compare investigative results against previously reported cases. Differences found prompted further examination of the benefits and problems that would result from implementation of the preferred model and methodology. Additional exploration of comparative performance measurement techniques disclosed further differences affecting the selection decisions. The models, methodologies, criteria, ratings, rankings, test results, and initial measurement findings are summarized in this report. Issues ranging from oversimplification to ethical questions were discovered during this work. The findings strongly suggest that significant accident investigation program changes should be considered in agencies and organizations using lower-ranked accident models or investigation methodologies and that a compelling need exists for more exhaustive research into accident model and accident investigation methodology selection decisions.

At the time of this research, Ludwig Benner, Jr., was Field Instructor in the Practice of Safety at the University of Southern California. Mr. Benner is currently Senior Scientist for Events Analysis, Inc., 12101 Toreador Lane, Oakton, VA 22124, and an adjunct faculty member for accident investigation with the University of Southern California.

Much of this work was performed under contract to the Occupational Safety and Health Administration, U.S. Department of Labor. The views expressed are those of the author and do not necessarily represent the views of the Department of Labor. The author wishes to thank Dr. Roger Stephens of OSHA for his support and valuable contributions to this work. The author also wishes to thank Events Analysis, Inc., for its support of follow-up research.

Little guidance exists in the accident investigation field to help managers or investigators identify and choose the best available accident models and accident investigation methodology for their investigation. There was a brief surge of interest in these issues in the early 1960s (Haddon, Suchman, & Klein, 1964; Mayo, 1961), but this primarily involved discussion of behavioral and medical considerations. No comprehensive lists of choices, criteria for their evaluation and selection, or measures of performance emerged to help accident investigators or program managers choose the "best" accident model and investigative methodology. Faced with such choices, those responsible for accident investigation frequently have serious difficulties with both concepts and investigative methods — a situation the author observed and commented on while with the National Transportation Safety Board (Benner, 1977, 1981a).

Criticisms of accident investigations have been voiced by many others as well, including congressmen, members of the scientific community, and agency staff. For example, criticisms of the Occupational Safety and Health Administration's (OSHA) investigations of grain elevator explosions were extensive, and not untypical (National Materials Advisory Board [NMAB], 1980). If what is being done now is not good enough, however, what would be better? What choices are available, and how can the "best" choice be identified and selected?

OSHA, recognizing that an independent examination of its accident investigation program might indicate how to overcome such criticisms, initiated a research project to identify and explore the relative merits of alternative accident models and investigation concepts and methods. The following is a report of that research.

THE OSHA RESEARCH PROJECT

The first goal of the OSHA project was to identify the accident models and accident investigation methodologies in use in government agencies and determine the "best available" model and methodology. If OSHA was not using them, the second goal was to determine what effects their use would have on OSHA accident investigations. If the effects were significant, the next goal was to identify and analyze the main benefits and risks of their implementation by OSHA. If such implementation would offer significant benefits, the final goal was to determine the principal steps required to implement a preferred model and investigation methodology in OSHA's investigations. The results of this work were reported to OSHA in 1983 (Benner, 1983a-d).

The research was based on the hypotheses that different conceptual views or "models" of the accident phenomenon existed and that these views affected the investigation methodologies used. It was further hypothesized that, after these models and methodologies were identified, a scheme for measuring their relative merit could be devised.

METHOD

Overview

The basic research method was a comparative analysis of accident investigation programs in a broad range of Federal government agencies in order to identify and define differences among them. From differences observed in the agencies' documented investigation objectives, procedures, and performance, plus comments from personnel familiar with the accident investigation program, analyses of agency investigations, and previously published works on the subject, different accident models and investigation methodologies would be isolated and defined. From these and other data, general accident model and investigation methodology evaluation criteria would be selected or developed.

A rating scale would then be developed, and each model and methodology rated. As a "test" of the ratings, different accident cases would be reinvestigated using a high-rated accident model and investigation methodology to determine whether their use would produce different outcomes. If the results looked promising, the findings from these tests would be used to identify and define the benefits and problems that might arise if a preferred model and investigation methodology were adopted by OSHA.

Agencies Studied

The initial task was to select a sufficient number of Federal governmental organizations to ensure a broad cross-section of accident investigation programs for the study. Selection criteria included: (a) agencies known to have accident investigation programs; (b) a full range of programs, from formal, sophisticated, and well documented to simple and informal programs; (c) minimal duplication of programs; and (d) where possible, agencies that had experienced serious accidents after which questions were raised about their programs. In addition, the NMAB panel that had reported on the investigation program for OSHA was selected because it might offer guidance for the study.


Data Acquisition

Documents detailing accident investigation programs, policies, objectives, practices, and outputs for the agencies selected were then collected and analyzed to identify and define: (a) the underlying perception of the accident phenomenon (accident model) that formed the basis for agency accident investigation programs, policy, objectives, and output requirements; and (b) the accident investigation methodology used by the agency.

The preferred sources of information were agency accident investigation orders, directives, investigation work products, and related documents. Data in organization documents were considered preferable to personal interview data. All governmental agency documents involve review processes that invariably require compromises among agency personnel's views, so that documents are more likely to reflect agency than individual views of the accident phenomenon.

Accident investigation documents from the selected agencies were searched for names of or inferences about underlying models and investigation methodologies. With two exceptions, agencies' accident models and investigation methods had to be inferred, because they were not identified by name. During this task, caution was exercised to isolate the underlying perceptions of the nature of the accident phenomenon, per se, rather than the consequences of the perceptions. The term "accident model" was used in the sense of the perceived nature of the accident phenomenon, which served as the working concept driving accident investigation programs and objectives. The model definitions were based on the author's interpretation of data contained in the agencies' documents and on literature sources describing accident models.

Further caution was exercised to distinguish between method, e.g., a systematic procedure for performing a given task, and methodology, e.g., a system of working concepts, principles, and procedures employed by a specific branch of knowledge (McDevitt, 1981). "Accident investigation methodology" is thus the system of concepts, principles, and procedures for investigating accidents.

For convenient comparison and analysis, the models, methodologies, and remarks were tabulated on a matrix as they were identified.

In addition to the document reviews, personal interviews were conducted with agency officials who were knowledgeable about their agency's accident investigation program. Questions were raised about respondents' personal views about accidents, their perceptions of their agency's views of accident investigation programs, and their experience with investigations. During the interviews, some respondents became aware of the lack of a designated accident model in their program and tried to invent some name; these spur-of-the-moment names were given little weight during the analyses. In most cases, the investigation methodology was not named either, but also had to be inferred.¹

¹ In related research, fewer than 3% of over 200 investigators could name an investigation methodology when first asked what methodology they used for accident investigation.

Whenever possible, models and methodologies were named on the basis of models found in the literature. Additional names were chosen by the author during the project. Name designations were based on either: (a) similarities to published models and methodologies; (b) the author's interpretation of documented agency statements, procedures, or outputs; or (c) in the total absence of documents, the author's interpretation of interview information.

Developing Evaluation Criteria

To compare and rank the merits of these models and methodologies from OSHA's perspective, it was necessary to develop evaluation criteria, because no published criteria for this purpose existed. Evaluation criteria generally can be derived from stated program objectives. Development of these criteria was approached by analyzing: (a) accident investigation objectives found in agency documents; (b) statutory mandates in Public Law 91-596, which established OSHA; (c) agency accident investigation work products; and (d) information acquired during interviews. Previous research discussions were also considered. By comparing such data, evaluation criteria were extracted, analyzed, and defined.

Evaluation criteria for accident models. Surry (1969), Safety Sciences (1980), and Kjellen (1982) have discussed accident models in a context related to this study, but they did not develop or describe criteria for evaluating accident models. Other authors (Haddon et al., 1964; Mayo, 1961) have also addressed the conceptual and methodological issues, but from a medical and behavioral research perspective. Therefore, a set of evaluation criteria had to be developed for this study. These criteria were based on an analysis of agency programs, the reported and observed performance of and problems with those programs, analyses of available accident reports, accident investigation literature, and OSHA's statutory missions. The specific sources for each criterion are presented in the Project Task 1 Report (Benner, 1983a).

Criteria for evaluating accident investigation methodologies. Selection of accident investigation methodologies has been discussed in prior literature (Safety Sciences, 1980). There were, however, no criteria that were relevant for selecting the "best" methodology for OSHA. Such criteria must be tailored to OSHA's statutory mandates, and statutory mandates were not incorporated in previous research. Therefore, OSHA's enabling statute was reviewed to identify statements of the agency's accident investigation mission. These statements, combined with data about other agencies' programs, were then used to develop a list of criteria for judging the investigation methodologies. Previously identified considerations about investigation methods (Benner, 1981a) were also used in formulating the criteria.

Rating Scheme Development

Given the relatively unrefined rating data available at that stage of the research, a simple rating scale was sought. The rating system provided for 3 possible scores (0 to 2), depending on whether the model or methodology: (a) was likely to satisfy the criterion as is (2); (b) could satisfy the criterion with some modification (1); or (c) was not likely to satisfy the criterion because of some inherent shortcoming (0).

The author assigned ratings to each model and methodology for each criterion. Using data about the agencies' performance, and considering objectives, scope, procedures, and uses of work products as additional indicators, it was possible to apply this rating scale with reasonable consistency. Ratings were based on: (a) the performance of agency programs, as reported by interviewees or in related reports; (b) reviews of the procedures described in agency documents and their applications as shown by accident reports; and (c) the author's direct observations in investigations (Benner, 1981c). A "would" or high rating reflected a conclusion that the model would probably satisfy the criterion as normally considered. A "might" or medium rating reflected a conclusion that the model might be satisfactory if it were modified to address the demands of the criterion. A "could not" or low rating reflected a conclusion that a model simply did not accommodate the points demanded by the criterion.

All ratings were conservatively assigned; that is, if a model or methodology was satisfying or could reasonably be expected to satisfy the criterion, it was assigned a 2 rating. If with some modifications it might be adapted to satisfy the criterion, it was rated 1. If the methodology clearly could not satisfy the criterion, it was rated 0. Uncertainties were resolved by assigning the higher of the possible rating choices. This procedure has acknowledged flaws, and, undoubtedly, the ratings contained some author bias. Each rating was carefully considered, however, and two sets of ratings were subsequently checked, as discussed below.

Ranking Method

A simple unweighted composite ranking scheme, using the arithmetic total of the scores for all of the rating criteria, was used to estimate a rough rank order for the candidate models and methodologies. This assumes all criteria are of equal importance, which may or may not be a valid assumption. There are, however, currently no data or logical bases for assigning an order of importance to the criteria that were developed — or any other criteria. In the case of criteria derived from statutory mandates, no justification can be advanced for assigning one criterion greater importance than another for rating purposes.
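To make the arithmetic concrete, a minimal sketch of the scale and the composite calculation follows. The per-criterion scores in the sketch are hypothetical placeholders (the study's per-criterion ratings appear in Tables 5 and 6); only the totals were chosen to be consistent with the composite ratings reported later in Table 7.

```python
# Illustrative sketch of the 0-2 rating scale and the unweighted
# composite ranking described above. Per-criterion scores here are
# hypothetical; totals match the composite ratings in Table 7.

WOULD, MIGHT, COULD_NOT = 2, 1, 0  # "would" / "might" / "could not"

# Hypothetical ratings on ten equally weighted criteria per model.
ratings = {
    "events process model": [2, 2, 2, 2, 2, 2, 2, 1, 2, 2],   # sums to 19
    "chain-of-events model": [0, 0, 1, 0, 0, 0, 0, 0, 0, 0],  # sums to 1
}

def composite(scores):
    """Unweighted composite: arithmetic total, maximum 2 * 10 = 20."""
    return sum(scores)

# Rank order by descending composite rating.
for rank, model in enumerate(
        sorted(ratings, key=lambda m: composite(ratings[m]), reverse=True), 1):
    print(f"{rank}. {model}: {composite(ratings[model])}")
```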

Tests of Rankings

If significant differences in the composite ratings were found, rankings were tested by reinvestigating previously investigated accidents. Twelve accidents in four categories were selected by OSHA staff as representatives of "good" or "typical" investigations. These accidents were then reinvestigated to determine if the higher ranked model and methodology would have produced better results than the original investigation.

Two tests were actually undertaken to determine if the higher ranked model and methodology were acceptable for the intended purposes. First, the accident reports were tested against the criteria for the models and methodologies to assure the relevance and applicability of the criteria. Second, the preferred model and methodology were used to reinvestigate the cases. The observations were recorded on forms designed for compatibility with the rating criteria.

Exploring Implementation Benefits and Costs

Project plans included studying the implementation of a better accident model and investigation methodology if the tests indicated appreciable benefits. Interviews with investigators who would have to make the changes were subsequently undertaken in order to identify potential problems or benefits with the new model and methodology. Further, reports of the investigators' own work were reviewed from a similar perspective. From the interviews, reviews, and analyses of problems with current investigations, a list of trade-offs involved in implementing a better model and methodology was developed.

RESULTS AND DISCUSSION

Because of the extensive scope of this undertaking and the limited funding to date, much of the research and findings are admittedly "soft." Nevertheless, they clearly indicate that the accident model and accident investigation methodology selection issues require attention.

Agencies Studied

The agency selection process resulted in selection of 17 Federal organizations and a series of relevant National Materials Advisory Board reports for study. These produced a wide range of models and methodologies, as well as experiences, that proved valuable to the study. The agencies studied are listed in alphabetical order in Table 1.

TABLE 1
AGENCY PROGRAMS EXPLORED DURING THE STUDY

Consumer Product Safety Commission
Department of Agriculture
Department of the Air Force
Department of the Army
Department of Energy
Department of Labor
  Mine Safety and Health Administration
Department of Labor
  Occupational Safety and Health Administration
Department of Transportation
  Coast Guard
Department of Transportation
  Federal Highway Administration
    Bureau of Motor Carrier Safety
Department of Transportation
  National Highway Traffic Safety Administration
General Services Administration
Library of Congress
National Aeronautics and Space Administration
National Institute of Occupational Safety and Health
National Transportation Safety Board
Navy Department
Nuclear Regulatory Commission
National Materials Advisory Board
  Panel on Grain Elevator Explosions

Data Acquisition and Model/Methodology Identification

Accident investigation program documents acquired and analyzed during the study are listed in the Appendix. In addition, examples of accident investigation reports were made available by most agency representatives, some on a confidential basis. Information acquired during interviews is documented in the Task 1 Report to OSHA (Benner, 1983a).

During the analysis, only two agencies identified their accident models or investigation methodologies by name. Therefore, any language that defined, suggested, or even hinted at a perception of "accident," "mishap," or a related term was considered to reflect an accident model. Any language that named, described, or implied a system of methods and techniques — when distinguishable from all the other systems — was considered a discrete investigative methodology.

Fourteen accident models and seventeen accident investigation methodologies were defined from the data. Sources of the individual models and references are reported in the Task 1 Report to OSHA (Benner, 1983a). The models and methodologies identified by the study are listed in Table 2, in the order identified; this order is not related to the order in which the agencies are listed in Table 1.

Two agencies were reviewing or modifying their accident investigation program at the time they were contacted for this research. The information about the programs then in effect is reported in Table 2.

Evaluation Criteria

The data led to the development of 10 criteria for evaluating accident models. Specific sources of each criterion are described in the Task 1 Report of this project (Benner, 1983a). The model evaluation criteria are listed in Table 3.

At the beginning of the study, it was presumed that all accident investigation programs were driven by accident models, and that the methods could therefore be evaluated against common criteria. An interesting discovery during the study was that this assumption was not valid. Three relationships between the accident models and investigation methodologies were discerned:

Case 1: The accident model determines the accident investigation methodology.

Case 2: The investigation methodology determines the accident model.

Case 3: An analysis method determines the accident model and investigation methodology, and neither the model nor investigation methodology particularly influences the other.

For Case 1, if the accident model satisfies the criteria, then the resultant investigative methodology should satisfy the same criteria as the model. For Case 2, if the investigation methodology determines the accident model, then the criteria for models take on a new role: indirectly, they should provide a measure of the suitability of the accident model used for accident investigations. As for Case 3, both of the agencies involved indicated awareness of the need for a better model and investigation methodology, and at least one was examining alternatives.

In view of these findings, separate criteria were developed for evaluating the accident investigation methodologies. These criteria were derived from the statute establishing OSHA and are listed in Table 4.

Ten criteria were identified for evaluating accident investigation methodologies. There are also ten criteria for evaluating the models. This occurred by chance rather than design; no specific number of criteria was targeted at the beginning of the study.

Rating Models and Methodologies

The 3-element (0, 1, 2) rating scheme was used to assign ratings to both the models and the methodologies. Arguments can be raised about the specific ratings assigned for various criteria; it is hoped that advocates of different ratings will be prepared to demonstrate how the criterion is, can be, or cannot be served. Ratings for the criteria shown may vary slightly, but observations and analyses during the rating process suggest it is unlikely that ratings will vary by more than a point.

Review of the completed ratings suggests areas of strength and weakness for each model and methodology. Experience may indicate that additional criteria are justified; they should be proposed if valid.

Rating models. Each of the 14 accident models was given a numerical rating for each criterion. The ratings were derived by comparing the model elements with the reported or observed needs that precipitated the criterion, determining how the model might address the need(s), and then deciding whether the model would, might, or could not satisfy the criterion. The ratings for each model on each criterion are shown in Table 5.

Rating methodologies. Each of the 17 methodologies was similarly rated on each of the 10 criteria. The needs of the criterion were contrasted with the ability of the methodology, conceptually and as practiced, to satisfy those needs. In addition, the observed results of the methodology's use in the agencies and problems indicated in the documents were also considered. These analyses led to conclusions about whether or not, and how, the methodology could meet the identified needs. The ratings for each methodology on each criterion are shown in Table 6.

No weights were assigned to the criteria because: (a) they were derived from the governing statute; (b) they did not conflict with each other; and (c) the statute did not assign greater or less importance to any of the requirements, based on its legislative history.


TABLE 2
MODELS AND METHODOLOGIES IDENTIFIED BY STUDY

 1. Accident model: Epidemiologic + chain-of-events (c-o-e)
    Investigation methodology: Epidemiologic + forms using c-o-e + clinical methods
    Comments and notes: Respondent said a model isn't needed; c-o-e supplements epidemiologic

 2. Accident model: Chain of events + all causes + epidemiologic
    Investigation methodology: Personalized, intraorganizational boards; group + forms + extensive reviews
    Comments and notes: Need extensive review in lieu of technical truth tests; safety + legal

 3. Accident model: Chain-of-events + causal factors + epidemiologic with chemicals
    Investigation methodology: Systems + intraorganizational boards; groups + forms + personalized
    Comments and notes: Accident classes used to conserve investigative resources

 4. Accident model: Single event + cause factors
    Investigation methodology: Investigator's good judgment with checklists; forms; review
    Comments and notes: 3 types of investigations: safety, JAG, independent safety

 5. Accident model: Single event + chain-of-events
    Investigation methodology: Kipling model + fault tree analysis + Gantt chart
    Comments and notes: Which model does investigation satisfy

 6. Accident model: Violation
    Investigation methodology: Compliance inspection method (a)
    Comments and notes: Investigation = same as inspection

 7. Accident model: Indeterminate
    Investigation methodology: Indeterminate
    Comments and notes: (none)

 8. Accident model: Pentagon + fault tree + process
    Investigation methodology: Interorganizational group with multidisciplinary members
    Comments and notes: Advocates NTSB approach + aviation (NTSB) report format (b)

 9. Accident model: Actual = events process; policy = single cause
    Investigation methodology: Intraorganizational board model + events matrix modeling; each group chooses its method
    Comments and notes: Model used vs. policy model differs; actual accident investigation did not report cause(s); events analysis plays key role

10. Accident model: Stochastic variable + event sequences (c)
    Investigation methodology: Gathering data for statistical analysis + more being explored (d)
    Comments and notes: Interesting models of accident, investigation, and event

11. Accident model: Events process + generic models + specific accident models
    Investigation methodology: NTSB board/group/hearings process + events analyses (e) + forms with c-o-e
    Comments and notes: Methods can encourage cooperation; delegates some investigations and works in joint investigations

12. Accident model: ? (scaled abnormalities = incident, abnormal occurrence, accident)
    Investigation methodology: Personal
    Comments and notes: Scaled abnormalities used to exercise resource control; TMI issues

13. Accident model: Moving toward events process view
    Investigation methodology: Personal; forms; multilinear events sequencing, but in transition
    Comments and notes: Widely accessible user accident investigation data; system exposed investigation problems

14. Accident model: Haddon matrix = epidemiologic + mathematical modeling
    Investigation methodology: Baker police investigation (f); emphasizes crash engineering models; uses AIS injury severity scale
    Comments and notes: No unifying model to link crash avoidance to crashworthiness; uses contract interdisciplinary teams

15. Accident model: "We assume investigator knows what an accident is"
    Investigation methodology: Baker model (f) dominates + chain of events
    Comments and notes: Investigations trigger audits and inspections of forms, records

16. Accident model: Chain of events
    Investigation methodology: Kipling + single event + forms
    Comments and notes: Includes fires

17. Accident model: Risk-oriented events-process energy-flow model with links to work flow design and management system
    Investigation methodology: MORT events & causal factors analyses; loss tree analysis; criteria negotiation; energy trace; change analysis methods
    Comments and notes: Uses MORT idealized generic safety control system for both safety inspections and accident investigations

18. Accident model: Indeterminate
    Investigation methodology: NTSB for air; others statistics-driven
    Comments and notes: Technical vs. social demands on accident investigation

(a) Uses compliance inspection procedures for accident investigation.
(b) No evidence to indicate the NMAB panel used its method or format in its own investigations.
(c) Special concept of the nature of an "event" includes steady-state attributes.
(d) Includes a blank form in a flow-chart format that reflects one proposed accident model.
(e) Events analyses include MES flow charting, fault trees, Time/Loss Analysis, time-sequenced spill maps, etc.
(f) The Baker model extends the chain-of-events model significantly.


TABLE 3
CRITERIA FOR ACCIDENT MODEL EVALUATION

Realistic: Model must represent reality, e.g., the observed nature of the accident phenomenon; model must represent both sequential and concurrent events and their interactions with time; model must permit representation of the risk-taking nature of work processes in which accidents occur.

Definitive: Model must define the nature and sources of data required to describe the phenomenon; model must drive the investigation and analysis methods, rather than be driven by those methods; model must use definitive descriptive building blocks.

Satisfying: Model must contribute to demonstrable achievement of an agency's statutory mission and not undermine that mission because of technical inadequacies or inability to satisfy agency performance and credibility demands.

Comprehensive: Model must encompass the development and consequences of an accident; model must define the beginning and end of the phenomenon being investigated and lead to complete description of the events involved; model must help avoid ambiguity, equivocation, or gaps in understanding.

Disciplining: Model must provide a technically sound framework and building blocks with which all parties to an investigation can discipline their investigative efforts in a mutually supportive manner; model must provide concepts for testing the quality, validity, and relationships of data developed during an investigation.

Consistent: Model must be theoretically consistent with, or provide consistency for, the agency's safety program concepts; model must provide guidance for consistent interpretation of questions arising during an investigation and for consistent quality control of work products.

Direct: Model must provide for direct identification of safety problems in ways that provide options for their prompt correction; model must not require accumulation of a lengthy history of accidents before corrective changes can be identified and proposed.

Functional: Model must provide functional links to the performance of worker tasks and work flows involved in an accident; model must make it possible to link accident descriptions to the work process in which the accident occurred; model should aid in establishing effective work process monitoring to support high-performance operation.

Noncausal: Model must be free of accident cause or causal factors concepts, addressing instead full description of the accident phenomenon, showing interactions among all parties and things rather than oversimplification; model must avoid technically unsupportable fault finding and placement of blame.

Visible: Model must enable investigators and others to see the relevance of the model to any accident under investigation easily and credibly; interactions described should be readily visible, easy to comprehend, and credible to the public and victims as well as investigators.

The limitations of this relatively subjective approach and the narrow span of the rating scale are acknowledged. In assigning a rating, published descriptions of the methodologies and documented critiques were consulted, comments by interviewees were considered, and the author's observations of investigation problems were weighed. Nevertheless, the ratings shown obviously contain some biases. Further work with more definitive and discriminating rating scales would be desirable. Any rating changes should, however, be validated by conducting real investigations using competing methods and comparing the outcomes. The methodologies must be measured and judged by the performance results they actually produce. A Delphi approach to these ratings would not be acceptable. Any attempt to rate the models and methodologies statistically will encounter the same difficulties experienced by the author: No effort to obtain such data has been attempted in the past, so the literature can offer little guidance or data of value.

Ranking Models and Methodologies

The ratings on all criteria were summed for each model and methodology; the maximum score for any model or methodology was 20. The models and methodologies were then placed in rank order.

Ranking models. The composite ratings for each model are listed in descending order in Table 7, showing their relative ranking. The models with the highest ratings consider an accident as a process.


The event-based model views an accident as a transformation process by which an activity in dynamic equilibrium is transformed, with a harmful outcome, by interacting "actors" introducing rigorously timed, undesired cascading changes of state (Benner, 1978). The view is similar to that of the Department of Energy (DOE) Management Oversight and Risk Tree (MORT) energy flow process model (Johnson, 1973) and to the process model shown in the DOE investigation manual (U.S. Energy Research and Development Administration, 1976). The process model is the model used informally at the National Transportation Safety Board (NTSB, 1978), the U.S. Nuclear Regulatory Commission (Hasselberg, 1983), and the National Aeronautics and Space Administration (NASA, 1981) at various working levels. It is also the model taught most recently in Coast Guard courses (Hendrick, 1983) and at the Department of the Interior Minerals Management Service (Benner & White, 1984), as well as in numerous organizations in the private sector.
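As an illustration of this event-based view, an accident description can be encoded as time-ordered building blocks, each pairing one actor with one action. The sketch below is a loose paraphrase of that idea; the field names and sample events are hypothetical and are not the published notation.

```python
# A minimal sketch of an event-based accident description, assuming
# hypothetical field names: each building block is one actor producing
# one change of state at a known time, with links recording which
# earlier events enabled it. An illustration, not the published method.
from dataclasses import dataclass, field

@dataclass
class Event:
    t: float                 # elapsed time of the change of state
    actor: str               # the person or thing that acted
    action: str              # what the actor did
    enabled_by: list = field(default_factory=list)  # earlier event indices

events = [
    Event(0.0, "operator", "opens feed valve"),
    Event(1.5, "vessel", "overpressurizes", enabled_by=[0]),
    Event(2.0, "relief line", "discharges vapor", enabled_by=[1]),
]

# Gap check: any event after the first with no recorded enabling event
# flags an unexplained step that needs further investigation.
gaps = [e for i, e in enumerate(events) if i > 0 and not e.enabled_by]
print("unexplained events:", [(e.actor, e.action) for e in gaps])
```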

TABLE 4
CRITERIA FOR ACCIDENT INVESTIGATION METHODOLOGY EVALUATION

Encouragement (1): Methodology must encourage harmonious participation. Does the investigation methodology promote harmony by encouraging parties to participate in investigations and have their views heard, minimize conflict by disclosing gaps in the investigation, and efficiently but harmoniously control the presentation of individual views with appropriate technical disciplining techniques during the investigation?

Independence (2): Methodology must produce blameless outputs. Does the investigation methodology identify the full scope of the accident, including the role of management, supervisors, and employees, in a way that explains the effects and interdependence of these roles in the accident without imputing blame, fault, or guilt?

Initiatives (4): Methodology must support personal initiatives. Does the methodology provide for positive descriptions of accidents that show convincingly what is needed to achieve adequate control of risks in a specific workplace, in a way that promotes informed and valid individual initiatives, without unnecessarily conveying blame, fault, or guilt?

Discovery (6): Methodology must support a timely discovery process. Is the investigative methodology able to discover safety and health problems when applied to these problem areas? Does the methodology enable timely discovery, or must discovery be delayed until credibility of sample sizes and casualty requirements are met?

Competence (8): Methodology must increase employee competence. Does the investigation methodology provide direct inputs that will increase the competence and safety effectiveness of personnel through training in the detection, diagnosis, control, and amelioration of risks? Are outputs resulting from the application of this investigative technology being used in training with demonstrable safety effectiveness?

Standards (9): Methodology must show definitive corrections. Does the investigation methodology provide a timely, comprehensive, credible, and persuasive basis for establishing or reviewing the efficacy of safety and health standards? Does it document accidents in a way that countermeasure options can be systematically defined, evaluated, and selected, avoiding personal opinions and judgments during multiple reviews for this purpose?

Enforcement (10): Methodology must show expectations and behavioral norms. Does the investigation methodology support the required enforcement program by providing information about perceptions of duties under a standard, its practicality, and its effects on risk levels by (a) defining the degree of compliance or nature of compliance problems and (b) showing the role of a standard in a specific accident in a way that objective observers can trust and rely on?

States (11): Methodology must encourage States to take responsibility. Does the investigation methodology encourage States to fulfill their occupational safety and health mandates by providing them practical ways to produce consistent, reliable accident reports, pretested for completeness, validity, and logic before they are submitted, thus multiplying the effectiveness of their contributions?

Accuracy (12): Methodology must help test the accuracy of outputs. Does the methodology describe each accident in a way that can be technically "truth-tested" for completeness, validity, logic, and relevance during the investigation, to assure the quality of the information in each case?

Closed Loop (Sec. 26): Methodology must be compatible with "pre-investigations" (or safety analyses) of potential accidents. Is the investigation methodology compatible with the pre-investigation or analysis methodologies so that predictions can be used during investigations, so expected vs. actual performance of tasks and controls can be measured or validated by investigations, and so the results can be linked routinely to work flow design improvements?

Note. — Numbers in parentheses refer to PL 91-596, Section 2(b).


TABLE 7
ACCIDENT MODEL RANKINGS

RANK AND MODEL                        COMPOSITE RATING
 1. Events process model                     19
 2. Energy flow process model                18
 3. Fault tree model                         14
 4. Haddon matrix model                       8
 5. All-cause model                           7
 6. Mathematical models                       7
 7. Abnormality levels                        7
 8. Personal models                           5
 9. Epidemiologic model                       4
10. Pentagon explosion model                  4
11. Stochastic variable model                 3
12. Violations model (a)                      3
13. Single event + cause factors              1
14. Chain-of-events (c-o-e) model             1

(a) Dominant within OSHA.

The MORT energy-flow event-based model also views an accident as a process, but focuses on three primary elements — energy, barriers, and targets — supplemented by contributory conditions (Johnson, 1973). It is used in the DOE, by investigators in the nuclear power field, and recently at NASA.

Not surprisingly, since OSHA is a standard-setting organization, the model dominating the OSHA Field Operations Manual (U.S. Department of Labor, 1982) is the violations model, i.e., the view that a violation of a standard will cause an accident. Based on the composite ratings, this model ranks among the lowest of the models identified.

Ranking investigation methodologies. The methodologies and their composite ratings are listed in descending order in Table 8, showing their respective rankings. Based on these ratings, the investigation methodologies used in OSHA's accident investigation program manual ranked 13th and 17th of 17.

The highest ranked methodologies were found to reflect application of the events process accident model. These methodologies provide the basis for the energy trace and barrier analysis (ETBA) and the events and causal factors charting methods related to the MORT system (Johnson, 1980). Based on these rankings, event-based accident investigation methodologies should clearly be considered the preferred methodologies for agency accident investigation programs.

Based on the rankings in Tables 7 and 8, it appears that adoption of alternative models and investigative methodologies could improve OSHA's accident investigation program.

Testing the Ratings

Because of the range of the ratings and the low ranking of the OSHA model and methodologies, the reinvestigation "tests" were undertaken. The criteria could be applied in each case and provided a basis for drawing conclusions about the investigation's merits. The sample work sheets in Tables 9 and 10 demonstrate the ease of applying the criteria to individual accident cases.

The highly rated event-based accident process model was used to establish the scope of the investigation and identify factors to be investigated during the reinvestigation. Then the event-based investigative methodology was used to perform the reinvestigation and analysis.

TABLE 8
ACCIDENT INVESTIGATION METHODOLOGY RANKINGS

RANK AND METHODOLOGY                              COMPOSITE RATING
 1. Events Analysis                                      18
 2. MORT system                                          18
 3. Fault Tree                                           16
 4. NTSB board + interorganizational groups              13
 5. Gantt charting                                       12
 6. Intraorganizational multidisciplinary group          11
 7. Personal/good judgment                               10
 8. Board with intraorganizational groups                 9
 9. Baker police system                                   9
10. Epidemiologic                                         9
11. Kipling's 5 w's + how                                 8
12. Statistical data gathering                            6
13. Compliance inspection (a)                             6
14. Closed-end flow charts                                5
15. Find chain-of-events (c-o-e)                          4
16. Fact-finding/legal (JAG)                              4
17. Complete the forms (b)                                3

(a) One dominant method in OSHA investigations.
(b) Second dominant method; used for inspections and influences OSHA investigations.


The results were documented on events flow charts, which provided a direct comparison between what had been done and what was possible.

What did the tests show? A useful description of what was not known about the accident was possible with retroactive application of the preferred model and methodology. This benefit occurred despite the substantial periods of time (up to 2 years) that had elapsed since the accident. Each case was then reviewed again to determine whether the ratings resulting from individual cases would be comparable to the model and methodology ratings initially assigned. That work disclosed that the ratings were overwhelmingly supported by the reinvestigation cases. In a few instances, the case analysis showed that the ratings assigned to the lower-rated model or method were too generous. The results of the testing were reported in the Task 2 Report to OSHA (Benner, 1983b).

The test accident cases involved citations, no-mystery, and mystery accidents. Retrospective application of the preferred model and investigation methodology provided indications of the substantial impact of the current methodologies on the investigation and report.

Probably the most revealing finding was the discovery that one of the reports, which seemed plausible on first reading and which contained recommendations to cite a firm, did not describe any portion of the actual accident process when the preferred model was applied. In other words, the scope of the original report totally excluded data about the actual accident process, which would have supported the citation!

Use of the better model specifically showed where and how the original investigations were seriously flawed with respect to scope, coverage, and other criteria. Space does not permit a complete listing of the detailed findings in each case. Table 9 shows how the models compared in a sample reinvestigation. In summary, the comparative application disclosed that if the preferred model were used, it would:

1. Assure a more realistic description of the accident process;
2. Help screen irrelevant matter from reports;
3. Help define specific gaps in the accident process understanding;
4. Discipline hypothesis development;
5. More effectively support citations;
6. Reduce the focus on employee errors as causes;
7. Guide report conclusions;
8. Show the role of management decisions in the accident process;
9. Help link accident process events to task design;
10. Show ways to improve corrective action efforts; and
11. Help structure narrative outputs.

Exploration of the impact of the investigative methodology on the investigations was also aided by the criteria identified earlier. The results were similar to those found with the model: Significant improvement seemed possible. The comparative abilities of the methodologies currently being used and the proposed preferred methodologies to satisfy specific criteria are shown in Table 10. It was found that, in terms of impact on investigations, the preferred methodology could:

1. Help create win/win relationships among investigating parties;
2. Show interdependence among parties;
3. Assure consideration of relative events timing;
4. Link the accident processes to training steps;
5. Provide prompt technical self-testing and validation;
6. Show the role of standards in accident processes;
7. Provide a "factual" validated basis for enforcement;
8. Provide a constructive role for States; and
9. Generate useful data for subsequent monitoring of work sites.

In summary, the test results generally supported the rankings. These analyses also helped identify some benefit and risk considerations associated with implementation of the preferred model and methodology.

Significantly, despite the elapsed time and the limits of the data available, every reinvestigation with the best model and methodology produced better understanding of the accident process and defined what the earlier investigation did and did not cover.


TABLE 9
COMPARISON OF ACCIDENT MODELS: CASE 1 EVALUATION

VIOLATIONS MODEL | EVENTS PROCESS MODEL | CRITERIA
no      | yes | described accident REALISTICALLY
no      | yes | DEFINED the right data needed
no      | yes | SATISFIED OSHA's SAFETY mission
no      | yes | covered accident COMPREHENSIVELY
no      | yes | provided DISCIPLINING framework
no      | yes | produced CONSISTENCY with work flow
no      | yes | facilitated DIRECT corrections
no      | yes | provided good FUNCTIONAL guidance
largely | yes | produced NONCAUSAL findings
no      | yes | supplied VISIBLE explanation for all

Note. — The reinvestigation data show that, for this representative accident investigation in which citations were issued, the dominant OSHA violations model used by the CSHO inspector for the accident "inspection" failed to satisfy 9 of the 10 accident model evaluation criteria identified during this project. The preferred model, even when applied retroactively, would have satisfied all 10.

Thus, data for quality control evaluations of the investigation also became available. Each reinvestigation also raised serious issues about the accident and its investigation that were not addressed during the original investigations. This prompted the next phase of the project.

Implementation Benefits and Costs

If better investigative technology is possible, as indicated by the findings thus far, should it be implemented? The preferred model and methodology would require investigators to adopt new ways of thinking about accidents and to acquire new investigative and analytical skills. Such significant changes cannot be attempted in a vacuum. Therefore, implementation of the preferred model and methodology was explored in interviews with potential users of accident investigation reports and with investigators who would be required to produce them using the new model and methodology. Also, reports produced by the investigators were reviewed. From these interviews and reviews, as well as previously documented problems with current investigations, a list was developed to show benefits and potential problems if OSHA were to adopt the "best" model and investigative methodology.

TABLE 10
COMPARISON OF INVESTIGATION METHODOLOGIES: CASE 1 EVALUATION

COMPLIANCE INSPECTION | EVENTS ANALYSIS | CRITERIA
no     | yes | ENCOURAGE harmonious participation
no     | yes | produce BLAMELESS outputs
no     | yes | support personal INITIATIVES
no     | yes | help TIMELY DISCOVERY process
no     | yes | increase employee COMPETENCE
no     | yes | show DEFINITIVE corrections
no (a) | yes | show EXPECTATIONS norms
partly | yes | encourage STATES to take responsibility
no     | yes | help test ACCURACY
no     | yes | enable CLOSED LOOP follow-up

Note. — The table shows that, for this representative accident investigation in which citations were issued, the dominant OSHA compliance inspection methodology followed by the inspector during the investigation failed to satisfy 9 of the 10 evaluation criteria identified during this project. The preferred methodology would have satisfied all 10.
(a) The citations had to be compromised.
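The work sheets behind Tables 9 and 10 amount to a per-criterion checklist. A minimal sketch of that tally follows, with invented verdicts rather than the actual case file entries.

```python
# Sketch of the worksheet tally behind Tables 9 and 10: each candidate
# is checked against every criterion and the failures are counted.
# The verdicts below are invented for illustration only.
criteria = ["realistic", "definitive", "comprehensive", "noncausal"]

verdicts = {
    "compliance inspection": {"realistic": "no", "definitive": "no",
                              "comprehensive": "no", "noncausal": "partly"},
    "events analysis":       {"realistic": "yes", "definitive": "yes",
                              "comprehensive": "yes", "noncausal": "yes"},
}

for method, v in verdicts.items():
    failed = [c for c in criteria if v[c] != "yes"]
    print(f"{method}: failed {len(failed)} of {len(criteria)} criteria "
          f"({', '.join(failed) or 'none'})")
```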


TABLE 11
IMPLEMENTATION BENEFITS AND RISK SUMMARY

BENEFIT DESCRIPTIONS:
- Will raise accident investigation performance expectations
- Gives own implementation direction
- Broadens investigator perspectives
- Supports CSHOs' initiatives
- Increases standards options
- Improves standards evaluations
- Sets better standards priorities
- Satisfies mission needs
- Improves OSHA's external climate
- Redefines accident investigation purposes
- Shifts focus to safety
- Supports safety goal setting
- Provides measures of achievement
- Identifies best available methods
- Controls accident investigation methods used
- Supports initiatives awards
- Managers accept concepts readily
- Cuts risk with proven method
- Methods demonstrate success quickly
- Structures industry role in accident investigation
- Channels existing energies
- Supports accident investigation incentive awards
- Investigators don't focus on violations
- Facilitates better accident investigation training
- Resolves past accident investigation problems
- Provides skill screening criteria
- Makes timely quality checks feasible
- Provides for new kinds of actions
- Produces condensed accident descriptions
- Permits easy recall of descriptions
- Allows informed reviews
- Results would permit publicity
- Program could eclipse NIOSH efforts

IMPLEMENTATION RISKS:
- Claims that changes not needed
- Claims that changes stifle initiatives
- Allegations of wrong model
- Concurrence but no action
- Demands for quick success
- Draws "hardliner" statisticians' fire
- Goals can be misused
- Creates a me-too problem
- Managers too busy to learn
- Weak skills could discredit methods
- Accident investigation leaders must be managers
- Some people can't use methods
- Need new skills for accident investigation
- Need penalty assessment factors
- May interrupt statistical series
- Workload trade-offs uncertain
- Review group can bias evaluations

These trade-offs are listed in Table 11. The trade-offs are detailed in the Task 3 Report of the project (Benner, 1983c). The substantive benefits seem to outweigh the risks by a very wide margin, suggesting that the present OSHA program should be upgraded without delay.

Implementation Plans

On the basis of these findings, an implementation plan for the agency was explored and developed. That work addresses different kinds of issues and will not be detailed here. The work is reported in the Task 4 Report of this project (Benner, 1983d).

FOLLOW-ON RESEARCH

During the course of this project, the author experienced a growing awareness of several broader accident investigation issues. Several have major consequences and are currently being explored.² In the interest of brevity, only the more significant issues are highlighted.

² Current work is being sponsored by Events Analysis, Inc.

Accident Investigation Roles

One of these issues emerged from the realization that three widely differing views of the role of accident investigations exist. These views of accident investigation roles are:

1. A data-gathering effort to support hypothesis validation;
2. A form of applied research project; and
3. A predictive pre-accident analytical effort.

Accident investigation program documents indicated that many persons thought of an accident investigation as a data-gathering effort to acquire information for later statistical analysis and determination of causal factors and trends. A good example of this view is found in the National Institute for Occupational Safety and Health's (NIOSH) background reports leading to the NIOSH Accident Investigation Manual (Safety Sciences, 1980). A second group, including NASA, the NTSB, and others, seems to view each accident investigation as an applied research effort, on which action is expected to be taken. Both of these views focus on post-accident investigations, while a third group looks at accident investigations as a predictive, pre-accident, analytical effort. This view is expressed in the DOE MORT program and has been advocated in a System Safety Society presentation to OSHA (Moriarty, 1981). Each view has significant consequences (Benner, 1984).

The major consequence is the timeliness of corrective safety action. A secondary consequence is the source and nature of such safety action. Retrospective action tends to be more "experience-oriented" than action based on a predictive analysis. The relationship between accident investigations and safety action is undergoing substantial changes in a growing number of fields, as shown in Figure 1. The traditional pathways for information influencing safety actions, shown as a dotted line in Figure 1, are yielding to new pathways to safety actions (solid lines) at a steady but apparently accelerating rate as the methodological tools to support this change become available. Observed reactions to serious accidents suggest, however, that both predictive and retrospective efforts are demanded by our society today.

Accident Investigation Objectives

A second issue emerged from the realization that the acquisition of new knowledge and understanding is a common accident investigative objective. This suggests that the capability of alternative methodologies to discover, define, and present new knowledge is an important consideration in selecting an investigative methodology.


FIGURE 2
DEVELOPMENT OF NEW KNOWLEDGE FROM ACCIDENT INVESTIGATION
CASE 1: EVENT-BASED INVESTIGATIVE TECHNOLOGY

Further inquiry into differences among methodologies studied in the project has revealed wide variations in the:

1. Capability to discover new knowledge during investigations;
2. Efficiency of the search for new knowledge;
3. Level of detail of the new knowledge discovered;
4. Validation rate for the new knowledge;
5. Availability of the new knowledge for implementation; and
6. Applicability of the new knowledge to ongoing routine activities.

Current research is directed toward providing comparative measures of each of these attributes. Although numerous obstacles remain, a few attributes seem to be amenable to measurement.

One of the first tasks is to define the new knowledge to be measured. In accident investigation, new knowledge is a new understanding of an accident process step that did not exist before the investigation began. With event-based analysis procedures, this can be defined in terms of relationships among event sets in a process. This approach permits a distinction between knowledge of system processes and their operation and knowledge of the accident process. It also has an impact on accident investigation objectives: Read again the rating criteria in Tables 3 and 4, which suggest several "new knowledge" objectives.
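
This definition lends itself to a simple set-based formulation: the new knowledge produced by an investigation is the set of event-pair relationships understood when the investigation concludes, minus those understood before it began. The following is a minimal sketch in Python; the Event structure and the event names are hypothetical illustrations chosen by the editor, not elements of the event-based procedures themselves.

from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    # One actor-action step in the accident process.
    actor: str
    action: str

def new_knowledge(before, after):
    # Relationships (predecessor event, successor event) established by
    # the investigation that were not understood before it began.
    return after - before

# Hypothetical example: two relationships confirmed by an investigation,
# one of which was already understood beforehand.
e1 = Event("operator", "opened valve")
e2 = Event("tank", "overpressurized")
e3 = Event("relief line", "froze shut")

known_before = {(e1, e2)}
known_after = {(e1, e2), (e3, e2)}
print(new_knowledge(known_before, known_after))  # {(e3, e2)} -- one new item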

One approach to measuring several attributes among methodologies is shown in Figure 2. The coordinates of the matrix are the items of new knowledge gained after an accident occurs (at time zero) and the elapsed time after the accident. Figure 2 shows a general shape of the cumulative New Knowledge vs. Time curve for the event-based investigation methodology, drawn from the author's experience with the investigation of about 60 major accidents in which the methodology was used.

The eventual height of the curve reflects the capability to discover new knowledge items and provides a crude measure of that capability. (See 1 above.) Note the relatively slow acquisition of new knowledge during the earliest data gathering and analysis stage. As the data entered in flow charts increases, however, the understanding of the accident process interactions grows at a rapidly accelerating rate. Toward the end of the investigation, the increase in new knowledge tapers off until it flattens out as the investigation concludes. The typical elapsed time to identify new knowledge of an accident and corrective action possibilities ranges from 2 days to 2 months, depending on the scope of the accident and the degree of mystery involved in the case.

The development and validation of safety hypotheses is of particular significance, in that hypotheses emerge and can be validated during the investigation, rather than after sufficient accidents have occurred to permit trend or other statistical analyses.

An especially important aspect of Figure 2 is the "new action" area shown on this chart, which provides a measure of attributes 3, 4, and 5 above. Because of the concrete detail, real-time validation methods, and minimal abstraction used with the event-based investigative methodology, action on new knowledge developed during the investigation can be defined from very early in the investigation until the investigation ends. This enables action to be taken during the investigation if desirable.

The efficiency (attribute 2 above) is indicated by the height of the curve divided by the length of time required to reach a point along the curve. The greater the ratio, the higher the investigative efficiency.
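
Both measures can be computed directly from a sampled curve. The Python sketch below assumes an illustrative series of (elapsed days, cumulative knowledge items) pairs; the numbers are invented for demonstration and are not measurements from the project.

# Cumulative new-knowledge items at elapsed days after the accident
# (time zero). The series is invented for illustration only.
curve = [(0, 0), (2, 1), (5, 4), (10, 12), (20, 18), (30, 19), (40, 19)]

def capability(curve):
    # Eventual height of the curve: total items discovered (attribute 1).
    return curve[-1][1]

def efficiency_at(curve, day):
    # Height divided by elapsed time at a point on the curve (attribute 2);
    # a greater ratio indicates a more efficient search.
    items = max(i for t, i in curve if t <= day)
    return items / day

print(capability(curve))         # 19 items
print(efficiency_at(curve, 10))  # 1.2 items per day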

Figure 3 shows comparable curves for a program using traditional epidemiological methodology for accident investigation. Curve 1 is based on the author's observations of experiences with the acquisition and analysis of data for over 100,000 accidents covering a 10-year period (Materials Transportation Bureau [MTB], DOT, 1971-1981). Note that the methodology is considered here in the context of a single accident and the growth of knowledge related to that accident. Clinical observations are credited with some new insights, but they must be validated with further experiments. The data gathered are usually extensively influenced by pre-existing hypotheses, so they serve the primary function of validating hypotheses. In either case, the duration of the learning experience depends in part on the frequency of similar accidents and the availability of data from those accidents. It is difficult to generalize a curve for the methodology without specific observations and measurements. Nevertheless, it should be possible to acquire those measurements experimentally and plot the new knowledge and understanding from a specific investigation.

An optional curve 2 is drawn on Figure 3 to show the epidemiological methodology as usually practiced in periodic accident data studies. In those studies, substantial quantities of accident data are gathered before the validation begins. As the data banks are reviewed, new hypotheses are generated from the data, adding to the store of new but unverified knowledge. The validation requires careful statistical analyses and usually new accident data. At the end of the experiment, the hypothesis is verified or discarded. The curve assumes that the hypothesis culminates in action.

During the development of this curve, attempts have been made to track the reduction in losses in some manner. Although unsuccessful thus far, the effort has raised the issue of the ethics involved in selecting an investigative method that requires additional losses to validate hypotheses. This seems to place people at unnecessary risk to confirm or repudiate a researcher's hypotheses. It further suggests that the generation of hypotheses in this area requires attention.

In Figure 4, the curves are overlaid on the same coordinate scales, permitting an estimate of the comparative merits of several attributes of two methodologies. Figure 4, for example, suggests that the event-based analysis methodology should be favored because of (a) the amount of new knowledge discovered, (b) the relative efficiency of the search, and (c) the timely availability of corrective action guidance. As additional accidents are investigated, new data could be acquired to provide more valid indications of comparative performance. Nevertheless, the author's experience and observations support the generalized curves shown. It is interesting to note that accident preinvestigations can be shown on this curve too.
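
The overlay comparison can also be computed numerically. In the Python sketch below, the two curve shapes are assumptions drawn only loosely from the generalized curves described above, and the compare helper is a hypothetical construction, not a measurement method from the project.

# Generalized cumulative curves (elapsed days, knowledge items); the
# shapes are editorial assumptions sketched from the text: the
# event-based curve flattens within weeks, while the epidemiologic
# curve grows slowly until enough similar accidents accumulate.
event_based   = [(0, 0), (5, 3), (15, 14), (30, 18), (60, 19)]
epidemiologic = [(0, 0), (30, 1), (180, 3), (365, 8), (730, 12)]

def height_at(curve, day):
    # Cumulative items discovered by a given elapsed day.
    return max((i for t, i in curve if t <= day), default=0)

def compare(a, b, day):
    # Amount of new knowledge and search efficiency at the same elapsed time.
    ha, hb = height_at(a, day), height_at(b, day)
    return {"items": (ha, hb), "efficiency": (ha / day, hb / day)}

print(compare(event_based, epidemiologic, 60))
# items: (19, 1); efficiency: about 0.32 vs. 0.02 items per day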


FIGURE 3
DEVELOPMENT OF NEW KNOWLEDGE FROM ACCIDENT INVESTIGATION
CASE 2: EPIDEMIOLOGIC INVESTIGATIVE TECHNOLOGY

FIGURE 4
DEVELOPMENT OF NEW KNOWLEDGE FROM ACCIDENT INVESTIGATION
COMPARISON OF EVENT-BASED AND EPIDEMIOLOGIC INVESTIGATIVE TECHNOLOGY


Because the pre-investigation technology is similar to the event-based technology, the preinvestigation new-knowledge curve would be expected to have a shape similar to the event-based curve. The main distinction relates to time zero. If time zero marks the beginning of the accident, as well as the beginning of the investigation process, it is not difficult to see how pre-investigations shift the acquisition of new knowledge to a time before an accident occurs. The implications for controlling losses are obvious.
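
Under this reading of time zero, the pre-investigation curve is simply the event-based curve translated to negative elapsed time. Continuing the illustrative Python used above, with a hypothetical lead time assumed for demonstration:

# Pre-investigation: the same knowledge-acquisition shape, completed
# before the accident; lead_days is a hypothetical completion lead time.
def preinvestigation(curve, lead_days):
    return [(t - lead_days, i) for t, i in curve]

print(preinvestigation([(0, 0), (5, 3), (15, 14)], 30))
# [(-30, 0), (-25, 3), (-15, 14)] -- all knowledge in hand before time zero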

Investigation Quality Control

A third realization was the almost total lack of objective measurable criteria for controlling the quality of accident investigations and work products. This is particularly true with respect to safety performance improvements resulting from post-investigation safety recommendations. Not one of the programs surveyed in this research provided for such a measure! Lack of measurement of the research-defining attributes of investigative methodologies also has been reported (King, 1977).

Accident Concepts Underlying Statutes

Finally, in deriving rating criteria for models and methodologies from the safety statutes for the agencies involved, the author observed differences in the accident concepts underlying the statutory mandates. These differences have not yet been fully defined, categorized, and assessed, but they do exist and their existence should be recognized by the safety research and accident investigation community.

CONCLUSIONS

The work thus far suggests several conclusions:

1. The number of conceptual accident models that drive governmental accident investigation programs seems unnecessarily diverse. Since they conflict, they cannot all be valid. The continued use of many models appears objectionable, given their potential shortcomings as indicated by their ratings against performance criteria and the consequences of those shortcomings on the accident data they produce. Lower rated models in governmental safety and accident investigation programs should be critically reviewed and probably supplanted with a better model and methodology.

2. It appears technically feasible to develop explicit criteria for a "preferred" accident investigation methodology for a specific agency from the agency's implementing statute(s). However, differences in the accident models underlying the statutes indicate a need for caution if this is attempted.

3. The large degree of accident investigation improvement when the "preferred" accident model and investigation methodology were implemented in the OSHA program suggests that a comparable review of other agencies' accident investigation programs should be initiated without delay.

4. The reinvestigation findings indicate that much accident data are conceptually inadequate and flawed because of the inadequacies of underlying accident models in existing programs. Because the models conflict, they cannot all be valid. Therefore, at a minimum, some of the accident data used for analysis and action in agencies' safety programs are unacceptably incomplete and otherwise seriously flawed and are leading to incomplete, misdirected, or delayed remedial action.

5. Based on these findings, more exhaustive research into the measurement and rating of accident models and accident investigative methodologies is recommended. Such information can offer substantial benefits to program managers in and out of government.

REFERENCES

Benner, L. (1977). Accident theory and accident investigators. Hazard Prevention, 13, 18-21.

Benner, L. (1978). Four accident investigation games. Oakton, VA: Lufred Industries, Inc.

Benner, L. (1979). Crash theories and their implications for accident research. American Association for Automotive Medicine Quarterly Journal, 1, 24-27.

Benner, L. (1981a). Accident investigation: A case for new perceptions and methodologies (Publication No. 800387). Warrendale, PA: Society of Automotive Engineers.

Benner, L. (1981b). Accident perceptions: Their implications for investigators. International Society of Air Safety Investigators Forum, 14, 13-17.

Benner, L. (1981c). Methodological biases which undermine accident investigations. Proceedings of International Society of Air Safety Investigators International Symposium, 1-5.

Benner, L. (1983a). Task report no. 1: Accident models and investigation methodologies employed by U.S. government agencies in accident investigation programs (Contract No. 41USC252C3). Washington, DC: OSHA.

Benner, L. (1983b). Task report no. 2: Potential impacts of preferred accident model and investigation methodology on OSHA accident investigations (Contract No. 41USC252C3). Washington, DC: OSHA.

Benner, L. (1983c). Task report no. 3: Summary report of projected benefits and risks associated with OSHA's adoption of preferred accident models and investigation methodology (Contract No. 41USC252C3). Washington, DC: OSHA.

Benner, L. (1983d). Task report no. 4: Upgrading OSHA accident investigations with best available accident model and investigation methodology: Recommended implementation steps (Contract No. 41USC252C3). Washington, DC: OSHA.

Benner, L. (1984). Safety's future shock II. Paper presented at the Department of Energy Environmental, Safety and Health Seminar Series for 1984, Germantown, MD.

Benner, L., & White, L. (1984). Accident investigators handbook for category I and II investigations (Final Report to Minerals Management Service, Contract No. 14-12-0001-30038). Oakton, VA: Events Analysis, Inc.

Haddon, W., Suchman, E. A., & Klein, D. (1964). Accident research. New York: Harper and Row.

Hasselberg, R. (1983). Personal communication. U.S. Nuclear Regulatory Commission, Chattanooga, TN.

Henderick, K. (1983). Personal communication. U.S. Coast Guard.

Johnson, W. (1973). The management oversight and risk tree-MORT (SAN 821-2). Germantown, MD: U.S. Atomic Energy Commission.

Johnson, W. (1980). MORT safety assurance system. New York: Marcel Dekker.

King, K. (1977). Feasibility of securing research-defining accident statistics (Publication 78-180). Cincinnati, OH: NIOSH.

Kjellen, U. (1982). The deviation concept in occupational accident research: Theory and method (draft version). Stockholm, Sweden: Occupational Accident Research Unit, Royal Institute of Technology.

Mayo, L. W. (Ed.). (1961). Behavioral approaches to accident research. New York: Association for the Aid of Crippled Children.

McDevitt, J., & Benner, L. (1981). White paper no. 1: System safety methodology for conference on the state of the art of system safety. Hazard Prevention, 18, 26-31.

Moriarty, B. (1981, April). Statement of the System Safety Society before the U.S. Department of Labor, Occupational Safety and Health Administration on regulation of hazardous materials. Subpart H, 29 CFR 1910.

National Aeronautics and Space Administration. (1981). L39A mishap investigation board report. Cape Kennedy, FL: Author.

National Materials Advisory Board, National Research Council. (1980). The investigation of grain elevator explosions (NMAB-367-1). Washington, DC: Author.

National Transportation Safety Board. (1978). An overview of a bulk gasoline delivery fire and explosion (Report HZM 78-1). Washington, DC: Author.

Safety Sciences. (1980). Briefing on procedures manual for use of the NIOSH accident investigation methodology "AIM" (Contract No. 210-78-0126). Morgantown, WV: Author.

Surry, J. (1969). Industrial accident research: A human engineering appraisal. Toronto: University of Toronto Dept. of Industrial Engineering.

U.S. Congress. (1970, December). Public Law 91-596.

U.S. Department of Labor. (1982). OSHA Instruction CPL 2.45, Field operations manual with revisions through 1982. Washington, DC: Author.

U.S. Department of Transportation. (1982). DOT form 5800 report data base. Washington, DC: Materials Transportation Bureau.

U.S. Energy Research and Development Administration (now U.S. Dept. of Energy). Accident incident investigation manual (ERDA 76-20). Washington, DC: U.S. Government Printing Office.

APPENDIX: AGENCY DOCUMENTS REVIEWED

Baker, J. S. (1975). Traffic accident investigation manual. Evanston, IL: The Traffic Institute, Northwestern University. (Cited by U.S. DOT/FHWA as basis for investigations.)

Consumer Product Safety Commission. (1980, October). In-depth accident investigations (Order 9010.24). Washington, DC: Author.

International Civil Aviation Organization. (1979, March). Aircraft accident investigation (Annex 13). (The United States is a signatory to the ICAO treaty.) Montreal, Canada: Author.

Library of Congress. Library of Congress regulations LCR 1817-1 and LCR 1817-4. Washington, DC: Author.

National Aeronautics and Space Administration. (undated). Basic safety requirements (Vol. I) (NASA SHB 1700.1). Washington, DC: Author.

National Institute for Occupational Safety and Health. (1978). Feasibility of securing research-defining statistics (Technical Report No. 78-180). Cincinnati, OH: Author.

National Institute for Occupational Safety and Health. (1980). Procedures manual for use of the NIOSH accident investigation methodology (AIM). Morgantown, WV: Author.

National Materials Advisory Board, National Academy of Sciences, Panel on Causes and Prevention of Grain Elevator Explosions. (1981a). Investigation of grain elevator explosions (NMAB 367-1). Washington, DC: Author.

National Materials Advisory Board, National Academy of Sciences, Panel on Causes and Prevention of Grain Elevator Explosions. (1981b). Prevention of grain elevator and mill explosions (NMAB 367-2). Washington, DC: Author.

National Materials Advisory Board, National Academy of Sciences, Panel on Causes and Prevention of Grain Elevator Explosions. (1981c). Pneumatic dust control in grain elevators (NMAB 367-3). Washington, DC: Author.

National Transportation Safety Board. (1975). Inquiry manual, aircraft accidents and incidents (Order 6200.1). Washington, DC: Author.


Navy Department. (1978). Aircraft safety engineering accident investigation guide (NAVAIR 00-80T-67-1). Washington, DC: Author.

Navy Department. (1981). Handbook for the conduct of forces afloat safety investigations (NAVSAFCEN 5102/29). Washington, DC: Author.

Navy Department. (1982, April). Mishap investigation and reporting (OPNAVINST 5102.1A). Washington, DC: Author.

U.S. Coast Guard. (1982). Mishap investigating and reporting (COMDTINST 5100.29). Washington, DC: Author.

U.S. Code of Federal Regulations, Titles 10, 14, 16, 29, 30, 32, 33, 41, 42, 49, in 1982 issues of CFR. Washington, DC: U.S. Government Printing Office.

U.S. Code of Federal Regulations, Title 49, Chapters 800-831, Regulations of the National Transportation Safety Board, 1982. Washington, DC: U.S. Government Printing Office.

U.S. Department of the Air Force. (1972, March). Investigation of USAF aircraft accidents (AFR 127-1). Washington, DC: Author.

U.S. Department of the Air Force. (1979, May). The U.S. Air Force mishap prevention program (AFR 127-2). Washington, DC: Author.

U.S. Department of the Air Force. (1980, January). Investigating and reporting USAF mishaps (AFR 127-4). Washington, DC: Author.

U.S. Department of the Army. (undated). Procedures for investigating officers and boards of officers (Army Regulation 15-6). Washington, DC: Author.

U.S. Department of the Army. (1980, September). Accident reporting and records (Army Regulation 385-40). Washington, DC: Author.

U.S. Department of Defense. (1977, June). Military standard 882A, System safety program requirements. Washington, DC: Author.

U.S. Department of Energy. (1976a). Accident/incident investigation manual (ERDA 76-20). Germantown, MD: Author.

U.S. Department of Energy. (1976b). MORT users manual (ERDA 76-45/4, Rev. 1). Germantown, MD: Author.

U.S. Department of Energy. (1978). Events and causal factors charting (DOE 76-45/14, Rev. 1). Germantown, MD: Author.

U.S. Department of Energy. (1982a). Environmental protection, safety and health protection information reporting requirements (Order DOE 5484.1). Germantown, MD: Author.

U.S. Department of Energy. (1982b). Environmental protection, safety and health protection program for DOE operations (Order DOE 5480.1). Germantown, MD: Author.

U.S. Department of Labor. (1977). OSHA 2288, Investigating accidents in the workplace (A manual for compliance, safety and health officers). Washington, DC: Author.

U.S. Department of Labor. (1982). OSHA Instruction CPL 2.45, Field Operations Manual (Chapter XVI and Chapter V). Washington, DC: Author.

U.S. Department of Labor, Mine Safety and Health Administration, National Mine Health and Safety Academy. (undated). Special investigations field operations manual. Arlington, VA: Author.

U.S. Department of Labor, National Mine Health and Safety Academy. (1978). Safety manuals Nos. 1-10. Beckley, WV: Author.