
American Journal of Medical Quality 1–7
© The Author(s) 2016
Reprints and permissions: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1062860616675230
ajmq.sagepub.com
Downloaded from ajm.sagepub.com by guest on October 26, 2016

Coding Human Factors Observations in Surgery

Tara N. Cohen, MS (1), Douglas A. Wiegmann, PhD (2), Scott T. Reeves, MD, MBA (3), Albert J. Boquet, PhD (1), and Scott A. Shappell, PhD (1)

(1) Embry-Riddle Aeronautical University, Daytona Beach, FL
(2) University of Wisconsin–Madison, Madison, WI
(3) Medical University of South Carolina, Charleston, SC

Corresponding Author: Scott A. Shappell, PhD, Department of Human Factors, Embry-Riddle Aeronautical University, 600 S Clyde Morris Blvd, Daytona Beach, FL. Email: [email protected]

Abstract

The reliability of the Human Factors Analysis and Classification System (HFACS) for classifying retrospective observational human factors data in the cardiovascular operating room is examined. Three trained analysts independently used HFACS to categorize observational human factors data collected at a teaching and a nonteaching hospital system. Results revealed that the framework was substantially reliable overall (Study I: κ = 0.635; Study II: κ = 0.642). Reliability increased when only preconditions for unsafe acts were investigated (Study I: κ = 0.660; Study II: κ = 0.726). Preconditions for unsafe acts were the most commonly identified issues, with HFACS categories being similarly populated across both hospitals. HFACS is a reliable tool for systematically categorizing observational data on human factors issues in the operating room. Findings have implications for the development of an HFACS tool for proactively collecting observational human factors data, eliminating the necessity for post hoc classification.

Keywords: HFACS, human error, error analysis, latent failures, CVOR

Article

The Human Factors Analysis and Classification System (HFACS) is a commonly used tool for analyzing human factors issues associated with accidents across a variety of industries, including aviation, mining, manufacturing, and health care.1-5 HFACS consists of 4 levels of system failure, as originally described by Reason's "Swiss cheese" model of accident causation: unsafe acts, preconditions for unsafe acts, unsafe supervision, and organizational influences.6 Shappell and Wiegmann7 translated this theory into practice by developing a set of causal factors at each level of the model to facilitate the categorization and analysis of human factors associated with accidents (Table 1).

Several studies have been conducted to explore the reliability of HFACS as a tool for classifying causal factors associated with accidents. Typically, these studies involve multiple coders who independently read through a set of accident reports and then classify each finding or causal factor based on the HFACS framework. A recent review of this body of literature8 found that the vast majority of HFACS reliability studies have shown consistently high levels of interrater reliability (agreement between independent coders) and intrarater reliability (the ability of the same coders to replicate their own classifications after some delay).

Explorations of HFACS reliability for coding data other than accident causal factors have been limited. Unlike accident investigations, which are reactive, other approaches to safety can be proactive, attempting to identify threats to safety before accidents happen. Sources of such proactive data often include the direct observation of work-related activities and the identification of conditions that potentially threaten safe performance.

Within health care, and surgery in particular, an observational root cause analysis approach can be utilized to identify human factors issues that may lead to patient harm. Several observational studies have documented the myriad work system factors in the operating room that disrupt workflow and surgical team performance.9-13 However, these studies often involve several hours of observation that produce hundreds of data points, or instances of potential human factors problems, which makes the analysis and interpretation of the data arduous. Consequently, identifying targeted interventions for reducing specific threats to patient safety is difficult.


A more systematic, theory-driven approach, such as HFACS, might prove helpful in classifying and analyzing human factors data from observational studies. The purpose of the present article is to explore whether HFACS can be reliably applied to observational health care data. Specifically, HFACS was applied to observational human factors data previously collected in the operating room during cardiovascular surgeries as part of a larger research

Table 1. Description of the HFACS Categories.

Organizational influences
- Organizational climate: Prevailing atmosphere/vision within the organization, including such things as policies, command structure, and culture
- Operational process: Formal process by which the vision of an organization is carried out, including operations, procedures, and oversight, among others
- Resource management: Management of necessary human, monetary, and equipment resources

Unsafe supervision
- Inadequate supervision: Oversight and management of personnel and resources, including training, professional guidance, and operational leadership, among other aspects
- Planned inappropriate operations: Management and assignment of work, including aspects of risk management, crew pairing, operational tempo, and so on
- Failed to correct known problems: Those instances when deficiencies among individuals, equipment, training, or other related safety areas are "known" to the supervisor, yet are allowed to continue uncorrected
- Supervisory violations: The willful disregard for existing rules, regulations, instructions, or standard operating procedures by management during the course of their duties

Preconditions for unsafe acts
Environmental factors
- Technological environment: This category encompasses a variety of issues, including the design of equipment and controls, display/interface characteristics, checklist layouts, task factors, and automation
- Physical environment: Included are both the operational setting (eg, weather, altitude, terrain) and the ambient environment, such as heat, vibration, lighting, and toxins
Conditions of the operator
- Adverse mental states: Acute psychological and/or mental conditions that negatively affect performance, such as mental fatigue, pernicious attitudes, and misplaced motivation
- Adverse physiological states: Acute medical and/or physiological conditions that preclude safe operations, such as illness, intoxication, and the myriad pharmacological and medical abnormalities known to affect performance
- Physical/mental limitations: Permanent physical/mental disabilities that may adversely affect performance, such as poor vision, lack of physical strength, mental aptitude, general knowledge, and a variety of other chronic mental illnesses
Personnel factors
- Communication, coordination, and planning: Includes a variety of communication, coordination, and teamwork issues that affect performance
- Fitness for duty: Off-duty activities required to perform optimally on the job, such as adhering to crew rest requirements, alcohol restrictions, and other off-duty mandates

Unsafe acts
Errors
- Decision errors: These "thinking" errors represent conscious, goal-intended behavior that proceeds as designed, yet the plan proves inadequate or inappropriate for the situation. They typically manifest as poorly executed procedures, improper choices, or the misinterpretation and/or misuse of relevant information
- Skill-based errors: Highly practiced behavior that occurs with little or no conscious thought. These "doing" errors frequently appear as breakdowns in visual scan patterns, inadvertent activation/deactivation of switches, forgotten intentions, and omitted checklist items. Even the manner or technique with which one performs a task is included
- Perceptual errors: These errors arise when sensory input is degraded, as is often the case when flying at night, in poor weather, or in otherwise visually impoverished environments. Faced with acting on imperfect or incomplete information, aircrew run the risk of misjudging distances, altitude, and descent rates, as well as responding incorrectly to a variety of visual/vestibular illusions
Violations
- Routine violations: Often referred to as "bending the rules," this type of violation tends to be habitual by nature and is often enabled by supervision/management that tolerates such departures from the rules
- Exceptional violations: Isolated departures from authority, neither typical of the individual nor condoned by management

Abbreviation: HFACS, Human Factors Analysis and Classification System.
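For readers building a coding or observation tool on top of this taxonomy, the 4 tiers and 19 causal factor categories in Table 1 map naturally onto a small lookup structure. The sketch below (in Python, an illustrative choice; the structure and helper names are ours, not part of the study) encodes the taxonomy and a helper that returns the tier containing a given category.

```python
# Illustrative encoding of the HFACS taxonomy from Table 1.
# Tier and category names follow the table; the dictionary layout
# is an assumption for tooling purposes, not part of the study.
HFACS = {
    "Organizational influences": [
        "Organizational climate",
        "Operational process",
        "Resource management",
    ],
    "Unsafe supervision": [
        "Inadequate supervision",
        "Planned inappropriate operations",
        "Failed to correct known problems",
        "Supervisory violations",
    ],
    "Preconditions for unsafe acts": [
        "Technological environment",
        "Physical environment",
        "Adverse mental states",
        "Adverse physiological states",
        "Physical/mental limitations",
        "Communication, coordination, and planning",
        "Fitness for duty",
    ],
    "Unsafe acts": [
        "Decision errors",
        "Skill-based errors",
        "Perceptual errors",
        "Routine violations",
        "Exceptional violations",
    ],
}

def tier_of(category: str) -> str:
    """Return the HFACS tier that contains the given causal factor category."""
    for tier, categories in HFACS.items():
        if category in categories:
            return tier
    raise KeyError(f"Unknown HFACS category: {category!r}")

# The framework comprises 4 tiers and 19 causal factor categories in total.
assert len(HFACS) == 4
assert sum(len(c) for c in HFACS.values()) == 19
```

A coder's classification of an observed event can then be validated against this structure before tier-level and category-level agreement are computed.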


project. Data were collected from both an academic university hospital and a community hospital, producing 2 different data sets. Three trained analysts independently classified the data using HFACS, and agreements between the analysts were computed. It was hypothesized, based on previous HFACS reliability studies, that levels of agreement among coders would meet or exceed conventional standards of reliability. Confirmation of this hypothesis would support the future development of a proactive HFACS tool for collecting human factors data, eliminating the necessity for classifying observations post hoc.

Methods

Data Collection

Two data sets originally collected to identify workflow disruptions in cardiac surgery were utilized for this research. Workflow disruptions were defined as any event that may have caused a delay or resulted in a disturbance to the progression of a team member’s task.

Study I utilized data collected from an academic university hospital.14 Graduate human factors students collected data across 15 surgical cases, totaling 73.08 hours of observation. Each observer was embedded within one of 3 cardiac team areas (anesthesiology, circulating nursing, and perfusion). Because of limited physical space, and in an effort to be as unobtrusive as possible, only 2 students observed per case (ie, anesthesia and circulating, anesthesia and perfusion, or circulating and perfusion). Data were collected from the time patients entered the cardiovascular operating room (CVOR) until they left following the completion of the surgery. Observations produced 878 workflow disruptions, which were coded and classified using HFACS in this study. (A detailed description of the data collection procedures is published elsewhere.14)

The data set used for Study II was collected from a community hospital across 25 surgeries, totaling 145.04 hours of observation. Although data collection was similar to Study I in that only 2 observers were in a room at a given time, in Study II each observer collected workflow disruptions that affected 2 cardiac team areas. In addition to focusing on the anesthesia team, circulating nursing, and the perfusion team, Study II included observations that may have affected the surgeon. Thus, in every surgery, one observer collected data involving the anesthesiologist and surgeon, and the other observer collected data involving the perfusionist and circulating nurse. This produced 4233 observations to be coded using HFACS.

Coder Training

Three human factors graduate students (who were not involved with the data collection of either study) performed the data coding for both data sets. An expert in human factors and the HFACS methodology provided coders with 2 days of general HFACS training. This training included an overview of human error and human factors, a description of the HFACS framework, and an extensive discussion of each causal category, with examples of each. The raters participated in hands-on exercises that allowed them to practice classifying generic causal factors within the framework. Following HFACS training, the author provided raters with more detailed instruction on specific CVOR topics, including CVOR terminology; surgical team member titles, roles, and responsibilities; and common procedures and equipment/supplies.

Data Coding/Classification

HFACS consists of 19 causal factor categories across its 4 main tiers/levels (organizational influences, unsafe supervision, preconditions for unsafe acts, and unsafe acts). The data set from Study I was coded first. To calibrate the coding process, all 3 raters together classified 100 randomly selected events from the observational data set into the 19 possible HFACS causal factor categories. After this activity, each rater independently classified 50 additional events. Two of the authors (SAS and TNC) and the 3 raters then discussed any disagreements. The raters individually coded the remainder of the data and later returned to independently code the 150 calibration events (100 group-coded events and 50 individually coded events) so that they could be included in the analysis. Following this process, the same 3 raters coded the data from the second data set. Each rater individually coded all 4233 observations.

Data Inclusion and Reliability

Data inclusion was determined using 3 analytical methods. The first method focused on unanimous agreement: the percentage of events for which all 3 coders concurred on the HFACS classification of the event. The second method focused on majority agreement: the percentage of events for which at least 2 raters (a majority) agreed on the appropriate HFACS code. The third method focused on reconciled agreement: majority agreement recomputed after the coders met to resolve their initial disagreements. For all 3 methods, percentage agreement could range from 0% to 100%, with agreement of 70% or higher considered reliable, 60% to 69% moderately reliable, and below 60% unreliable.8
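As a concrete illustration of the first two methods, the sketch below (Python; the function name and sample events are ours, not from the study) computes unanimous and majority agreement percentages for a set of events, each coded independently by 3 raters.

```python
from collections import Counter

def agreement_rates(ratings):
    """Given a list of events, each a list of the 3 raters' codes,
    return (unanimous %, majority %) agreement."""
    unanimous = majority = 0
    for event in ratings:
        # Size of the largest group of raters that chose the same code
        top = Counter(event).most_common(1)[0][1]
        if top == len(event):
            unanimous += 1  # all raters chose the same code
        if top >= 2:
            majority += 1   # at least 2 of the 3 raters agree
    n = len(ratings)
    return 100 * unanimous / n, 100 * majority / n

# Hypothetical events coded into preconditions categories:
events = [
    ["Adverse mental states"] * 3,
    ["Adverse mental states", "Adverse mental states", "Physical environment"],
    ["Physical environment"] * 3,
    ["Communication, coordination, and planning"] * 3,
    ["Adverse mental states",
     "Communication, coordination, and planning",
     "Communication, coordination, and planning"],
]
print(agreement_rates(events))  # (60.0, 100.0)
```

With 3 raters, majority agreement fails only when all 3 raters choose 3 different codes, which is why majority percentages in the results below are uniformly higher than the unanimous ones.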

Although percentage agreement is arguably the most common method for examining reliability, it does not correct for chance agreement among raters. Another, more stringent measure of reliability is Fleiss' κ, which tests for interrater reliability with nominal data on a scale of 0.0 to 1.0. Here, values between 0.40 and 0.60 are considered moderately reliable, values between 0.60 and 0.80 substantially reliable, and values above 0.80 reliable.8
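Fleiss' κ is straightforward to compute from the per-event category counts: the mean observed agreement across events is compared against the agreement expected by chance given each category's overall share of ratings. The sketch below is a minimal implementation for equal numbers of raters per event, with hypothetical category labels; it is not the software used by the authors.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of events, each a list of raters' codes.
    Assumes every event was coded by the same number of raters."""
    categories = sorted({code for event in ratings for code in event})
    n = len(ratings[0])   # raters per event
    N = len(ratings)      # number of events

    # Mean per-event agreement: P_i = (sum_j n_ij^2 - n) / (n * (n - 1))
    P_bar = 0.0
    totals = {c: 0 for c in categories}
    for event in ratings:
        counts = {c: event.count(c) for c in categories}
        P_bar += (sum(v * v for v in counts.values()) - n) / (n * (n - 1))
        for c, v in counts.items():
            totals[c] += v
    P_bar /= N

    # Chance agreement: P_e = sum_j p_j^2, with p_j = category j's share of all ratings
    P_e = sum((t / (N * n)) ** 2 for t in totals.values())
    return (P_bar - P_e) / (1 - P_e)

# Toy data: 5 events, 3 raters each (hypothetical codes)
ratings = [
    ["AMS", "AMS", "AMS"],
    ["AMS", "AMS", "PE"],
    ["PE", "PE", "PE"],
    ["CCP", "CCP", "CCP"],
    ["AMS", "CCP", "CCP"],
]
print(round(fleiss_kappa(ratings), 3))  # 0.595
```

Note that κ can be well below the raw percentage agreement when a few categories dominate, because chance agreement is then high; this is the sense in which κ is the more stringent measure.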

Results

Data Inclusion

Study I: Unanimous Method. Of the 878 original flow disruptions identified in Study I, all 3 coders agreed that 867 (99%) of the events could be included/coded using HFACS. Of the 11 events that were not included unanimously, none were excluded unanimously (ie, there were no events that all 3 coders agreed did not fit within the framework). Of the 867 unanimously "codable" events, all 3 coders agreed 98% of the time (847/867 events) on which HFACS tier an event belonged to (ie, organizational, supervisory, preconditions, or unsafe acts). Of these 847 agreed-on tier-level events, 567 (67%) were unanimously agreed on at the category/causal factor level within the tier. Of the 567 events, 566 were considered preconditions for unsafe acts.

Majority method. Similar to the unanimous method, the first step of the majority method involved investigating how many of the 878 events were considered "codable" by at least 2 of the 3 coders. Results revealed that all events (100%) were considered to fit within the HFACS framework by at least 2 coders. At least 2 coders also agreed 100% of the time on which HFACS tier a particular event belonged to. At least 2 coders agreed on which causal factor category an event belonged to within a tier 93.6% of the time (822/878). Of the 822 events, 820 were considered preconditions for unsafe acts.

Reconciled method. Coders met to reconcile the 56 events that were not originally agreed on during the majority method. All events were included in this analysis, and of the 878 total events, 862 were considered preconditions for unsafe acts.

Study II: Unanimous Method. Of the 4233 original observations from Study II, all 3 coders agreed that 3416 (80.7%) events could be coded using HFACS. Of the 3416 "codable" events, all 3 coders agreed on the allocation of 3308 (96.8%) events at the tier level of HFACS. There was unanimous agreement that 10 events were unsafe acts, whereas the remaining 3298 events were preconditions for unsafe acts. Of the 3298 preconditions, 2411 (73.1%) were unanimously agreed on at the category level.

Majority method. In Study II, coders reached majority agreement that 3789 (89.5%) of the 4233 events could be coded using HFACS. Of the "codable" events, coders reached majority agreement on the appropriate tier allocation for 3775 (99.6%) events. At this point, coders determined that the majority (3729; 98.7%) could be classified as preconditions for unsafe acts, followed by unsafe acts (41; 1.1%) and organizational influences (5; 0.13%). Of the 3729 preconditions, coders reached majority agreement on the category for 3499 events (93.8%).

Reconciled method. Coders met to reconcile the 230 events that were not originally agreed on during the majority method. In all, 156 additional events were included in this analysis; of the 4233 total events, 3655 were considered preconditions for unsafe acts.

Reliability

Fleiss' κ was used to investigate interrater reliability for each study; 2 κ values were calculated depending on the amount of data included. First, an overall κ was calculated to investigate interrater reliability for all potential events (n = 878 in Study I; n = 4233 in Study II). This first method investigates how well the 3 raters agreed on the allocation of a particular event into any of the 19 causal categories represented in HFACS. Here, Fleiss' κ showed substantial reliability for both Study I (κ = 0.635; 95% CI = 0.611-0.659; P < .001) and Study II (κ = 0.642; 95% CI = 0.633-0.652; P < .001).

Because an overwhelming majority of the data were considered preconditions for unsafe acts in both studies, κ also was calculated based on those events that all raters unanimously agreed were "codable" at the preconditions for unsafe acts tier (n = 846 in Study I; n = 3298 in Study II). This second method investigates how well the 3 raters agreed on the allocation of a particular event into the 7 preconditions for unsafe acts categories represented in HFACS. In this case, Fleiss' κ also showed substantial reliability for both Study I (κ = 0.660; 95% CI = 0.635-0.685; P < .001) and Study II (κ = 0.726; 95% CI = 0.713-0.738; P < .001).

Causal Factor Distribution

Study I. Nearly all the observational data analyzed in Study I (99.8% and 99.7% for the unanimous and majority methods, respectively) were considered preconditions for unsafe acts by the coders. For comparison, Figure 1A depicts results from the unanimous method, the majority method, and a reconciled version of the majority method (consensus obtained between coders after their initial assessment was completed). Most human factors issues contained in the observational data set were classified as adverse mental states (eg, being distracted or interrupted; experiencing frustration, boredom, or task fixation), followed by issues in the physical environment (eg, rearranging wires and tubing, repositioning equipment, dealing with inadequate space) and problems with communication, coordination, and planning (eg, ineffective communication, nonessential communication, personnel not readily available, issues of confusion). Other, less-populated preconditions involved problems with the technological environment (eg, usability problems, interface issues, breakdowns in technological devices, setup problems) and physical/mental limitations (eg, lack of procedural/technological knowledge).

Study II. Again, nearly all the data fell into the preconditions for unsafe acts category (99.7% and 98.7% for the unanimous and majority methods, respectively). Because of the similarity in results, data are discussed based on reconciled findings. Most failures resulted from adverse mental states (41.50%), followed by issues with communication, coordination, and planning (31.44%) and problems in the physical environment (20.49%). Other preconditions involved problems with the technological environment (4.13%) and physical/mental limitations (2.35%; Figure 1B).

Study I and Study II Data Comparison

Because both sets of data came from observations of workflow disruptions in the CVOR, there was interest in comparing the 2 data sets with respect to the preconditions for unsafe acts findings. These comparisons are made solely from the reconciled data (Figure 1C). Examination of the data indicated that adverse mental states, physical environment, and communication failures constituted the majority of preconditions in each study. However, events associated with the physical environment tended to be higher in Study I, whereas events associated with communication were generally higher in Study II.

Figure 1. (A) Study I unanimous versus majority versus reconciled preconditions for unsafe acts. (B) Study II unanimous versus majority versus reconciled preconditions for unsafe acts. (C) Comparison of Study I and Study II reconciled preconditions for unsafe acts.

Discussion

This study investigated the utility of HFACS for classifying observational human factors data collected in the CVOR from 2 hospital systems. Overall, this study found that the HFACS framework can be reliably used to retrospectively categorize the types of human factors problems observed in the CVOR. Reliability was generally high for classifications at the tier level and among pairs of coders. Agreement levels were typically lower at the specific causal factor level and when reliability was determined by unanimous agreement among all 3 coders. The higher levels of reliability are consistent with previous HFACS reliability studies involving accident data.15-19 The lower reliability levels also are consistent with other studies showing that as the number of subcategories or coders increases, acceptable levels of reliability are generally more difficult to obtain.8,19,20

Several other factors also may have influenced agreement levels among coders. Specifically, the data used in this study were not originally collected using HFACS; the data sets consisted of observations that were collected with the intent of analyzing flow disruptions. Furthermore, the coders of the observational data were not involved in the data collection. Although the coders were trained in HFACS and the general terminology/procedures involved in the CVOR, they were not fully exposed to the environment in which the data were collected. Hence, reliability might be improved through the development of an HFACS observational tool that observers can use to identify and classify human factors events in real time (prospectively) rather than post hoc. Research is needed to further explore this issue.

Nearly all the events observed in the 2 studies examined here were considered preconditions for unsafe acts using HFACS. This finding is not surprising given that preconditions often manifest as observable events that typically occur in the immediate environment in which work is performed. Other variables, such as supervisory and organizational factors, are more diffuse, acting as latent conditions that occur outside the operating room and subsequently influence surgical performance via preconditions. In contrast, unsafe acts (errors and violations) are active failures that can be observed during surgery.21 However, few of these failures were included in the observational data contained in the data sets analyzed.

There are several possible explanations for this apparent void in the data. The observers collecting the data may not have been positioned within the operating room in a way that allowed specific unsafe acts to be readily observed. Although observers were embedded within a particular area, they often left adequate space between themselves and the providers in an effort to be as unobtrusive as possible. Alternatively, observers may not have been looking for unsafe acts directly, or they may have lacked sufficient knowledge of the surgical procedure to identify these failures. It is also possible that fewer unsafe acts occurred in the observed cases than have been reported previously. Future research is needed to distinguish between these possibilities.

Although the focus of the present study was not to specifically analyze the frequency of causal factors observed in each study, this cursory description of the distribution of event types did reveal some interesting findings. Observations from both studies indicated that adverse mental states, the physical environment, and teamwork (communication, coordination, and planning) were primary issues. Adverse mental states included periods in which individuals were distracted, interrupted, or confused. Environmental factors included inadequate space, inefficient architectural/layout issues, and poor housekeeping/equipment location. Items classified in the communication, coordination, and planning category included ambiguous communication, nonessential communication (comments that were not related to the surgery at hand), and coordination issues (ineffective teamwork, poor planning).

Although both Study I and Study II identified adverse mental states as the greatest area of failure, the physical environment was more of a concern in Study I, whereas communication, coordination, and planning issues were heavily represented in Study II. This difference is not surprising considering the hospitals from which each data set was collected. Data from Study I came from an academic university hospital, where there often were several students in the room observing. This situation lends itself to overpopulated operating rooms, which can lead to issues involving the physical layout. Data from Study II were collected at a nonteaching hospital, in which a smaller number of individuals were in the operating rooms.

Historically, the medical community has focused most of its efforts on identifying and mitigating human error via the analysis of sentinel events. Although this approach has been successful, the health care industry continues to face issues involving morbidity and mortality. The proactive identification of human factors issues associated with patient harm therefore represents the next step in the evolution of patient safety. This study represents a first step in establishing the reliability of the HFACS framework as a tool for classifying observational human factors data in 2 different medical venues.

Because HFACS can be used reliably as an observational tool, findings associated with its use could help identify where errors and adverse events are likely to occur. Accordingly, hospital administrators could put in place targeted interventions to help mitigate human factors issues before they manifest and become errors in the future.

This concept can be compared to physicians who inoculate patients. Inoculations are preventive measures that help reduce the risk of death or suffering. Although inoculations are widely used, physicians do not simply administer an "omnibus" vaccination that prevents any and all disease. Rather, they give a specific vaccine to prevent a specific illness. Similarly, this research allows the investigation and identification of the types of human factors issues in medicine that may lead to catastrophe in the future. Rather than waiting for catastrophe to happen and fixing the problem after the fact (a reactive approach), HFACS may be used proactively, much like preventive medicine, to design targeted interventions that reduce adverse events and patient harm in the future.

Conclusion

This study establishes the reliability of the HFACS framework as a tool for classifying observational human factors data in the CVOR. Additional studies within different medical venues such as emergency medicine, labor and delivery, or orthopedic surgery are needed to explore the generalizability of these findings. Further efforts also are needed to design and evaluate a prospective HFACS-specific observational tool to help identify human factors failures, which would allow targeted interventions to be deployed before harm occurs.

Authors’ Note

Source of work/study: Study I: Medical University of South Carolina, Charleston, SC; Study II: Florida Hospital, Orlando, FL.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

References

1. Reinach S, Viale A. Application of a human error framework to conduct train accident/incident investigations. Accid Anal Prev. 2006;38:396-406.

2. Chen S, Wall A, Davies P, Yang Z, Wang J, Chou Y. A human and organizational factors (HOFs) analysis method for marine casualties using HFACS-Maritime Accidents (HFACS-MA). Saf Sci. 2013;60:105-114.

3. Li W, Harris D. Pilot error and its relationship with higher organizational levels: HFACS analysis of 523 accidents. Aviat Space Environ Med. 2006;77:1056-1061.

4. Patterson JM, Shappell SA. Operator error and system deficiencies: analysis of 508 mining incidents and accidents from Queensland, Australia using HFACS. Accid Anal Prev. 2010;42:1379-1385.

5. Thiels CA, Lal TM, Nienow JM, et al. Surgical never events and contributing human factors. Surgery. 2015;158:515-521.

6. Reason J. Human Error. New York, NY: Cambridge University Press; 1990.

7. Shappell S, Wiegmann D. Applying reason: The Human Factors Analysis and Classification System (HFACS). Hum Factors Aerosp Saf. 2001;1:59-86.

8. Cohen TN, Wiegmann DA, Shappell SA. Evaluating the reliability of the human factors analysis and classification system. Aerosp Med Hum Perform. 2015;86:728-735.

9. Wiegmann DA, ElBardissi AW, Dearani JA, Sundt TM. An empirical investigation of surgical flow disruptions and their relationship to surgical errors. Proc Hum Factors Ergon Soc Annu Meet. 2006:1049-1053.

10. Sevdalis N, Forrest D, Undre S, Darzi A, Vincent C. Annoyances, disruptions, interruptions in surgery: the disruptions in surgery index (DiSI). World J Surg. 2008;32:1643-1650.

11. Henrickson Parker SE, Lavina AA, Wadhera RK, Wiegmann DA, Sundt TM. Development and evaluation of an observational tool for assessing surgical flow disruptions and their impact on surgical performance. World J Surg. 2010;34:353-361.

12. Wiegmann DA, Eggman AA, ElBardissi AW, Henrickson Parker S, Sundt TM. Improving cardiac surgical care: a work systems approach. Appl Ergon. 2010;41:701-712.

13. Palmer G, Abernathy J, Swinton G, et al. Realizing improved patient care through human-centered operating room design. Anesthesiology. 2013;119:1066-1077.

14. Cohen TN, Cabrera JS, Pohl EE, et al. Identifying workflow disruptions in the cardiovascular operating room. Anaesthesia. 2016;71:948-954.

15. Wiegmann DA, Shappell SA. Human error analysis of commercial aviation accidents: application of the human factors analysis and classification system (HFACS). Aviat Space Environ Med. 2001;72:1006-1016.

16. Gaur D. Human factors analysis and classification system applied to civil aircraft accidents in India. Aviat Space Environ Med. 2005;76:501-505.

17. Shappell S, Detwiler C, Holcomb K, Hackworth C, Boquet A, Wiegmann DA. Human error and commercial aviation accidents: an analysis using the human factors analysis and classification system. Hum Factors. 2007;49:227-242.

18. Lenné MG, Ashby K, Fitzharris M. Analysis of general aviation crashes in Australia using the human factors analysis and classification system. Int J Aviat Psychol. 2008;18:340-352.

19. Ergai A, Cohen T, Sharp J, Wiegmann D, Gramopadhye A, Shappell S. Assessment of the human factors analysis and classification system (HFACS): intra-rater and inter-rater reliability. Saf Sci. 2016;82:393-398.

20. Gwet K. Handbook of Inter-rater Reliability. 2nd ed. Gaithersburg, MD: Advanced Analytics LLC; 2010.

21. Wiegmann DA, ElBardissi AW, Dearani JA, Daly RC, Sundt TM. Disruptions in surgical flow and their relationship to surgical errors: an exploratory investigation. Surgery. 2007;142:658-665.
