
Engineering Management Journal, Vol. 21 No. 4 (December 2009)

    An Industry Standard Risk Analysis Technique

    A. Terry Bahill, PE, University of Arizona

    Eric D. Smith, University of Texas at El Paso

Refereed research manuscript. Accepted by Associate Editor Componation.

Abstract: This article presents the industry standard technique for risk analysis, namely a graph or a plot with frequency of occurrence on one axis and severity of the consequences on the other. It points out the following common errors in implementing this technique: failing to differentiate between types of risk, using estimated probability rather than frequency of occurrence, using ordinal instead of cardinal numbers, using different ranges for severity and frequency, using an inappropriate combining equation, using linear instead of logarithmic scales, explaining only intermediate risks, and confusing risk with uncertainty. Then this article points out ways to ameliorate these mistakes. It presents an algorithm for deriving severity scores. It discusses many factors that affect human perception of risk. Finally, it discusses the mitigation benefit versus cost tradeoff that is always made in risk management.

Keywords: Risk Analysis, Risk Management, Project Management

EMJ Focus Areas: Systems Engineering

Risk is an expression of the potential harm or loss associated with an activity executed in an uncertain environment. Since 1662, it has been written that risk has at least two components: fear of some harm ought to be proportional not only to the magnitude of the harm, but also to the probability of the event (Arnauld and Nicole, 1996). This is the first use of the words magnitude of harm and probability of the event. There are some ancient Greek, Chinese, and biblical sources that have the concept of risk, but they do not have these words. It is unlikely that any older source has the words, because probability was not invented until the seventeenth century (Pascal, 1654).

History of Risk Analysis

The risk matrix was used in 1973 by Witt and in 1978 by Hussey. Use of the risk matrix was standardized by Mil-Std 882B in 1984 (DoD, 1984).

Risk plots (graphs, charts, figures) showing frequency of occurrence versus the severity of consequences were used in risk assessments of nuclear power systems (Joksimovic, Houghton, and Emon, 1977; Rasmussen, 1981). They were defined by Kaplan and Garrick (1981) and applied to civil engineering projects by Whitman (1984). Use of the risk plot was popularized by the Defense Systems Management College (1986); however, in spite of its popularity, the execution of this technique often has flaws; among them are:

1. Failing to differentiate between levels and categories of risk,
2. Using estimated probability of the event rather than its frequency of occurrence,
3. Using ordinal numbers instead of cardinal numbers for severity,
4. Using different ranges for severity and frequency,
5. Using an inappropriate combining equation,
6. Using linear scales instead of logarithmic scales,
7. Explaining only intermediate risks while seeming to ignore high and low risks,
8. Ignoring risk interactions and severity amplifiers, and
9. Confusing risk with uncertainty.

The purpose of this article is to elucidate these methodological mistakes and point out ways to ameliorate them. Its purpose is not to replace this technique with statistics or to fix human decision mistakes in doing a risk analysis.

Risk analysis is an important part of engineering. There are general guidelines for how it should be done, but there is no one correct way to do a risk analysis (Haimes, 1999; Kirkwood, 1999; Buede, 2000; Carbone and Tippett, 2004; Blanchard and Fabrycky, 2006). This article presents the most common risk analysis technique (Bahill and Karnavas, 2000) and shows problems associated with it. Bernstein (1996) presents the history of mankind's efforts first to understand and then to manage risk.

There are many levels of risk: (1) system risk including performance, cost, and schedule of the product, (2) project risk, (3) business risk including financial and resource risks to the enterprise, and (4) safety, environmental, and risks to the public. The effect of a risk in one category may become the source of risk in another category. Interrelationships between the most common of these risks are shown in Exhibit 1. The low-level system risks of performance, cost, and schedule constitute the higher-level project risks, which might in turn compose enterprise risks. Exhibit 1 and most industry risk analyses only explicitly present the bottom two levels.

Risks can be placed on sequential trees or graphs in order to make their interrelationships clear. Within these structures, the probability of a risk occurring will be dependent on predecessor risks. Another technique for categorizing risks is to use an SE framework to differentiate risks by different framework areas. Risk analyses can then be conducted solely within framework areas, or only across framework areas that are adjacent to one another.

Good risk management will not prevent bad things from happening, but when bad things happen, good risk management will have anticipated them and will reduce their negative effects.


Frequencies are Better Than Probabilities

The measure of risk is the severity of the consequences times the frequency (or probability) of occurrence. Because humans evaluate probabilities poorly (Gigerenzer, 2002; Clausen and Frey, 2005), we will use frequency of occurrence instead. The following example, based on Gigerenzer (1991), shows the superiority of frequencies over probabilities.

A woman has a positive mammogram. What is the probability that she has cancer? The following information is available about women of her age: (a) the probability that a woman tested in this lab has breast cancer is 0.8%; (b) if a woman has breast cancer, the probability that she will have a positive mammogram is 88%; (c) if a woman does not have breast cancer, the probability that she will have a positive mammogram is 7%. This problem seems ideal for Bayes' rule. Let D be the disease, in this case breast cancer. Let TR be the test result, in this case a positive mammogram. Then

$$P(D \mid TR) = \frac{P(TR \mid D)\,P(D)}{P(TR \mid D)\,P(D) + P(TR \mid \bar{D})\,P(\bar{D})} = \frac{0.88 \times 0.008}{0.88 \times 0.008 + 0.07 \times 0.992} = 0.09$$

But this solution is difficult for most people, because people do not think this way. Most people focus on the causally relevant link: (b) if a woman has breast cancer, then the probability that she will have a positive mammogram is 88%. They ignore (a), the probability that a woman tested in this lab has breast cancer is 0.8%, because it is only statistics, not causal. And they ignore (c), if a woman does not have breast cancer, the probability that she will have a positive mammogram is 7%, because it is not about the category in which we are interested, namely women with cancer. Exhibit 2 shows how most people reason.

Now let us look at the frequency of occurrence approach:
(a) eight of 1,000 women tested in this lab have breast cancer,
(b) of the 8 women with breast cancer, 7 will have a positive mammogram,
(c) of the remaining 992 women, 70 will have a positive mammogram.

Imagine a sample of women who have a positive mammogram. What fraction of them has cancer? The answer obviously is 7/77 = 9%. These three pieces of data (a), (b) and (c) can be represented with a tree, as shown in Exhibit 3.

From this tree, it is easy to see that 7 of the 77 (9%) positive mammograms have cancer. This tree is easier for most people to understand than Bayes' rule. The frequency of occurrence approach uses a mental operation that people perform quite well: partitioning a set of cases into exclusive subsets. The successive partitioning of sets is well represented by a tree structure. The Bayesian rule approach uses probabilities and is accordingly difficult for those without an education in probabilities.
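To make the comparison concrete, here is a minimal Python sketch of both computations; the numbers are those of the example above, and the function names are ours, not the authors'.

```python
# Both routes to the answer: Bayes' rule versus the frequency tree.

def bayes_posterior(p_d, p_r_given_d, p_r_given_not_d):
    """P(D|TR) computed with Bayes' rule from the three probabilities."""
    numerator = p_r_given_d * p_d
    return numerator / (numerator + p_r_given_not_d * (1.0 - p_d))

def frequency_tree():
    """The counts of Exhibit 3: 1000 women partitioned into subsets."""
    true_positives = 7    # of the 8 women with cancer, 7 test positive
    false_positives = 70  # of the remaining 992 women, 70 test positive
    return true_positives / (true_positives + false_positives)

print(bayes_posterior(0.008, 0.88, 0.07))  # ~0.092
print(frequency_tree())                    # 7/77 ~ 0.091
```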

Exhibit 1. Interrelationship of Risks

The Case Study

Since the 1950s over 80 million Cub Scouts have built five-ounce wooden cars and raced them in pinewood derbies. Bill Karnavas and Terry Bahill ran a Pinewood Derby for their Cub Scout pack for five years. Each year they did a risk analysis, identified the most severe risks and ameliorated them (Bahill and Karnavas, 2000).


For example, in the first two years the most severe risks were lane biases and getting the cars into the wrong lanes. To ameliorate these risks they switched to a round robin race format. This allowed each scout to race often, to race throughout the whole event, and to race more of their friends. They used six rounds because that gave each car two races in each lane and still kept the whole event reasonably short. The effects of lane biases were eliminated, because each car ran in each lane the same number of times. Getting the cars into the correct lanes is discussed later. Bahill and Karnavas (2000) divided risk into four broad categories: performance, cost, schedule, and project. In Exhibit 4, we look at performance, which is often called technical performance. In Exhibit 4, risk is frequency times severity.

    Rationale for the Performance Failure Frequencies

Typically, in each Pinewood Derby, about 250 heats (races) were run in a little over four hours, for an average of one heat per minute. Data were collected over four years, so failures are given in terms of failures per 1000 heats or failures per 1000 minutes.

Temporary failure of sensors at top or bottom of track. This happened a little more than twice a year, yielding 10 failures per 1000 heats.

Lane biases. All Pinewood Derby tracks probably have one lane faster than another, so the frequency is 1000 failures per 1000 heats.

Collisions between cars. Originally, about one out of ten heats had collisions; this was unacceptably high. Collisions were probably caused by imperfections where two sections of track were joined. When a wheel hit such an obstruction, the car bounced out of its lane. By carefully aligning and waxing the joints, the failure rate was reduced to 20 failures per 1000 heats.

Mistakes in judging or recording the results. This was never observed after we switched from human to electronic judging. With computer control and electronic judging, this failure mode is very unlikely. Failures were estimated at 0.1 failures per 1000 heats.

Human mistakes in weighing cars. Every year an over-weight car probably snuck through. This car would then have run in, on average, about 7 heats, meaning 30 failures per 1000 heats.

Human mistakes in allowing modifications (such as adding graphite or weight) after inspection. Each year one scout probably added graphite in the middle of his heats, so the frequency of an unfair heat would be 15 out of 1000.

Exhibit 2. Most People Ignore the Evidence Statements (a) and (c) and Erroneously Conclude that Most Women With Positive Mammograms Have Cancer

Exhibit 3. Tree Diagram for the Evidence Statements (a), (b) and (c) of the Breast Cancer Example

Exhibit 4. Pinewood Derby Performance Risk Matrix

| Failure mode | Potential effects | Frequency (failures per 1000 heats) | Severity | Risk |
|---|---|---|---|---|
| Temporary sensor failure | Heat must be re-run | 10 | 5 | 50 |
| Lane biases | Some cars have unfair advantage | 1000 | 0 | 0 |
| Collisions between cars | Heat must be re-run | 20 | 5 | 100 |
| Mistakes in judging or recording | Wrong car is declared the winner | 0.1 | 1000 | 100 |
| Human mistakes in weighing cars | Some cars have unfair advantage | 30 | 10 | 300 |
| Human mistakes in allowing modifications | Some cars gain unfair advantage | 15 | 10 | 150 |
| Human mistakes in placing cars in wrong lanes | Wrong car is declared the winner | 20 | 1000 | 20000 |
| Human mistakes in lowering the starting gate | Cars can be broken, race could be unfair | 1 | 100 | 100 |
| Human mistakes in resetting finish line switches | Heat must be re-run | 10 | 5 | 50 |
| Human mistakes wasting time | People get annoyed | 10 | 100 | 1000 |


Human mistakes in placing cars in the wrong lanes. This was observed five times in the first derby, therefore, 20 failures per 1000 heats.

Human mistakes in lowering the starting gate. In the first derby this happened once, yielding 1 failure per 1000 heats.

Human mistakes in resetting finish line switches. This happened about 10 times per 1000 heats.

Human mistakes wasting time. This happened about 10 times per 1000 heats.
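The unit conversion behind these rationales is simple; here is a hedged sketch (the helper name and constants are ours) of how observed counts become failures per 1000 heats:

```python
# Converting raw observations into the failures-per-1000-heats unit of
# Exhibit 4. About 250 heats were run per derby, over four derbies.

HEATS_PER_DERBY = 250

def per_1000_heats(observed_failures, heats_observed):
    return 1000.0 * observed_failures / heats_observed

# Wrong-lane placements: observed five times in the first derby.
print(per_1000_heats(5, HEATS_PER_DERBY))      # 20.0
# Starting-gate mistake: once in four derbies (about 1000 heats).
print(per_1000_heats(1, 4 * HEATS_PER_DERBY))  # 1.0
```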

Algorithm for Computing Severity Values

The following algorithm is used for computing values for the severity of the consequences:

1. Assign a frequency of occurrence (F_i) to each failure mode.
2. Find the failure mode that has the most severe consequences; call its value S_worst.
3. For each other failure mode, ask: How many of these failures would be equally painful to the Worst? Call this N_i. This can be rephrased as: Cumulatively, how many of these failures would have an equal impact to the Worst?
4. Compute the severity for each failure mode as S_i = S_worst / N_i.
5. Normalize the severity values so that their range equals the range of the frequency values.
6. Compute the estimated risk using a combining equation.
7. Prioritize the risks to show which are the most important.

The results of step 1 are shown in the frequency column of Exhibit 4. In our Pinewood example, step 2 identifies Placing cars in the wrong lanes, thereby declaring the wrong car to be the winner, as the most severe failure mode. In step 3 each failure mode is compared to this worst failure mode. For example, our domain expert told us that rerunning 200 heats would be equally painful to declaring the wrong car a winner. Step 4 now produces candidate values for severity.
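A minimal sketch of steps 2 to 4, assuming S_worst = 1000 (the severity of the worst failure mode in Exhibit 4); the N_i value for collisions is the domain expert's 200, while the other N_i values are our back-calculations from the exhibit, shown only for illustration:

```python
# Steps 2-4: derive cardinal severities from "how many of these would
# equal one Worst?" judgments. S_WORST = 1000 matches Exhibit 4.

S_WORST = 1000.0  # severity of "Placing cars in the wrong lanes"

# N_i: how many occurrences are equally painful to one Worst failure?
N = {
    "collisions between cars": 200,        # the expert's 200 re-run heats
    "temporary sensor failure": 200,       # illustrative back-calculation
    "mistakes in judging or recording": 1,
    "human mistakes in weighing cars": 100,
    "placing cars in wrong lanes": 1,      # the Worst itself
}

severity = {mode: S_WORST / n for mode, n in N.items()}
print(severity["collisions between cars"])  # 5.0, as in Exhibit 4
```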

Severity values should be cardinal measures, not ordinal measures. Cardinal measures indicate size or quantity; ordinal measures merely indicate rank ordering. (A mnemonic for this is that ordinal is ordering, as in rank ordering.) Cardinal numbers do not merely indicate that one failure mode is more severe than another; they also indicate how much more severe. If one failure mode has a weight of 6 and another a weight of 3, then the first is twice as severe as the second. Steps 2 to 4 in the above algorithm help generate these cardinal numbers.

Step 5 requires normalizing the severity values so that their range equals the range of the frequency values. For Pinewood, the frequency values range over five orders of magnitude; therefore, the severity values were adjusted so that they also ran over five orders of magnitude, as is shown in the severity column of Exhibit 4. The reason for this is explained with Exhibit 5.

The examples in the left and right halves of Exhibit 5 have the same frequency of failure, but the severity column in the right half has been turned upside down. The risk columns are different, but the rank order columns are identical. Severity had no effect! In general, if two items are being multiplied and they have different ranges, the one with the bigger range has more weight.

This problem of different ranges could be insidious if the severity values were based on physical terms. For example, if the frequency of occurrence for the events ran from one to ten events per year and the economic losses ranged from a thousand dollars to a million dollars, then the rank order of the products would be determined strictly by the dollar amounts: the frequency of occurrence would have no effect.

Exhibit 5. The Problem With Different Ranges

| Frequency | Example 1 Severity | Example 1 Risk | Rank Order | Example 2 Severity | Example 2 Risk | Rank Order |
|---|---|---|---|---|---|---|
| 10^-1 | 1 | 1x10^-1 | 1 | 6 | 6x10^-1 | 1 |
| 10^-2 | 2 | 2x10^-2 | 2 | 5 | 5x10^-2 | 2 |
| 10^-3 | 3 | 3x10^-3 | 3 | 4 | 4x10^-3 | 3 |
| 10^-4 | 4 | 4x10^-4 | 4 | 3 | 3x10^-4 | 4 |
| 10^-5 | 5 | 5x10^-5 | 5 | 2 | 2x10^-5 | 5 |
| 10^-6 | 6 | 6x10^-6 | 6 | 1 | 1x10^-6 | 6 |

Exhibit 6. People Might Not Weigh Severity and Probability Equally

| | Severity of Consequences (S) | Probability (P) | Product (SxP) | Perceived risk | R = S^2 x P |
|---|---|---|---|---|---|
| Low severity, high probability | 0.1 | 0.9 | 0.09 | Low | 0.009 |
| Medium severity, medium probability | 0.3 | 0.3 | 0.09 | Medium | 0.027 |
| High severity, low probability | 0.9 | 0.1 | 0.09 | High | 0.081 |


This example could be fixed by normalizing the economic loss as follows:

$$\text{normalizedEconomicLoss} = 9 \times 10^{-6}\,(\text{economicLoss} - 1000) + 1$$
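A quick check of this rescaling, written out in Python (the function name is ours):

```python
# The normalization above: map losses of $1,000..$1,000,000 onto the
# range 1..10, so economic loss spans the same range as frequency.

def normalized_economic_loss(economic_loss):
    return 9e-6 * (economic_loss - 1000) + 1

print(normalized_economic_loss(1_000))      # 1.0, bottom of the range
print(normalized_economic_loss(1_000_000))  # ~10, top of the range
```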

Step 6 is to compute the estimated risk. For Exhibit 4 we computed the risk as the severity of the consequences times the frequency of occurrence. Other formulae have been used. Ben-Asher (2006) says that people do not equally weigh severity of the consequences and probability, as shown in Exhibit 6. For example, people buy collision insurance for their cars, but they seldom insure their tires against premature wear. Therefore, he computes risk as R = S² × P.

Some people have computed risk as Severity plus Probability of Failure minus the product of Severity and Probability of Failure (R = S + P − S×P) (Kerzner, 2002), but this formula does not perform satisfactorily. For example, if you set the severity to 1 (assuming a range of 0 to 1), then the probability of failure could be reduced from, say, 10^-1 to 10^-6 without changing the risk. Furthermore, if either the probability or the severity is zero, then the risk should be zero, but this equation does not produce this result; therefore, we do not use such an equation.
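These two objections are easy to verify numerically; a short sketch:

```python
# Numerically reproducing the two objections in the text to
# R = S + P - S*P (severity and probability each on a 0-to-1 scale).

def risk_kerzner(s, p):
    return s + p - s * p

# Objection 1: with severity fixed at 1, risk ignores probability.
print(risk_kerzner(1.0, 1e-1))  # 1.0
print(risk_kerzner(1.0, 1e-6))  # 1.0 (unchanged)

# Objection 2: zero probability should mean zero risk, but does not.
print(risk_kerzner(0.5, 0.0))   # 0.5, not 0
```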

Carl Burgher (personal communication), a UMR project manager, uses Exposure instead of Probability; that is, Risk = Severity × Exposure. The value for exposure should estimate the subject's likely endangerment to the risk. Exposure does not have Probability's time-discounting errors, such as hyperbolic discounting. It would be hard to calculate the probability that a particular person would get flu this season, but it would be easy to estimate that a first-grade teacher would have a greater exposure to viruses than a college professor would.

The Failure Modes and Effects Analysis (FMEA) technique also includes the difficulty of detection in the product (Carbone and Tippett, 2004), so that R = S × P × DD. In some cases this is useful, but we will not use it in this article.

All of the following equations have been used in published literature to compute risk:

Risk = Severity of Consequences × Frequency of Occurrence
Risk = Severity of Consequences × Likelihood of Occurrence
Risk = Severity of Consequences × Estimated Probability
Risk = Severity² × Probability
Risk = (Impact + Likelihood) / 2
Risk = Severity + Probability − (Severity × Probability)
Risk = Severity × Probability × Difficulty of Detection
Risk = Severity × Exposure

Each of these equations is good in certain situations, but for the reasons given above, we will only use the top two equations in this article.

Step 7 of the algorithm prioritizes the risks in order to show which are the most important. Often this process is merely that of selecting the risks with the largest values and pointing them out to the decision maker. Later, in the Project Risk Management section of this article, we will show a graphical technique for highlighting the most important risks. Also, there is a general-purpose technique for prioritizing things other than risks: things like goals, customer needs, capabilities, risks, directives, initiatives, issues, activities, requirements, technical performance measures, features, functions, and value engineering activities (Botta and Bahill, 2007).

Discussion of the Performance Risks

The most important failure mode in this Pinewood Derby risk analysis is Human mistakes in placing cars in the wrong lanes. In a previous risk analysis of our Pinewood Derby (Chapman, Bahill, and Wymore, 1992), we found that in order to satisfy our customer, we had to pay the most attention to Mistakes in judging and recording and Lane biases; therefore, we designed our system to pay special attention to these failure modes. We designed a computerized judging and recording system that had a very low failure frequency; indeed this failure mode is the least likely one shown in Exhibit 4. Furthermore, we designed a round robin tournament where each car races twice in each lane; therefore, lane biases had no effect. When the effect of a failure mode was eliminated, we set the severity to 0. We did not remove the failure mode, because we wanted to show that it was considered; therefore, the failure modes that were very important in our early risk analysis were not important in the risk analysis of this article. In our redesigned derby, humans were the most important element.

Other Columns

A failure modes and effects analysis should include a column indicating who should do what in response to each failure mode. This was not included in this example because all risks were managed by the same person. A root-cause analysis would have included "So what?" columns. For example, the failure mode Collisions between cars has the potential effect of Heat must be re-run. After this we could ask, "So what?" which might produce the response, "This takes additional time." We could ask again, "So what?" producing, "If there were many re-runs the whole Pinewood Derby schedule might slip," etc. Other columns could also be used to document what and when corrective actions were taken.

Risk communication is an important part of a risk analysis. What should be communicated, and to whom? In general, the risks are communicated to the program manager along with proposals for mitigating these risks. In this case, the Pinewood Derby Marshal served as the program manager, so he was told that Placing cars in the wrong lanes was the most significant risk. Then he was shown that the race schedules have lane 1 on the left and lane 3 on the right. This matches the perspective of the finish line judges (lane 1 is on their left and lane 3 is on their right), but it is the opposite for the starter at the top of the track; therefore, we suggested printing regular schedules for finish line judges and printing mirror image schedules in another color for use by the starter.

Graphical Presentation

Risk is often presented graphically with frequency of occurrence on the ordinate and severity of consequences on the abscissa. Each of the axes is linear with a range of 0 to 1, as in Exhibit 7a. Risks in the upper right corner are the most serious and should be handled first. The problem with this technique is that all of the risks in Exhibit 4 would be squashed onto the x-axis, because they all have frequencies at or below 10^-1 (except for our anomalous lane bias). If some action resulted in a reduction of frequency of occurrence from 10^-2 to 10^-4, we want the estimated risk to change, and in these graphs, it would not change. Using logarithmic scales as in Exhibit 7b would help, and indeed this may be the best solution if quantitative data are available.
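A sketch of these two presentations, using matplotlib; the axis ranges and the plotted iso-risk contours follow Exhibits 7a and 7b, and the data points themselves are omitted:

```python
# Iso-risk curves xy = c on linear versus log-log axes. On linear axes
# the low-frequency risks are squashed onto the x-axis; on log-log
# axes the curves xy = c become straight lines and decades are visible.

import numpy as np
import matplotlib.pyplot as plt

fig, (lin, log) = plt.subplots(1, 2, figsize=(9, 4))
x = np.logspace(-4, 0, 200)           # severity of consequences
for c in (0.001, 0.01, 0.1, 0.6):     # iso-risk contours
    lin.plot(x, np.clip(c / x, 0, 1), label=f"xy = {c}")
    log.loglog(x, c / x, label=f"xy = {c}")
lin.set(xlim=(0, 1), ylim=(0, 1), xlabel="Severity", ylabel="Frequency")
log.set(xlabel="Severity", ylabel="Frequency")
lin.legend()
plt.show()
```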



Schedule Risk

Next, in Exhibit 8, the schedule risk is evaluated. The likelihood that all parts of the system functioned correctly so that any given heat could be run was determined. Typically, in each Pinewood Derby, about 250 heats (races) were run in a little over four hours, for an average of one heat per minute. Data were collected over four years, so we calculated how many minutes of failure should be expected for each 1000 minutes of racing.

Rationale for the Schedule Risk Frequencies

Schedule risk frequencies were computed on a failures-per-minute basis. Because a heat was run each minute, this is the same as the frequency of delaying an individual heat. Many other criteria could have been used, such as: Did the Derby start on time? Did it end on time? Did each Divisional Contest start on time? Did each end on time? The following explains how the schedule risk frequencies were computed.

Loss of electric power. During the 14 years for which we have data, this area of Tucson has lost electric power three times (once when lightning hit a transformer, once when a car hit a pole, and once when the Western Power Grid went down), for a total outage of about three hours. So for any given 1000-minute interval, the power would be expected to be out on average 0.02 minutes. This means that there was a 2% chance of delaying one race during these four years.

Personal computer hardware failure. During the four years of Derby data collection, there were three personal computer hardware failures. It took a total of 45 hours to fix it or get a replacement; therefore, there was one minute of failure per 1000 minutes of operation. This does not count periods of upgrading hardware or software.

Server failure. Schedules were printed weeks in advance, so server failure was unlikely to delay a race: 0.1 minutes of failure per 1000 minutes.

Failure of commercial software. 0.01 minutes of failure per 1000 minutes. This failure frequency is low, because the programs were Unix and DOS-based.

Failure of custom software. 0.05 minutes of failure per 1000 minutes.

Software interface failure. In four years, there was one interface failure: it took an hour to fix, hence 0.02 minutes of failure per 1000 minutes.

Bad weather. In 14 years, Tucson has had one afternoon where the temperature dropped 30 degrees in two hours. This caused the NiCad batteries to lose power, and the race was shifted to a manual, paper-based system. The switch took one hour, yielding 0.008 minutes of failure per 1000 minutes.

Exhibit 7b. Log-log Plot of xy = 0.001, xy = 0.01, xy = 0.1 and xy = 0.6, With Normalized Data of Exhibit 4

Exhibit 8. Schedule Risk Matrix for a Pinewood Derby

| Failure Mode | Potential Effects | Frequency (failures per 1000 minutes) | Severity | Risk |
|---|---|---|---|---|
| Loss of electric power | Delay races until electricity is restored | 0.02 | 100 | 2 |
| PC hardware failure | Cannot compute, cancel the Derby | 1 | 10000 | 10000 |
| Server failure | No schedules | 0.1 | 100 | 10 |
| Failure of commercial software | Cannot compute, cancel the Derby | 0.01 | 10000 | 100 |
| Failure of custom software | Cannot compute winners | 0.05 | 20 | 1 |
| Software interface failure | Data is lost and races must be re-run | 0.02 | 10 | 0.2 |
| Bad weather | Batteries lose power, cannot determine winners | 0.008 | 1000 | 8 |
| Forgetting equipment | Lose an hour of setup time | 0.01 | 100 | 1 |

Exhibit 7a. Linear Plot of xy = 0.001, xy = 0.01, xy = 0.1 and xy = 0.6, With Normalized Data of Exhibit 4


Forgetting equipment. Over these 14 years, on one road trip, equipment was forgotten and a trip back to the lab was needed. This caused a delay of a little over one hour, yielding 0.01 minutes of failure per 1000 minutes.

When assigning the severity values we made evaluations such as, Forgetting our equipment 100 times would be equally painful as Hardware failures that would cause us to cancel a Derby.

The probability that this system would be operating mistake-free for any given heat is:

$$(1 - 0.00002)(1 - 0.001)(1 - 0.0001)\cdots = 0.99878$$

This is good reliability. It resulted from doing studies like these, finding the weak link, and redesigning the system.
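The reliability product quoted above can be reproduced directly from the frequency column of Exhibit 8:

```python
# Mistake-free-heat reliability from Exhibit 8 (failures per 1000
# minutes, one heat per minute).

failures_per_1000_min = [0.02, 1, 0.1, 0.01, 0.05, 0.02, 0.008, 0.01]

reliability = 1.0
for f in failures_per_1000_min:
    reliability *= (1 - f / 1000)  # probability this mode does not occur

print(round(reliability, 5))  # 0.99878
```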

As can be seen in Exhibit 8, the present weak link is Personal computer hardware failure; therefore, when we put on a Pinewood Derby we keep a spare computer in the car. Actually, we are even more paranoid than this. The last time we put on a Pinewood Derby, we designed it with a round robin (best time) with electronic judging, but we also designed a backup system that required no electricity: it used a round robin (point assignment) with human judging. All of the equipment and forms for the backup system were in the car.

Failure modes and effects analyses usually do not include acts of nature, like our Bad weather category. We have included it in our analysis, because we think that if the process is designed with these items in mind, then although we cannot prevent them, the system can better accommodate such failures.

Bahill and Karnavas (2000) also did a cost and safety risk analysis for the Pinewood Derby. That analysis is not included in the present article.

Project Risk Management

Most of this case study has been about analysis of performance and schedule risks; however, project managers are more often worried about project risk, which has been defined as "the potential of an adverse condition occurring on a project which will cause the project to not meet expectations" (Georgetown University, 2008).

For the Pinewood Derby, some project risks are (1) Failure to compute schedules, (2) School denies use of facilities, (3) Boy Scouts of America (BSA) stops supporting Pinewood Derbies, (4) Parents or children lose interest, and (5) Lawsuits. These are presented in Exhibit 9. Failure to compute schedules has a large likelihood value, because in our first Pinewood Derby we were not able to compute schedules that met the original requirements, so we had to change the requirements or else run a single elimination tournament, which does not require schedules. Failure to compute schedules is an example of a low-level schedule risk that was important enough to be elevated to a project risk. If we have data for failure frequency, we use the term frequency; otherwise, if we are guessing the future, we use the term likelihood.

The School denies use of facilities failure mode includes the school demanding an exorbitant fee, having another big school event scheduled for the same day, and the school not giving access to electric power outlets.

Exhibit 10. Project Risk Chart for a Pinewood Derby

Exhibit 9. Project Risk Matrix for a Pinewood Derby

| Id | Failure Mode | Potential Effects | Likelihood Value | Severity Value | Risk | Assigned to |
|---|---|---|---|---|---|---|
| 1 | Failure to compute schedules | Run a single elimination tournament | 0.8 | 0.4 | 0.32 | Bill |
| 2 | School denies use of facilities | Run the Derby in someone's backyard | 0.7 | 0.9 | 0.63 | Terry |
| 3 | BSA stops supporting Derbies | Buy commercial car kits | 0.1 | 0.1 | 0.01 | Harry |
| 4 | Parents or scouts lose interest | Cancel the Derby | 0.1 | 1.0 | 0.10 | Harry |
| 5 | Lawsuits | Buy insurance | 0.9 | 0.9 | 0.81 | Terry |


We also considered unavailability of equipment, but this is not a big risk because the Boy Scout pack owns everything we need except for a PC. We considered loss of knowledge, but our system is well documented. We considered loss of key people, but we have many qualified parents.

For project risk, we do not have accurate numbers, so 0 to 1 scales are appropriate. The likelihood values are not probabilities; rather, they are relative measures indicating likelihood: the school denying use of their facilities is considered more likely than the Boy Scouts of America stopping its support of Pinewood Derbies. We are not, however, saying that seven out of ten times we expect the school to turn down our request. Id is the identification number that will be used in Exhibits 10 and 12. The failure modes are assigned to Bill Karnavas, Terry Bahill, and Harry Williams, the Cub Master. Exhibit 9 is analogous to a risk register.
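Since Exhibit 9 is analogous to a risk register, here is a minimal register sketch in Python; the data are from the exhibit, while the structure and names are ours:

```python
# A tiny risk register: relative likelihood times severity, sorted so
# the largest risks surface first. Data are the five project risks of
# Exhibit 9.

register = [
    # (id, failure mode, likelihood, severity, assigned to)
    (1, "Failure to compute schedules", 0.8, 0.4, "Bill"),
    (2, "School denies use of facilities", 0.7, 0.9, "Terry"),
    (3, "BSA stops supporting Derbies", 0.1, 0.1, "Harry"),
    (4, "Parents or scouts lose interest", 0.1, 1.0, "Harry"),
    (5, "Lawsuits", 0.9, 0.9, "Terry"),
]

for rid, mode, likelihood, severity, owner in sorted(
        register, key=lambda r: r[2] * r[3], reverse=True):
    print(f"{rid}: {mode:35s} risk={likelihood * severity:.2f} ({owner})")
```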

In Exhibit 10, these data are put into a risk management tool that is commonly used in industry, namely a plot of relative likelihood versus severity of failure. In such plots, the ordinate is often labeled probability or potential probability, but they are not probabilities. Uncertainty and unknown unknowns prevent calculation or even estimation of real probabilities. If this scale were probability, then any item with a probability greater than, say, 0.6 would not be a risk; it would be a mandate for a design change. We think the best label for this axis is relative likelihood. The word relative emphasizes that it is the relationships between (or ranking of) risks that is being illustrated. The word likelihood is not used in a mathematical sense, but rather in its dictionary sense, to indicate the risks that are most likely. These relative likelihood estimates are typically very conservative. They are derived with the following attitude: If we built the system today, with lofty capabilities, without improving the design, doing nothing about this risk item, and if the worst possible circumstances occurred, then what is the possibility that this risk item will harm us? Initially, some risk items are expected to be in the red zone (the upper right corner); otherwise, program managers will think that the designers are not trying hard enough to get more performance for less cost. The relative likelihood scale is not linear: a risk with a relative likelihood value of 0.8 is not considered twice as probable as one with a value of 0.4.

The description just given is the way this tool is actually used in industry and government (British Cabinet Office, 2008). Statisticians are appalled, because of its use of ordinal numbers instead of cardinal numbers. Often, qualitative scales are given to help assign values. The numbers given in Exhibit 11 are typical. Clearly, these are not cardinal numbers.

In Exhibit 10, risk number 5, Lawsuits, is the most important risk; therefore, we buy insurance to transfer this risk to an insurance company. Risk number 2, School denies use of facilities, is the next most important risk. We should spend a lot of time making sure it does not happen and preparing backup plans. The third most important risk is number 1, Failure to compute schedules, so we should keep an eye on this. The curves should be rectangular hyperbolas, not circles, because they are iso-risk contour lines for severity times likelihood.

The most important project risks are put in the project risk register and are analyzed monthly and at all design reviews. For this Pinewood Derby example, School denies use of facilities is an important risk, so we tried to mitigate it. We talked to the school principal and also found an alternate site. (In one year our pack ran the derby at the Northwest Emergency Center.) This ameliorated the risk, and at the next review we presented the risk plot of Exhibit 12. The forte of this risk management tool is showing changes in risk items.

Exhibit 11a. Qualitative Scale for Frequency of Failure

| How often will this failure occur? | Your processes... | Value |
|---|---|---|
| Almost never | will effectively avoid or mitigate this risk based on standard practices. | 0.2 |
| Occasionally | have usually mitigated this type of risk with minimal oversight in similar cases. | 0.4 |
| Sometimes | may mitigate this risk, but workarounds will probably be required. | 0.6 |
| Frequently | cannot mitigate this risk, but a different approach might. | 0.8 |
| Almost always | cannot mitigate this risk; no known processes or workarounds are available. | 1.0 |

Exhibit 11b. Qualitative Scale for Severity of Schedule Slippage

| If the risk event happens, what would be the severity of the impact on the schedule? | Value |
|---|---|
| Minimal impact; slight changes could be compensated by available program slack | 0.2 |
| Additional activities required; you would be able to meet key dates | 0.4 |
| Minor schedule slip; you would miss a minor milestone | 0.6 |
| Program critical path affected | 0.8 |
| You would fail to achieve a key program milestone | 1.0 |


Exhibit 12. Pinewood Derby Project Risk Plot After Risk Mitigation

In most risk analysis plots that we have seen in industry, most of the risks were in the center of the plot. Most of the risks that would have been in the upper-right corner had already been ameliorated, whereas most of the risks in the lower-left corner were considered unworthy of the tracking effort. This distribution could also be caused by a common human decision mistake. People overestimate events with low probabilities (e.g., being killed by a terrorist, dying in an airplane crash, or winning the state lottery) and they underestimate high probability events (e.g., adults dying of cardiovascular disease) (Tversky and Kahneman, 1992).

Discussion

The highest priority risks are often tracked with technical performance measures (TPMs) (Oakes, Botta, and Bahill, 2006). TPMs are critical technical parameters that a project monitors to ensure that the technical objectives of a product will be realized. Typically, TPMs have planned values at defined time increments against which the actual values are plotted. Monitoring TPMs allows trend detection and correction, and helps identify possible performance problems prior to incurring significant cost or schedule overruns. The purpose of TPMs is to (1) provide visibility of actual versus planned performance, (2) provide early detection or prediction of problems requiring management attention, (3) support assessment of the impact of proposed changes, and (4) reduce risk.

Exhibit 13. Hydraulic Model for Risk of a Commercial Airplane Flight



Preemptive system design can reduce risk. Preemptive design includes these practices: risk management, rapid prototyping, failure modes and effects analysis, sensitivity analysis, identification of cost drivers, reliability analysis, anticipating problems, quality engineering, variation reduction, complexity analysis, 6 Sigma, poka-yoke (translated as fool proofing), design for manufacturability, design for test, design for assembly, and design for cost (Bahill and Botta, 2008). The Pinewood Derby design used the first seven of these practices.

The project risk list should not be synonymous with the requirements. (1) Requirements have attributes such as priority, stability, and difficulty; however, the difficulty of satisfying a requirement should not be confounded with project risk. For example, one of the original requirements for the Pinewood Derby was that in a 12-car, 6-heat, round-robin race every car was supposed to race every other car exactly once (Bahill and Karnavas, 2000). This requirement was impossible to satisfy and it had to be relaxed. A performance risk notes that there is a chance that the requirement cannot be met. When it is confirmed that the requirement cannot be met, it becomes a problem, not a risk; therefore, this problem was handled with requirements management, not with risk management. (2) Risks identify unknown effects. If something is spelled out as a requirement, then it is not an unknown. (3) Risks also arise from uncertainty in the environment and external entities; however, a requirement cannot apply to things outside the boundary of its system. (4) Requirements are orders of magnitude more numerous than risks. (5) Risks should be related to the mission, high-level goals, or capabilities, and not to numerous low-level requirements.

Exhibit 13 shows a summarizing example of risk management: the hydraulic model for risk of a commercial airplane flight. The total risk is the sum of latent hazards (e.g., poor quality maintenance, poor employee morale) and direct hazards (e.g., lightning, rain, microbursts). The risk is largest during takeoff and landing. The dispatcher's job is that of balancing risk with resources.

Here is a scenario for risk management in a commercial aircraft situation. You are the dispatcher in Salt Lake City. It's a clear morning. Temperature is in the low 30s F. A dozen skiers are scheduled to fly to Jackson Hole, but there is a half inch of ice on their runway. How would you manage this risk? Possible risk management actions are:

Transfer: divert to Idaho Falls and bus the skiers, thus transferring the risk to Greyhound Bus Lines.
Eliminate: use a helicopter.
Accept: send the plane as scheduled.
Mitigate:
  Request removal of ice from runway,
  Change runways (orthogonal direction),
  Change equipment (different type of aircraft),
  Change crew (switch the lander from co-pilot to pilot).

These actions produce the acronym TEAM. There is, however, an additional complication: the icy runway hazard object interacts with temperature and cross winds. The hazard is amplified by temperatures just above freezing and it is amplified by cross winds. If there are cross winds, the hazard might be mitigated by using a runway co-linear with the wind. How about ignoring the risk? This is not acceptable: there is no I in TEAM.

The action eliminate includes reducing the likelihood of the risk event; however, in this case we cannot do much to reduce the likelihood of occurrence. Mother Nature, and therefore the temperature at Jackson Hole, is not under our control.

This example shows that risk changes with time, that risks are traded off with available resources, and the common ways of managing risks.

In this article, we treated risk as severity of consequences times frequency of occurrence. There are many other factors that humans use when assessing risk. In particular, the following are severity amplifiers: lack of control, lack of choice, lack of trust, lack of warning, lack of understanding, manmade, newness, dreadfulness, personalization, recallability, and immediacy. Lack of control: a man may be less afraid driving his car up a steep mountain road at 55 mph than having his wife drive him to school at 35 mph. Lack of choice: we are more afraid of risks that are imposed on us than those we take by choice. Lack of trust: we are less afraid listening to the head of the Centers for Disease Control explain anthrax than listening to a politician explain it. Lack of warning: people dread earthquakes more than hurricanes, because hurricanes give hours or days of warning. Lack of understanding: we are more afraid of ionizing radiation from a nuclear reactor than of infrared radiation from the sun. Manmade: we are more afraid of nuclear power accidents than solar radiation. Newness: we are more afraid when a new disease (e.g., swine flu, SARS, or bird flu) first shows up in our area than after it has been around a few years. Dreadfulness: we are more afraid of dying in an airplane crash than of dying from heart disease. Personalization: a risk threatening you is worse than that same risk threatening someone else. Recallability: we are more afraid of cancer if a friend has recently died of cancer. We are more afraid of traffic accidents if we have just observed one. Immediacy: a famous astrophysicist was explaining a model for the life cycle of the universe. He said, "In a billion years our sun will run out of fuel and the earth will become a frozen rock." A man who was slightly dozing awoke suddenly, jumped up, and excitedly exclaimed, "WHAT did you just say?" The astrophysicist repeated, "In a billion years our sun will run out of fuel and the earth will become a frozen rock." With a sigh of relief, the disturbed man said, "Oh thank God. I thought you said in a million years."

Errors in Numerical Values

The data used in a risk analysis have different degrees of uncertainty: some of the values are known with precision, others are wild guesses; however, in addition to uncertainty, all data have opportunity for errors. For example, Ord, Hillerbrand, and Sandberg (2008) calculated that one in a thousand papers published in peer-reviewed journals have been or should have been retracted, because they contained serious errors.

There are three types of errors that can be made in assessing risk: theory, model, and calculation (Ord, Hillerbrand, and Sandberg, 2008). (1) Using Newtonian mechanics instead of Einstein's relativity theory to describe merging black holes would be an error of using the wrong theory. (2) A modeling error was the direct cause of the failure of the Mars Climate Orbiter (NASA, 2000). The prime contractor, Lockheed Martin, used English units onboard the spacecraft, whereas JPL used metric units in the ground-based model. (3) Errors in calculating values are common. Several hospital studies have shown that one to two percent of drug administrations were the wrong dosage. A study of papers from Nature and the British Medical Journal found that about 11% of the statistical results were wrong (Ord, Hillerbrand, and Sandberg, 2008).


To validate numerical values in a risk analysis, you should analyze the possibility of errors in choosing the theory, errors in making the model, and errors in estimating and computing numerical values.

In cases where risks can be reduced, the level of effort is driven by a cost/benefit analysis. If we assign expected costs to the risks, then we can see our greatest vulnerabilities, but sheer magnitude is not the whole story. We might also look at the sensitivity of total risk dollars to risk handling dollars. By calculating the cost of implementing a particular strategy versus the reduced expected cost of risk, we can make good decisions. In the Pinewood Derby example of this article, we can see that spending hundreds of dollars on equipment boxes to protect computers that are not very likely to get broken gives a very poor cost reduction to spending ratio. If instead we spend a couple dollars on duct tape for the extension cords to avoid human injury with large potential cost risk, the net savings is great. This also works comparing the cost of insurance against the costs of eliminating a risk. For instance, we may decide that the cost risk reduction of buying insurance does not compare to spending the same money for nets or pads to protect derby cars around elevated areas of track.
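This comparison can be written as a one-line expected-value calculation; the dollar figures below are illustrative assumptions, not numbers from the article:

```python
# The mitigation benefit-versus-cost comparison: what matters is the
# expected risk reduction relative to the money spent.

def net_benefit(expected_cost_before, expected_cost_after, mitigation_cost):
    return (expected_cost_before - expected_cost_after) - mitigation_cost

# Equipment boxes for computers that rarely break: a poor ratio.
print(net_benefit(expected_cost_before=50, expected_cost_after=10,
                  mitigation_cost=300))   # -260: not worth it

# Duct tape over extension cords against a costly injury: a great ratio.
print(net_benefit(expected_cost_before=500, expected_cost_after=50,
                  mitigation_cost=5))     # 445: clearly worth it
```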

History teaches us that managers have frequent and common shortcomings when addressing risks. (1) They underestimate the probability of bad things happening. (Prior to the Challenger accident, NASA officials put the risk of a shuttle accident at 1 in 100,000; afterwards Richard Feynman computed the odds at 1 in 100.) (2) They underestimate the consequence and chaos caused by the bad things when they do happen. (3) They ignore or fail to recognize that the occurrence of certain risks can trigger the occurrence of other risks. (4) They ignore or fail to recognize that many risks are not independent and may be coupled with other risks. (5) They inadequately investigate unknown unknowns. Experience teaches that no matter how competent and thorough our technical project team, and no matter how competent and thorough our technical peer review team, Mother Nature still provides a few surprises.

Future Work

We are investigating the effects of interdependencies and interactions in risk analyses and tradeoff studies. With interdependencies, a change in some external factor can cause changes in the frequencies or severities of two or more failure modes. For example, in the Pinewood Derby, switching the electronics from DC battery power to AC electric power decreased the severity of Bad weather and increased the severity of Loss of electric power. As a second example, moving the race venue from Tucson to a remote desert community increased the probability of Loss of electric power and increased the severity of Forgetting equipment. With interactions, a change in the frequency or severity of one failure mode might cause a change in the frequencies or severities of other failure modes. For example, decreasing the frequency of occurrence of Lane bias decreased the severity of Placing cars in the wrong lanes.

Summary

The risk assigned to each possible cause is determined by the risk analysis; however, this analysis is not static. Each time the system is changed or a relevant test is performed, the risk analysis needs to be reviewed. As risks are eliminated or reduced, other risks will increase in relative importance. As a system is deployed, the actual risks will become better known. The Pinewood Derby analysis is presented in this article after a dozen years of data acquisition and is, therefore, far better than our initial analysis. In our first attempts, our biggest risks were believed to be equipment failures. Inclement weather was not actually considered (we were in Tucson, after all). And as the system evolved with new instrumentation, the risks changed. The first derbies were not computer based, so extension cord trip hazards or computer failures were not relevant. As the system evolved, the risks evolved with it and so did the risk analysis.

Risk managers should manage the identified risks by regularly reviewing and re-evaluating them, because risk identification and data gathering is not an end in itself. Also, a word of caution: more than one project has made the costly mistake of being too comfortable with their risk ranking, which led to ignoring those areas they initially called low risk.

Risk analysis is not an exact science: there are many ways to do a risk analysis, the risks change with time, and there is uncertainty in the data. In this article, we showed the simplest and most common industry risk analysis technique, namely plotting the frequency of occurrence versus the severity of consequences. We showed mistakes that are often made when using this technique and ways to ameliorate them. The purpose of risk analysis is to give the risk manager insight into the project and to communicate his or her understanding to others. The definition of risk was stated in the late 17th century by Arnauld and Nicole (1996): "In order to decide what we ought to do to obtain some good or avoid some harm, it is necessary to consider not only the good or harm in itself, but also the probability that it will or will not occur." Since then, the engineering literature has embraced this definition, stating that risk is the product of frequency of occurrence and severity of consequences.

The object of risk analysis is not just to assess the likelihood of completing a properly functioning project on schedule and cost, but to give managers insight into where to invest resources to reduce risk to an acceptable level, how much time they can take out of a schedule without increasing risk, how much they can add before the gain is marginal, etc. It is, or should be, a decision tool. Risk analysis is an ongoing process that must be redone regularly or whenever significant changes occur. It is hard to overemphasize that point. It is where many programs falter. Most programs do a risk analysis, fewer build a good risk reduction plan, and fewer still keep it current. Many managers believe they are so up to date on their programs that the time spent is not worth the return. Too bad, because it is one of the most effective ways to show the customer you have your arms around his problem, and life really gets ugly if you cannot show how you tried to anticipate and solve problems, because they tend to happen!

Acknowledgements

We thank George Dolan, Bob Sklar, Gary Lingle, Bruce Gissing, Al Chin, and Frank Dean for comments and discussions about this article. We thank John Michel for pointing out the Arnauld reference.

References

Arnauld, Antoine, and Pierre Nicole, Logic, or, the Art of Thinking: Containing, Besides Common Rules, Several New Observations Appropriate for Forming Judgment, 5th Ed., 1662, translated from French in 1996 by Jill Vance Buroker, Cambridge University Press.


Bahill, A. Terry, and William J. Karnavas, "Risk Analysis of a Pinewood Derby: A Case Study," Systems Engineering, 3:3 (2000), pp. 143-155.

Bahill, A. Terry, and Rick Botta, "Fundamental Principles of Good System Design," Engineering Management Journal, submitted for the special issue on Systems of Systems (2008).

Battalio, Raymond C., John H. Kagel, and Don N. MacDonald, "Animals' Choices over Uncertain Outcomes: Some Initial Experimental Results," The American Economic Review, 75:4 (1985), pp. 597-613.

Ben-Asher, Joseph, "Development Program Risk Assessment Based on Utility Theory," Proceedings of the 16th Annual International Symposium of INCOSE (2006).

Bernstein, Peter L., Against the Gods: The Remarkable Story of Risk, John Wiley and Sons (1996).

Blanchard, Benjamin S., and Wolter J. Fabrycky, Systems Engineering and Analysis, Prentice-Hall (2006).

Botta, Rick, and A. Terry Bahill, "A Prioritization Process," Engineering Management Journal, 19:3 (2007), pp. 31-38.

British Cabinet Office, National Risk Register (2008), http://www.cabinetoffice.gov.uk/reports/national_risk_register.aspx, accessed September 2008.

Buede, Dennis M., The Engineering Design of Systems: Models and Methods, John Wiley and Sons, Inc. (2000).

Carbone, Thomas A., and Donald D. Tippett, "Project Risk Management: Using the Project Risk FMEA," Engineering Management Journal, 16:4 (2004), pp. 28-35.

Chapman, William L., A. Terry Bahill, and A. Wayne Wymore, Engineering Modeling and Design, CRC Press Inc. (1992).

Clausen, Don, and Daniel D. Frey, "Improving System Reliability by Failure-Mode Avoidance Including Four Concept Design Strategies," Systems Engineering, 8:3 (2005), pp. 245-261.

Defense Systems Management College, Risk Management: Concepts and Guidance, Fort Belvoir, VA: DSMC (1986).

Department of Defense, Military Standard 882B (1984).

Georgetown University, University Information Services, Data Warehouse: Glossary, http://uis.georgetown.edu/departments/eets/dw/GLOSSARY0816.html, accessed September 2008.

Gigerenzer, Gerd, "How to Make Cognitive Illusions Disappear: Beyond Heuristics and Biases," European Review of Social Psychology, 2:4 (1991), pp. 83-115.

Gigerenzer, Gerd, Reckoning the Risk, Penguin Books (2002).

Haimes, Yacov Y., "Risk Management," in Handbook of Systems Engineering and Management, Andrew P. Sage and W.B. Rouse (Eds.), John Wiley & Sons (1999), pp. 137-174.

Hsu, Ming, Meghana Bhatt, Ralph Adolphs, Daniel Tranel, and Colin P. Camerer, "Neural Networks Responding to Degrees of Uncertainty in Human Decision-Making," Science, 310 (2005), pp. 1680-1683.

Hussey, David E., "Portfolio Analysis: Practical Experience with Directional Policy Matrix," Journal for Long Range Planning, 11:4 (1978), pp. 2-8.

Joksimovic, V., W.J. Houghton, and D.E. Emon, "HTGR Risk Assessment Study," in Jerry B. Fussell and G.R. Burdick (Eds.), Nuclear Systems Reliability Engineering and Risk Assessment, Society for Industrial and Applied Mathematics (1977), pp. 167-190.

Kaplan, Stanley, and B. John Garrick, "On the Quantitative Definition of Risk," Risk Analysis, 1:1 (1981), pp. 11-27.

Kerzner, Harold, Project Management: A Systems Approach to Planning, Scheduling and Controlling, 8th Ed., John Wiley & Sons (2002).

Kirkwood, Craig W., "Decision Analysis," in Handbook of Systems Engineering and Management, Andrew P. Sage and W.B. Rouse (Eds.), John Wiley & Sons (1999), pp. 1119-1145.

NASA, Mars Program Independent Assessment Team Summary Report, March 14, 2000, ftp://ftp.hq.nasa.gov/pub/pao/reports/2000/2000_mpiat_summary.pdf.

Ord, Toby, Rafaela Hillerbrand, and Anders Sandberg, "Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes," http://www.arXiv.org/abs/0810.5515 (2008).

De Martino, Benedetto, Dharshan Kumaran, Ben Seymour, and Raymond J. Dolan, "Frames, Biases, and Rational Decision-Making in the Human Brain," Science, 313:5787 (2006), pp. 684-687.

Pascal, Blaise, "Fermat and Pascal on Probability" (1654), www.york.ac.uk/depts/maths/histstat/pascal.pdf, accessed October 27, 2008.

Rasmussen, Norman C., "Application of Probabilistic Risk Assessment Techniques," Annual Review of Energy, 6 (1981), pp. 123-128.

Real, Leslie A., "Uncertainty and Pollinator-Plant Interactions: The Foraging Behavior of Bees and Wasps on Artificial Flowers," Ecology, 62 (1981), pp. 20-26.

Shafir, Sharoni, Tom A. Waite, and Brian H. Smith, "Context-Dependent Violations of Rational Choice in Honeybees (Apis Mellifera) and Gray Jays (Perisoreus Canadensis)," Behavioral Ecology and Sociobiology, 51 (2002), pp. 180-187.

Tversky, Amos, and Daniel Kahneman, "Advances in Prospect Theory: Cumulative Representation of Uncertainty," Journal of Risk and Uncertainty, 5 (1992), pp. 297-323.

Whitman, Robert V., "Evaluating Calculated Risk in Geotechnical Engineering," Journal of Geotechnical Engineering, 110:2 (1984), pp. 145-188.

Witt, Robert C., "Pricing and Underwriting Risk in Automobile Insurance: A Probabilistic View," Journal of Risk and Insurance, 40:4 (1973), pp. 509-531.

About the Authors

Terry Bahill is a Professor of Systems Engineering at the University of Arizona in Tucson. He received his PhD in electrical engineering and computer science from the University of California, Berkeley, in 1975. Bahill has worked with BAE Systems in San Diego, Hughes Missile Systems in Tucson, Sandia Laboratories in Albuquerque, Lockheed Martin Tactical Defense Systems in Eagan, MN, Boeing Information, Space and Defense Systems in Kent, WA, Idaho National Engineering and Environmental Laboratory in Idaho Falls, and Raytheon Missile Systems in Tucson. For these companies he presented seminars on systems engineering, worked on system development teams, and helped them describe their systems engineering process. He holds a U.S. patent for the Bat Chooser, a system that computes the Ideal Bat Weight for individual baseball and softball batters. He received the Sandia National Laboratories Gold President's Quality Award. He is a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), of Raytheon, and of the International Council on Systems Engineering (INCOSE). He is the Founding Chair Emeritus of the INCOSE Fellows Selection Committee. Bahill is a registered professional engineer in California and Pennsylvania. His picture is in the Baseball Hall of Fame's exhibition "Baseball as America." You can view this picture at http://www.sie.arizona.edu/sysengr/.


Eric D. Smith earned a B.S. in Physics in 1994, an M.S. in Systems Engineering in 2003, and his Ph.D. in Systems and Industrial Engineering in 2006, from the University of Arizona, Tucson. His dissertation research lay at the interface of systems engineering, multi-criterion decision-making, judgment and decision-making, and cognitive science. He taught for two years in The Boeing Company's Systems Engineering Graduate Program at Missouri University of Science and Technology. He is currently an assistant professor in the Department of Industrial Engineering, in the Systems Engineering Program, at the University of Texas at El Paso. His interests include risk management, cognitive biases, and complex systems engineering.

Contact: A. Terry Bahill, PE, Systems and Industrial Engineering, University of Arizona, Tucson, AZ 85721-0020, phone: 520-621-6561, [email protected]

Exhibit A1. An Explanation of Three Bets: A, B, and C

Bet   p1    x1    p2    x2    μ     σ²    Risk, uncertainty
A     1.0   $10   --    --    $10   $0    None
B     0.5   $5    0.5   $15   $10   $25   Medium
C     0.5   $1    0.5   $19   $10   $81   High

Appendix: Ambiguity, Uncertainty and Hazards

In economics and in the psychology of decision-making, risk is defined as the variance of the expected value, the uncertainty. Exhibit A1 explains three bets: A, B, and C. The p's are the probabilities, the x's are the outcomes, μ is the mean, and σ² is the variance. This exhibit shows, for example, that half the time bet C would pay $1 and the other half of the time it would pay $19. Thus, this bet has an expected value of $10 and a variance of $81. This is a comparatively big variance, so the risk is said to be high. Most people prefer the A bet, the certain bet.
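Written out (a restatement of the numbers in Exhibit A1), the mean and variance of bet C are

μ_C = 0.5 × $1 + 0.5 × $19 = $10
σ²_C = 0.5 × (1 − 10)² + 0.5 × (19 − 10)² = 81.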

In choosing between alternatives that are identical with respect to quantity (expected value) and quality of reinforcement, but that differ with respect to probability of reinforcement, humans, rats (Battalio, Kagel, and MacDonald, 1985), bumblebees (Real, 1981), honeybees (Shafir, Waite, and Smith, 2002), and gray jays (Shafir, Waite, and Smith, 2002) prefer the alternative with the lower variance. To model this risk averseness across different situations, the coefficient of variability is often better than variance: Coefficient of variability = (Standard Deviation) / (Expected Value).
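To make the comparison concrete, here is a minimal sketch (ours, not the article's) that computes the mean, variance, and coefficient of variability for the three bets of Exhibit A1:

    # Sketch: statistics behind Exhibit A1.
    # Each bet is a list of (probability, payoff) pairs.
    import math

    bets = {
        "A": [(1.0, 10.0)],
        "B": [(0.5, 5.0), (0.5, 15.0)],
        "C": [(0.5, 1.0), (0.5, 19.0)],
    }

    for name, outcomes in bets.items():
        mean = sum(p * x for p, x in outcomes)                    # expected value
        variance = sum(p * (x - mean) ** 2 for p, x in outcomes)  # variance about the mean
        cv = math.sqrt(variance) / mean                           # coefficient of variability
        print(f"Bet {name}: mean=${mean:.0f}, variance={variance:.0f}, CV={cv:.2f}")

Bet C has the largest coefficient of variability (0.90, versus 0.50 for B and 0.00 for A), consistent with its "High" label in the exhibit.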

Engineers use the term risk to evaluate and manage bad things that could happen, hazards. Risk is defined as the severity of the consequences times the frequency of occurrence.
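As a minimal illustration of this definition (our sketch; the hazards and their severity and frequency scores below are hypothetical, not from the article):

    # Sketch: the engineers' measure, risk = severity x frequency of occurrence.
    # The hazards and scores are hypothetical examples.
    hazards = {
        "eating forest-picked mushrooms": (9, 2),  # (severity, frequency)
        "serving an unlabeled wine":      (2, 5),
    }
    for name, (severity, frequency) in hazards.items():
        print(f"{name}: risk = {severity * frequency}")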

To avoid confusion, in the rest of this section the engineers' risks will be called hazards, and decision theory's risks will be called uncertainty. We will now explain three important concepts in decision-making: ambiguity, uncertainty, and hazards.

People do not like to make decisions when there is ambiguity, uncertainty, or hazards, so, for the most part, they prefer alternatives that have low ambiguity, low uncertainty, and few hazards.

As we gain information, we progress from ambiguity to uncertainty. Ambiguity means we know very little about the situation. Uncertainty means we have learned enough about the system that we can estimate the mean and variance, although we definitely do not have to use those terms: in fact, engineers might say that variance is analogous to entropy.

A little while ago, a wildfire was heading toward Bahill's house. He packed his car with his valuables, but he did not have room to save everything, so he put his wines in the swimming pool. He put the dog in the car and drove off. When he came back, the house was burned to the ground, but the swimming pool survived; however, all of the labels had soaked off the wine bottles. Tonight he is giving a dinner party to celebrate their survival. He is serving mushrooms that he picked in the forest while he was waiting for the fire to pass. There may be some hazard here, because he is not a mushroom expert. Guests will drink some of his wine; therefore, there is some uncertainty here. You can be assured that none of his wines are bad, but some are much better than others. Finally, he tells you that his sauce for the mushrooms contains saffron and oyster sauce. This produces ambiguity, because you probably do not know what these ingredients taste like. How would you respond to each of these choices?

Hazard: Would you prefer his forest-picked mushrooms or portabella mushrooms from the grocery store?
Uncertainty: Would you prefer one of his unlabeled wines or a Robert Mondavi Napa Valley merlot?
Ambiguity: Would you prefer his saffron and oyster sauce or marinara sauce?

Decisions involving these three concepts are probably made in different parts of the brain. Hsu, Bhatt, Adolphs, Tranel, and Camerer (2005) used the Ellsberg paradox to explain different brain processing of ambiguity and uncertainty. They gave their subjects a deck of cards and told them it contained 10 red cards and 10 blue cards (the uncertain deck). Another deck had 20 red or blue cards, but the percentage of each was unknown (the ambiguous deck). The subjects could take their chances drawing a card from the uncertain deck: if the card were the color they predicted, they won $10; otherwise, they got nothing. They could just take $3 and quit. Most people picked a card. Then the subjects were offered the same bets with the ambiguous deck. Most people took the $3, avoiding the ambiguous decision. Both decks had the same expected value, $5, because the subjects picked the color. The paradox is that they accepted the uncertain deck but rejected the ambiguous deck. Hsu et al. recorded functional magnetic resonance images (fMRI) of the brain while their subjects made these decisions. While contemplating decisions about the uncertain deck, the dorsal striatum showed the most activity, and while contemplating decisions about the ambiguous deck, the amygdala and the orbitofrontal cortex showed the most activity.
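As a quick check on that expected value (our restatement of the numbers above, not an addition to the experiment): the subject names the color, so the chance of winning is one half with either deck, giving

EV = 0.5 × $10 + 0.5 × $0 = $5,

which exceeds the sure $3; expected value alone therefore cannot explain the different choices.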

In another fMRI study of ambiguous decisions, when subjects were given a choice of gaining $20 or spinning a gambling wheel, most chose to keep the $20. When they were given a choice of losing $30 or spinning the wheel, most chose to spin the wheel (Martino De, Kumaran, Seymour, and Dolan, 2006). The fMRI scans suggested that amygdala activity is an emotional signal that pushes subjects to keep sure money and gamble instead of taking a loss; however, not all subjects succumbed to this emotional signal. Those who overcame it had high levels of activity in the orbital and medial prefrontal cortex. De Martino speculates that the prefrontal cortex integrates emotional signals from the amygdala with other cognitive information: people who are more rational do not perceive emotion less, they just regulate it better.

Ambiguity, uncertainty, and hazards are three different things, and people prefer to avoid all three. The purpose of this section was to introduce the reader to other meanings of the word risk.


