Risk Calculus Final Draft

Upload: kim-woodruff

Post on 07-Apr-2018



  • 8/6/2019 Risk Calculus Final Draft

    1/58

Michigan 2009 Risk Calculus

    Risk Calculus Index

Must Ignore Low Probability 1AC .......... 3
Must Reject Multiple Internal Links 1AC .......... 4
Policy Paralysis 1AC .......... 5
Tyranny of Survival 1AC .......... 6
Extensions - Must ignore low probability Impacts .......... 7-9
Extensions - Must ignore multiple internal link Impacts .......... 11
Extend - Policy Paralysis .......... 12-13
Alternative Prioritize Uniqueness .......... 14
Alternative Expert Opinion .......... 15
Alternative Prioritize Probability .......... 16-17
Alternative Threshold Probability .......... 18
They Say Risk = Probability times impact = infinity .......... 19-21
They Say Insurance Principle .......... 22
They Say Magnitude is the Most Important .......... 23
They Say Presumption .......... 24
They Say Probability is Impossible to Calculate .......... 25-26
They Say Even Improbable Impacts sometimes Occur .......... 27
They Say Focusing on Risk Calculus Distracts Education .......... 28-29
No Nuclear War Impacts .......... 30
No Environmental Collapse .......... 31
No Biodiversity Extinction .......... 32
No Ozone Depletion Extinction .......... 33
No HIV/AIDS Extinction .......... 34
No Terror Attack .......... 35
No Bioterror Attack .......... 36-37
No Iran Impacts .......... 38-39
Only Nuclear Impacts are Existential .......... 40
Bostrom Indicts .......... 41
Extensions Nuclear Threat Rhetoric Bad .......... 42-47
They Say Ignore Low Probability .......... 48-53
They Say Tyranny of Survival .......... 54
They Say Resource Wars Improbable .......... 55-56
They Say Economic Collapse = Nuclear War Improbable .......... 57-58

    The Method Lab


    Must Ignore Low Probability 1AC

[ ] Mini-max arguments are flawed: they overemphasize pessimism, misinterpret probability, place too much value on novelty, and undemocratically distribute values

David Berube, Associate Professor of Speech Communication at the University of South Carolina, 2000 [Director of Debate, "Debunking Mini-max Reasoning: The Limits of Extended Causal Chains in Contest Debating," Contemporary Argumentation and Debate, Volume 21, Available Online at http://www.cedadebate.org/CAD/2000_berube.pdf, Accessed 04-05-2008, p. 64-69]

Vohra warned: "There are many inherent uncertainties in the quantitative assessment of accident probability. These uncertainties include lack of sufficient data, the basic limitations of the probabilistic methods used, and insufficient information about the physical phenomena . . . relating to the potential accident situation" (211). Why then, do we accept claims associated with these probability assessments? The answer lies in the seductiveness of the mini-max principle: Act to minimize the risk of maximum disaster. According to Kavka, under the mini-max principle, "benefits and probabilities are disregarded, and that option is considered best which promises the least bad (or most good) outcome" (46-47). This is similar to what Kavka called the disaster avoidance principle: "When choosing between potential disasters under two-dimensional uncertainty, it is rational to select the alternative that minimizes the probability of disaster occurrence" (50), and what Luce and Raiffa called the maximization-of-security-level theory (278-281). As a number of authors have noted, the mini-max principle is fraught with difficulties. I will recount four particular pitfalls in this article. First, mini-max reasoning is grounded in ultrapessimism, or "disregarding a relevant experiment regardless of its cost" (Parmigiani 250). "The mini-max principle is founded on ultra-pessimism, [in] that it demands that the actor assume the world to be in the worst possible state" (Savage, "Statistical Decisions" 63). Savage concluded: "The mini-max rule based on negative income is ultrapessimistic and can lead to the ignoring of even extensive evidence and hence is utterly untenable for statistics" (Foundations 200). Furthermore, Parmigiani found that "no form of the mini-max principle is generally superior to the other in guarding against ultrapessimism. . . . [I]t is not possible to concoct a standardization method that makes the mini-max principle safe from ultrapessimism" (243, 249). Second, mini-max reasoning is confounded by incorrect probability assessments. "Applying mini-max means ignoring the probabilities as various outcomes" (Finnis 221). One of the reasons for incorrect decisions is grounded in politics. Proponents of a mini-max claim may misrepresent the probabilities. "The group mini-max rule is also objectionable in some contexts, because, if one were to try to apply it in a real situation, the members of the group might well lie about their true probability judgments, in order to influence the decision generated by the mini-max rule in the direction each considers correct" (Savage, Foundations 175). This problem is worsened as proponents incorporate lay source material into their extended arguments. Several studies have noted that lay estimates of low probability hazards tend to be substantially higher than expert probability estimates. . . . Is it that people sensitive to risk consequences, and unwilling to accept the risk or risk management or both strategies, might systematically exaggerate the magnitude of consequences while those in the opposite camp might systematically underplay the consequential danger involved? This implies the hypothesis that acceptance is an a priori condition, and becomes a driver of likelihood and consequence assessments, at least in some instances, while threat probabilities become the key causal factor in acceptance in still other instances. (Nehnevajsa 522) The third fault with mini-max reasoning is that it is "flagrantly undemocratic. In particular, the influence of an opinion, under the group mini-max rule, is altogether independent of how many people in the group hold that opinion" (Savage, Foundations 175). In other words, singular experts make mini-max estimations. Quasi-experts or secondary experts make some of the most bizarre extended arguments. In addition, there is an elitist sense to the process. The reasoning of the "expert" is presumptive over the opinion of individuals who are less educated, less affluent, or even less white. What happens when the elite are wrong? The arrogance of elitism is hardly more evident in any other setting. Deference to authority is an important co-requisite of extended mini-max claims in contest debates. There is an insipid maxim associated with it: "Don't understand? Don't worry. We do the thinking so you won't have to!" This problem is amplified when an exceptional source in a mini-max argument cannot be corroborated. Making a decision based on a sole opinion grossly inflates the qualifications of the source to make the claim. Consider how this issue worsens as well when the source is nameless or institutional, such as a press service. The final pitfall of mini-max reasoning is that the persuasiveness of any such argument is a function of contingent variables, in particular, its novelty. Consider this simple illustration: A single large outcome appears to pose a greater risk than does the sum of multiple small outcomes. "It is always observed that society is risk averse with respect to a single event of large consequence as opposed to several small events giving the same total number of fatalities in the same time period. Hence 10,000 deaths once in 10,000 years is perceived to be different from 1 death each year during 10,000 years" (Niehaus, de Leon & Cullingford 93). Niehaus, de Leon, and Cullingford extended their analysis with a review of nuclear power plant safety. "The Reactor Safety Study similarly postulated that the public appears to accept more readily a much greater social impact from many small accidents than it does from the more severe, less frequent occurrences that have a similar societal impact" (93). Theorists in many different settings have described this phenomenon. Wilson, for instance, devised a way to examine the impact of low-probability high-consequence events that more clearly portrayed societal estimates of such events: "A risk involving N people simultaneously is N² (not N) times as important as an accident involving one person. Thus a bus or aeroplane accident involving 100 people is as serious as 10,000, not merely 100, automobile accidents killing one person" (274-275).
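Wilson's square-law weighting can be put into a one-line calculation (a minimal sketch; the function name and the treatment of separate events are our illustrative assumptions, not Wilson's notation):

```python
def perceived_risk(n_people, events=1):
    # Wilson's rule of thumb: one accident involving N people counts
    # N**2 (not N) times a one-person accident; `events` is how many
    # such accidents occur in the same period.
    return events * n_people ** 2

# One crash involving 100 people...
bus_crash = perceived_risk(100)
# ...versus 100 separate accidents each killing one person.
car_crashes = perceived_risk(1, events=100)

print(bus_crash, car_crashes)  # 10000 100
```

On these numbers the single large accident is weighted 100 times the equal-fatality series of small ones, matching the bus/automobile comparison in the card.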


    Must Reject Multiple Internal Links 1AC

[ ] Long chains of internal links make risk assessment impossible because they distort probabilities

Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

Even after many years' experience of a technology there may be insufficient data to determine the frequencies of unusual types of accidents or failures. As one example of this, there have (fortunately) been too few severe accidents in nuclear reactors to make it possible to estimate their probabilities. In particular, most of the reactor types in use have never been involved in any serious accident. It is therefore not possible to determine the risk (probability) of a severe accident in a specified type of reactor. One common way to evade these difficulties is to calculate the probability of major failures by means of a careful investigation of the various chains of events that may lead to such failures. By combining the probabilities of various subevents in such a chain, a total probability of a serious accident can be calculated. Such calculations were in vogue in the 1970's and 1980's, but today there is a growing skepticism against them, due to several difficult problems with this methodology. One such problem is that accidents can happen in more ways than we can think of beforehand. There is no method by which we can identify all chains of events that may lead to a major accident in a nuclear reactor, or any other complex technological system. Another problem with this methodology is that the probability of a chain of events can be very difficult to determine even if we know the probability of each individual event. Suppose for instance that an accident will happen if two safety valves both fail. Furthermore suppose that we have experience showing that the probability is 1 in 500 that a valve of this construction will fail during a period of one year. Can we then conclude that the probability that both will fail in that period is 1/500 x 1/500, i.e. 1/250,000? Unfortunately not, since this calculation is based on the assumption that failures in the two valves are completely independent events. It is easy to think of ways in which they may be dependent: Faulty maintenance may affect them both. They may both fail at high temperatures or in other extreme conditions caused by a failure in some other component, etc. It is in practice impossible to identify all such dependencies and determine their effects on the combined event-chain.
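Hansson's two-valve arithmetic can be checked and extended with a toy common-cause model (the shared-failure probability p_cc is an invented number for illustration, not from the source):

```python
# Per-valve annual failure probability from Hansson's example.
p_valve = 1 / 500

# Naive figure, valid only if the two failures are independent:
p_both_independent = p_valve ** 2  # 1/250,000

# Toy dependence model: with (invented) probability p_cc a shared
# condition, e.g. faulty maintenance, fails both valves at once;
# otherwise the valves fail independently.
p_cc = 1 / 2000
p_both_dependent = p_cc + (1 - p_cc) * p_valve ** 2

print(p_both_independent)  # about 4e-06
print(p_both_dependent)    # about 5e-04, over 100x the naive figure
```

Even a small common-cause term dominates the product of the individual probabilities, which is why ignoring dependencies can understate the risk of an event chain by orders of magnitude.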


    Policy Paralysis 1AC

[ ] If you focus on minute probabilities, all actions can have catastrophic consequences; absolute risk avoidance would stifle all assessment. Even if we cannot draw a perfect line, we should still act on probabilities

Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

However, it would not be feasible to take such possibilities into account in all decisions that we make. In a sense, any decision may have catastrophic unforeseen consequences. If far-reaching indirect effects are taken into account, then given the unpredictable nature of actual causation almost any decision may lead to a disaster. In order to be able to decide and act, we therefore have to disregard many of the more remote possibilities. Cases can also easily be found in which it was an advantage that far-fetched dangers were not taken seriously. One case in point is the false alarm on so-called polywater, an alleged polymeric form of water. In 1969, the prestigious scientific journal Nature printed a letter that warned against producing polywater. The substance might "grow at the expense of normal water under any conditions found in the environment," thus replacing all natural water on earth and destroying all life on this planet. (Donahoe 1969) Soon afterwards, it was shown that polywater is a non-existent entity. If the warning had been heeded, then no attempts would have been made to replicate the polywater experiments, and we might still not have known that polywater does not exist. In cases like this, appeals to the possibility of unknown dangers may stop investigations and thus prevent scientific and technological progress. We therefore need criteria to determine when the possibility of unknown dangers should be taken seriously and when it can be neglected. This problem cannot be solved with probability calculus or other exact mathematical methods. The best that we can hope for is a set of informal criteria that can be used to support intuitive judgment. The following list of four criteria has been proposed for this purpose. (Hansson 1996) 1. Asymmetry of uncertainty: Possibly, a decision to build a second bridge between Sweden and Denmark will lead through some unforeseeable causal chain to a nuclear war. Possibly, it is the other way around so that a decision not to build such a bridge will lead to a nuclear war. We have no reason why one or the other of these two causal chains should be more probable, or otherwise more worthy of our attention, than the other. On the other hand, the introduction of a new species of earthworm is connected with much more uncertainty than the option not to introduce the new species. Such asymmetry is a necessary but insufficient condition for taking the issue of unknown dangers into serious consideration. 2. Novelty: Unknown dangers come mainly from new and untested phenomena. The emission of a new substance into the stratosphere constitutes a qualitative novelty, whereas the construction of a new bridge does not. An interesting example of the novelty factor can be found in particle physics. Before new and more powerful particle accelerators have been built, physicists have sometimes feared that the new levels of energy might generate a new phase of matter that accretes every atom of the earth. The decision to regard these and similar fears as groundless has been based on observations showing that the earth is already under constant bombardment from outer space of particles with the same or higher energies. (Ruthen 1993) 3. Spatial and temporal limitations: If the effects of a proposed measure are known to be limited in space or time, then these limitations reduce the urgency of the possible unknown effects associated with the measure. The absence of such limitations contributes to the severity of many ecological problems, such as global emissions and the spread of chemically stable pesticides. 4. Interference with complex systems in balance: Complex systems such as ecosystems and the atmospheric system are known to have reached some type of balance, which may be impossible to restore after a major disturbance. Due to this irreversibility, uncontrolled interference with such systems is connected with a high degree of uncertainty. (Arguably, the same can be said of uncontrolled interference with economic systems; this is an argument for piecemeal rather than drastic economic reforms.) It might be argued that we do not know that these systems can resist even minor perturbations. If causation is chaotic, then for all that we know, a minor modification of the liturgy of the Church of England may trigger a major ecological disaster in Africa. If we assume that all cause-effect relationships are chaotic, then the very idea of planning and taking precautions seems to lose its meaning. However, such a world-view would leave us entirely without guidance, even in situations when we consider ourselves well-informed. Fortunately, experience does not bear out this pessimistic worldview. Accumulated experience and theoretical reflection strongly indicate that certain types of influences on ecological systems can be withstood, whereas others cannot. The same applies to technological, economic, social, and political systems, although our knowledge about their resilience towards various disturbances has not been sufficiently systematized.
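Hansson's four informal criteria can be sketched as a simple screening checklist (the function and the flag-count framing are our illustrative assumptions; Hansson proposes no formal scoring):

```python
def unknown_danger_flags(asymmetric_uncertainty, novel,
                         unlimited_in_space_or_time,
                         disturbs_balanced_complex_system):
    # Count how many of Hansson's four criteria flag a proposal for
    # taking unknown dangers seriously; no weighting is implied.
    return sum([asymmetric_uncertainty, novel,
                unlimited_in_space_or_time,
                disturbs_balanced_complex_system])

# Emitting a new substance into the stratosphere: every criterion fires.
print(unknown_danger_flags(True, True, True, True))      # 4
# Building another bridge: none of them do.
print(unknown_danger_flags(False, False, False, False))  # 0
```

The point of the tally is only that the criteria support intuitive judgment; as the card says, the problem cannot be settled by probability calculus or other exact methods.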


    Tyranny of Survival 1AC

[ ] Voting on a minute risk of nuclear war imposes the Tyranny of Survival, which ensures oppression and self-destruction

Daniel Callahan, prof of philosophy at Harvard, 1973 [Co-founder and former director of The Hastings Institute, The Tyranny of Survival, p. 91-93]

The value of survival could not be so readily abused were it not for its evocative power. But abused it has been. In the name of survival, all manner of social and political evils have been committed against the rights of individuals, including the right to life. The purported threat of Communist domination has for over two decades fueled the drive of militarists for ever-larger defense budgets, no matter what the cost to other social needs. During World War II, native Japanese-Americans were herded, without due process of law, to detention camps. This policy was later upheld by the Supreme Court in Korematsu v. United States (1944) in the general context that a threat to national security can justify acts otherwise blatantly unjustifiable. The survival of the Aryan race was one of the official legitimations of Nazism. Under the banner of survival, the government of South Africa imposes a ruthless apartheid, heedless of the most elementary human rights. The Vietnamese war has seen one of the greatest of the many absurdities tolerated in the name of survival: the destruction of villages in order to save them. But it is not only in a political setting that survival has been evoked as a final and unarguable value. The main rationale B. F. Skinner offers in Beyond Freedom and Dignity for the controlled and conditioned society is the need for survival. For Jacques Monod, in Chance and Necessity, survival requires that we overthrow almost every known religious, ethical and political system. In genetics, the survival of the gene pool has been put forward as sufficient grounds for a forceful prohibition of bearers of offensive genetic traits from marrying and bearing children. Some have even suggested that we do the cause of survival no good by our misguided medical efforts to find means by which those suffering from such common genetically based diseases as diabetes can live a normal life, and thus procreate even more diabetics. In the field of population and environment, one can do no better than to cite Paul Ehrlich, whose works have shown a high dedication to survival, and in its holy name a willingness to contemplate governmentally enforced abortions and a denial of food to surviving populations of nations which have not enacted population-control policies. For all these reasons it is possible to counterpoise over against the need for survival a "tyranny of survival." There seems to be no imaginable evil which some group is not willing to inflict on another for sake of survival, no rights, liberties or dignities which it is not ready to suppress. It is easy, of course, to recognize the danger when survival is falsely and manipulatively invoked. Dictators never talk about their aggressions, but only about the need to defend the fatherland to save it from destruction at the hands of its enemies. But my point goes deeper than that. It is directed even at a legitimate concern for survival, when that concern is allowed to reach an intensity which would ignore, suppress or destroy other fundamental human rights and values. The potential tyranny of survival as a value is that it is capable, if not treated sanely, of wiping out all other values. Survival can become an obsession and a disease, provoking a destructive singlemindedness that will stop at nothing. We come here to the fundamental moral dilemma. If, both biologically and psychologically, the need for survival is basic to man, and if survival is the precondition for any and all human achievements, and if no other rights make much sense without the premise of a right to life, then how will it be possible to honor and act upon the need for survival without, in the process, destroying everything in human beings which makes them worthy of survival? To put it more strongly, if the price of survival is human degradation, then there is no moral reason why an effort should be made to ensure that survival. It would be the Pyrrhic victory to end all Pyrrhic victories. Yet it would be the defeat of all defeats if, because human beings could not properly manage their need to survive, they succeeded in not doing so.


    Extensions - Must ignore low probability Impacts

[ ] Risk calculus must prioritize probability: low-probability outcomes should be ignored regardless of how significant their consequences are, because small probabilities bias decision-making

Vertzberger, 1995, Professor at the Department of International Relations, the Hebrew University of Jerusalem, Israel (Yaacov Y. I. Vertzberger, June 1995, "Rethinking and Reconceptualizing Risk in Foreign Policy Decision-Making: A Sociocognitive Approach," Political Psychology, Vol. 16, No. 2, pp. 347-380, http://www.jstor.org/stable/3791835)

    Risk in Foreign Policy Decision-Making makers to their assessments of both outcomes and the probability

    distribution of outcomes may range from total confidence to very low confidence. The less valid their estimates,

    the higher the chance that inappropriate choices will be made that will result in costs and losses.The level of risk,from the decision-maker's vantage point, will thus be defined by the answers to the following three questions in this order of presentation: (1)What arethe gains/losses associated with each known outcome? (2)What is the probability of each outcome? (3)How valid are outcome probability and

    gain/loss estimates? Risk, then, is defined as the likelihood of the materialization of validly predictable direct and

    indirect consequences with potentially adverse values, arising from events, self-behavior, environmental

    constraints, or the reaction of an opponent or third party. Accordingly, risk estimates have three dimensions:

    outcome-values (desired or undesired), the probability of outcomes,3 and the confidence with which these

    estimates of outcome-values and probabilities are held by the decision-maker.Although the three components of actual risk

    are in reality independent of one another, that is not the case with subjective risk estimates. Payoffs affect probabilities (Einhorn and Hogarth, 1986),and probabilities influence the weight of payoffs in the process of choice. Experimental evidence shows that in a choice between two gambles, whenthe probability of winning is greater than the probability of losing, choice is most closely associated with the amounts involved in each gamble; butwhen the probability of losing is greater than the probability of winning, choice is most closely associated with the probabilities. This indicates that

    choice is not independent of the structure of the alternatives available to the decision-maker(Payne, 1975). (For

    discursive purposes only we shall continue to treat the three components separately, although they are inherently intertwined.) The validity

    dimension affects the weight that decision-makers will attribute to probability and payoff estimates in their risk

    evaluation, and in general will bias choice toward preference of decisions with known risks over alternatives with

    unknown risks4 (Curley et al., 1986; Ellsberg, 1961; Fischhoff, 1983; Gardenfors & Sahlin, 1982; Heath & Tversky, 1991). This is to a large degree a reflection of a human need for certainty regarding value and probability estimates. 3 In probabilistic terms, perceived risk increases: (a) when the probability of undesirable consequences increases and the probability of desirable consequences decreases, and (b) when the variance of probabilities is greater, that is, when the probability of extreme outcomes increases. In utilities terms, (a) the more undesirable the consequences the greater the perceived risk, and (b) as the distribution of possible consequences is more slanted toward undesirable consequences, perceived risk increases. In terms of the interaction of probabilities and utilities, an alternative with higher negative SEU

    will be rated as riskier than one with lower negative SEU (Milburn & Billings, 1976). 4 People have little confidence in intermediate-level subjective probability estimates and tend not to use them in decision-making. They also have trouble understanding and interpreting information about low-probability events, so that small probabilities are particularly prone to biasing (Freudenburg, 1988; March & Shapira, 1987; Peterson & Lawson, 1989; Sjöberg, 1979; Stern, 1991, p. 106).

    These biases result in low-probability outcomes being ignored regardless of how significant the consequences of these outcomes are. These biases also result in paying more attention to physical certainties, concrete events, and well-specified causal relationships at the expense of the less tangible dimensions of a problem. in risky situations. In making value and probability estimates, decision-makers require a certain threshold of confidence

    situations. In making value and probability estimates, decision-makers require a certain threshold of confidence

    before they will consider the risk as something worth worrying about. As confidence in high-probability, high-cost outcomes increases so does the perception of risk, and when confidence declines so does risk perception. This implies that similar value and probability estimates may result in dissimilar overall risk perceptions and related anxiety by different individuals, since the probability and/or value estimates may be held with dissimilar levels of confidence. In addition, individuals vary in their minimal confidence threshold requirements and thus in the degree to which anxiety will increase in response to the same incremental rise in the validity of probability and value estimates. For some people in risky situations, a small increase in confidence levels of threatening information produces a large leap in anxiety;

    for others only a large increase in confidence will cause a significant increase in anxiety. The need for veridicality is most prominent in

    decisions involving high stakes, where the costs of errors, including challenges to the legitimacy of decisions,could prove critical. Validityi s thus a particularlyi mportantc onsider-ation in the formation of risk assessments

    where decision-makers have reason to doubt the information available to them.5 In foreign policy this is very

    often the case because of the vague or ambiguous nature of foreign policy-related information.

    The Method Lab

  • 8/6/2019 Risk Calculus Final Draft

    8/58

    Michigan 2009 8 Risk Calculus

    Extensions - Must ignore low probability Impacts

    [ ] The Negative's impact analysis relies on the Tyranny of Illusory Precision - the refusal to assign zero risk to low probability impacts is based on a false belief in objectivity and a tendency to compromise. This destroys the value of probability and divorces debate from the real world

    Dale Herbeck, Professor of Communication at Boston College, 1992 [Director of the Fulton Debating Society at Boston College, "The Use and Abuse of Risk Analysis in Policy Debate," Paper Presented at the 78th Annual Meeting of the Speech Communication Association (Chicago, IL), October 29th-November 1st, Available Online via ERIC Number ED354559, p. 10-12]

    Those of us who judge debate with some frequency know that while it is difficult to quantify probabilities,

    risk analysis forces us to assign probabilities to all arguments in a debate. As a result, we may come under what John Holdren would call "the tyranny of illusory precision." This phenomenon occurs whenever we take

    qualitative judgments, decouple them from their context, and then use these judgments to assign a

    probability which is used to justify conclusions. Even if we resist the temptation to assign unwarranted risks,

    a related problem is that decision makers often fall prey to the fallacy of the golden mean. According to Edward Damer, this "fallacy consists in assuming that the mean or middle view between two extremes must be the best or right one simply because it is the middle view."12 In other words, rather than assess zero probability to an impact, a judge might assume that the probability necessarily lies somewhere between the two positions advocated in the debate. Recognizing this tendency, advocates have become quite adept at framing their arguments to justify the attribution of some amount of probability. Consider, for example, the following quotation from Umberto Saffiotti of the National Cancer Institute: "The most 'prudent' policy is to consider all agents, for which the evidence is not clearly negative under accepted minimum conditions of observation, as if they were positive."13 Of course, the implication is that we must assess some probability of carcinogenicity absent proof to the contrary.

    Evidence such as this, when invoked in debate, is often used to justify the claim that there must be some risk of the impact. The "Zero-Infinity Problem."14 A second problem with risk analysis is that the magnitude of the impact has come to dominate questions of probability. The result, according to Ehrlich and Ehrlich, is the "zero-infinity problem." Although the probability of some events is infinitesimally small, the impacts may be so grave that the risk becomes significant. To illustrate this point, the Ehrlichs cite the example of pancreatic cancer. Although the probability of getting this form of cancer is extremely small, it is almost always fatal. Accordingly, the fear of contracting pancreatic cancer might be sufficient to warrant measures which would be unlikely to decrease the incidence of this deadly disease. It is easy to translate the zero-infinity problem to the debate context. Consider the following risks:

    probability    impact             risk
    99 in 100      100,000 lives      = 99,000 lives
    1 in 100       10,000,000 lives   = 100,000 lives

    Of course, the conclusion that can be drawn from the above example is that a low probability/high impact argument would generally outweigh a high probability/low impact argument. Being perceptive by nature, debaters are well aware of this fact. It is, therefore, not surprising that the vast majority of all debate arguments eventually culminate in a nuclear war. By offering the penultimate of impacts, the skilled advocate can effectively moot the importance of probability. For the purpose of illustration, assume that a nuclear war would kill exactly one billion people, which may in fact be a conservative assessment. The incredible argumentative power of this staggering impact is evident in the following statement of risks:

    probability                impact           risk
    1 in 100 (.01)             1,000,000,000    = 10,000,000 lives
    1 in 1,000 (.001)          1,000,000,000    = 1,000,000 lives
    1 in 10,000 (.0001)        1,000,000,000    = 100,000 lives
    1 in 100,000 (.00001)      1,000,000,000    = 10,000 lives
    1 in 1,000,000 (.000001)   1,000,000,000    = 1,000 lives

    In other words, a 1 in 10,000 chance of a disadvantage culminating in nuclear war would be the equivalent of an affirmative saving 100,000 lives.
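The arithmetic behind these tables is a bare expected-value calculation (risk = probability × impact). A minimal sketch in Python, using the figures from Herbeck's tables; the function name is ours, not his:

```python
# Expected-value risk calculus as used in the tables above:
# risk = probability * impact (lives at stake).
def expected_lives(probability: float, impact: int) -> float:
    """Expected number of lives implicated by one scenario."""
    return probability * impact

# High probability / low impact vs. low probability / high impact:
# the second figure is larger, so the low-probability scenario "outweighs."
print(expected_lives(0.99, 100_000))      # ≈ 99,000 lives
print(expected_lives(0.01, 10_000_000))   # ≈ 100,000 lives

# A one-billion-death impact keeps a large expected value even at
# vanishingly small probabilities, which is the card's point.
for p in (0.01, 0.001, 0.0001, 0.00001, 0.000001):
    print(f"{p:g}: {expected_lives(p, 1_000_000_000):,.0f} lives")
```

The sketch makes the card's complaint concrete: multiplying any nonzero probability by a large enough impact manufactures a decisive risk figure.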

    Not surprisingly, low probability/high impact arguments have come to dominate contemporary debate. Indeed, if a stranger should hear a debate upon this year's intercollegiate policy topic, they would probably conclude that any change in development policy, no matter how small, is likely to culminate in a nuclear war. As Star Muir has observed: "This takes form in two disturbing tendencies: an unwillingness to examine more real world impacts of policies, and a jaded view of global devastation. The first is apparent in the unwillingness of the debaters to argue that a recession, per se, is bad; that a regional war, not escalating to superpower conflict, would be a horrible thing. A global recession would probably not cause a nuclear war, but it doubtless would cause untold suffering and human anguish. A regional war in Africa could kill hundreds of thousands of lives, easily enough to outweigh a properly mitigated set of case scenarios. The problem is that debaters won't tell these stories, but they will take the easy way out and read a blurb on World War III."15 Of course, the problem with such argumentation is that it frequently borders on the absurd.



    [ ] Low-probability high-consequence arguments use mini-max reasoning which distorts

    the amount of risk we should assign to impacts

    Berube, 2000, Associate Professor of Speech Communication and Director of Debate at the University of South Carolina (David M. Berube, 2000, "Debunking mini-max reasoning: the limits of extended causal chains in contest debating," http://www.cedadebate.org/CAD/2000_berube.pdf, pages 53-73)

    The lifeblood of contemporary contest debating may be the extended argument. An extended argument is any argument requiring two or more distinct causal or correlational steps between initial data and ending claim. We find it associated with advantages to comparative advantage cases, with counterplan advantages, with disadvantages, permutation and impact turnarounds, some kritik implications, and even probabilistic topicality arguments. In practice, these often are not only extended arguments; they are causal arguments using mini-max reasoning. Mini-max reasoning is defined as an extended argument in which an infinitesimally probable event of high consequence is assumed to present a highly consequential risk. Such arguments, also known as low-probability high-consequence arguments, are commonly associated with "risk analysis." The opening statement from Schell represents a quintessential mini-max argument. Schell asked his readers to ignore probability assessment and focus exclusively on the impact of his claim. While Schell gave very specific reasons why probability is less important than impact in resolving this claim, his arguments are not impervious to rebuttal. What was a knotty piece of evidence in the 1980s kick-started a practice in contest debating which currently is evident in the ubiquitous political capital disadvantage code-named "Clinton." Here is an example of the Clinton disadvantage. In theory, plan action causes some tradeoff (real or imaginary) that either increases or decreases the President's ability to execute a particular agenda. Debaters have argued the following: Clinton (soon to be Gore or Bush) needs to focus on foreign affairs. A recent agreement between Barak and Assad needs presidential stewardship. The affirmative plan shifts presidential focus to Nigeria that trades off with focus on the Middle East. As a result, the deal for the return of the Golan Heights to Syria fails. Violence and conflict ensues as Hizbollah terrorists launch guerrilla attacks into northern Israel from Lebanon. Israel strikes back. Hizbollah incursions increase. Chemical terrorism ensues and Israel attacks Hizbollah strongholds in southern Lebanon with tactical nuclear weapons. Iran launches chemical weapons against Tel Aviv. Iraq allies with Iran. The United States is drawn in. Superpower miscalculation results in all-out nuclear war culminating in a nuclear winter and the end of all life on the planet. This low-probability high-consequence event argument is an extended argument using mini-max reasoning. The appeal of mini-max risk arguments has heightened with the onset of on-line text retrieval services and the World Wide Web, both of which allow debaters to search for particular words or word strings with relative ease. Extended arguments are fabricated by linking evidence in which a word or word string serves as the common denominator, much in the fashion of the sorites (stacked syllogism): AaB, BaC, CaD, therefore AaD.
    Prior to computerized search engines, a contest debater's search for segments that could be woven together into an extended argument was incredibly time consuming. The dead ends checked the authenticity of the extended claims by debunking especially fanciful hypotheses. Text retrieval services may have changed that. While text retrieval services include some refereed published materials, they also incorporate transcripts and wire releases that are less vigilantly checked for accuracy. The World Wide Web allows virtually anyone to set up a site and post anything at that site regardless of its veracity. Sophisticated super search engines, such as Savvy Search, help contest debaters track down particular words and phrases. Searches on text retrieval services such as Lexis-Nexis Universe and Congressional Universe locate words and word strings within n words of each other. Search results are collated and loomed into an extended argument. Often, evidence collected in this manner is linked together to reach a conclusion of nearly infinite impact, such as the ever-present specter of global thermonuclear war. Furthermore, too much evidence from online text retrieval services is unqualified or under-qualified. Since anyone can post a web page and since transcripts and releases are seldom checked as factual, pseudo-experts abound and are at the core of the most egregious claims in extended arguments using mini-max reasoning. "In nearly every episode of fear mongering . . . people with fancy titles appeared. . . . [F]or some species of scares . . . secondary scholars are standard fixtures. . . . Statements of alarm by newscasters and glorification of wannabe experts are two telltale tricks of the fear mongers' trade . . . : the use of poignant anecdotes in place of scientific evidence, the christening of isolated incidents as trends, depictions of entire categories of people as innately dangerous. . . ." (Glassner 206, 208) Hence, any warrant by authority of this ilk further complicates probability estimates in extended arguments using mini-max reasoning. Often the link and internal link story is the machination of the debater making the claim rather than the sources cited in the linkage. The links in the chain may be claims with different, if not inconsistent, warrants. As a result, contextual considerations can be mostly moot.


    Extensions - Must ignore multiple internal link Impacts

    [ ] Multiple internal link calculations fail - different conditions make determining probability impossible

    Berube, 2000, Associate Professor of Speech Communication and Director of Debate at the University of South Carolina (David M. Berube, 2000, "Debunking mini-max reasoning: the limits of extended causal chains in contest debating," http://www.cedadebate.org/CAD/2000_berube.pdf, pages 53-73)

    The complex probabilities of extended arguments are problematic. For example, too much reliance is given an extended link story when each step in the link exhibits a probability that is geometrically self-effacing. According to the traditional multiplication theorem, if a story is drawn from AaBaCaD, the probabilities of AaB and BaC and CaD are multiplied. "The probability that two subsequent events will both happen is a ratio compounded of the probability of the 1st, and the probability of the 2nd in the supposition the 1st happens" (Bayes 299). If the probability of AaB is .10 and the probability of BaC is also .10, then the probability of AaC is .01. If the probability of CaD is also .10, then the probability of AaD is .001. If all we had to do to determine probability involved multiplying fractions, calculating probabilities would be easy. Unfortunately, such is not the case. An interesting caveat involves conditional probability. "Its expositors hold that we should not concern ourselves with absolute probabilities, which have no relevance to things as they are, but with conditional probabilities - the chances that some event will occur when some set of previous conditions exists" (Krause 67). Conditional probabilities are most often associated with calculations involving variables that may be even remotely associated, such as phenomena in international relations. If one considers the probability of many separate events occurring, one must also consider whether or not they are correlated - that is, whether or not they are truly independent. If they are correlated, simply multiplying individual probabilities will not give you the correct estimate, and the final probability may actually be much larger than one will predict if one makes this error. "For example, the probability that I will utter an obscenity at any given instance may be small (although it is certainly not zero). The probability that I will hit my funny bone at any given instant is also small. However, the probability that I will hit my funny bone and then utter an obscenity is not equal to the product of the probabilities, since the probability of swearing at a given instant is correlated to the probability of hurting myself at a given instant." (Krause 67) Hence, "if we calculate a priori the probability of the occurred event and the probability of an event composed of that one and a second one which is expected, the second probability divided by the first will be the probability of the event expected, drawn from the observed event" (Laplace 15). Another complication of extended causal chains is the corroboration principle. "There are cases in which each testimony seems unreliable (i.e., has less than 0.5 probability) on its own, even though the combination of the two testimonies is rather persuasive. . . . [I]f both testimonies are genuinely independent and fully agree with one another, we are surely going to be inclined to accept them" (Cohen 72). When we are uncertain about a probability, we might try to engage multiple sources making the same or same-like claim. We feel it is less likely that two or more sources are incorrect than that a single source will be. While corroboration seems valid, it is a persuasive pipe-dream. If we use this calculus to draw our claims, errors are likely to be shared and replicated. Witness some of the problems associated with realism in international relations literature.
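Berube's two points, the multiplication theorem for independent links and the conditional-probability caveat, can be sketched numerically. The link values below are illustrative stand-ins, not figures from the card:

```python
import math

# Multiplication theorem: for an extended argument A->B->C->D with
# independent links, the joint probability is the product of the links.
link_probs = [0.10, 0.10, 0.10]            # P(A->B), P(B->C), P(C->D)
joint_independent = math.prod(link_probs)  # 0.001, matching Berube's example

# Conditional-probability caveat: if the links are correlated, each link
# must be weighted conditional on the earlier links having occurred, and
# the true joint probability can be far larger than the naive product.
conditional_probs = [0.10, 0.60, 0.60]     # hypothetical correlated chain
joint_correlated = math.prod(conditional_probs)  # 0.036, 36x the naive figure
```

The sketch shows why a long internal-link chain is doubly unreliable: assuming independence collapses the joint probability geometrically, while unmodeled correlation can make the naive product wrong by orders of magnitude.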


    Extend - Policy Paralysis

    [ ] Probability below certain thresholds should be ignored - otherwise it would paralyze policymaking.

    Nicholas Rescher is an American philosopher, University Professor of Philosophy and Chairman of the Center for Philosophy of Science at the University of Pittsburgh. 1983. [Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management, p. 35-36. 1983.]

    A probability is a number between zero and one. Now numbers between zero and one can get to be very small indeed: as N gets bigger, 1/N will grow very, very small. What, then, is one to do about extremely small probabilities in the rational management of risks? On this issue there is a systemic disagreement between probabilists working in mathematics or natural science and decision theorists who work on issues relating to human affairs. The former take the line that small numbers are small numbers and must be taken into account as such. The latter tend to take the view that small probabilities represent extremely remote prospects and can be written off. (De minimis non curat lex, as the old precept has it: there is no need to bother with trifles.) When something is about as probable as it is that a thousand fair dice when tossed a thousand times will all come up sixes, then, so it is held, we can pretty well forget about it as a worthy concern. As a matter of practical policy we operate with probabilities on the principle that when x < ε, then x ≈ 0. Where human dealings in real-life situations are concerned, sufficiently remote possibilities can for all sensible purposes be viewed as being of probability zero, and set aside. In the real world people are prepared to treat certain probabilities as effectively zero, taking certain sufficiently improbable eventualities as no longer representing real possibilities. In such a case our handling of the probabilities at issue is essentially a matter of fiat, of deciding as a matter of policy that a certain level of sufficiently low probability can be taken as a cut-off point below which we are no longer dealing with real possibilities and with genuine risks. In real-life deliberations, in the law (especially in the context of negligence), and indeed throughout the setting of our practical affairs, it is necessary to distinguish between real and unreal (or merely theoretical) possibilities. Once the probability of an event gets to be small enough, the event at issue may be seen as no longer a real possibility (theoretically possible though it may be). Such an event is something we can simply write off as being outside the range of appropriate concern, something we can dismiss for all practical purposes. As one writer on insurance puts it: "[P]eople... refuse to worry about losses whose probability is below some threshold; probabilities below the threshold are treated as though they were zero."
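Rescher's de minimis rule amounts to a simple cut-off: treat any probability below a threshold ε as exactly zero. A minimal sketch, with a 1-in-a-million threshold chosen by us purely for illustration:

```python
EPSILON = 1e-6  # illustrative de minimis threshold (our choice, not Rescher's)

def effective_probability(p: float, epsilon: float = EPSILON) -> float:
    """Rescher's rule: when p < epsilon, treat p as effectively zero."""
    return 0.0 if p < epsilon else p

# A 1-in-300,000,000 fatality figure falls below the cut-off...
assert effective_probability(1 / 300_000_000) == 0.0
# ...while a 1-in-10,000 risk stays in the calculus.
assert effective_probability(1e-4) == 1e-4
```

The design point is that the cut-off is a matter of policy, not measurement: where ε sits is chosen by fiat, exactly as the card says.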



    [ ] Some probabilities are effectively zero - if we don't ignore them, we would be paralyzed

    Nicholas Rescher is an American philosopher, University Professor of Philosophy and Chairman of the Center for Philosophy of Science at the University of Pittsburgh. Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management, p. 35-36. 1983.

    No doubt, events of such improbability can happen in some sense of the term, but this "can" functions somewhat figuratively; it is no longer something that presents a realistic prospect. To be sure, this recourse to effective zerohood does not represent a strictly objective, ontological circumstance. It reflects a matter of choice or decision, namely the practical step of treating certain theoretically extant possibilities as unreal, as not worth bothering about, as meriting being set at zero, as being literally negligible. Of course, the question remains: How small is small enough for being effectively zero? With what value of x does x ≈ 0 begin: just exactly where does the threshold of effective zerohood lie? This is clearly not something that can be resolved in a once-and-for-all manner. It may vary from individual to individual, changing with the cautiousness of the person involved, representing an aspect of an individual's stance towards the whole risk-taking process. And it may also vary with the magnitude of the stake at issue. For it seems plausible to allow the threshold of effective zerohood to be readjusted with the magnitude of the threat at issue, taking lower values as the magnitude of the threat at issue increases. (Such a policy seems in smooth accord with the fundamental principle of risk management that greater potential losses should be risked only when their chances for realization are less.) In deliberating about risks to human life, for example, there is some tendency to take as a baseline the chance of death by natural disasters (or acts of God), roughly 1/1,000,000 per annum in the USA. This would be seen as something akin to the noise level of a physical system; fatality probabilities significantly smaller than this would thus be seen as negligible. Such an approach seems to underlie the Food and Drug Administration's proposed standards of 1 in a million over a lifetime. People's stance in the face of the probability that when embarking on a commercial airplane trip they will end up as an aviation fatality (which stands at roughly 3 x 10^-8 for the U.S.A.) also illustrates this perspective. (Most neither worry nor insure unless "the company pays.") But an important point must be noted in this connection. The probability values that we treat as effectively zero must be values of which, in themselves, we are very sure indeed. But real-life probability values are seldom all that precise. And so in general there will be considerable difficulty in sustaining the judgment that a certain probability indeed is effectively zero. A striking instance is afforded by the Atomic Energy Commission-sponsored "Rasmussen report" (named after Norman C. Rasmussen, the study director) on the accident risks of nuclear power plants: From the viewpoint of a person living in the general vicinity of a reactor, the likelihood of being killed in any one year in a reactor accident is one chance in 300,000,000, and the likelihood of being injured in any one year in a reactor accident is one chance in 150,000,000. The theoretical calculations that sustain such a finding invoke so many assumptions regarding facts, circumstances, and operating principles that such probability estimates are extremely shaky. Outside the domain of purely theoretical science we are too readily plunged below the threshold of experimental error, and will thus confront great difficulties in supporting minute probability distinctions in the sphere of technological and social applications. Statistical probabilities can be very problematic in this regard, in particular since statistical data are often deficient or unavailable in the case of very rare events. Personal probabilities too are very vulnerable in this context of assessing very low probabilities. For example, the flood victims interviewed by the geographer R. W. Kates flatly denied that floods could ever recur in their area, erroneously attributing previous floods to freak combinations of circumstances that were extremely unlikely to recur.16 One recent writer has asserted, not without reason, that in safety-engineering contexts it simply is not possible to construct sufficiently convincing arguments to support very small probabilities (below 10^-5).17 And indeed a diversified literature has been devoted to describing the ways in which the estimation of very low probabilities can go astray.


    Alternative - Prioritize Uniqueness

    [ ] Prioritizing Uniqueness as an absolute takeout is essential to avoid becoming enslaved to infinite risk

    Dale Herbeck, Professor of Communication at Boston College, 1992 [Director of the Fulton Debating Society at Boston College, "The Use and Abuse of Risk Analysis in Policy Debate," Paper Presented at the 78th Annual Meeting of the Speech Communication Association (Chicago, IL), October 29th-November 1st, Available Online via ERIC Number ED354559, p. 10-12]

    Third, we must not allow ourselves to become enslaved to large impacts. The fact that the impact is grave does not, in and of itself, mean that there is any probability associated with the outcome. Consider, for example, a disadvantage which posited that the plan would increase the risk of species extinction. While it is true that species extinction would have serious consequences, this fact should not force us to mindlessly reject any policy that might cause species extinction. Further, we should take care in assessing evidence purporting to prove that a prudent policy maker should reject any action that risks the impact. In other words, evidence claiming that species extinction is the ultimate of all evils is not sufficient to prove that the affirmative case should be summarily rejected. Finally, we must rehabilitate the importance of uniqueness arguments in debate. When arguing the position is not unique, an advocate is arguing that the disadvantage should already have occurred or will inevitably occur in the status quo. For example, when arguing uniqueness against a budget disadvantage, an affirmative would argue that the President and/or Congress have routinely increased spending. Therefore, such spending should cause the disadvantage. The problem in debate today is that judges consistently assign some level of risk to disadvantages even when the affirmative presents uniqueness arguments which have a greater link to the disadvantage than the affirmative plan. Consider the following example. Suppose an affirmative team advocated a plan which provided for increased military training of Bangladesh's army under the International Military Education and Training Program (IMET). Against this plan, suppose the negative advocated a disadvantage claiming that increased U.S. influence in Bangladesh would cause a loss of Indian influence in Bangladesh, causing them to lash out as a way of reclaiming their influence in South Asia. Given the fact that the United States has given Bangladesh over 3 billion dollars over the past 20 years,19 and given the fact that U.S. influence in South Asia is vastly increasing due to the virtual collapse of Soviet influence in the region,20 it would be ludicrous to assume that there is any unique risk of India fearing a minimal expansion of the IMET program to Bangladesh. In this example, where the uniqueness arguments prove a greater increase in U.S. influence than will ever occur under the affirmative plan, a judge should conclude that there is zero risk to adopting the affirmative plan. Unfortunately, many judges in this situation would irrationally assign some minimal risk to the disadvantage. They would reason that there is always some risk, albeit small, to adopting the affirmative plan. Yet, such reasoning makes a mockery of the concept of uniqueness arguments. If a uniqueness argument proves that the status quo actions will be larger than the affirmative's link to the disadvantage, then it has sufficiently demonstrated that there is no unique risk to adopting the affirmative plan. Under these circumstances, the judge should assign zero risk to the disadvantage.
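Herbeck's uniqueness rule can be stated as a decision sketch. The numeric "link magnitude" comparison is our illustrative framing of his argument, not a formula from the paper:

```python
def disad_risk(plan_link: float, status_quo_link: float, residual_risk: float) -> float:
    """Herbeck's rule: if the status quo already links to the disadvantage
    at least as strongly as the plan does, the disadvantage is non-unique
    and gets zero risk, not an irrational residual minimum."""
    return 0.0 if status_quo_link >= plan_link else residual_risk

# IMET example: billions in existing U.S. aid dwarf a minimal IMET expansion,
# so the India disadvantage gets zero risk (the magnitudes are stand-ins).
print(disad_risk(plan_link=1.0, status_quo_link=3_000.0, residual_risk=0.05))  # 0.0
```

The point of the sketch is the hard zero: once the status-quo link exceeds the plan's, the function returns 0.0 rather than some small positive floor.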


    Alternative - Expert Opinion

    [ ] Expert opinion is critical to assessing probabilities for risk calculus

    Vertzberger, 1995, Professor at the Department of International Relations, the Hebrew University of Jerusalem, Israel

    (Yaacov Y. I. Vertzberger, June 1995, "Rethinking and Reconceptualizing Risk in Foreign Policy Decision-Making: A Sociocognitive Approach," Political Psychology, Vol. 16, No. 2, pp. 347-380, http://www.jstor.org/stable/3791835)

    Risk in Foreign Policy Decision-Making issues, such as environmental hazards, where they perceive these criteria as relevant. Inperson-based validation, confidence is rooted in the individual who is the source of the knowledge. The user does not care about howknowledge was generated, but who is disseminating it. Confidence in a particular person may derive from several sources: innatequalities (like charisma); affective qualities (such as liking); past experience (the person has proven himself in the past to becredible); an established relationship (a long-time friend), and other reasons. The third source of confidence is belief-basedvalidation. In this case, knowledge that conforms or is congruent with important beliefs of the user will be considered as valid, evenif the knowledge is methodologically flawed. In this case, the identity of the person delivering the knowledge is of little consequenceto the user. Ideologues, such as former President Ronald Reagan, are inclined to use this validation criterion. Thus Reagan'sconfidence in the exaggerated assess-ments that there was a high probability that a Marxist regime in Grenada would pose a highthreat to the United States was belief-based. It stemmed from his intense belief in the evil motives driving such regimes and theirunavoidable subordination to the interest of the Soviet Union. The fourth and least important source is situation-based validation.Here context determines the reliability of knowledge, which is applied when the observer distrusts his or her information sources.

    Situation-based validation is based on the premise that in particular situations the information either cannot be manipulated and therefore can be trusted, or the source of information has no incentive to manipulate information because the costs are too high or the gains are marginal. Being cognitive misers, people tend to devise a hierarchy of their most preferred to least preferred validation criteria. Judgment of the reliability of value and probability assessments will relate to this hierarchy. Decision-makers will start by applying their most preferred criterion. If the most preferred criterion cannot be applied in their judgment of reliability, they will proceed to the next level in the hierarchy, and so on, moving down the list of preferences, each step representing a lower level of confidence. Preference for one source of validation over another has important implications for the increase or decrease of confidence levels over time. Epistemic-based validation has built-in rules for discrediting or falsifying currently held assessments. Person-based validation will change when there is diminished trust in a particular person or when a highly regarded person provides invalidating information. Belief-based validation is the most difficult to discredit because beliefs, especially core beliefs, change very slowly. Practically the only quick way of convincing a decision-maker relying on belief-based validation to change is by reframing the information in a manner that will convince the decision-maker that it is not congruent any longer with his or her beliefs, or that the preference for reliance on belief-based validation was an error.

    [ ] Expert opinion is critical to assessing probabilities for risk calculus

    Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

    Technological risks depend not only on the behaviour of technological components, but also on human behaviour. Hence, the risks in a nuclear reactor depend on how the personnel act both under normal and extreme conditions. Similarly, risks in road traffic depend to a large degree on the behaviour of motorists. It would indeed be difficult to find an example of a technological system in which failure rates depend exclusively on its technical components, with no influence from human agency. The risks associated with one and the same technology can differ drastically between organizations with different attitudes to risk and safety. This is one of the reasons why human behaviour is even more difficult to predict than the behaviour of technical components. Humans come into probability estimates in one more way: It is humans who make these estimates. Unfortunately, psychological studies indicate that we have a tendency to be overly confident in our own probability estimates. In other words, we tend to neglect the possibility that our own estimates may be incorrect. It is not known what influence this bias may have on risk analysis, but it certainly has a potential to create errors in the analysis. (Lichtenstein, 1982, 331) It is essential, for this and other reasons, to make a clear distinction between those probability values that originate in experts' estimates and those that are known through observed frequencies. There are occasions when decisions are influenced by worries about possible dangers although we have only a vague idea about what these dangers might look like. Recent debates on biotechnology are an example of this. Specific, identified risks have had only a limited role in these debates. Instead, the focus has been on vague or unknown dangers such as the creation of new life-forms with unforeseeable properties.

    The Method Lab


    Alternative – Prioritize Probability

    [ ] Risk is based on the knowledge of the probability of an outcome

    Vertzberger, 1995, Professor at the Department of International Relations, the Hebrew University of Jerusalem, Israel

    (Yaacov Y. I. Vertzberger, June 1995, Rethinking and Reconceptualizing Risk in Foreign Policy Decision-Making: A Sociocognitive Approach, Political Psychology, Vol. 16, No. 2, pp. 347-380, http://www.jstor.org/stable/3791835)

    What is risk? As a real-life construct of human behavior, risk has to be viewed as a compendium that represents a complex interface among a particular set of behaviors and outcome expectations, taking into account the environmental context in which these behaviors take place. (For a review of various social science approaches to the concept of risk, see Bradbury, 1989, and Renn, 1992.) Risk must be approached in a nontechnical manner, and hence the common distinction between risk and uncertainty is neither realistic nor practical when applied to the analysis of nonquantifiable and ill-defined problems, such as those posed by important politico-military issues. The classical distinction found in economics between risk and uncertainty postulates that risk exists when decision-makers have perfect knowledge of all possible outcomes associated with an event and the probability distribution of their occurrence; whereas uncertainty exists when a decision-maker has neither the knowledge of nor the objective probabilities distribution of the outcomes associated with an event (Kobrin, 1979, p. 70; 1982, pp. 41-43). These definitions tend to overlook outcome values. Yet the term "risk" in everyday language and as commonly understood by decision-makers has a utility-oriented connotation. "[T]he word risk now means danger, high risk means a lot of danger.... The language of risk is reserved as a specialized lexical register for political talk about undesirable outcomes" (Douglas, 1990, p. 3; also March & Shapira, 1987). Uncertainty, on the other hand, is information-oriented and connotes a state of incomplete information. At the most extreme case, uncertainty entails even lack of information regarding what dimensions are relevant to the description of the risk-set associated with a particular case of intervention. This is defined as descriptive uncertainty. It is possible, however, that although the relevant problem dimensions are known, their values are not; this is defined as measurement uncertainty (Rowe, 1977, pp. 17-18).

    [ ] Risk assessment must focus on probability – this mirrors the real world and is the most precise method

    Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

    Beginning in the late 1960's, growing public concern with new technologies gave rise to a new field of applied science: the study of risk. Researchers from the technological, natural, behavioral, and social sciences joined to create a new interdisciplinary subject, risk analysis. (Otway 1987) The aim of this discipline is to produce as exact knowledge as possible about risks. Such knowledge is much in demand, not least among managers, regulators, and others who decide on the choice and use of technologies. But to what extent can risks be known? Risks are always connected to lack of knowledge. If we know for certain that there will be an explosion in a factory, then there is no reason for us to talk about that explosion as a risk. Similarly, if we know that no explosion will take place, then there is no reason either to talk about risk. What we refer to as a risk of explosion is a situation in which it is not known whether or not an explosion will take place. In this sense, knowledge about risk is knowledge about the unknown. It is therefore a quite problematic type of knowledge. Although many have tried to make the concept of risk as objective as possible, on a fundamental level it is an essentially value-laden concept. More precisely, it is negatively value-laden. Risks are unwanted phenomena. The tourist who hopes for a sunny week talks about the "risk" of rain, but the farmer whose crops are threatened by drought will refer to the possibility of rain as a "chance" rather than a "risk." There are more distinctions to be made. The word "risk" is used in many senses that are often not sufficiently distinguished between. Let us consider four of the most common ones. (1) risk = an unwanted event which may or may not occur. This is how we use the word "risk" when we say that lung cancer is one of the major risks that affect smokers, or that aeroembolism is a serious risk in deep-sea diving. However, we may also describe smoking as a (health) risk or tell a diver that going below 30 meters is a risk not worth taking. We then use the word "risk" to denote the event that caused the unwanted event, rather than the unwanted event itself: (2) risk = the cause of an unwanted event which may or may not occur. In addition to identifying the risks, i.e. finding out what they are, we also want to determine how big they are. This is usually done by assigning to each risk a numerical value that indicates its size or seriousness. The numerical value most often used for this purpose is the probability of the event in question. Indeed, risks are so strongly associated with probabilities that we often use the word "risk" to denote the probability of an unwanted event rather than that event itself. This terminology is particularly common in engineering applications. When a bridge-builder discusses the risk that a bridge will collapse, or an electrical engineer investigates the risk of power failure, they are almost sure to use "risk" as a synonym of probability.



    [ ] Probability is necessary for the quantification of risk

    Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

    From a decision-maker's point of view, it is useful to have risks quantified so that they can easily be compared and prioritized. But to what extent can quantification be achieved without distorting the true nature of the risks involved? As we have just seen, probabilities are needed for both of the common types of quantification of risk (the third and fourth senses of risk). Without probabilities, established practices for quantifying risks cannot be used. Therefore, the crucial issue in the quantification of risk is the determination of probabilities.

    [ ] Probability is the most important aspect of an argument

    Berube, 2000, Associate Professor of Speech Communication and Director of Debate at the University of South Carolina (David M. Berube, 2000, Debunking mini-max reasoning: the limits of extended causal chains in contest debating, http://www.cedadebate.org/CAD/2000_berube.pdf, pages 53-73)

    The strength of the relationship between the claims in extended arguments rests on the probability of the causation between and among the simple claims. The relationship between each claim in an extended argument is moderated by its probability. Probability is challenging to define. Many scientists and members of the risk assessment community "have not as yet come to grips with the foundational issue about the meaning of probability and the various interpretations that can be attached to the term probability. This is extremely important, for it is how one views probability that determines one's attitude toward a statistical procedure" (Singpurwalla 182). We employ the notion of probability when we do not know a thing with certainty. But our uncertainty is either purely subjective (we do not know what will take place, but someone else may know) or objective (no one knows, and no one can know). Subjective probability is a compass for an informational disability. . . . Probability is, so to speak, a cane for a blind man; he uses it to feel his way. If he could see, he would not need the cane, and if I knew which horse was the fastest, I would not need probability theory. (Lem 142) In simple arguments, "risks are simply the product of probability and consequence" (Thompson & Parkinson 552). Thompson and Parkinson found a difficulty in risk assessment associated with mini-max arguments that they identified as the problem of risk tails. "Risk tails are the segments of the standard risk curve which approach the probability and consequence axes. The tails represent high-consequence low-probability risk and low-consequence high-probability risk" (552). This region, especially the high-consequence low-probability tail, is the site of mini-max computation.
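The "product of probability and consequence" formula and the risk-tail problem can be sketched numerically. The scenarios and figures below are hypothetical, chosen only to show why the two tails of the risk curve are indistinguishable to the bare product:

```python
# Risk as expectation value: risk = probability * consequence.
# Two hypothetical scenarios from opposite ends of the risk curve.
scenarios = {
    "high-consequence / low-probability": (1e-6, 1_000_000),
    "low-consequence / high-probability": (0.5, 2),
}

for name, (prob, consequence) in scenarios.items():
    print(f"{name}: expected harm = {prob * consequence:g}")

# Both products work out to (approximately) 1, even though the scenarios
# sit in opposite tails -- the bare product cannot tell a routine mishap
# from a catastrophe claim, which is the risk-tail difficulty above.
```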


    Alternative – Threshold Probability

    [ ] The alternative to infinite risk is threshold probability – we should ignore risks that fall below a minimal level of probability

    Dale Herbeck, Professor of Communication at Boston College, 1992 [Director of the Fulton Debating Society at Boston College, The Use and Abuse of Risk Analysis in Policy Debate, Paper Presented at the 78th Annual Meeting of the Speech Communication Association (Chicago, IL), October 29th-November 1st, Available Online via ERIC Number ED354559, p. 10-12]

    First, and foremost, we need to realize that some risks are so trivial that they are simply not meaningful. This is not to argue that all low probability/high impact arguments should be ignored, but rather to suggest that there is a point beneath which probabilities are meaningless. The problem with low probability arguments in debate is that they have taken on a life of their own. Debate judges routinely accept minimal risks which would be summarily dismissed by business and political leaders. While it has been argued that our leaders should take these risks more seriously, we believe that many judges err in assessing any weight to such speculative arguments. The solution, of course, is to recognize that there is a line beyond which probability is not meaningfully evaluated. We do not believe it is possible to conclude, given current evidence and formats of debate, that a plan might cause a 1 in 10,000 increase in the risk of nuclear conflagration.17 Further, even if it were possible, we need to recognize that at some point a risk becomes so small that it should be ignored. As the Chicago Tribune aptly noted, we routinely dismiss the probability of grave impacts because they are not meaningful: It begins as soon as we awake. Turn on the light, and we risk electrocution; several hundred people are killed each year in accidents involving home wiring or appliances. Start downstairs to put on the coffee, and you're really asking for it; about 7,000 Americans die in home falls each year. Brush your teeth, and you may get cancer from the tap water. And so it goes throughout the day -- commuting to work, breathing the air, working, having lunch, coming home, indulging in leisure time, going back to bed.18 Just as we ignore these risks in our own lives, we should be willing to ignore minimal risks in debates. Second, we must consider the increment of the risk. All too often, disadvantages claim that the plan will dramatically increase the risk of nuclear war. This might be true, and still not be compelling, if the original risk was itself insignificant. For example, it means little to double the probability of nuclear war if the original probability was only 1 in one million. To avoid this temptation, advocates should focus on the initial probability, and not on the marginal doubling of the risk claimed by the negative.
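Herbeck's second point – weigh the increment, not the multiplier – is plain arithmetic. A minimal sketch, using the hypothetical one-in-a-million baseline from the card:

```python
# A "doubled" risk is only as meaningful as the baseline it doubles.
base_risk = 1 / 1_000_000      # hypothetical original probability
doubled_risk = 2 * base_risk   # the "plan doubles the risk" claim

absolute_increase = doubled_risk - base_risk
print(f"absolute increase: {absolute_increase:.7f}")  # prints 0.0000010

# A 100% relative increase, but the absolute change is still one in a
# million -- the reason to focus on the initial probability rather than
# the rhetorically impressive doubling.
```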

    [ ] The alternative to mini-max arguments is to disregard low probabilities – this restores rationality to decisionmaking

    David Berube, Associate Professor of Speech Communication at the University of South Carolina, 2000 [Director of Debate, Debunking Mini-max Reasoning: The Limits Of Extended Causal Chains In Contest Debating, Contemporary Argumentation and Debate, Volume 21, Available Online at http://www.cedadebate.org/CAD/2000_berube.pdf, Accessed 04-05-2008, p. 64-69]

    If extended arguments using mini-max reasoning is so indefensible, what can we do? Surprisingly, the answer is quite a lot. As a starting point, we need to reject the notion that contest debating would be impossible without [mini-max debating] them. We could demand a greater responsibility on the part of arguers making mini-max claims (a subject approached below). Debaters could use their plans and counterplans to stipulate the internal link and uniqueness stories for their extended arguments, consequently focusing the debate on probability assessment and away from exaggerated impacts. Alternatively, debaters may select to discuss ideas as we have seen in the recent trend toward kritik debating. In addition, we need to understand that burdens of proof associated with extended arguments involving mini-max reasoning are not always extraordinary. Here is one rationale why it might be imprudent to reject all instances involving mini-max claims. Consider these two questions. Should we decide to forego a civil rights initiative in the U.S. because it may lead to a war in the Middle East? Should we refrain from building a plutonium reprocessing plant nearby to avoid the heightened incidence of cancer? We might accept the second more regularly than the first. The reason the second extended argument should be more presumptive is simply because interceding variables that might preclude the consequence are less reliable than in the first scenario because they would be derivative. In other words, the fix would need to be designed by agents similarly motivated. Just like "realist" foreign policy theorists may think too much alike, so do agents who are acting within the same agency. Unlike the second scenario, agents able to intercede between civil rights legislation and U.S.-Israeli foreign relations come from different disciplines and worldviews (different directions) and are less likely to share motivations which might prevent their capability to interpose end stops into a particular series of occurrences. With these caveats out of the way and assuming some mini-max extended arguments are more reliable than others, I propose a number of tests by which the strength of particular mini-max extended arguments might be adduced. The tests fall into three general categories: probability and confidence, scenario construction, and perceptual bias. I offer these tests merely as suggestions and in full awareness of the fact that they hardly exhaust the potential checks on extended arguments using mini-max reasoning.


    They Say – Risk = Probability times impact = infinity

    [ ] Basing risk assessment on impact times size makes assessment too indeterminate – it ignores subjectivities in how to measure and evaluate expectations

    Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

    In risk-benefit analysis, i.e. the systematic comparison of risks with benefits, risks are measured in terms of their expectation values. This is also the standard meaning of risk in several branches of risk research. Hence, in studies of risk perception the standard approach is to compare the degree of severity that subjects assign to different risk factors ("subjective risk") to the expectation values that have been calculated for the same risk factors ("objective risk"). The underlying assumption is that there is an objective, knowable risk level that can be calculated with the expectation value method. However, this measure of risk is problematic for at least two fundamental reasons. First, probability-weighing is normatively controversial. A risk of 1 in 1000 that 1000 persons will die is very different from a risk of 1 in 10 that 10 persons will die. Although the expectation values are the same, moral reasons can be given to regard one of these two situations as more serious than the other. In particular, proponents of a precautionary approach maintain that prevention against large but improbable accidents should be given a higher priority than what would ensue from an expectation value analysis. (O'Riordan and Cameron 1994, O'Riordan et al 2001) The other problem with the expectation value approach is that it assesses risks only according to their probabilities and the severity of their consequences. Most people's appraisals of risks are influenced by factors other than these. In particular, the expectation value method treats risks as impersonal entities, and pays no attention to how risks and benefits are distributed or connected. In contrast, the relations between the persons affected by risks and benefits are important in most people's appraisals of risks. It makes a difference if it is myself or someone else that I expose to a certain danger in order to earn myself a fortune. If the expectation value is the same in both cases, we can arguably say that the size of the risk is the same in both cases. It does not follow, however, that the risk is equally acceptable in the two cases. More generally speaking, if we use expectation values to measure the size of risks, this must be done with the reservation that the size of a risk is not all that we need to know in order to judge whether or not it can be accepted. Additional information about its social context is also needed.
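Hansson's first objection turns on cases whose expectation values coincide exactly. His own example works out as follows (a sketch using the numbers from the card):

```python
# Two risks with identical expectation values (expected deaths).
cases = [
    (1 / 1000, 1000),  # a 1-in-1000 chance that 1000 persons die
    (1 / 10, 10),      # a 1-in-10 chance that 10 persons die
]

expected_deaths = [round(p * n, 9) for p, n in cases]
print(expected_deaths)  # prints [1.0, 1.0]

# The expectation-value method scores both cases identically; any moral
# difference between them has to come from outside the calculation.
```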

    [ ] Indeterminate risk assessment is counterproductive – it creates the illusion of objectivity and masks dangers

    Hansson, 2006; professor in philosophy at the Royal Institute of Technology (Sven Ove; May 23, 2006; The Epistemology of Technological Risk; http://scholar.lib.vt.edu/ejournals/SPT/v9n2/hansson.html#bondi)

    In real life we are seldom in a situation like that at the roulette table, when all probabilities are known with certainty (or at least beyond reasonable doubt). Most of the time we have to deal with dangers without knowing their probabilities, and often we do not even know what dangers we have ahead of us. This is true not least in the development of new technologies. The social and environmental effects of a new technology can seldom be fully grasped beforehand, and there is often considerable uncertainty with respect to the dangers that it may give rise to. (Porter 1991) Risk analysis has, however, a strong tendency towards quantification. Risk analysts often exhibit the tuxedo syndrome: they proceed as if decisions on technologies were made under conditions analogous to gambling at the roulette table. In actual fact, however, these decisions have more in common with entering an unexplored jungle. The tuxedo syndrome is dangerous since it may lead to an illusion of control. Risk analysts and those whom they advise may believe that they know what the risks are and how big they are, when in fact they do not. When there is statistically sufficient experience of an event-type, such as a machine failure, then we can determine its probability by collecting and analysing that experience. Hence, if we want to know the probability that the airbag in a certain make of car fails to release in a collision, we should collect statistics from the accidents in which such cars were involved.



    [ ] Probability times magnitude does not take real world events into account and does not work in decision making – it denies decision makers crucial information and flexibility

    Hansson, 2007; professor in philosophy at the Royal Institute of Technology [Hélène Hermansson, Sven Ove Hansson. Risk Management. Basingstoke: Jul 2007. Vol. 9, Iss. 3; pg. 129, 16 pgs]