BEHAVIORAL AND BRAIN SCIENCES (1994) 17, 1-42. Printed in the United States of America.

Nonconsequentialist decisions

Jonathan Baron
Department of Psychology, University of Pennsylvania, Philadelphia, PA 19104-6196
Electronic mail: [email protected]

Abstract: According to a simple form of consequentialism, we should base decisions on our judgments about their consequences for achieving our goals. Our goals give us reason to endorse consequentialism as a standard of decision making. Alternative standards invariably lead to consequences that are less good in this sense. Yet some people knowingly follow decision rules that violate consequentialism. For example, they prefer harmful omissions to less harmful acts, they favor the status quo over alternatives they would otherwise judge to be better, they provide third-party compensation on the basis of the cause of an injury rather than the benefit from the compensation, they ignore deterrent effects in decisions about punishment, and they resist coercive reforms they judge to be beneficial. I suggest that nonconsequentialist principles arise from overgeneralizing rules that are consistent with consequentialism in a limited set of cases. Commitment to such rules is detached from their original purposes. The existence of such nonconsequentialist decision biases has implications for philosophical and experimental methodology, the relation between psychology and public policy, and education.

Keywords: bias; consequentialism; decision; goals; intuition; irrationality; judgment; normative models; norms; omission; overgeneralization; utility

1. Introduction

Socrates, Aristotle, and Plato started a tradition of evaluating human reasoning according to standards that applied to the reasoning itself rather than to its conclusions. We now maintain such standards through schools, child-rearing, and public and private discourse. The best-known standards apply to logic, and standards have also been applied to practical and moral reasoning. To accuse someone of being "illogical" or "unreasonable" is to express such standards, even when the accuser is motivated by a dislike of the conclusion rather than the means of reaching it.

A long tradition in psychology concerns the evaluation of human reasoning with respect to standards of this sort. Evaluation of reasoning was explicit in the study of logic (Evans 1989; Woodworth & Schlosberg 1954), problem-solving (Wertheimer 1959), cognitive development (e.g., Kohlberg 1970; Sharp et al. 1979), and the social psychology of stereotyping, obedience to illegitimate authority, excessive conformity, and self-serving judgments (Nisbett & Ross 1980; Sabini 1992).

In the 1950s, this tradition was extended to the study of judgments and decisions, using statistics, probability theory, and decision theory as standards. Early results (Peterson & Beach 1967; Tversky 1967) indicated that people were at least sensitive to the right variables. For example, people are more confident in numerical estimates based on larger samples, although, not surprisingly, the effect of sample size is not exactly what the formula says it should be. Beginning about 1970, however, Kahneman and Tversky (e.g., 1972) began to show qualitative departures from these standards; for example, judgments are sometimes completely insensitive to sample size. [See also Cohen: "Multiple Mechanisms for Partitioning" BBS 12(4) 1989 and Kyburg: "Induction and Probability" BBS 9(4) 1986. Ed.]
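To make the normative benchmark concrete (a sketch added here, not part of the original article): by the usual sampling formula, the standard error of a mean falls with the square root of the sample size, so confidence should rise with n in that specific proportion.

```python
import math

def standard_error(sigma: float, n: int) -> float:
    """Normative benchmark: the standard error of a sample mean
    falls with the square root of the sample size."""
    return sigma / math.sqrt(n)

# Hypothetical population standard deviation of 10.
for n in (10, 40, 160):
    print(n, round(standard_error(10, n), 2))
# Prints 3.16, 1.58, 0.79: quadrupling the sample halves the error.
```

Subjects' confidence moves in the right direction with n, but not in this square-root proportion, and sometimes not at all.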

Allais (1953) and Ellsberg (1961) noted departures from the expected-utility theory of decision making fairly early. At the time, that theory was understood as both a normative model, specifying how people should make decisions ideally, and as a descriptive model, predicting and explaining how decisions are actually made. Economists were not bothered by the idea that a single model could do both jobs, for they had done well by assuming that people are approximately rational. Demonstrations that people sometimes violated expected-utility theory were at first taken to imply that the model was incorrect both descriptively and normatively (Allais 1953; Ellsberg 1961).

Kahneman and Tversky (1979) suggested that a distinction between normative and descriptive models was warranted in decision making as well as elsewhere. Expected-utility theory could be normatively correct but descriptively incorrect. They proposed "prospect theory" as an alternative descriptive model.

The suggestion that people were behaving nonnormatively was defended most clearly by reference to framing effects (Kahneman & Tversky 1984): people make different decisions in (what they would easily recognize as) the same situations described differently. For example, most people prefer a 20% chance of winning $45 over a 25% chance of $30, but if they have a 75% chance of winning nothing and a 25% chance of a choice between $30 for sure and an 80% chance of $45, they say they will pick the $30 if they get the choice. The latter choice was thought to result from an exaggerated weight given to outcomes that are certain, a certainty effect. Presentation of the gamble as a two-stage lottery induced subjects to see the $30 as certain (if they got the chance to choose it).
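To see why the two framings describe the same gambles (a worked example added here for illustration, not from the original text): choosing the $45 branch of the two-stage lottery wins with probability 0.25 × 0.80 = 0.20, which is exactly the one-stage 20% gamble.

```python
def expected_value(prob: float, amount: float) -> float:
    """Expected value of a simple gamble: P(win) * payoff."""
    return prob * amount

# One-stage framing.
ev_45 = expected_value(0.20, 45)  # 20% chance of $45 -> 9.0
ev_30 = expected_value(0.25, 30)  # 25% chance of $30 -> 7.5

# Two-stage framing: a 25% chance of reaching the choice, then either
# $30 for sure or an 80% chance of $45.
ev_sure   = expected_value(0.25 * 1.00, 30)  # -> 7.5, same gamble as ev_30
ev_gamble = expected_value(0.25 * 0.80, 45)  # -> 9.0, same gamble as ev_45

print(ev_45, ev_30, ev_sure, ev_gamble)  # 9.0 7.5 7.5 9.0
```

Since the compound probabilities reduce to the same simple gambles, preferring the $45 gamble in one framing and the sure $30 in the other is the contradiction described next.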

Because such choices are, in a sense, contradictory, they cannot be normatively correct. Hence, separate normative and descriptive models may be needed, and certain rules of decision making (such as the certainty effect) may be seen as errors, in the sense that they depart from a normative model. Although Kahneman and Tversky did not defend expected-utility theory itself, it seems that the only way to avoid all the framing effects they have found is to follow that theory (e.g., Hammond 1988).

I want to defend an approach to the study of errors in decision making based on a comparison of decisions to normative models. I shall argue that this approach is a natural extension of the psychology of reasoning errors and that it has some practical importance. Decisions (and judgments) have consequences. By improving decisions, we might, on the average, improve the consequences. Many of the consequences that people often lament are the result of human decisions, such as those made by political leaders (and, implicitly, by those who elect them). If we can improve decision making, we can improve our lives.

Psychologists can make errors, too, but we can make errors of omission as well as errors of commission. If we fail to point out an error that should be corrected, the error continues to be made. Errors of commission have an extra cost. If we mistakenly try to change some pattern of reasoning that is not really erroneous, we not only risk making reasoning worse but we also reduce our credibility. Arguably, this has happened for many "pop psychologists." Still, we cannot wait for perfect confidence, or we will never act. Academic caution is not the only virtue.

The basic approach I shall take here (Baron 1985; 1988a) is to consider three types of accounts or "models" of decision making. Normative accounts are those that specify a standard by which decision making will be evaluated. Descriptive accounts tell us how decision making proceeds in fact. Of particular interest are aspects of decision making that seem nonnormative. If we find such phenomena, we know there is room for improvement. Prescriptive accounts are designs for improving decision making. They can take the form of very practical advice for everyday decision making (Baron et al. 1991) or formal schemes of analysis (Bell et al. 1988). A systematic departure from a normative model can be called an error or a bias, but calling it this is of no use if the error cannot be corrected. Therefore, the ultimate standards are prescriptive. The prescriptive standards that we should try to find will represent the best rules to follow for making decisions, taking into account human limitations in following any standard absolutely. Normative standards are theoretical, to be appealed to in the evaluation of prescriptive rules or individual decisions.

In the rest of this target article, I shall outline a normative model of decision making. I shall then summarize some departures from that model, mostly based on my own work and that of my colleagues. Most of these departures involve following rules or norms that agree with the normative model much of the time. Departures based on more pernicious rules (e.g., racist ones) doubtless occur too, but less often. I shall discuss the methodological implications of these departures for philosophy and psychology and their prescriptive implications for public policy and education.

2. Consequentialism as a normative model

Here, I shall briefly defend a simple normative model according to which the best decisions are those that yield the best consequences for achieving people's goals. Goals are criteria by which people evaluate states of affairs, for example, rank them as better or worse. Examples of goals are financial security, maintenance of personal relationships, social good, or more immediate goals such as satisfying a thirst.

Various forms of consequentialism have been developed, including expected-utility theory and utilitarianism. I have defended these particular versions elsewhere (Baron 1993b). For present purposes, consequentialism holds simply that the decision should be determined by all-things-considered judgments of overall expected goal achievement in the states to which those decisions immediately lead. In this simple form of consequentialism, the states are assumed to be evaluated holistically, but these holistic evaluations take into account probabilities of subsequent states and the distribution of goal achievement across individuals. Suppose, for example, that the choice is between government programs A and B, each affecting many people in uncertain ways. If I judge that the state of affairs resulting from A is, on the whole, a better one than that resulting from B for the achievement of goals, then consequentialism dictates that I should choose A. It is irrelevant that program B may, through luck, turn out to have been better. In sum, judgment of expected consequences should determine decisions.

To argue for this kind of consequentialism, I must ask where normative models come from, what their justification could be. I take the idea of a normative model to be an abstraction from various forms of behavior that I described at the outset, specifically those in which we express our endorsement of norms (in roughly the sense of Gibbard 1990), that is, standards of reasoning. The basic function of such endorsement is to induce others to conform to these norms. What reasons could we have for endorsing norms?

Self-interest and altruism give us such reasons. Self-interest gives us reason to endorse norms, such as the Golden Rule, with which we exhort others to help us or to refrain from hurting us. Altruism motivates the same norms: we tell people to be nice to other people. And altruism gives us reason to endorse norms for the pursuit of self-interest. We care about others, so we want to teach them how to get what they want. Indirectly, advocacy of such norms helps the advocate to follow them, so we also have a self-interested reason to endorse them.

It might be argued that norms themselves can provide reasons for their own endorsement. For example, those who think active euthanasia should be either legal or illegal want others to agree with them. But in any inquiry about what norms we should endorse, it is important that we put aside the norms we already have, lest we beg the question. In thinking about my own normative standards, I put aside the goals that derive from those standards, although I must treat other people's goals as given when I think about what is best for them.

If goal achievement gives us reasons to endorse norms, then, other things being equal, we should endorse norms that help us achieve our goals (collectively, because our reasons are both altruistic and selfish). Other things are equal, I suggest, because we have no other reasons for endorsing norms. Goals are, by definition, the motives we have for doing anything. We need not decide here on the appropriate balance of goals of self versus others. This issue does not arise in the examples I shall discuss.

For example, consider two possible norms concerning acts and omissions that affect others. One norm opposes harmful acts. Another opposes both harmful acts and omissions, without regard to the distinction. The second norm requires people to help other people when the judged total harm from not helping exceeds that from helping. Harm includes everything relevant to goal achievement, including effort and potential regret. Which norm should I want others to follow? If others follow the first, more limited, norm, my goals will not be achieved as well, because I would lose the benefit of people helping me when the total benefits exceed the costs. I therefore have reason to endorse a norm that does not distinguish acts and omissions, and I have no reason to distinguish acts and omissions as such in the norms I endorse. Once I endorse this norm, those who accept it will want me to follow it, too, but if I hold back my endorsement for this reason, I will lose credibility (Baron 1993b).

Suppose I have a goal opposing harmful action but not opposing harmful omission. This goal is not derived from my moral intuitions or commitments, which we have put aside. In this case, I would have reason to endorse the limited norm. The more people who have such a (nonmoral) goal, the more reason we all have to endorse this norm (out of altruism, at least). But consequentialism would not be violated, for adherence to the norm would in fact achieve people's nonmoral goals. Although this argument leads to a consequentialist justification for a norm distinguishing acts and omissions, the argument is contingent on a (dubious) assumption about human desires.

Consider the case of active versus passive euthanasia. Suppose we believe that there are conditions under which most people would want life-sustaining treatment withheld but would not want to be actively killed. Then, if we do not know what a patient wants, this belief would justify a distinction. However, if we know that the patient has no goals concerning the distinction, we have no reason to make it on the patient's behalf. (Likewise, the slippery-slope argument that active euthanasia will lead to reduced respect for life depends on a contingent fact that could be taken into account if it were true. The slippery slope could also go the other way: refraining from active euthanasia could lead to errors of misallocation of resources and the consequent neglect of suffering.) Our decision would depend on whether death itself was to be preferred, not on the way in which death comes about (assuming that the means of death have no other relevant consequences of their own for people's goals). In sum, we do not necessarily have any reason to want each other to honor a principle distinguishing acts and omissions.

Consider another example. Suppose a girl has a 10 in 10,000 chance of death from a disease. A vaccine will prevent that disease, but there is a 5 in 10,000 chance of death from its side effects.

The girl should endorse a norm that tells you to give her the vaccine, assuming that this helps her achieve her goal of living. We would each want one another to act in this way, so we all have reason to endorse this norm. Following Hare (1963), I shall call this kind of argument a Golden Rule argument.

If you have a goal of not putting others at risk through acts in particular and if this inhibits you from vaccinating the girl, she is hurt. She has reason to discourage you from holding such a goal. It is in her interest for your goals about your decisions to be concerned only with their consequences. Of course, altruistically, she might be concerned with your own goals about your decisions, so she might conclude that it is on the whole better for you not to vaccinate her. But she has no general reason - apart from what she knows about you in particular - to endorse a norm for you that prescribes nonvaccination. The norms we have selfish reason to endorse are those concerned only with consequences, because those are what affect us.

Even when we endorse norms out of altruism, we have no general reason to endorse a norm treating acts and omissions differently. You might have a goal of not causing harm to yourself through acts, so you might not vaccinate yourself. Such a goal would make it more harmful for me to force you to vaccinate yourself, for I would go against that goal. But I have no reason, altruistic or selfish, to endorse a norm that leads you to have such a goal if you do not already have it, for it will not help you achieve your other goals, or mine.

This kind of argument concerning the act-omission distinction differs from other approaches to this issue (see Kuhse 1987, for an enlightening review). Most of these are based on intuitions about cases as data to be accounted for (e.g., the articles in Fischer & Ravizza 1992). Yet, it is just these intuitions that are at issue. I suggest that many of them arise from overgeneralizations, to which people - even those who become moral philosophers - become committed in the course of their development. In this case, for example, harmful acts are usually more intentional than harmful omissions, and hence, more blameworthy. But intention is not different in the cases just discussed. People continue to distinguish acts and omissions, however, even when the feature that typically makes them different is absent.

This same argument will apply to the other kinds of norms I shall discuss. In general, the function of normative models for decision making (as described earlier) is not served by any norms other than those that specify attaining the best consequences, in terms of goal achievement. And those norms should not encourage people to have any goals for decision making other than achieving the best consequences. We might be able to go farther than this, specifying norms for the analysis of decisions into utilities and probabilities, for example, but the examples discussed do not require such analysis. (Although the vaccination case involved probabilities, I simply assumed that anyone would judge a lower risk of death to be a better state of affairs: no trading off of probability and utility was required.)
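A worked version of the vaccination comparison (an illustrative sketch added here, not from the original text): the consequentialist norm simply compares the two death risks, regardless of whether the risk comes from an act or an omission.

```python
def death_risk(vaccinate: bool) -> float:
    """Probability of death under each option, using the example's figures."""
    disease_risk = 10 / 10_000      # dies of the disease if not vaccinated
    side_effect_risk = 5 / 10_000   # dies of side effects if vaccinated
    return side_effect_risk if vaccinate else disease_risk

# Consequentialism picks whichever option minimizes expected harm.
best = min((True, False), key=death_risk)
print(best, death_risk(True), death_risk(False))  # True 0.0005 0.001
```

Vaccinating halves the risk of death, so a norm concerned only with consequences tells you to vaccinate; a norm that penalizes harm caused by action can reverse that choice.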

The upshot of this argument is that we have reason to be disturbed, prima facie, when we find others making decisions that violate consequentialism. On further investigation, we might find that no better prescriptive norms are possible.

But, unless this is true, these norms will lead to decisions that prevent us from achieving our goals as well as other decisions might. Our goals themselves, including our altruistic goals, therefore, give us reason to be concerned about nonconsequentialist decisions. What we do about this disturbance is another question, to which we might apply consequentialist norms.

3. Departures from consequentialism

I shall now present a few examples of possible violations of consequentialism. I hope these examples make it plausible that nonconsequentialist thinking exists and matters, even if each example is subject to one quibble or another.

3.1. Omission and status-quo bias. Ritov and Baron (1990) examined a set of hypothetical vaccination decisions like the one just described. We compared omission and commission as options within the same choice. In one experiment, subjects were told to imagine that their child had a 10 out of 10,000 chance of death from a flu epidemic, that a vaccine could prevent the flu, but the vaccine itself could kill some number of children. Subjects were asked to indicate the maximum overall death rate for vaccinated children for which they would be willing to vaccinate their child. Most subjects answered well below 9 per 10,000. Of the subjects who showed this kind of reluctance, the mean tolerable risk was about 5 out of 10,000, half the risk of the illness itself. The findings were the same when subjects were asked to adopt the position of a policy maker deciding for large numbers of children. When subjects were asked for justification, some said they would be responsible for any deaths caused by the vaccine, but they would not be (as) responsible for deaths caused by failure to vaccinate. When a Golden Rule argument was presented (Baron 1992), the bias was largely eliminated. Asch et al. (1993) and Meszaros et al. (1992) have found that the existence of this bias correlates with mothers' resistance toward pertussis vaccination (which may produce death or permanent damage in a very few children).

Other studies (Ritov & Baron 1992; Spranca et al. 1991) indicate a general bias toward omissions over acts that produce the same outcome. In one case used by Spranca et al. (1991), for example, subjects were told about John, a tennis player who thought he could beat Ivan Lendl only if Lendl were ill. John knew that Ivan was allergic to cayenne pepper, so, when John and Ivan went out to the customary dinner before their match, John planned to recommend to Ivan the house salad dressing, which contained cayenne pepper. Subjects were asked to compare John's morality in different endings to the story. In one ending, John recommended the dressing. In another ending, John was about to recommend the dressing when Ivan chose it for himself, and John, of course, said nothing. Of the 33 subjects tested, 10 thought that John's behavior was worse in the commission ending; no subject thought the omission was worse. Other studies (Baron & Ritov, in press; Ritov & Baron 1992; Spranca et al. 1991, Experiment 4) show that the bias toward omissions is not limited to cases in which harm (or risk) is the result, although the effect is greater when the decision leads to the worse of two possible outcomes (Baron & Ritov, in press).
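The size of this bias can be expressed with a simple weighting model (an interpretive sketch added here; the "action weight" is an illustrative assumption, not part of Ritov and Baron's analysis): if harm caused by action counts w times as much as harm caused by omission, indifference occurs where w × vaccine risk = disease risk, so a mean tolerable risk of 5 in 10,000 against a disease risk of 10 in 10,000 corresponds to w ≈ 2.

```python
def tolerable_vaccine_risk(disease_risk: float, action_weight: float) -> float:
    """Highest vaccine risk accepted if harm caused by action is
    weighted `action_weight` times harm caused by omission."""
    return disease_risk / action_weight

disease_risk = 10 / 10_000
print(tolerable_vaccine_risk(disease_risk, 1.0))  # 0.001: consequentialist threshold
print(tolerable_vaccine_risk(disease_risk, 2.0))  # 0.0005: about the observed mean
```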

Inaction is often confounded with maintaining the status quo, and several studies have shown an apparent bias toward the status-quo/omission option. People require more money to give up a good than they are willing to pay for the same good (Knetsch & Sinden 1984; Samuelson & Zeckhauser 1988; and, for public goods, Mitchell & Carson 1989). Kahneman et al. (1990) showed that these effects were not the result of wealth effects or other artifacts. They are, at least in part, true biases. Although Ritov and Baron (1992) found that this status-quo bias was largely a consequence of omission bias, Schweitzer (in press) found both omission bias without a status-quo option, and status-quo bias without an omission option. Baron (1992) and Kahneman et al. (1990) also found a pure status-quo bias. Status-quo bias, like omission bias, can result from overgeneralization of rules that are often useful, such as, "If it ain't broke, don't fix it."

It is clear that omission and status-quo bias can cause failures to achieve the best consequences, as we would judge them in the absence of a decision. Possible examples in real life are the pain and waste of resources resulting from the prohibition of active euthanasia (when passive euthanasia is welcomed), the failure to consider aiding the world's poor as an obligation on a par with not hurting them (Singer 1979), and the lives of leisure (or withdrawal from worldly pursuits) led by some who are capable of contributing to the good of others.

3.2. Compensation. Compensation for misfortunes is often provided by insurance (including social insurance) or by the tort system. The consequentialist justification of compensation is complex (Calabresi 1970; Calfee & Rubin, in press; Friedman 1982), but, in the cases considered here, compensation should depend on the nature of the injury (including psychological aspects) and not otherwise on its cause or on counterfactual alternatives to it. (The compensation in these cases can help the victim, but it cannot punish the injurer or provide incentive for the victim to complain.) Any departure from this consequentialist standard implies that some victims will be overcompensated or others undercompensated, or both.

Miller and McFarland (1986) asked subjects to make judgments of compensation. When a misfortune was almost avoided, more compensation was provided than when it was hard to imagine how it could have been avoided. A possible justification for this difference is that victims were more emotionally upset in the former case than in the latter. Ritov and Baron (in press), however, found the same sort of result when subjects understood that the victim did not know the cause of the injury or the alternatives to it. In all cases a train accident occurred when a fallen tree was blocking the tracks. Subjects judged that more compensation should be provided (by a special fund) when the train's unexpected failure to stop caused the injury than when the suddenness of the stop was the cause. The results were the same whether the failure was that of an automatic stopping device or of a human engineer.

These results can be partially explained in terms of norm theory (Kahneman & Miller 1986), which holds that we evaluate outcomes by comparing them to easily imagined counterfactual alternatives. When it is easy to imagine how things could have turned out better, we regard the outcome as worse.

When subjects were told that the outcome would have been worse if the train had stopped (when it did not stop), or if the train had not stopped (when it did), they provided less compensation, as norm theory predicts. Likewise, they provided more compensation if the counterfactual outcome would have been better. But this information about counterfactuals did not eliminate the effect of the cause of the outcome. Hence, norm theory, while supported, is not sufficient to explain all the results. Another source could be overgeneralization of principles that would be applied to cases in which an injurer must pay the victim. The injurer is more likely to be at fault when a device fails or when the engineer fails to stop.

A similar sort of overgeneralization might be at work in another phenomenon, the person-causation bias. Here, subjects judge that more compensation should be provided by a third party when an injury is caused by human beings than when it is caused by nature (Baron 1992; Ritov & Baron, in press). This result is found, again, when both the injurer (if any) and the victim are unaware of the cause of the injury or of the amount of compensation (Baron 1993b), so that even psychological punishment is impossible. For example, subjects provided more compensation to a person who lost a job from unfair and illegal practices of another business than to one who lost a job from normal business competition (neither victim knew the cause). The same result was found for blindness caused by a restaurant's violation of sanitary rules versus blindness caused by a mosquito.

This effect might be an overgeneralization of the desire to punish someone. Ordinarily, punishment and compensation are correlated, because the injurer is punished by having to compensate the victim (or possibly even by the shame of seeing that others must compensate the victim). But when this correlation is broken, subjects seem to continue to use the same heuristic rule. This sort of reasoning might account in part for the general lack of concern about the discrepancy between victims of natural disease, who are rarely compensated (beyond their medical expenses), and victims of human activity, who are often compensated a great deal, even when little specific deterrence results because the compensation is paid by liability insurance.

3.3. Punishment. Notoriously, consequentialist views of punishment hold that two wrongs do not make a right, so punishment is justified largely on the grounds of deterrence. Deterrence can be defined generally to include education, support for social norms, and so on, but punishment must ultimately prevent more harm than it inflicts. Again, I leave aside the question of how to add up harms across people and time. The simple consequentialist model put forward here implies that, normatively, our judgment of whether a punishment should be inflicted should depend entirely on our judgment of whether doing so will bring about net benefit (compared to the best alternative), whether or not the judgment of benefit is made by adding up benefits and costs in some way. (We might want to include here the benefits of emotional satisfaction to those who desire to see punishment inflicted. But we would certainly want to include deterrent effects as well.)

People often ignore deterrence in making decisions about punishment or penalties.

Baron and Ritov (in press a) asked subjects to assess penalties and compensation separately for victims of birth-control pills and vaccines (in cases involving no clear negligence). In one case, subjects were told that a higher penalty would make the company and others like it try harder to make safer products. In an adjacent case, a higher penalty would make the company more likely to stop making the product, leaving only less safe products on the market. Most subjects, including a group of judges, assigned the same penalties in both of these cases. In another test of the same principle, subjects assigned penalties to the company even when the penalty was secret, the company was insured, and the company was going out of business, so that (subjects were told) the amount of the penalty would have no effect on anyone's future behavior. Baron et al. (1993) likewise found that subjects, including judges and legislators, typically did not penalize companies differently for dumping hazardous waste, whether the penalty would make companies try harder to avoid waste or induce them to cease making a beneficial product. It has been suggested (e.g., Inglehart 1987) that companies have in fact stopped making beneficial products, such as vaccines, exactly because of such penalties.

Such a tendency toward retribution could result from overgeneralization of a deterrence rule. It may be easier for people - in the course of development - to understand punishment in terms of rules of retribution than in terms of deterrence. Those who do understand the deterrence rationale generally make the same judgments - because deterrence and retribution principles usually agree - so opportunities for social learning are limited. Other possible sources of a retribution rule may be a perception of balance or equity (Walster et al. 1978) and a generalization from the emotional response of anger, which may operate in terms of retribution (although it may also be subject to modulation by moral beliefs; see Baron 1992).

A second bias in judgments of punishment is that people seem to want to make injurers undo the harm they did, even when some other penalty would benefit others more. Baron and Ritov (1993) found that both compensation and penalties tended to be greater when the pharmaceutical company paid the victim directly than when penalties were paid to the government and compensation was paid by the government (in the secret-settlement case described earlier). Baron et al. (1993) found (unsurprisingly) that subjects preferred to have companies clean up their own waste, even if the waste threatened no one, rather than spend the same amount of money cleaning up the much more dangerous waste of a defunct company. Ordinarily, it is easiest for people to undo their own harm, but this principle may be overgeneralized.

Both of these biases can lead to worse consequences in some cases, although much of the time the heuristics that lead to them probably generate the best consequences. These results, then, might also be the result of overgeneralization of otherwise useful heuristics.
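The consequentialist rule at work in these cases can be stated compactly (a schematic sketch with invented magnitudes, added here for illustration): the penalty is justified by its expected future effects, not by the size of the past harm.

```python
def impose_penalty(deterrence_benefit: float, penalty_harm: float) -> bool:
    """Consequentialist rule: penalize only if the future harm prevented
    (deterrence, broadly construed) exceeds the harm the penalty inflicts."""
    return deterrence_benefit > penalty_harm

# Hypothetical magnitudes for the two vaccine cases described above.
print(impose_penalty(deterrence_benefit=50.0, penalty_harm=10.0))  # True
print(impose_penalty(deterrence_benefit=0.0, penalty_harm=10.0))   # False
# A retributive rule would assign the same penalty in both cases,
# which is what most subjects (including judges) did.
```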

3.4. Resistance to coerced reform. Reforms are social rules that improve matters on the whole. Some reforms require coercion. In a social dilemma, each person is faced with a conflict between options: one choice is better for the individual and the other is better for all members of the group in question. Social dilemmas can be, and have been, solved by agreements to penalize defectors (Hardin 1968). Coercion may also be required to resolve negotiations (even though almost any agreement would be better for both sides than no agreement) or bring about an improvement for many at the expense of a few, as when taxes are raised for the wealthy. It is in the interest of most people to support beneficial but coercive reforms, and some social norms encourage such support, but other social norms may oppose reforms (Elster 1989).

To look for such norms, Baron and Jurney (1993) presented subjects with six proposed reforms, each involving some public coercion that would force people to behave cooperatively, that is, in a way that would be best for all if everyone behaved that way. The situations involved abolition of television advertising in political campaigns, compulsory vaccination for a highly contagious flu, compulsory treatment for a contagious bacterial disease, no-fault auto insurance (which eliminates the right to sue), elimination of lawsuits against obstetricians, and a uniform 100% tax on gasoline (to reduce global warming). Most subjects thought things would be better on the whole if the reforms, as described, were put into effect, but many of those subjects said they would not vote for the reforms. Subjects who voted against proposals they saw as improvements cited several reasons. Three reasons played a major role in such resistance to reform, as indicated both by correlations with resistance (among subjects who saw the proposals as improvements) and by subjects indicating (both in yes-no and free-response formats) that these were the reasons for their votes: fairness, harm, and rights.

Fairness concerns the distribution of the benefits or costs of reform. People may reject a generally beneficial reform, such as an agreement between management and labor, on the grounds that it allocates benefits or costs in a way that violates some standard of distribution.

Harm refers to a norm that prohibits helping one person by harming another, even if the benefit outweighs the harm and even if unfairness is otherwise not at issue (e.g., when those to be harmed are determined randomly). Whereas opposition to reform on the grounds of fairness compares reform to a reference point defined by the ideally fair result, opposition on the grounds of harm compares it to the status quo. One trouble with most reforms is that they help some people and hurt others. For example, an increased tax on gasoline in the United States may help the world by reducing CO2 emissions, and it will help most Americans by reducing the budget deficit; but it will hurt those few Americans who are highly dependent on gasoline, despite the other benefits for them. The norm against harm is related to omission bias, because failing to help (by not passing the reform) is not seen as equivalent to the harm resulting from action.

A right, in this context, is an option to defect. The removal of this right might be seen as a harm, even if, on other grounds, the person in question is clearly better off when the option to defect is removed (because it is also removed for everyone else).

Subjects cited all of these reasons for voting against coercive reforms (both in yes-no and open-ended response formats). For example, in one study, 39% of subjects said they would vote for a 100% tax on gasoline, but 48% of those who would vote against the tax thought it would do more good than harm on the whole.

Subjects thus admitted to making nonconsequentialist decisions, both through their own judgment of consequences and through the justifications they gave. Of those subjects who would vote against the tax despite judging that it would do more good than harm, for example, 85% cited the unfairness of the tax as a reason for voting against it, 75% the fact that the tax would harm some people, and 35% the fact that the tax would take away a choice that people should be able to make. (In other cases, rights were more prominent.) Removal of liberty by any means may set a precedent for other restrictions of freedom (Mill 1859), hence a consequentialist argument could be made against coercion even when a simple analysis suggests that coercion is justified. But no subject made this kind of argument. The appeals to the principles listed were in all cases direct and were written as though they were sufficient.

Baron (1993b) obtained further evidence for the "do no harm" heuristic. Subjects were asked to put themselves in the position of a benevolent dictator of a small island consisting of equal numbers of bean growers and wheat growers. The decision was whether to accept or decline the final offer of the island's only trading partner, as a function of its effect on the incomes of the two groups. Most subjects would not accept any offer that reduced the income of one group to increase the income of the other, even if the reduction was a small fraction of the gain, and even if the group bearing the loss had a higher income at the outset. (It remains to be determined whether subjects think the subjective effect of the loss is greater than that of the gain.) The idea of Pareto efficiency (Pareto 1971) may have the same intuitive origin.

Additional evidence for the role of fairness comes from a number of studies in which subjects refuse to accept beneficial offers because the benefits seem to be unfairly distributed (Camerer & Loewenstein 1993; Thaler 1988).
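The two decision rules in the dictator study can be contrasted directly (a sketch added here; the income figures are invented for illustration):

```python
def consequentialist_accept(income_changes: list) -> bool:
    """Accept the offer iff total income rises, whoever bears the loss."""
    return sum(income_changes) > 0

def do_no_harm_accept(income_changes: list) -> bool:
    """Decline any offer that lowers some group's income, however small
    the loss relative to the gain."""
    return all(change >= 0 for change in income_changes)

# Hypothetical final offer: bean growers gain 100, wheat growers lose 10.
offer = [100.0, -10.0]
print(consequentialist_accept(offer))  # True: a net gain of 90
print(do_no_harm_accept(offer))        # False: the rule most subjects applied
```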

In all of these studies of departures from consequentialism, it might be possible for someone who had made a nonconsequentialist decision to find a consequentialist justification for it, for example, by imagining goals or subtle precedent-setting effects. Yet, in all these studies, justifications are typically not of this form. Moreover, the question at issue is not whether a conceivable consequentialist justification can be found but, rather, whether subjects faced with a description of the consequences, divorced from the decisions that led to them, would judge the consequences for goal achievement in a way that was consistent with their decisions. It seems unlikely that they would do so in all of these studies.

4. The sources of intuition

I have given several examples of possible nonconsequentialist thinking. My goal has been to make plausible the claim that nonconsequentialist decision rules exist and that they affect real outcomes. Before discussing what, if anything, should be done about these norms, we should consider whether they are really as problematical as I have suggested. One argument against my suggestion is that these norms have evolved through biological and cultural evolution over a long period of time, hence they are very likely the best we can achieve, or close to the best.

Like Singer (1981), I am skeptical.

Although several evolutionary accounts can explain the emergence of various forms of altruism as well as various moral emotions such as anger (Frank 1985), I know of no such accounts of the specific norms I have cited. (I could imagine such an account for the norm of retribution, however, which might arise from a tendency to counterattack.) Any account that favors altruism would also seem to favor consequentialist behavior, and this would be inconsistent with nonconsequentialist norms. Even an account of anger as a way of making threats credible (Frank 1985) need not distinguish between anger at acts and omissions. Indeed, we are often angry with people for what they have failed to do.

Even if norms have an evolutionary basis, we still do not need to endorse them. As Singer (1981) points out in other terms, evolution is trying to solve a problem other than that of determining the best morality to endorse. A rule might engender its own survival without meeting the above criteria for being worthy of endorsement. For example, chauvinism might lead to its own perpetuation by causing nations that encourage it to triumph over those that do not, the victors then spreading their norms to the vanquished. Likewise, ideologies that encourage open-mindedness might suffer defections at higher rates than those that do not, leading to the perpetuation of a doctrine that closed-minded thinking is good (Baron 1991). Such mechanisms of evolution do not give us reason to endorse the rules they promote. As Singer points out, an evolutionary explanation of a norm can even undercut our attachment to it, because we then have an alternative to the hypothesis that we endorsed it because it was right.

Some have compared decision biases to optical illusions, which are a necessary side effect of an efficient design or adaptation (Funder 1987). Without a plausible account of how this adaptation works, however, acceptance of this argument would require blind faith in the status quo. More can be said, however. Unlike optical illusions (I assume), nonconsequentialist decision rules are not always used. In all the research I described, many or most subjects did not show the biases in question. Moreover, Larrick et al. (in press) found that those who do not display such biases are at least no worse off in terms of success or wealth than those who do. (Whether they are morally worse was not examined.) Nonconsequentialist decision making is not a fixed characteristic of our condition.

To understand where nonconsequentialist rules (norms) come from, we need to understand where any decision rules come from. I know of no deep theory about this. Some rules may result from observation of our biological behavioral tendencies, through the naturalistic fallacy. We observe, for example, that men are stronger than women and sometimes push women around, so we conclude that men ought to be dominant. Rules are also discovered by individuals (as Piagetian theorists have emphasized); they are explicitly taught (as social-learning theorists have emphasized) by parents, teachers, and the clergy; and they are maintained through gossip and other kinds of social interaction (Sabini & Silver 1981).

If people evolved to be docile, as proposed by Simon (1990), then we become attached to the rules that we are taught by others. These rules need have no justification for this attachment mechanism to work. Arbitrary rules can acquire just as much loyalty as well-adjusted rules.

And, indeed, people sometimes seem just as attached to rules of dress or custom that vary extensively across cultures as they are to fundamental moral rules that seem to be universal (Haidt et al., in press).

Why would anyone invent or modify a rule and teach it to someone else? One reason is that the teachers benefit directly from the "students" following the rule, as when parents teach their children to tell the truth, help with the housework, or control their tempers. In some cases, these rules are expressed in a general form ("don't bother people," "pitch in and do your share"), perhaps because parents understand that children will be liked by others if they follow those rules. So parents teach their children to be good in part out of a natural concern with the children's long-run interests. Parents may also take advantage of certain opportunities for such instruction: a moral lesson may be more likely after a harmful act than after a failure to help (unless the help was specifically requested).

Often, such rules are made up to deal with specific cases, for example, "don't hurt people," in response to beating up a little brother. We can think of such rules as hypotheses, as attempts to capture what is wrong with the case in question. It is useful to express the rules in a more general form rather than referring to the specific case alone ("don't twist your brother's arm"). But such general rules are not crafted after deep thought. They are spur-of-the-moment inventions, although they do help control behavior for the better.

If the rule is badly stated, one corrective mechanism is critical thought about the rule itself (Singer 1981). To criticize a rule, we need to have a standard, a goal, such as the test suggested earlier: is this a member of the set of rules that we benefit most from endorsing? We also need to have arguments about why the rule fails to achieve that standard as well as it could, such as examples (like those I gave earlier) where the rule leads to general harm. And we need alternative rules, although these can come after the criticism rather than before it.

In the absence of such critical thought, rules may attain a life of their own. They become overgeneralized (or the rules that might replace them in specific cases are undergeneralized, even if they are used elsewhere). Because of our docility, perhaps, and the social importance of moral rules, our commitments to these rules are especially tenacious. The retributive rule of punishment, "an eye for an eye," was originally a reform (Hommers 1986), an improvement over the kind of moral system that led to escalating feuds. But when applied intuitively by a court to the case of a child killed by a vaccine, without negligence, it is overgeneralized to a case where the rule itself probably does harm (Baron & Ritov 1993; Oswald 1989 makes a similar suggestion).

Critical thought about moral rules undoubtedly occurs. It may be what Piaget and his followers take to be the major mechanism of moral development. The sorts of experience that promote such thought may work because they provide counterexamples to rules that have been used so far. But critical thought is not universal. A principle such as "do no harm" may be developed as an admonition in cases of harm through actions. This principle may then be applied to cases in which harm to some is outweighed by much greater good to others, such as compulsory vaccination laws, fuel taxes, or free-trade agreements. The application may be unreflective. The principle has become a fundamental intuition, beyond question.

Such overgeneralization is well known in the study of learning. For example, Wertheimer (1959) noted that students who learn the base-times-height rule for the area of a parallelogram often apply the same rule inappropriately to roughly similar figures and fail to apply the rule when it should be applied, for example, to a long parallelogram turned on its side. Wertheimer attributed such over- and undergeneralization to learning without understanding. I have suggested (Baron 1988a) that the crucial element in understanding is keeping the justification of the formula in mind, in terms of the purpose served and the arguments for why the formula serves that purpose. In the case of the base-times-height rule, the justification involves the goal of making the parallelogram into a rectangle, which cannot be done in the same way with, for example, a trapezoid.
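For readers who want that justification spelled out, here is the standard construction (added for illustration; it is not in the original text):

\[
A_{\text{parallelogram}} = A_{\text{rectangle}} = b \times h,
\]

because cutting a right triangle from one end of the parallelogram and translating it to the other end yields a rectangle with the same base b and height h. No such translation works for a trapezoid, whose parallel sides have different lengths, which is why the rule does not transfer.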

Overgeneralization in mathematics is easily corrected. In morality and decision making, however, the rules that people learn arise from important social interactions. People become committed to these rules in ways that do not usually happen in schoolchildren learning mathematics. In this respect, overgeneralization also differs from mechanisms that have been proposed as causes of types of biases other than those discussed here, mechanisms such as the costs of more complex strategies, associative structures, and basic psychophysical principles (Arkes 1991).

A defense of overgeneralization is that preventing it is costly. Crude rules might be good enough, given the time and effort it would take to improve them. Moreover, the effort to improve them might go awry. People who reflect on their decision rules might simply dig themselves deeper into whatever hole they are in, rather than improving those rules. We might also be subject to self-serving biases when we ask whether a given case is an exception to a generally good rule (Hare 1981), such as in deciding whether an extramarital affair is really for the best (despite a belief that most are not).

These defenses should be taken seriously, but their implications are limited. They imply that we should be wary of trying to teach everyone to be a moral philosopher. They also suggest that prescriptive systems of rules might differ from normative systems (although they do not prove this - see Baron 1990). But they do not imply that simpler rules are more adequate as normative standards than full consequentialist analyses.

Moreover, some decisions are so important that the cost of thorough thinking and discussion pales by comparison to the cost of erroneous choices. I have in mind issues such as global environmental policy, fairness toward the world's poor, trade policy, and medical policy. In these matters, the thinking is often done by groups of people engaged in serious debate, not by individuals. Thus, there is more protection from error, and the effort is more likely to pay off. Many have suggested that utilitarianism and consequentialism are fully consistent with common sense or everyday moral intuition (e.g., Sidgwick 1907), but this may be more true in interpersonal relations than in thinking about major social decisions.

Finally, the cost of thinking (or the cost of learning) may be a good reason for learning a more adequate rule, but not a good reason for having high confidence in the inadequate rules that are used instead.

Yet many examples of the use of nonconsequentialist rules are characterized by exactly such confidence, to the point of resisting counterarguments (e.g., Baron & Ritov 1993). In some cases it might be best not to replace the nonconsequentialist rules with more carefully crafted ones but, rather, to be less confident about them and more willing to examine the situation from scratch. The carefully crafted rules might be too difficult to learn. In such cases we might say that overgeneralization is a matter of excessive rigidity in the application of good general rules rather than in the use of excessively general rules.

In this section, I have tried to give a plausible account of how erroneous intuitions arise in the development of individuals and cultures. Direct evidence on such development is needed. In the rest of this article, I explore some implications of my view for research and application. These implications depend in different ways on the probability that this view is correct. Some require only that it is possible.

5. Intuition as a philosophical method

If intuitions about decision rules result from overgeneralization, then (as also argued by Hare 1981 and Singer 1981) these intuitions are suspect as the basic data for philosophical inquiry. Philosophers who argue that the act-omission distinction is relevant (e.g., Kamm 1986; Malm 1989) typically appeal directly to their own intuitions about cases. Unless it can be shown that intuitions are trustworthy, these philosophers are simply begging the question.

Rawls (1971) admits that single intuitions can be suspect, but he argues for a reflective equilibrium based on an attempt to systematize intuitions into a coherent theory. Such systematization need not solve the problem, however. For example, it might (although it does not do so for Rawls) lead to a moral system in which the act-omission distinction is central, a system in which morality consists mainly of prohibitions and positive duties play a limited role, if any (as suggested by Baron 1986).

Rawls's argument depends to some extent on an analogy between moral inquiry and fields such as modern linguistics, where systematization of intuition has been a powerful and successful method. The same method arguably underlies logic and mathematics (Popper 1962, Ch. 9). I cannot fully refute this analogical argument, but it is not decisive, only suggestive. I have suggested (along with Singer 1981) that morality and decision rules have an external purpose through which they may be understood, and that this criterion, rather than intuition, can be used as the basis of justification. Perhaps this idea can be extended by analogy to language, logic, and mathematics, but that is not my task here.

6. Experimental methodology

Experiments on decision biases often use between-subject designs (each condition given to different subjects) or other means to make sure that subjects do not directly compare the cases the experimenter will compare (such as separating the cases within a long series). The assumption behind such between-subject designs is that subjects would not show a bias if they knew what cases were being compared. All of the biases I have described above are within-subject.

Subjects show the omission bias knowingly, for example, even when the act and omission versions are adjacent.

In between-subject designs, subjects may display biases that they would themselves judge to be biases. Such inconsistency seems to dispense with the need for a normative theory such as the consequentialist theory I have proposed, or expected-utility theory. But without an independent check, we do not know that subjects would consider their responses to be inconsistent. Frisch (1993) took the trouble to ask subjects whether they regarded the two critical situations as equivalent - such as buying versus selling as ways of evaluating a good - and she found that they often did not. In other words, between-subject designs are not necessary to find many of the classic biases (including the status-quo effect described earlier), and subjects often disagree with experimenters regarding which situations are equivalent.

When we use within-subject designs, however, we cannot simply claim that subjects are making mistakes because they are violating the rules they endorse. When asked for justifications for their judgments, subjects in all the experiments I described earlier endorsed a variety of rules that are consistent with their responses. We therefore need a normative theory, such as the consequentialist theory I have outlined, if we are to evaluate subjects' responses.

Much the same normative theory seems to be implicit in most of the literature on framing effects and inconsistencies. Whether or not a factor is relevant to making a decision is a normative question, to which alternative answers can be given. For example, Schick (1991) argues that the way decision makers describe situations to themselves is normatively relevant to the decision they ought to make (even, presumably, if these descriptions do not affect consequences), because descriptions affect their "understanding" of the situation, and understandings are necessarily part of any account of decision making. Hence, framing effects do not imply error. If consequentialism is correct, though, Schick is wrong: consequentialism implies that understandings themselves can be erroneous (we might say). The attempt to bring in a consequentialist standard through the back door while ostensibly talking about inconsistency (Dawes 1988) and framing effects will not work. The standard should be brought in explicitly, as I have tried to do.

In sum, although between-subject designs are useful for studying heuristics, we may also use within-subject designs to evaluate subjects' rules against a normative standard such as consequentialism.

7. Policy implications

The examples I used to illustrate my argument are of some relevance to issues of public concern, as noted. I am tempted to offer evidence of psychological biases as ammunition in various battles over public policy. Tetlock and Mitchell (1993), however, correctly warn us against using psychological research as a club for beating down our political opponents. It is too easy, and it can usually be done by both sides.

    mig ht as well fold up our t ent s and move on to more usefulactivities. How should we draw the line between makingtoo many claims and too few?I do not propose to answer this question fully, but partof an answ er concerns the way we m ake claims con cerningpublic policy debates. I suggest that claims of biasedreasoning be directed at particular arguments made byone side or the other, not at positions, and certainly not atindividuals.For example, one of the arguments against free-tradeagreements is that it is wrong to hurt some people (e.g.,those on both sides who will lose their jobs to foreigncompetition) to help others (e.g., those who will beprevented from losing their jobs because their productswill be exported). This could be an example of omissionbias, or the do-no-harm heuristic, which operates inmuch clearer cases and which, I have argued, is indeed anerror in these clear cases. Now real trade negotiations areextremely complex, and they involve other issues charac-teristic of any negotiations, such as trying to get the bestdeal for everyone. So at most we could conclude that aparticular part of the argument against free trade is afallacy that has been found elsewhere in the laboratory.This does not imply that the other arguments opposingfree trade are wrong or not decisive, or that the peoplewho oppo se free trade are any more subject to error thanthose who favor it.With this kind of caution at least, the general programof research can be extended more broadly to other mat-ters of policy that I have not discussed here, such asfairness in testing and selection for academic and employ-ment opportunities, abortion, euthanasia, nationalism,the morality of sex, and so on. In all these kinds of cases,arguments for two (or more) sides are complex, but someof the argu men ts are probably erron eous. Psychology hasa role to play in discovering fallacious arguments andpointing them out.One discipline has concerned itself with exactly thiskind of inquiry, the study of informal logic (e.g., Arnauld1964; Johnson & Blair 19983; Walton 1989). This field,however, has not incorporated many of the advances innormative theory that have occurred since the time ofAristotle, and it has paid no attention to psychologicalevidence - sparse as it is - about which fallacies actuallyoccur in rea.1 life.

8. Educational implications

If discoveries about nonconsequentialist decision making are themselves to have any consequences, they must influence the thinking that people do. As I suggested at the outset, norms for thinking are enforced throughout society, so new research can influence these norms in many ways. One may argue that psychological research on racial and ethnic stereotyping (and authoritarianism, ethnocentrism, dogmatism, etc.) has influenced Western cultures in a great variety of ways. The claim that an argument is prejudiced now refers to a well-established body of psychological literature familiar to most people in these societies. (In using this as an example, I am not assuming that the research in question is flawless. Indeed, Tetlock and Mitchell point out that much of this research is itself politically biased, and the example may stand as a warning against excess as much as a model of influence.)

Much of the influence that this sort of scholarship exerts is very general. Scholars write articles, which are read by students, who acquire beliefs that they then teach to their children. Scholars also teach their students. Trade books and magazines convey ideas to the general public. But education provides a special channel for standards of thinking and decision making to be transmitted, because the transmission of such standards has long been a self-conscious goal of education.

A natural question, then, is how to conduct education in consequentialist decision making, assuming that we acquire increasing confidence in our assessment of what errors need to be corrected. Can we teach students that there is a right way of making decisions, and it is consequentialism? Here are two sides:

On the pro side, it is difficult to convey a standard that you do not apply yourself. If, for example, we want to teach students to write grammatically, or to avoid logical errors, we must try to practice what we preach and apply the standards when we evaluate students, each other, and ourselves. Thus, when students hand in papers with dangling participles or ad hominem arguments, we must point this out with red pencils. Likewise, when students provide nonconsequentialist arguments, we must point this out, too, if we want students to become aware of this issue. And we must give students opportunities to think about decisions in places where the standards of decision making can be discussed and brought to bear (Baron & Brown 1991).

On the con side, decision making is not like grammar. People have deep commitments to their decision rules, and they own these commitments. They may regard attempts to change their views as a kind of theft. It is certainly a hurt. Moreover, no matter how confident we are in the normative theory, we can never be fully confident about prescriptive theory. Perhaps it is better that some people not try to understand consequentialist theory, even assuming its correctness. And we must remember that normative theory itself will evolve over time, so that we can give people at most a best guess (and it is very likely to be one that is nowhere near universally accepted). These considerations argue for a different kind of instruction, one in which we present arguments as arguments rather than as standards - not in red pencil but in black pencil, as if to say: "This is just an argument on the other side that you should consider." This view is compatible with most approaches to moral education, in which the emphasis is on discussion and argumentation rather than on learning.

A moderate position between these two (Baron 1990) is to present consequentialist theory as something that students should know but do not have to accept or follow. Thus, students should be able to apply the theory when asked to do so, although they need not endorse what they write as their own opinions. Alternative theories could be taught as well; and discussion like that suggested in the last paragraph is also compatible with this kind of instruction.

One relevant finding of Baron and Ritov (1993) is that a large fraction of our college-student subjects said they had never heard of the argument that punishment was justified by deterrence rather than retribution. Of those who said that they heard it for the first time in our study, some accepted it and some rejected it, and a similar division was found among those who recognized the argument as familiar. I suggest that not recognizing this argument is evidence of an educational gap, and that this particular gap is a worse one than that shown by the copy editor who asks for Aristotle's first name.

ACKNOWLEDGMENTS
I thank Jean Beattie, Deborah Frisch, Richard Hare, Barbara Mellers, Ilana Ritov, Mark Spranca, and several anonymous reviewers for helpful comments. Much of the research described was supported by National Science Foundation grants SES-8809299 and SES-9109763.

Open Peer Commentary

Commentary submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article. Integrative overviews and syntheses are especially encouraged.

Fairness to policies, distinctions and intuitions

Jonathan E. Adler
Department of Philosophy, Brooklyn College, City University of New York, Brooklyn, NY 11210
Electronic mail: [email protected]

1. Baron opposes the acts-omission distinction. Defenses of this distinction usually depend on "intuitions" he believes arise from overgeneralization. The choice of "overgeneralization" as a description has, I believe, two implications: first, that the generalization makes a truth-claim that does not hold over the entire range of the rule's application; and second, that we would be better off without the overgeneralized rule.

Yet when real cases confront us in everyday life, a number of factors contribute to the greater stringency in avoiding doing harm than in failing to do what will lessen harm. Besides differences in intention, these include differences in certainty and responsibility. It can be argued (though not here) that these differences are relevant to the distinction. But even if we avoid arguing that claim, we can recognize that it may be the best policy to draw and conform to this distinction, though not absolutely, especially when we have good reason to believe that if one treats each case as deserving of case-by-case assessment, one may do much worse. If we then view the reasoning in the cases Baron cites not as realizing an overgeneralized rule but as the adoption of a policy, then, first, no truth-claim is being made over the entire range of the policy, and second, we could be better off maintaining the policy than switching to a more accurate set of rules, even conceding the imperfections. (In section 4, "The sources of intuition," Baron touches on this line of objection, but I find his response too dismissive.)

Describing subjects' reasoning as the application of overgeneralized rules comes naturally when we focus on the specific case, for that focus biases us toward evaluating subjects on the basis of the single case alone, rather than taking the perspective of the constrained decision or policy maker who attempts to simplify by assimilating, as much as possible, new cases to old. The need for cognitive economy is fundamental to the tradition that informs much of the rest of Baron's research.


Baron sets aside considerations of planning and economies at other places. For example, he writes that "the crucial element in understanding is keeping the justification of the formula in mind, in terms of the purposes served and the arguments for why the formula serves that purpose." But if the justification serves no foreseeable purpose, would I not be wise to allow myself to forget it, perhaps storing it away in an accessible location? A final example: in the discussion of compensation, Baron writes that "subjects judge that more compensation should be provided by a third party when an injury is caused by human beings than when it is caused by nature." But surely if one is designing a general plan for compensation and knows that only some of the unjustified injuries can be compensated, it is sensible to extend greater tolerance to those that are more a matter of chance than those that are the outcomes of human choice and hence responsibility.

Mine is not an approach that seeks to excuse or rationalize the faults in subjects' reasoning and judgment. Evaluation or criticism should be more complex because in subjects' understanding of these tasks multiple ends (or goals) meet. (On these matters, and in disagreement with Flinders's [1987] more optimistic view, see Adler [1991].) The assumption that there can be alternative purposes (or understandings) of these tasks does not imply, as Baron claims against Schick (1991), that each of these alternatives is proper, only that some are.

2. In rejecting the acts-omission distinction, Baron does not tell us how we would handle difficult cases. Specifically, there is the pair of cases now known as the Trolley and the Transplant (Thomson 1990). In the former, a runaway trolley, if allowed to continue, will kill five, but a person can pull a switch to divert it to another track where it kills only one. In the transplant case, five otherwise healthy individuals need just one of the organs from a loner who wanders into the hospital. We judge it right to switch the trolley and wrong to take the organs. Does Baron accept this judgment, and if so, how does he account for it on his simple consequentialist model? (He defines consequentialism in terms of goals, rather than causal independence [of the act from the consequences]. But if so, why could "not intending to bring about unwarranted harm" not be a goal?)

3. To treat intuitions as overgeneralizations and in conflict is to misrepresent how philosophers understand the notion of intuition. As with linguistic intuitions, these are not generalizations from experience, and seeming conflicts in intuition show only that either at least one is not a genuine intuition or that the foundation of the enterprise is wrong. The linguistic analogy shows something else: a theory, such as a theory of grammar, can be testable, even when it specifies the source from which its own confirming data are to be found. The theory can still be testable if it can generate precise predictions and is broad enough to have implications well beyond the range of data to which it has so far appealed.

Baron claims that intuitions must be shown to be "trustworthy," but I think this demand is unwarranted. It suggests, first, that we have a criterion that is neutral (with respect to competing theories) and independent (of intuitions) for the rightness or wrongness of a set of cases. The claim is not made good, and here the parallel with linguistics holds. Second, though now the parallel with linguistics fades, ethical intuitions do not represent a single source or a natural kind. Depending on the cases, different overlapping sets of beliefs and values inform our intuitions. Those who rely on the method of intuitions are not committed to a single intuitive "sense" that we could assess in a general way for reliability. Third, even those who complain about intuitions as prejudices (Hare 1981; Singer 1979) are quite successful in appealing to them as support for their views.

We should be suspicious of many intuitive judgments and, in particular, some of the examples meant to elicit them. Baron's valuable research helps expose biases that explain why some examples do not yield true intuitions. Let's not throw out the baby with the bathwater.

Three reservations about consequentialism

Hal R. Arkes
Department of Psychology, Ohio University, Athens, OH 45701-2979
Electronic mail: [email protected]

I would like to agree wholeheartedly with Baron's consequentialist thesis that "the best decisions are those that yield the best consequences for achieving people's goals." However, there are three irksome issues that make me hesitate.

The first concerns the definition of consequences. Exactly what should be included in reckoning consequences? Several authors, such as Schmitt and Marwell (1972), Kahneman et al. (1986), and Loewenstein et al. (1989), have shown that people will forsake gains to themselves and even reject aggregate gains to all players to achieve what is perceived to be a more equitable distribution. For example, people might prefer a distribution among three participants in which each person gets two assets to a distribution in which one person gets five, one person gets four, and one person gets three. People are willing to forgo additional assets for everyone to equalize outcomes. Are such people failing to follow the consequentialist norm? I believe so. But what if the three people involved pointed out that jealousy and even hostility would undoubtedly ensue if the unequal distribution occurred? Such negative occurrences would substantially detract from its otherwise generous consequences. This might actually render the equal but smaller distribution superior.

My own preference would be not to honor jealousy and pettiness in the calculation of consequences. If the goal is to maximize benefits, then whining about unequal distributions when everyone is better off seems counterproductive to me. However, if counterproductive emotional reactions have serious consequences, can we ignore them? If we do, we may invite truly terrible consequences. If we do not ignore them, we dignify and perpetuate their influence. What is a consequentialist to do?

The second annoying issue can be illustrated by referring to an example alluded to by Lichtenstein et al. (1990). Several years ago a young girl named Jessica fell down an abandoned well in Texas. The enormous amount of money spent to rescue her could have been used instead to cap lots of abandoned wells in Texas, thereby preventing the deaths of many children in future years. (Several children die each year in Texas after falling down such wells.) Should we have ignored Jessica's cries in order to spend the money in a more effective way, that is, according to consequentialist norms?

There are some possible escape hatches for a consequentialist. For example, saving an identifiable life might have positive consequences that would not be obtained if multiple potential lives were saved. Nevertheless, saving Jessica was preferred to a course of action (capping many wells) that would probably have saved more than one life. Did this therefore violate the consequentialist norm? Probably. But I would not have volunteered to defend the consequentialist norm to the citizenry of Texas at that time.

The third issue is one that has been raised by critics of utilitarianism (Williams 1973). Suppose I promise my dying friend that I will use the assets in his estate to fulfill his last request. Keeping this promise might be seen as a behavior having prima facie rightness. But what if I can achieve a much better consequence by breaking my promise? My reputation would not suffer, because I am the only one aware of my promise. However, my integrity is at stake. What amount of "achievement of people's goals" am I willing to sacrifice to preserve my integrity? Does a consequentialist norm require that the answer be "none"?

Philosophers have long debated the issue of what is good and what is right (Ross 1930). Consequentialism generally favors the former. Obligation, integrity, and honesty may sometimes favor the latter.


Baron may object that my three reservations about the consequentialist norm represent tiny chinks in what is otherwise a solid structure. The participants in his ingenious experiments did not cite the wisdom of John Stuart Mill when they objected to coercion. Similarly, my reservations might represent problems with the consequentialist norm that most people are not going to mention. Many people will countenance the deaths of 10 children who are not vaccinated to avoid the deaths of 5 children who are allergic to the vaccine, but in so doing they will not say that they have rejected the consequentialist norm because of the problems I have enumerated. They are more likely to reject it for the inadequate reason Baron suggests, that is, overgeneralization of other rules.

In fact, I have witnessed plenty of nonconsequentialist behavior that does not seem to be related to the three issues I have raised. For example, some of the supporters of the immensely expensive space station have said recently that no matter how questionable its scientific merit, we ought to continue funding it, because we have already spent so much on it.

I conclude that whatever the deficiencies of the consequentialist norm may be, it is less deficient than the thinking exhibited by people, both in and out of Baron's experiments.

Inappropriate judgements: Slips, mistakes or violations?

Peter Ayton (a) and Nigel Harvey (b)
(a) Psychology Department, City University, London EC1V 0HB, United Kingdom; (b) Department of Psychology, University College London, London WC1E 6BT, United Kingdom
Electronic mail: [email protected]

Because it is cognitively taxing to make judgements by applying strictly rational methods, people may use decision rules akin to Keynes's (1936) judgement conventions or Tversky and Kahneman's (1974) heuristics. The cause of nonconsequential reasoning, Baron argues, is overgeneralising such heuristics. This argument is at once rather curious; heuristics are simple rules of thumb that inevitably generalise to some degree or other. Overgeneralisation suggests that there is some more reasonable level of generalisation that people are exceeding. Simple heuristics that never made errors would be very valuable, but are they possible? We suspect that relatively simple rules will only be perfectly effective in relatively simple environments.

To estimate the efficiency of the heuristics we may use to perform judgements, we need to know something of the natural ecology of the environment in which the heuristics are applied. How often will the heuristics work? How often will they fail? How bad will the (consequences of) failures be? What are the costs and benefits of alternative strategies? These questions are not answered by the research cited in the target article. Indeed, given the ingenuity required to devise problems that demonstrate nonconsequential reasoning, it could be argued that such heuristics may actually be rather good at their job.

Following Reason et al. (1991), incorrect behaviour can be attributed to slips, mistakes, and violations. Slips occur when people have the right objectives but fail to reach them because of the intrusion of some strong habits over which they have no control. When people receive feedback about the outcomes of their judgements, they know they were inappropriate. Although Baron claims that his within-subjects experiments preclude the possibility that his subjects are inadvertently breaking rules they actually endorse, he also finds that subjects presented with a Golden Rule argument will revise their judgements, thereby eliminating bias. They therefore seem to be making slips; their attempts to reach their objectives have been disrupted by a strong habitual form of reasoning. Given their lack of control over their reasoning, they are no more to blame for its results than a car driver having a heart attack is to blame for the ensuing crash.

Shafir and Tversky (1992) showed that people will use normative principles when they are salient or "transparent" in problems, but when they are not, people are more likely to reason nonconsequentially. Shafir and Tversky point out that although most subjects make the wrong choices of cards to turn in Wason's (1966) selection task, they can easily appreciate the relevance of the possible outcomes of their choices. In our experience, subjects who are wrong often stridently insist that they are right for reasons that plainly satisfy them, yet these subjects will (more quietly) appreciate the correct argument.

Mistakes occur when people have an inappropriate objective for meeting their goal, however that inappropriate objective is met. They have done what they wanted to do but, in terms of achieving their goal, they have done the wrong thing. Perhaps these are the subjects who rejected the notion that punishment is justified by deterrence rather than retribution, and perhaps they are suffering from an "educational gap." Given their more intentional nature, moral mistakes appear more blameworthy than moral slips, though some would claim that ignorance or lack of intelligence is not evidence for irrationality (e.g., Cohen 1981). To this extent, ignorance of the law is an excuse.

People may also behave in a way that they know will not meet their objectives. They may do so to explore the relationship between objectives and goals. In such cases short-term violations may lead to improved performance in the long run because of the extra knowledge revealed. It is difficult to interpret this rather existentialist approach to moral judgement as blameworthy. Self-conscious violations can be found in Baron's (1992) evidence of dissociations between "should" and "would" in people's reactions to hypothetical scenarios. A proportion of subjects recognised that they should shoot 1 innocent person in order to save 38 but conceded that they would not do so.

We agree that people probably do not always reason critically in order to make moral judgements. Indeed, we wonder whether it is universally recognised that moral reasoning should inform moral choices. We suspect that people will commonly appeal to authority rather than reason. Political and religious ideology may often serve as decision support systems for a range of judgements concerning morality. This seems exemplified by the recent rather startling advice offered by the British prime minister that, with respect to criminal behaviour, "society needs to condemn a little more and understand a little less." When considering the interpretation of the merits of the choices made by subjects in experimental tests of moral decision making, however, it may be appropriate to be cautious with condemnation until we understand a little more.

Do, or should, all human decisions conform to the norms of a consumer-oriented culture?

L. Jonathan Cohen
The Queen's College, Oxford University, Oxford OX1 4AW, England
Electronic mail: [email protected]

Baron's hypothesis in the earlier part of his target article is that we should make decisions according to our judgements of their consequences for the achievement of our goals, and that when nonconsequentialist principles of decision making are used, they are biases and arise from the overgeneralisation of rules that are consistent with the requirements of consequentialism when applied only to a limited set of cases. At heart, therefore, holds Baron, we are all consequentialists. There are, however, three major difficulties with this argument.

The first is that it does not exclude the feasibility of exploiting overgeneralisation in another direction. Consider, for example, a culture dominated by conservatism, in which conformity with the status quo is the most important criterion for a person's selecting one proposed pattern of behaviour over another. In such a culture intervention in existing social or natural processes is normally to be avoided. Traditional patterns of medication, for example, are quite acceptable, and the rule to preserve human life is thus valid within a limited range of application. Nevertheless that rule may be unacceptable if applied too widely, as in desiring the use of ultramodern ventilation machinery when there is no prospect of the patient's recovery. From the conservative's point of view such an overgeneralisation of the rule would generate a consequentialist bias. It follows that further empirical research is needed to determine whether we are indeed consequentialists at heart, with an occasional overgeneralising bias toward conservatism or some other nonconsequentialist principle, or conservatives (or something else), with an occasional overgeneralising bias toward consequentialism (or whatever). In other words, if Baron's hypothesis about biases arising from overgeneralisation is to be taken seriously, he has to describe some experimental method of demonstrating that the principle against which such overgeneralisation operates is always consequentialist.

It may be tempting for someone to claim that empirical evidence is not needed here, because consequentialism is in a conceptually privileged position. Necessarily, one might say, we all want to achieve our various goals, whether these be the maximisation of one's own pleasure or of someone else's pleasures, or the continuance of existing patterns of behaviour, or whatever. But this way out of the difficulty is not open to anyone who wants, as Baron does, to be able to contrast, as appropriate, behaviour conforming to the norm of goal-achievement and behaviour exhibiting this or that particular bias. If such biases can occur, it cannot be necessarily true that every principle for decision making is motivated by a desire for goal achievement.

A second major difficulty with Baron's hypothesis must now be considered. Why do we have to assume, in conducting empirical research on the issue, that whatever norm is dominant in one community is also dominant in another? From the fact that somewhere in the United States or Europe most subjects in a group of sophomores, clinicians, judges, or legislators have a certain preference we cannot infer that "most people," that is, most people in the world, have that preference also. Consequentialism, with its consumer-oriented culture, may be much more common in some parts of the world than in others. Elsewhere, perhaps, cross-cultural investigations that were sufficiently thorough would reveal a preference for a conservative ethic over a consequentialist one, or for conformity to a supposedly revealed religious ethic over the permissiveness of secular tolerance. And, far from its being the case that the ideal for all people is to be able to achieve all their desires all the time, there are Buddhists whose ideal is to have no desires of any kind. Nor are individuals necessarily single-minded in their choice of norms, always allowing consequentialism, say, or conservatism to dominate their decision making. Many people are pluralists instead, allowing first one set of cultural values to dominate, then another - perhaps as a result of the complex patterns of migration and intermixture that the accidents of human history have produced in human society.

A third major difficulty with Baron's hypothesis is now apparent. If a pluralist account of the norms involved in decision making is correct from a descriptive point of view, it will inevitably also affect investigators' normative judgements about decision making. Specifically, if two investigators do not share exactly the same system of norms, what one of them takes to be a bias in a subject's decision making, the other may take to be a valid principle. And the situation is thus analogous to that which prevails (Cohen 1985; Gigerenzer 1993) in regard to so-called cognitive biases. Subjects' responses to certain judgemental tasks are interpreted by some investigators as exhibiting persistent patterns of error or irrationality, whereas other investigators interpret the responses differently and fail to find these patterns of error or irrationality. If it could be safely assumed that everyone attached precisely the same meanings to questions about the validity or probability of particular judgements, then biases could be reliably diagnosed wherever such questions were systematically given the wrong answers. But in fact that assumption cannot safely be made, any more than we can safely assume that everyone thinks the same values are relevant to all questions about particular decisions. Just as one can measure quantities of apples by number, weight, or volume, so, too, there are radically different ways of measuring probabilities or evaluating decisions. It may therefore be doubted, pace Baron, whether the study of so-called biases can in fact give much help to decision makers in public life. Indeed, negotiations may well proceed better through the identification of common interests from which to reason, and of opposing interests to be respected, than through attempts to convict others of fallacies in their reasoning.

Correct decisions and their good consequences

Steven Daniel
Department of Philosophy, Cornell University, Ithaca, NY 14850
Electronic mail: [email protected]

Baron argues for a normative model of decision making that is consequentialist because it holds that "the best decisions are those that yield the best consequences for achieving people's goals." He says little about how to assess the goodness of consequences, and people will often disagree about which consequences of a decision are best (e.g., when evaluating them against competing goals). This will often be the case even if we are all trying to maximize some particular good, such as the quantity of happiness or pleasure our decisions produce. Still, assuming that these disagreements are resolvable in principle, I think Baron is right that a normatively correct decision procedure will have better consequences than a normatively incorrect one. There may be particular instances in which this fails to hold, where reasoning poorly might by accident have better consequences than reasoning well; but such cases will be exceptional.

What I find puzzling is the suggestion, implicit throughout Baron's target article, that these ide