Cognitive Scientific Challenges to Morality

This article was downloaded by: [Monash University Library]
On: 20 May 2012, At: 20:48
Publisher: Routledge
Informa Ltd Registered in England and Wales Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK

Philosophical Psychology
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/cphp20

Cognitive Scientific Challenges to Morality
Neil Levy
Available online: 23 Jan 2007

To cite this article: Neil Levy (2006): Cognitive Scientific Challenges to Morality, Philosophical Psychology, 19:5, 567-587
To link to this article: http://dx.doi.org/10.1080/09515080600901863

Full terms and conditions of use: http://www.tandfonline.com/page/terms-and-conditions
This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.



Philosophical Psychology, Vol. 19, No. 5, October 2006, pp. 567–587

Cognitive Scientific Challenges to Morality

Neil Levy

Recent findings in neuroscience, evolutionary biology and psychology seem to threaten the existence or the objectivity of morality. Moral theory and practice is founded, ultimately, upon moral intuition, but these empirical findings seem to show that our intuitions are responses to nonmoral features of the world, not to moral properties. They therefore might be taken to show that our moral intuitions are systematically unreliable. I examine three cognitive scientific challenges to morality, and suggest possible lines of reply to them. I divide these replies into two groups: we might confront the threat, showing that it does not have the claimed implications for morality; or we might bite the bullet, accepting that the claims have moral implications, but incorporating these claims into morality. I suggest that unless we are able to bite the bullet, when confronted by cognitive scientific challenges, there is a real possibility that morality will be threatened. This fact gives us a weighty reason to adopt a metaethics that makes it relatively easy to bite cognitive scientific bullets. Moral constructivism, in one of its many forms, makes these bullets more palatable; therefore, the cognitive scientific challenges provide us with an additional reason to adopt a constructivist metaethics.

Keywords: Cognitive Science; Constructivism; Evolutionary Biology; Intuitions; Moral Theory; Neuroscience; Psychology

Correspondence to: Neil Levy, Program on Ethics and the New Biosciences, Suite 7, Littlegate House, 16/17 St. Ebbe's Street, Oxford OX1 1PT, UK. Email: [email protected]
Neil Levy has appointments at the Centre for Applied Philosophy and Public Ethics, University of Melbourne, and the Program on the Ethics of the New Biosciences, James Martin 21st Century School, University of Oxford.
ISSN 0951-5089 (print)/ISSN 1465-394X (online)/06/050567-21 © 2006 Taylor & Francis
DOI: 10.1080/09515080600901863

1. Introduction

Scientific research into the human mind and human behavior is as much feared as welcomed by laypeople. Many people see something profoundly threatening in the new knowledge this research will likely bring. They worry that it will undermine some of our most cherished and most comforting beliefs; that it will strip the universe of meaning and of morality. Those of us who work in the broadly naturalistic stream of philosophy are often tempted to dismiss these fears contemptuously, but this is a temptation we ought to resist. There is inductive evidence that scientific discoveries can cause enormous disruptions in our shared conception of ourselves: Weber's so-called "disenchantment of the world" seems to be a genuine phenomenon, after all. Human beings once saw their environment as intrinsically meaningful in a way that now seems to be impossible for us, in this more mechanistic age (Weber, 1994). The fear, then, is of a more radical disenchantment, stripping meaning not only from the external environment, but from ourselves.

In this paper, I want to examine one variety of this threat: the apparent threat to morality which some have seen posed by recent developments in what I shall call, for the sake of a convenient shorthand, "cognitive science" (here understood to encompass neuroscience, psychology and human evolutionary theory). I shall sketch three particular challenges, each developed separately, and each motivated by different concerns, yet each posing a similar problem for morality: each might be taken to show that our moral beliefs and judgments are unreliable, because produced by mechanisms that do not track genuine moral features of the world. Each therefore apparently shows that morality, or a particular part of it, is a systematic and shared illusion. I shall then attempt to outline some arguments and strategies with which we might attempt to counter these threats, some aimed at particular challenges, and some more general. Different metaethical views, I shall claim, find the challenges posed by cognitive science more or less difficult to see off; though some strategies are available to proponents of all views, the most powerful and promising is reserved for moral constructivism. The challenge from cognitive science therefore gives us one more compelling reason to favor one or another version of constructivism.

2. The Threats

In this first part of the paper, I want to outline three particular apparent threats to morality. These are by no means the only challenges, to morality and meaning, posed by recent developments in cognitive science.1 I select these three because they seem to me representative, and because they are interesting in their own right. They come from three different cognitive sciences, thus suggesting something of the breadth of the challenges we face.

Before I turn to the details of these challenges, it is worth outlining the manner in which all three, in one way or another, seem to suggest that morality is illusory. All of them appear to show that our moral intuitions are systematically unreliable, either in general or in some particular circumstances. But if our moral intuitions are systematically unreliable, then morality (in total or as it applies to the particular circumstances in which intuitions fail) is in serious trouble. Moral thought is, at bottom, always based upon moral intuition. Intuitions play different roles, and are differentially prominent, in different theories. But no moral theory can dispense with intuitions altogether. Each owes its appeal, in the final accounting, to the plausibility of one or more robust intuitions.

Unfortunately, there is no widely accepted definition of 'intuition' in the literature. Intuitions have been identified with intellectual seemings: an irrevocable impression forced upon us by consideration of a circumstance, which may or may not cause us to form an appropriate belief—something akin to a visual seeming, which normally causes a belief, but which may sometimes be dismissed as an illusion (Bealer, 1998). Other philosophers take intuitions to be, or to be like, judgments, such that an agent has the intuition that p only if the agent is disposed to assent to propositions asserting that p is the case, or even only if the agent actually judges that p. For Weinberg, Nichols and Stich (2001), for instance, an intuition is a spontaneous judgment. There is, however, a question about what counts as "spontaneous." Do I cease to have an intuition about p if I carefully consider a range of thought experiments and principles relevant to p? Or should we say that my intuition shifts as a result of my reflection?

For the purposes of this paper, I shall identify intuitions with spontaneous intellectual seemings. Doing so, I believe, captures the way the term is actually used in philosophical debate. An intuition can be spontaneous even if it is the product (inter alia) of reflection, so long as the reflection causes the judgment actually to shift. After long reflection upon cases and counterexamples, I may have a different intuition to the one with which I began; it now seems to me that not-p (though I may continue to have the belief that corresponds with my former intuition, but now my grounds for belief are like my grounds for believing that the lines in the Müller-Lyer illusion are the same length: I believe in the face of, and not because of, the way things seem to me). My intuition is spontaneous inasmuch as it arises unbidden when I consider the case; it counts as spontaneous, in the sense in which I am using the term, even though I would not have it had I not engaged in a great deal of preparatory work.

As the foregoing paragraph makes clear, to intuit that p is not necessarily to believe that p. It is therefore quite possible for people to have moral intuitions which do not correspond to their moral beliefs. Nevertheless, the intuition that p is normally taken to have very strong evidential value. An intuition normally causes the corresponding belief, unless the agent has special reason to think that their intuition is, on this occasion, likely to be unreliable. Intuitions are usually taken to have justificatory force, and, as a matter of fact, typically tend to lead to the formation of beliefs that correspond to them.

Intuitions play an important role in many, perhaps most, areas of enquiry. But they are especially central to moral thought, both practical and theoretical. We might divide moral theories and moral philosophers into two classes: localist and globalist. Localists take local case intuitions seriously. Their goal, in developing a moral theory, is systematizing as many intuitions as possible. They aim to reach a reflective equilibrium between their considered moral judgments—their intuitions with regard to local cases—and broader intuitions, intuitions about principles and theories. They may engage in this process consciously, or it may be implicit in their moral thought. It is plausible that a great deal of the moral thought of ordinary folk proceeds in this kind of way (DePaul, 1998). Localists accord local case intuitions high evidential value, and they are accordingly ready to make adjustments in their principles to accommodate them.

It is uncontroversial that localist moral theories are heavily dependent upon intuitions. Indeed, one prominent localist theory is called, by its proponents, "intuitionism." However, it is more controversial whether globalist moral theories are similarly dependent. Globalist moral theories are theories for which local case intuitions have relatively low evidential value. The best known such globalist theory, or group of theories, is consequentialism: the theory that an act is right just in case it produces the best consequences of all the actions available. Consequentialism, in its various guises, is notoriously prone to conflict with local case intuitions, justifying all kinds of apparently immoral actions. For this reason, consequentialists sometimes claim that their moral theory is not intuition-based. Peter Singer, for instance, rejects intuition-driven moral thought on the grounds that our intuitions are at least as likely to reflect our prejudices as moral truth. They are, Singer (1974) claims, "likely to derive from discarded religious systems, from warped views of sex and bodily functions, or from customs necessary for the survival of the group in social and economic circumstances that now lie in the distant past" (p. 516). So we ought not to rely upon them; instead, we should look for firmer foundations for our moral views.

But when we actually examine the alternative foundation that Singer and other consequentialists propose, it turns out that they appeal to our intuitions just as much as their opponents. For Singer (1974) himself, we ought to replace local case intuitions with "self-evident moral axioms" (p. 516). But self-evidence is itself intuitiveness, of a certain type: an axiom is self-evident (for an individual) if that axiom seems true to that individual and their intuition in favor of that axiom is undefeated. Hence, appeal to self-evidence just is appeal to intuition. Similar points can be made about other consequentialists. For Mill (2003), utilitarianism is justified by intuition, which delivers to us "a feeling in our own mind" which sanctions it; this feeling attaches itself most strongly not to individual acts, he believes, but to the principle of utilitarianism: "If there is any principle of morals which is intuitively obligatory, I should say it must be that" (p. 206). Bentham (2003), too, justifies utilitarianism by reference to its intuitive grip on us, as a consequence of "the natural constitution of the human frame" (p. 19).

Globalists like Singer, Mill and Bentham do not, therefore, dispense with intuitions, though they are sometimes tempted to think that they do. Instead, globalist moral theories are based on one big intuition, as it were, rather than a plurality of local case intuitions. For globalists, one particular moral principle is intuitively so plausible that they believe it ought to outweigh such case-by-case intuitions (Pust, 2000). Whatever the role intuitions play in justifying their principles or their case-by-case judgments, all moral theories seem to be based ultimately upon moral intuition.

It is this apparently indispensable reliance of moral reflection upon intuition that leaves it open to the challenges examined here. Whatever their provenance and their specific form, all these apparent threats share a common feature: they offer explanations of our intuitions which seem to imply that moral truth is irrelevant to their content. They thus suggest a picture upon which morality is an illusion, foisted upon us by cognitive mechanisms which do not reliably track genuine, or genuinely moral, features of the world.

2.1. Prospect Theory

The first threat of this kind we shall examine has already been the object of some high-quality philosophical reflection (Horowitz, 1998; Kamm, 1998; Sinnott-Armstrong, 2006; Van Roojen, 1999), but it is worth returning to it once more, since its (apparent) deflationary potential has not yet been exhausted. Prospect theory (PT) was developed by Daniel Kahneman and Amos Tversky (1979) to explain certain apparent anomalies in human decision-making, in particular the predictable failure of people, in particular circumstances, to maximize subjective utility in making decisions under uncertainty. For our purposes, only one aspect of the theory is important: the centrality of framing effects. PT predicts that people will make different choices in what are, from the viewpoint of standard utility theory, identical situations, depending upon how their choices are framed.

For instance, consider the famous Asian Flu Problem (Tversky & Kahneman, 1981). Subjects were divided into two groups; each group was asked to choose between two government responses to a coming epidemic of a deadly strain of influenza which would kill 600 people if no action was taken. The groups were confronted with the following options:

Group 1
A. If Program A is adopted, 200 of the 600 people who would otherwise die will be saved.
B. If Program B is adopted, there is 1/3 probability that all 600 people will be saved, and 2/3 probability that no one will be saved.

Group 2
A′. If Program A′ is adopted, 400 people will die.
B′. If Program B′ is adopted, there is 1/3 probability that no one will die, and 2/3 probability that 600 people will die.

The majority of respondents in Group 1 chose (A), whereas those in Group 2 overwhelmingly chose (B′). This seems odd, since (A′) is precisely the same program as (A), just as (B) is the same as (B′). People choose (A) over (B), but (B′) over (A′), simply because of the manner in which the options are framed in the alternative descriptions. People are risk-averse when it comes to gains, and risk-seeking when it comes to losses, but what counts as a loss and what a gain is (sometimes) a function of the manner in which options are presented. If the decision problem is framed in terms of gains—"lives saved"—they choose the less risky option, but when it is framed in terms of losses they take chances. Equivalently, our response to losses is more extreme than our response to gains—the loss of $100 weighs more heavily in our decision making than would the gain of the same amount, even if the final amount in our possession would be exactly the same in both cases. These experimental results have now been confirmed many times.

These results seem to show that people sometimes behave irrationally. Rather than seeking to maximize expected utility, they switch between options simply on the basis of the manner in which they are presented. For our purposes, however, what matters here is that PT seems to throw doubt on the evidential value of some of our central moral intuitions.

To see how this works in practice, consider a current moral controversy: the debate over voluntary euthanasia, which has raged in many countries over the past two decades. Many people have the intuition that though it is permissible for a physician to withdraw life-saving treatment from a patient, at their request, it is absolutely impermissible for them to engage in actions which aim to hasten death. In general, most people have a powerful intuition that acts and omissions are morally asymmetrical. It is much more serious, morally, to cause a death by acting than it is to cause a death by omitting to do something. Consequently, there are things we are forbidden to do, even though these actions would lead to fewer deaths overall. Compare these two cases:

1. You have available a batch of a rare medicine which halts the progression of a deadly disease. You can drive to a distant town where one person has contracted the disease, or to another in the opposite direction where there are five sufferers. You cannot reach both in time to prevent the death of all six.

2. You have five people in the back of your van who require immediate hospitalization if they are not to die of a rare disease. However, there is a man who cannot be moved (but is otherwise healthy) lying across the road. Driving to the hospital requires running the man over, thereby killing him.

Most people have the intuition that in case (1) we should choose to save more people rather than fewer. However, the response to case (2) is crucially different: most believe that you cannot run over the man lying in the road. In (1), you ought to bring about the result that five live, rather than one, but in (2) you ought to bring about the result that one lives, rather than five. What explains this asymmetry in our intuitions?

Philosophers have expended a great deal of energy in giving a rational explanation of our intuitions here, but PT suggests a different response (Horowitz, 1998). It suggests that our intuitions are a product of the same mechanisms at work in Asian Flu. Harming someone is perceived as a negative deviation from the baseline, which is here the state of affairs in which they continue to live, but failing to save is merely a forgone gain. We weigh losses more heavily than gains, and are risk-averse when it comes to gains. Accordingly, harms are perceived to be worse than forgone gains. Since, in (2), the man lying in the road is not in any danger, running him over involves the loss of his life; it is therefore seen to be more significant than the forgone gain of the lives of the five people who are already in danger. In (1), however, we seem simply to be concerned with forgone gains on both alternatives; accordingly, we maximize expected utility.


Now, if our intuitions here are really the product of framing effects, then it seems that their value as evidence is vitiated. When we considered Asian Flu, we saw that framing effects undermine the rationality of our judgments, since they cause us to assign different weights to precisely the same alternatives simply according to whether we perceive them as losses or forgone gains. Similarly, we might claim, apparently moral intuitions generated by framing effects should be distrusted. Indeed, as Horowitz (1998) argues, if our intuitions here are generated by framing effects, we might question whether they are moral intuitions at all, since they are not responses to morally relevant features of the world. Instead they track morally irrelevant framing effects. If Horowitz's case against the harming-versus-not-aiding (HVNA) distinction can be sustained, this will be bad news for those philosophers who think that negative rights impose stricter correlative obligations on us than do positive rights. Since, as Sinnott-Armstrong (2006) notes, the doing–allowing distinction is central to our judgments in many cases, if framing effects are responsible for our intuitions here, a great deal of our morality seems to be under threat.

Notice, too, that one common response to claims that heuristics and biases undermine our pretensions to rationality doesn't seem to be available here. Gigerenzer and his colleagues have influentially argued that although heuristics and biases may occasionally lead us astray, they are optimally designed for rational decision-making, given the constraints under which real-world agents have to decide: fast and frugal algorithms will typically outperform slower but more accurate decision procedures (Gigerenzer, Todd, & the ABC Research Group, 1999). This may be a plausible response to many of the claims of the heuristics and biases tradition, but it won't work here. It may be that our intuitions about losses versus forgone gains are the product of a heuristic which is generally rational, but the role it plays in moral intuition seems too closely analogous to the role it plays in irrational decision-making (in cases like Asian Flu) for the rationality of the heuristic, elsewhere in human decision-making, to rescue its use in moral thought.

2.2. Brain Processes

Traditional moral dilemmas are also the focus of the second threat. In the first study of the way in which brains process moral dilemmas, Joshua Greene and his colleagues found significant differences in the neural processes of subjects considering personal versus impersonal moral dilemmas (as well as nonmoral dilemmas). A personal moral dilemma is a case like (2), which involves directly causing the death of someone, whereas an impersonal moral dilemma is a case like (1), in which death results from processes that the agent does not initiate. When subjects considered impersonal dilemmas, regions of the brain associated with working memory showed a significant degree of activation, but regions associated with emotion showed little activation. But when subjects considered personal moral dilemmas, regions associated with emotion showed a significant degree of activity, whereas regions associated with working memory showed a degree of activity below the resting baseline (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001). The authors of the study plausibly suggest that the thought of killing someone is much more personally engaging than is the thought of failing to help someone.

In their original study, Greene and his coauthors explicitly deny that their results have any direct moral relevance. Their conclusion is "descriptive rather than prescriptive" (2001, p. 2107). However, it is easy to see how their findings might be taken to threaten the evidential value of our moral intuitions. It might be suggested that the degree of emotional involvement in the personal moral dilemmas clouds the judgment of subjects. It is, after all, a commonplace that strong emotions can distort our judgments. Perhaps the idea that the subjects would themselves directly cause the death of a bystander generates especially strong emotions, which cause them to judge irrationally in these cases. Evidence for this suspicion is provided by the under-activation of regions of the brain associated with working memory. Perhaps subjects do not properly think through these dilemmas. Rather, their distaste for the idea of killing prevents them from rationally considering these cases at all (Sinnott-Armstrong, 2006). But if this is so, if our intuitions here are distorted by our emotions, then we ought to discount them. As Greene himself has put it more recently, "maybe this pair of moral intuitions has nothing to do with 'some good reason' and everything to do with the way our brains happen to be built" (2003, p. 848).

2.3. Morality as an Evolutionary Illusion

One final, rather more speculative, example: in a recent book, Richard Joyce (2001) advances evolutionary considerations in favor of the view that morality is a myth. Joyce has several arguments to this end, but we shall examine just one: the argument from the fact that human moral dispositions are the product of evolution.

Joyce claims that the evolutionary origin of morality undermines it. If morality is the product of evolution, then we have it because it is adaptive. Moral behavior and dispositions were selected for, roughly, because those of our ancestors who exhibited such behavior and dispositions succeeded in passing on more copies of their genes, in one way or another, than those who didn't. So morality evolved because it subserves a nonmoral function. But since this is the case, we have no reason to think that our moral responses track genuinely moral features of the world. Instead, they are more likely to track fitness-relevant features of our environment, which are not plausibly identified with moral features. Joyce invites us to compare ourselves to John, who judges that Sally is "out to get him." John suffers from paranoia. Accordingly, though it is possible that Sally really is out to get him, John would have judged that she was out to get him regardless. We ought, therefore, to discount the evidential value of his judgment heavily. Similarly, given the fact that morality is adaptive, there is good reason to think that we would have evolved moral dispositions whether or not there were any moral properties. Since this is the case, we ought to discount the evidential value of our moral intuitions.2

Joyce's evolutionary argument against morality provides a general framework within which we can situate all the deflationary arguments we have examined. They all suggest the following argument against morality, insofar as the latter rests upon an intuitive base:

1. Our moral theories, as well as our first-order judgments and principles, are all based, more or less directly, upon our moral intuitions.
2. These theories, judgments and principles are justified only insofar as our intuitions track genuinely moral features of the world.
3. But our moral intuitions are the product of cognitive mechanisms which track morally irrelevant features of the environment; therefore,
4. Our moral theories, judgments and principles are unjustified.

The argument, whether it is motivated by concerns from psychology, evolutionary biology or neuroscience, casts doubt upon our ability to know moral facts. It is therefore a direct challenge to moral epistemology. It is also an indirect challenge to the claim that there are moral facts to be known: if all our evidence for moral facts is via channels which cannot plausibly be taken to give us access to them, we have little reason to believe that they exist.

Another way to approach the same conclusion is by pointing out that if our moral intuitions are generated in the ways suggested, moral truth is irrelevant to the explanation of our moral experiences. This way of putting the point brings out the similarity between these cognitive scientific challenges to morality and Harman's (1977, 1986) well-known argument that moral properties are explanatorily impotent. It is worth noting that, whatever we might think of Sturgeon's (1988) reply to Harman, it seems not to be available here. Sturgeon argues that moral facts supervene upon natural properties, and that our moral judgments are responsive to those properties. But the deflationary arguments under consideration here do not deny that moral judgments track natural properties; instead they deny that we have any good reason to believe that these natural properties constitute, or form the supervenience base of, moral properties.

3. Replying to the Deflationist

How might those of us who want to defend the reality and rationality of morality

defend it against these threats from cognitive science? I think that different threats

require different responses, some specifically tailored to the details of the particular

claim under consideration, some more general. We can divide these responses into

two general classes: confronting the threat and biting the bullet.

Biting the bullet is the most general response available to us. We bite the bullet

when, rather than looking for reasons to reject the cognitive scientific claim,

we appropriate it. Rather than taking it to throw doubt on the objectivity or

the existence of morality, we take it to tell us something about what morality is or

how it is implemented in the mind or the brain. Confronting the threat is a response

to a particular finding, and consists of finding reasons to doubt the truth or

the significance of the claims made on its behalf. When we confront, we argue

Philosophical Psychology 575


that the cognitive scientific claim is false or insignificant; when we bite we agree that it is significant, but argue that its significance is misconstrued as undermining morality.

When should we bite and when confront? Confronting a threat is a risky business. We meet it on its own empirical and conceptual terrain, and there we hope to show that it is faulty. We claim that there is something wrong with the experimental design, or that the experimenters have not ruled out alternative explanations of their data, or we advance rival, equally empirical and therefore testable hypotheses—in other words, we engage with the findings in the normal manner of scientists, natural or social. It may be that we can detect flaws in the claims we are examining, but of course we have no guarantee that we will succeed. Sometimes, findings pan out. If they do, we had better be able to find them palatable.

This seems like bad news, because it seems easy to imagine a bullet that is very bitter. That is, it seems eminently possible that claims will be made that cannot be incorporated into morality, and which, therefore, must be confronted. Consider the following analogy: Oliver Sacks (1985) argues that the visions of Hildegard of Bingen were "indisputably" the product of migraine attacks. He hastens to add that this does nothing to cast doubt on their validity as religious revelations. He offers us a way to bite the bullet here; rather than taking the migraine theory to be deflationary, we might take it to tell us something about how revelations are implemented in the brain. However, I think this move is implausible. The visual disturbances associated with migraine are too well understood—we can explain too satisfactorily not only the origins but also the content of these percepts, with reference only to physiological processes—for the suggestion that they have a divine content to be at all credible. Sacks' claim must, it seems, be confronted by anyone who wants to maintain that Hildegard's visions were divinely inspired, because it cannot be bitten.

The link between the experience and what it is supposed to be evidence for is too tenuous in the migraine case for biting to work here. Analogously, deflationary claims about morality can only be bitten when it is plausible to postulate some kind of connection between the neural or mental process which gives rise to the moral intuition and morality itself. The danger that seems to confront us, therefore, is this: we are hostages to empirical fortune as to whether confrontation will succeed with regard to this or that cognitive scientific claim. Sometimes biting will work when confrontation doesn't, but we cannot rule out the disquieting possibility that it may be just a matter of time before a claim is made which resists both strategies. When that happens, it seems, a part of morality, of greater or lesser extent, will fall.

Actually, I think the danger is less than it seems, at least if we adopt the kind of metaethics I recommend. Some metaethical views will find a great variety of bullets unpalatable, whereas others can swallow a great deal without gagging. I shall therefore suggest that the challenges we confront from cognitive science give us (one more) good reason to adopt one or another constructivist moral theory. We should trade in moral theories which make bullets less palatable for those which make them more.


3.1. Confrontational Approaches

Let’s return to the threats, beginning with PT. Philosophical responses to it thus far

have been of the confrontational variety. Frances Kamm (1998) has given us perhaps

the best so far. Kamm argues that if PT really explains our moral intuitions in moral

dilemmas, then the (putatively) irrational principle losses-outweigh-forgone-gains

(LOFG) will trump HVNA in every case. LOFG will explain our supposed moral

intuitions, which will therefore be suspect. But, she claims, this is false; it is the

distinctively moral HVNA that does the work, not the irrational LOFG. Kamm’s

strategy is therefore to present cases in which our intuitions diverge from the

predictions yielded by LOFG.

For instance, suppose we have to choose between administering a scarce medicine

to four patients who are currently perfectly healthy but will certainly contract a fatal

illness, and five who are already dying but will recover if helped. LOFG predicts that

we shall choose to aid the four, preventing the loss of their lives, rather than the five,

since failing to save them represents only a forgone gain. But Kamm’s intuition—

which I share—is that we don’t: rather than preferring the well-being of the four,

we make the choice on purely consequentialist grounds. As Kamm (1998) puts it,

‘‘Avoiding loss does not trump no-gain here’’ (p. 477). Similarly, if we reframe cases

of harming so they involve losses rather than forgone gains, our intuitions track

HVNA and not LOFG. For instance, suppose case (1) were altered so that it involves

losses on either alternative: now you have the opportunity to drive five people to

hospital to be vaccinated against a disease which they have not yet contracted but

from which they will otherwise die, but someone who cannot be moved is lying in the

middle of the only road. Since both alternatives involve losses, LOFG predicts that

we will make our choice on purely consequentialist grounds. But we don’t; we refrain

from harming the one person. In this case, it seems that LOFG is trumped by HVNA.

If Kamm is correct, our moral intuitions will always be guided by HVNA, which is

a moral principle, and not LOFG. I’m not sure she’s right, though. Consider these

three cases:

(a) I can save one limb of each of five people by cutting off the limb of one other

person;

(b) I can choose to save the limbs of five people, but only by ignoring the plight of

one other who without my aid will lose a limb;

(c) I can save the limbs of five people by acting so as to prevent one person from

regaining a limb already lost.

Clearly (a) is impermissible while (b) is permissible or perhaps obligatory, just as

both principles predict. But what of (c)? Plausibly, if I act to prevent the one person

from regaining her limb I harm her (perhaps I deliberately move the medication she

needs out of her reach). But my intuition is that doing so here is permissible. If it is

permissible to harm here, this may be because doing so results in a forgone benefit,

rather than a loss. It appears that sometimes LOFG can trump HVNA. If that’s the

case, then some of our moral intuitions are the product of an allegedly irrational


principle, just as Horowitz claimed, and Kamm’s confrontational approach to

PT fails.

It may be that my intuition here is not widely shared. In any case, however, even if

Kamm has indeed won a victory against PT, there is reason to think that it is merely

temporary. Greene’s most recent interpretation of the brain mechanisms involved in

thinking through moral dilemmas seems to explain the original HVNA cases and

Kamm’s putative counterexamples, all without invoking moral features of the world.

Greene (2005) now gives his original fMRI results an evolutionary interpretation.

He suggests that the resistance to killing in ‘‘personal’’ moral dilemmas comes about

because such cases trigger innate and relatively primitive responses in us. Cases

involving directly harming bystanders by driving over them engage our emotions, in

a way that cases involving omissions (or more indirect harms) do not, simply because

direct harm could be easily understood by our ancestors in the environment of

evolutionary adaptation. Such harms are therefore far more emotionally salient

for us. Our emotions inhibit our acting (or even imagining acting) to cause such

harms, but omissions of aid, or the causation of harm at a distance, via modern

technology, do not trigger strong emotions and are therefore given a more

dispassionate, consequentialist treatment.

Greene’s claims are threatening to our moral intuitions because they suggest that

we treat cases differently depending upon a feature that is, surely, morally irrelevant:

would a chimpanzee be able to understand it as a case of harming? Surely that is

a bullet that cannot be bitten; surely it can’t be wrong to perform an action just in

case a chimpanzee could appreciate it? If that’s right, then we have to take a

confrontational approach to Greene's claims, with all the risks that entails.

I shall suggest there are good reasons to think that a confrontational approach

might work against Greene. However, it is worth considering the attractions of bullet

biting, in response to this particular claim and more generally.

3.2. Biting Bullets

We can divide the threats to morality into two classes. The first, into which PT and

Greene’s claim arguably fall, is the class of direct threats. A scientific finding is a direct

threat to the evidential value of our intuitions if it appears to show that the relevant

intuitions are the product of brain or mind mechanisms that we have good reason to

think are unreliable. Framing effects are unreliable because, as Asian Flu illustrates,

the intuitions they elicit are sometimes irrational. If Greene is right, and our moral

intuitions differ according to whether the means we use would be graspable by

a chimpanzee, that too gives us reason to doubt that the intuition is reliable.

Like Hildegard of Bingen’s migraine-induced visions, the link between the intuition

and the claim for which it is taken to be evidence seems too weak for it to carry any

justificatory weight.

The second class of threat is illustrated by Joyce’s evolutionary claim. Here the

threat is not that we have independent reason to be suspicious of a particular

intuition, but instead comes from the knowledge that our moral intuitions generally


are the product of mechanisms which were not designed for morality. Our moral responses are the product, for instance, of an evolutionary history in which morality played no role. The selection pressures which shaped them were not moral but (genetically) selfish.

I noted earlier that some metaethical views will find these threats harder to see off than others. Suppose, for instance, that Platonism were true, and that moral facts and standards exist apart from human beings and their interests. In that case, we should be very worried, for if our moral responses evolved under nonmoral selection pressures, we have no reason to think that they are capable of tracking moral facts. We would have no more reason to trust them as a guide to morality than we have to trust our hearing as a guide to infrared radiation; no reason to think that the faculty gets any grip on its supposed target. This problem seems to generalize to other nonnaturalist metaethical views, according to which moral facts are not constitutively linked to human psychology or to human needs and desires.

But these are not the only metaethics on offer, nor, I think, the most plausible. Among the metaethical rivals to nonnaturalism, the many varieties of moral constructivism are, I suggest, especially attractive. Moral constructivism is the view that moral facts and properties are constituted by—constructed out of—the stances, attitudes, conventions or other properties of human beings. Moral constructivism comes in many different flavors, depending upon the properties of actual or ideal human beings they appeal to: contractualist (Scanlon, 1998), ideal-observer (Firth, 1952), Kantian (Korsgaard, 1986), relativist (Wong, 1984) or dispositional (Lewis, 1989), among others.3 On these views, moral facts are not independent of us, in the manner supposed by the Platonist. Instead, they are importantly the product of our (more or less idealized) moral responses. But if moral facts are, in important part, the product of our responses, then there is much less danger that our intuitions will systematically fail to be responsive to moral facts, in the manner suggested by Joyce.

It is worth noting that none of the constructivist approaches on the market today was developed as a response, ad hoc or otherwise, to the apparent threat to morality from cognitive science. Instead their authors based their cases for them on independent reasons they took to be decisive. There are, therefore, plausible moral theories available which can greet these empirical findings with equanimity. There is no reason to fear that the mere fact that morality has evolved, and requires for its realization a particular and contingent set of brain structures, will undermine it.

General threats, like that posed by Joyce's argument, therefore seem relatively easy to see off, so long as one is a moral constructivist. Direct threats might prove more troublesome, because they claim that particular moral intuitions are the products of mechanisms that we have good reason to think are unreliable. Constructivists see in moral intuitions, no matter how they are generated, the building blocks of morality. But if those building blocks are generated irrationally, then morality as a rational enterprise seems threatened. Greene's most recent work can serve as a model here. It seems plausible to think that if we respond so differently to cases depending upon the degree to which actions are appreciable by relatively more primitive primates, then our responses are irrational. Surely that can't be a morally relevant property


of an action? If that's right, then we cannot bite this particular bullet. Actually, I'm not convinced it is right. Nevertheless, let's explore the full range of options here, confrontational as well as bullet-biting.

Greene implicitly assumes that the brain processes he observed in his subjects are typical of normal human beings in general. He may, however, be wrong. It may be that his fMRI results, procured from undergraduate, presumably American, subjects are more culturally specific than he thinks, or that they represent the brain processes of untutored, philosophically naïve, subjects. These different options have different implications for his results.

Consider, by way of analogy, another empirical, and potentially deflationary, finding from another area. Michael Persinger has demonstrated that transcranial magnetic stimulation induces religious or mystical experiences in most people (Holmes, 2001). On this basis, as well as on the basis of the experiences of sufferers from temporal lobe epilepsy, Persinger has identified the neural bases of religious experiences in the temporal lobes. Persinger's claim is that experience of God is innate, in the sense that human beings have a hardwired disposition to have such experiences in the right circumstances (Persinger, 1987). But it is surely possible that he has identified something less interesting: perhaps the neural bases of religious experiences in people who, for reasons of culture or upbringing, are disposed to such experiences (Persinger's subjects were, after all, largely Americans, among whom rates of religious belief are extremely high compared to other developed countries). Evidence for this latter hypothesis comes from the application of Persinger's apparatus to Richard Dawkins (for the BBC television program Horizon; see BBC, 2003). Dawkins, who is a militant atheist, reported effects on his breathing and his limbs, but no mystical experiences.

Persinger's failure to stimulate a mystical response in Dawkins might be interpreted in several ways that are relevant to Greene's claims. First, it might indicate that the disposition to spiritual experiences is not after all innate in human beings, but is instead the product of culture-bound experiences. Perhaps atheists are largely immune to such experiences. On this view, the disposition would be the product of learning, to an extent too great for it to count as innate.4 Second, it might be the case that the disposition is innate, in the sense that it develops in a very wide range of typical human environments including the environment of evolutionary adaptation, but that it can nevertheless be turned off by learning, with greater or lesser expenditure of effort. On this view, the disposition to spiritual experiences would be, as it were, the default setting for human brains. Finally, it might be that the disposition is strongly innate, in the sense that (as a result, for instance, of developmental canalization) for all or the great majority of individuals it will develop regardless of their religious education, or indeed convictions. Perhaps Dawkins' response was atypical, perhaps even atypical for an atheist.5

Let's consider the analogue of each possibility for Greene's view. Suppose that putting a convinced consequentialist, say, Jack Smart, or a thoroughgoing Kantian, say Christine Korsgaard, into the fMRI machine revealed a quite different pattern of brain activation. On the first possibility, we might take Greene's results to


demonstrate that different people have different responses as a consequence of their education, culture or individual convictions, and that these responses are subserved by different regions of the brain. It might be that the responses of Korsgaard and Smart are subserved by their working memory, despite the fact that they have different intuitions here. What should we say, in that case? We might take it as evidence that Greene's undergraduate subjects are simply confused, as a result of a deficient moral education, and that it is this confusion that explains their inability to think clearly about the cases. Similarly, it might be that repeating Greene's experiments with subjects from different cultures or different socio-economic classes might give us different results. In that case, we might conclude that the pattern of brain activation revealed was culture or class specific: perhaps East Asians, for instance, who emphasize social factors in their intuitions concerning knowledge (Weinberg et al., 2001), might also emphasize the greater good in their consideration of moral dilemmas, and have consequentialist intuitions subserved by working memory.

Second, suppose that the responses of the undergraduates are (in some sense) innate in the human mind, but that at least some people who have reflected a great deal on these cases have different patterns of brain activation (such as those we imagine Korsgaard and Smart to exhibit). If this were so, it might plausibly be claimed that philosophical training allows our subjects, but not Greene's, to consider these cases rationally. What implications for morality, as a theoretical and a practical undertaking, would this finding have? Moral theory would not thereby become any more difficult than we already believe it to be. Presumably, philosophers can follow Smart and Korsgaard in becoming moral experts, overcoming their initial response to the moral dilemmas and allowing themselves to consider them properly. Would moral practice be harder, given that most people do not have the time or the inclination to become moral experts? Not necessarily. A great deal depends upon the difficulty of rejecting our intuitive responses. Recall Persinger's results once more: perhaps they show that religious belief is innate; nevertheless millions of people have no such beliefs. Innateness does not entail unalterability, nor even difficulty in altering.6 Perhaps this result would do no more than vindicate Aristotle, by suggesting the necessity of a thorough moral education for each person.

Third, suppose that our philosophers' responses are atypical, even for philosophers (perhaps even for consequentialists), and this is so because the intuitions are (for practical purposes) unalterable. Should we conclude that ethical thought and action is therefore impossible? Surely not. First, to the extent to which the response really can be shown to be irrational, it might lose its grip on us. Think of the experience of seeing the sun rise: though we know that the Earth moves and not the sun, we cannot help but see things the other way round. Nevertheless, we can bracket our perceptual experience when it is appropriate to do so. In some ways we are better off in moral dilemmas than in the astronomical case. Suppose that our responses to such dilemmas are irrational. If this is so, then consequentialism begins to look very attractive, since the kind of cases that Greene takes his work to undermine are paradigm counterexamples to consequentialism (Sinnott-Armstrong, 2006).


Since many of us also have consequentialist, as well as deontological, intuitions in these cases, we have here a clash of intuitions. We are not called upon to reject our intuition, given our knowledge that it is irrational; we are asked to throw our weight behind one intuition rather than another.

There are, therefore, confrontational approaches to Greene's claims that promise to take the sting out of them. Some of these approaches turn on questions of fact, which are testable, some on reinterpretations of Greene's results, which are contestable. What they have in common is that all claim to show that his results do not tell us anything significant about morality. Suppose, however, that our tests reveal that though Smart's intuitions are subserved by his working memory, Korsgaard's (or McDowell's, or whoever it may be) are not; that they are the product of the regions of the brain associated with emotion. Would that be bad news, for deontology or for morality? Here, I think, the bullet-biting approach comes into its own.

If we bite, we accept the claim that the results are indeed significant for morality. Rather than taking them to throw doubt on it, in whole or in part, however, we incorporate the results. We go constructivist, and claim that the given responses are some of the building blocks out of which morality is constructed.

As we saw, Sinnott-Armstrong (2006) suggests that our intuitions in the moral dilemmas might be irrational because such cases trigger emotions so strong that we do not give all the options a fair hearing. But there is another way to think of our responses here. We might think of our strong emotion as a moral guide. When we cannot bear even to contemplate causing certain harms, this may be a product of right morality, rather than irrationality (Nussbaum, 2001). It is a sign that our characters are (perhaps only minimally) virtuous that we react so strongly to the thought of causing harm. There seems to be nothing incoherent in this line of thought, so far as I can see. Indeed, we can even use cognitive science to bolster it, at least somewhat. If it is true that emotions can cloud our judgments, recent empirical work demonstrates that the exact opposite is true as well: having the right emotions can help us to make better judgments, both moral and prudential. Damasio (1994) has shown that subjects with damage to their prefrontal cortex have great difficulty, both within and outside the laboratory, in making prudential decisions, apparently because they lose the capacity for certain emotions, while Blair (1995) seems to show that psychopaths are unable to distinguish moral from conventional transgressions because they lack the capacity for certain emotions. We can therefore plausibly claim that strong emotions do not distort moral judgment; they are essential to it. The mere fact (if it is a fact) that our emotions might short-circuit our (cold) reasoning processes in some moral dilemmas does not show that our responses have no evidential weight as a guide to morality. Nor does it show that morality is irrational, even in part; perhaps "hot" processes should be considered rational processes.7

Let's turn to PT one last time. We saw earlier that philosophical responses to it have been confrontational. Some such approach might work, though I am a little skeptical that Kamm's suggestion suffices. What if no confrontational approach is satisfactory? Can we bite the bullet? Recall that the argument against doing so is that

satisfactory? Can we bite the bullet? Recall that the argument against doing so is that


framing effects seem irrational, insofar as they are vulnerable to redescriptions, which generate conflicting intuitions. The thought underlying the claim seems to be this: since, as cases like Asian Flu show, we respond to losses more strongly than to forgone gains, even when the final result is exactly the same, our responses must be irrational, and losses cannot really be worse than forgone gains.

There are several responses we can make to claims that PT undermines the view that there is a moral difference between harming and failing to aid. First, we can point out that Asian Flu has a feature that is absent in most moral dilemmas; it is vulnerable to redescription so that people have different intuitions in response to the same end result. The dilemmas we are considering do not seem vulnerable to redescription in the same way; there really is a fact of the matter whether the agents involved are already endangered or not. For all PT shows, it still may be the case that losses just are more morally significant than forgone gains, especially when they are losses of lives. Skeptics about harming versus not aiding argue that because the heuristic that generates our inconsistent responses in cases that can be reframed leads us astray, this same heuristic must be unreliable in cases that are not vulnerable to reframing. But this argument is not compelling: it does not follow from the fact that a heuristic is unreliable in one kind of case that it is equally unreliable in another. This is precisely the point that Gigerenzer has long defended.

But what about Asian Flu itself? Isn't it a moral case? If so (and it seems very plausible to think that it is), then some moral cases are vulnerable to redescription, and therefore to framing effects. Doesn't this show that our responses to such cases are irrational? To the extent to which we have conflicting intuitions, of course, they cannot both be uniquely right. It may, however, be that there is a correct way to frame Asian Flu, and that thinking about alternative framings will help us to see what it is. There may be a fact of the matter, for instance, whether Asian Flu is properly thought of as involving losses, rather than forgone gains. If there is, this may help vindicate LOFG as a moral principle. But even if there isn't, this won't be sufficient to undermine it. We might simply think that Asian Flu is a true moral dilemma, and that there just is no uniquely correct response to it. In that case, our intuitions are in order just as they are.

Bullet biting therefore remains an attractive response to the challenges from cognitive science. If we embrace moral constructivism, at least, we might conclude that losses just are more significant than forgone gains, that our shared and innate emotional responses just are good guides to moral value, or even that it is worse to cause harm using means appreciable by a relatively more primitive primate. These are bullets we can bite and find palatable.

Cognitive scientists and philosophers who advance deflationary accounts of moral intuitions claim that our moral intuitions do not track moral properties, and that therefore they have little or no evidential value. But, for all they have shown so far, or seem likely to show in the near to middle future (at least), moral constructivists needn't be concerned by their claims. Of course, moral constructivism still faces a great many problems, not the least of which is the internecine battle among its variants. Moral constructivists need to establish which properties of human beings or


human societies constitute moral properties, and what degree of idealization is

required. Are moral facts constituted by, or tracked by, our intuitions as they are, or

by the intuitions we would have if we were fully rational, or under some other

conditions entirely? These questions crop up in the interpretation of the findings

of cognitive scientists, when, for instance, we are faced with the question of whether

we ought to build moral facts out of the responses of Greene’s subjects to moral

dilemmas, or whether instead we should identify them with the (imagined) responses

of philosophers like Korsgaard or Smart. These are difficult questions, answering

which requires detailed empirical and conceptual work. But there is no reason to

think that the findings of cognitive science make these questions more difficult for us

than they already were.

4. Conclusion: The Value of Intuitions

So what if our intuitions are the product of peculiarities of our brains or our minds?

If moral constructivism is true, why should morality not be built out of whatever

moral intuitions we have, no matter their source? If our intuitions clash with

prudential rationality, then we might as easily conclude that morality is autonomous

as that it is irrational. If our intuitions are products of different brain regions, both or

all of which plausibly serve moral functions, we simply confirm what many of us

already believed: that moral intuitions can conflict and that genuine moral dilemmas

are possible. The moral life is difficult, and sometimes we are faced with situations

in which whatever we do, we do wrong. We did not need cognitive scientists to tell us

that.

Morality is especially well placed to see off threats of the nature we have

considered. Other areas are far more vulnerable, because the link between intuitions

and their subject matter is much more tenuous. When we do metaphysics, for

instance, we are interested in what the world is really like. But our intuitions,

plausibly, do not track what the world is really like: they track our folk beliefs about

the world. Since we know that our folk beliefs about such matters are often wrong,

we should accord them relatively little evidential value. If we continue to formulate

our theories in their light, we simply are not doing metaphysics at all. Instead, we are

engaged in a branch of psychology: the description of a subset of our beliefs

(Cummins, 1998; Miller, 2000).8

If moral facts were the kinds of things nonnaturalists supposed—e.g., Platonic

entities, or ideas in the mind of God—then morality would be vulnerable in just the

same way. But if morality exists only for rational creatures capable of guiding their

behavior by its demands, this is plausibly because it can be identified with

the (suitably regimented) deliverances of our moral intuitions. Intuitions give us

access to moral facts because morality is, in significant part, constituted by our moral

dispositions.9 In other words, though the gap between folk metaphysics and

metaphysics is, plausibly, too great for our metaphysical intuitions to have much

evidential value, the gap between folk morality and morality itself is relatively small.


Morality just is folk morality suitably constrained, in reflective equilibrium, or whatever the case may be, and for this reason our intuitions are quite reliable guides to its contents. We should settle normative questions in the way we always have: by arguing over which intuitions—global or local—are genuinely moral and which should be rejected as we search for reflective equilibrium.

If you accept that moral facts are natural facts, constituted by human needs, pains

and pleasures, dispositions or whatever it may be, then whatever mechanisms track

these natural properties and give rise to moral intuitions are moral mechanisms

(whatever else they might be). That they evolved under nonmoral selection

pressures is irrelevant to their current function. Morality is constituted by the

systematic reactions of moral beings; so long as we accept this, we have little reason

to fear that the discovery of the mechanisms which underlie those reactions will undermine it.

Acknowledgments

Many people have provided helpful comments on this paper. In particular I want

to thank Tim Bayne, an audience at the Centre for Applied Philosophy and Public

Ethics, Canberra, and two anonymous referees for Philosophical Psychology.

Notes

[1] I shall not here consider, e.g., the apparent threat to morality posed by the decline of group-selectionist hypotheses in evolutionary theory, in favor of ''selfish gene'' theories, according to which behaviors are selected for only if they promote the genetic interests of individuals (or of the genes they carry): some have seen in the gene's eye view of evolution bad news for altruism, and therefore for morality. On the extent to which this fear is well founded, see Sober and Wilson (1998).

[2] Michael Ruse (1998) has advanced a similar argument for the conclusion that morality is an illusion: because moral properties play no role in our coming to possess moral beliefs, we have no reason to believe that there are any such properties.

[3] Here I adopt the terminology of Shafer-Landau (2003). Shafer-Landau is himself an opponent of constructivism, preferring a nonnaturalist moral realism which seems to me especially vulnerable to the cognitive scientific challenge.

[4] Innateness is not incompatible with the need for learning. Language is plausibly taken to be innate in human beings, but obviously requires learning to be actualized. On the view in question, however, too much seems to be learned for religious belief to count as innate. For instance, if a domain-general capacity to acquire explanatory beliefs about the world explains the religious beliefs of those who have them, these beliefs cannot be said to be innate.

[5] Something like this suggestion is Persinger's own explanation for his failure to stimulate a religious experience in Dawkins. He argues that the propensity to these experiences is normally distributed, and that Dawkins lies at the extreme end of the continuum.

[6] It is tempting but false to assume that because a disposition is innate, it must therefore be hard to alter it. As Richard Dawkins (1983) has pointed out, it may be easy to alter any disposition if we can discover the right agent to apply. This agent might be chemical, genetic, or environmental.


[7] Shaun Nichols (2004) has recently defended a constructivist account of morality which makes heavy use of the results of Blair and Damasio in just this manner. I have criticized certain aspects of Nichols's account (Levy, 2005), but I believe that Nichols's general approach is the right one to these findings.

[8] A related fear, that our epistemic intuitions track our culturally variable epistemic beliefs and not a universally shared concept, is expressed by Nichols, Stich, and Weinberg; see Weinberg et al. (2001) and Nichols, Stich, and Weinberg (2003). Of course, there are well known arguments from cultural variability against moral objectivity as well.

[9] In saying this, I am adopting the view Goldman and Pust (1998) call ''mentalism.'' If moral properties are (importantly) the product of the human mind, then the worry that we are investigating our minds only is disarmed.

References

BBC. (2003). God in the brain. Retrieved July 28, 2006, from http://www.bbc.co.uk/science/horizon/2003/godonbrain.shtml

Bealer, G. (1998). Intuition and the autonomy of philosophy. In M. R. DePaul & W. Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical inquiry (pp. 201–239). Lanham, MD: Rowman & Littlefield.

Bentham, J. (2003). Selections from An introduction to the principles of morals and legislation. In M. Warnock (Ed.), Utilitarianism (pp. 17–51). Malden, MA: Blackwell.

Blair, R. J. R. (1995). A cognitive development approach to morality: Investigating the psychopath. Cognition, 57, 1–29.

Cummins, R. (1998). Reflections on reflective equilibrium. In M. R. DePaul & W. Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical inquiry (pp. 113–127). Lanham, MD: Rowman & Littlefield.

Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. New York: Putnam.

Dawkins, R. (1983). The extended phenotype: The long reach of the gene. Oxford, England: Oxford University Press.

DePaul, M. (1998). Why bother with reflective equilibrium? In M. R. DePaul & W. Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical inquiry (pp. 293–309). Lanham, MD: Rowman & Littlefield.

Firth, R. (1952). Ethical absolutism and the ideal observer theory. Philosophy and Phenomenological Research, 12, 317–345.

Gigerenzer, G., Todd, P. M., & the ABC Research Group. (1999). Simple heuristics that make us smart. New York: Oxford University Press.

Goldman, A., & Pust, J. (1998). Philosophical theory and intuitional evidence. In M. R. DePaul & W. Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical inquiry (pp. 179–197). Lanham, MD: Rowman & Littlefield.

Greene, J. (2003). From neural ''is'' to moral ''ought'': What are the moral implications of neuroscientific moral psychology? Nature Reviews Neuroscience, 4, 847–850.

Greene, J. (2005). Cognitive neuroscience and the structure of the moral mind. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind: Structure and content (pp. 338–352). New York: Oxford University Press.

Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 2105–2108.

Harman, G. (1977). The nature of morality. New York: Oxford University Press.

Harman, G. (1986). Moral explanations of moral facts—Can moral claims be tested against moral reality? Southern Journal of Philosophy, 24, 57–68.

Holmes, B. (2001). In search of God. New Scientist, 170, 24.

Horowitz, T. (1998). Philosophical intuitions and psychological theory. In M. R. DePaul & W. Ramsey (Eds.), Rethinking intuition: The psychology of intuition and its role in philosophical inquiry (pp. 143–160). Lanham, MD: Rowman & Littlefield.

Joyce, R. (2001). The myth of morality. Cambridge, England: Cambridge University Press.

Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.

Kamm, F. M. (1998). Moral intuitions, cognitive psychology, and the harming-versus-not-aiding distinction. Ethics, 108, 463–488.

Korsgaard, C. (1986). The sources of normativity. Cambridge, England: Cambridge University Press.

Levy, N. (2005). Imaginative resistance and the moral/conventional distinction. Philosophical Psychology, 18, 231–241.

Lewis, D. (1989). Dispositional theories of values. Proceedings of the Aristotelian Society Supplementary Volume, 63, 113–137.

Mill, J. S. (2003). Utilitarianism. In M. Warnock (Ed.), Utilitarianism (pp. 181–235). Malden, MA: Blackwell.

Miller, R. B. (2000). Without intuitions. Metaphilosophy, 31, 231–250.

Nichols, S. (2004). Sentimental rules: On the natural foundations of moral judgment. Oxford, England: Oxford University Press.

Nichols, S., Stich, S., & Weinberg, J. M. (2003). Meta-skepticism: Meditations in ethno-epistemology. In S. Luper (Ed.), The skeptics (pp. 227–247). Aldershot, England: Ashgate.

Nussbaum, M. C. (2001). Upheavals of thought: The intelligence of emotions. New York: Cambridge University Press.

Persinger, M. A. (1987). Neuropsychological bases of God beliefs. New York: Praeger.

Pust, J. (2000). Intuitions as evidence. New York: Garland.

Ruse, M. (1998). Taking Darwin seriously: A naturalistic approach to philosophy. Amherst, NY: Prometheus Books.

Sacks, O. (1985). The man who mistook his wife for a hat. London: Picador.

Scanlon, T. (1998). What we owe to each other. Cambridge, MA: Harvard University Press.

Shafer-Landau, R. (2003). Moral realism: A defence. Oxford, England: Clarendon Press.

Singer, P. (1974). Sidgwick and reflective equilibrium. The Monist, 58, 490–517.

Sinnott-Armstrong, W. (2006). Moral intuitionism meets empirical psychology. In M. Timmons & T. Horgan (Eds.), Metaethics after Moore (pp. 339–365). New York: Oxford University Press.

Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: MIT Press.

Sturgeon, N. (1988). Moral explanations. In G. Sayre-McCord (Ed.), Essays on moral realism (pp. 229–255). Ithaca, NY: Cornell University Press.

Tversky, A., & Kahneman, D. (1981). The framing of decisions and the psychology of choice. Science, 211, 453–458.

Van Roojen, M. (1999). Reflective moral equilibrium and psychological theory. Ethics, 109, 846–857.

Weber, M. (1994). Sociological writings. New York: Continuum.

Weinberg, J., Nichols, S., & Stich, S. (2001). Normativity and epistemic intuitions. Philosophical Topics, 29, 429–460.

Wong, D. (1984). Moral relativity. Berkeley, CA: University of California Press.
