
Science Communication 34(5) 618–641
© 2012 SAGE Publications
Reprints and permission: sagepub.com/journalsPermissions.nav
DOI: 10.1177/1075547012459177
http://scx.sagepub.com

Ethics and Risk Communication

Paul B. Thompson1

1Michigan State University, East Lansing, USA

Corresponding Author: Paul B. Thompson, Michigan State University, 503 South Kedzie Hall, East Lansing, MI 48822, USA. Email: [email protected]

Abstract

One source of ethical failure in risk communication lies in the inadequate theoretical understanding of ethically significant assumptions embedded in risk discourses. A second source of failure resides in the way that risk communication serves ethical purposes that come into conflict in particular cases. The ethical motivation for risk communication may be to improve effectiveness of decision making, to empower recipients of a message, or to ensure that exposure to risk is consistent with ethical criteria of informed consent. Philosophical theories of ethics provide a useful way to frame these important differences in emphasis.

Keywords

antismoking messages, biotechnology, health communication, normative social influence, philosophy, perception, risk communication, policy, ethics in technology development, technology

Among other things, ethics can be understood as the philosophical attempt to articulate and understand human betterment. It is obvious that risk analysis and risk communication are attempts to marry science to a form of human betterment associated with the ability to foresee and plan for various types of harm. Nevertheless, ethics has had a rather spotty history of participation in the five-decade-long emergence of these multidisciplinary fields of scholarship. A full discussion of why this is the case is well beyond the scope of the present article. Instead of rehashing the past, I will concentrate on two forward-looking points. First, practitioners of risk assessment and risk communication have adopted conceptualizations of risk that are naively oblivious to ambiguities in the way that the word "risk" functions grammatically in ordinary language. Risk communication must take ordinary, nontechnical usage into account. Second, risk scholars have been (implicitly) using frameworks from ethical theory when designing a risk analysis or a related communication effort. There are contentious cases of failed or troubled risk communication that could be better understood and managed if the communicators paid explicit attention to the ethical framework(s) they were adopting.

A brief excursus on what is meant by "ethics" in this analysis is followed by a discussion of how ordinary language discourse about risk involves an implicit set of ethical commitments that often (but not always) run askance from those of risk experts. The opportunities for a failed risk communication strategy that arise in connection with the divergence between lay and expert usage are amplified by implicit (but systematic) embedding of value commitments typified by utilitarian and Kantian traditions in philosophical ethics, respectively. These philosophical schools are given a brief, schematic, and stylized overview before giving examples of how each approach has influenced strategies of risk communication. In conclusion, I suggest that more explicit attention to the ethical dimensions of risk can help clarify scholarly debates within the field of risk studies and can also improve practical efforts to develop risk communication strategies.

Ethics (for Beginners)

Ethics (noun plural) are norms, standards, and expectations that circumscribe, direct, and possibly motivate conduct appropriate for a given situation. Among professional groups, ethics (p.) reflect those practices essential to the performance of skill and proficiency thought to be peculiarly characteristic of the profession. They may be articulated by codes that specify rules for behavior in specific situations, but ethics are just as frequently communicated through stories and the celebration of iconic individuals (heroes and villains) in a manner that does not easily translate into imperatives indicating specific actions. Journalistic ethics, for example, might be promulgated by rules such as "Always tell both sides of the story" or by reference to an individual such as Edward R. Murrow or Katharine Graham whose conduct was thought to exemplify good ethical character.

Ethics (noun singular) is a domain of discussion, discourse, and theory devoted to the critical analysis of both individual and social conduct. In academic circles, it is sometimes characterized as a subfield of philosophy, though academic programs in ethics are often interdisciplinary. Sociological studies in ethics, for example, often undertake empirical work to identify the norms, value judgments, and opinions about ethics (p.) that are most widely shared in a given social group, while philosophers and practitioners of ethics (s.) may be engaged in a critical debate whose purpose is to forge agreement on what actions should be undertaken in a given context, as distinct from those actions and practices that typically are undertaken. There may also be discussion, discourse, and theory devoted to more general structural, logical, and psychological dimensions of ethics (p.). Ethical theory is an attempt to derive a very general set of prescriptive procedures that identify right actions, while metaethics is an attempt to characterize the nature of morality and ethical conduct without necessarily offering any basis for prescriptive judgment. During the 20th century, metaethics took a strong turn toward the analysis of language expressing moral judgments and prescriptions.

However, ethics (p.) and ethics (s.) are not fully distinct. Key aspects of ethics (s.) involve the empirical study of ethics (p.). Attempts to formulate ethical theories or complete metaethical analyses are increasingly informed by these empirical studies (Appiah, 2008). There are also situations in which commonly accepted norms either conflict or do not adequately specify the conduct that is demanded of people who wish to act in an ethical manner. The form of critical inquiry typical of ethics (s.) may help the person or persons choose which of several courses of action are appropriate. Similarly, large-scale cultural, organizational, or group change calls for widespread reexamination of traditions, habits, and norms. Transitions associated with changing views on gender, race, and sexuality have led to extensive ethical debates in recent years, and these debates are widely understood to have practical as well as scholarly significance.

This article is an appeal to ethics (s.). There is no attempt to specify rules or standards for the practice of risk communication. My approach draws on the "ordinary language" movement in 20th-century philosophy. It presumes that speakers of English possess a broad competency for using the word "risk" in ordinary conversation and that the concept of risk is circumscribed by the meanings that can be derived from this usage. Related grammatical forms (e.g., the adjective "risky," the adjectival nominative "riskiness," or the gerund "risking") are also assumed to partake in the ordinary grammar of risk. These grammatical forms associate the concept of risk with ethical content: They typically convey a value orientation toward the conducts or events that are described as risks or as risky. However, as will be discussed below, the precise nature of this value orientation varies from one context to another. Variance in ethical content is normally unproblematic for ordinary conversational usage, because context resolves the potential for ambiguity.

Although ordinary speakers have no trouble with ambiguity, scholarly and scientific traditions have congealed around the distinct ways in which ethical content is implicit in our quotidian conversations about risk. In contrast to the general concept of risk that unifies usage among contemporary speakers of English, a specific conception of risk will be developed in contexts where a given methodology or problem-solving task is indicated (Thompson & Dean, 1996). For example, financial analysts and epidemiologists are both highly conversant in risk problems, but it is clear that they conceptualize risk in very different ways. Not only are the outcomes (financial loss vs. epidemic disease) quite distinct, financial losses are incurred by the individual or corporate entity whose assets were "at risk," while the epidemiological conception of risk relates observed outcomes to a general population. Indeed, some epidemiologists would insist that risk is definable only in reference to populations. It is uncontestable that specific conceptions of risk are necessary for specific research and technical, decision, and analytic activities. It is also a commonplace of science communication that experts are advised to maintain cognizance of the way in which their specialized knowledge may not be shared by the larger public. Yet it is a thesis of this article that this commonplace is not fully appreciated with respect to its implications for communicating expert conceptions of risk.

Metaethics and Risk Communication

Scholars of risk, risk analysis, and risk communication have struggled to develop terminology for the analysis of a frequently observed pattern of difference in the way that experts from different disciplines (as well as the lay public) approach risk. In an early and frequently cited article, Judith Bradbury (1989) draws a distinction between a technical community of risk experts who presume that risks are objective features of the world and a community of experts in the social and behavioral sciences who presume that risks emerge as a blend of perception, culture, and the social situation of parties who experience risk. Experts in the technical group presume that risk is fully described by two discrete quantities. The first is a scientific account of some phenomenon or state of affairs, and the second is the probability or likelihood that this phenomenon or state of affairs will occur. Experts in the second group are more likely to view variation in a given person's state of knowledge, his or her values about what matters, and more conventional social variables such as race, class, and gender as affecting risk. Bradbury wrote that experts in the second group believe that risks are sociocultural constructions.

This distinction has recently been reprised by Fern Wickson, who closely follows Bradbury's characterization, but draws the distinction between realist views, which she associates with biological and physical scientific expertise, and social constructionist views, which she ties to the psychometric paradigm developed by Paul Slovic and to the cultural theory developed by Mary Douglas and Michael Thompson. Wickson also identifies a third school of thought derived from science and technology studies, with which she associates herself as well as Brian Wynne. This third school emphasizes different types of uncertainty, indeterminacy in the state of knowledge about risk, and value disagreements (Wickson, 2012). To this list, one might add scholars who discuss the social amplification of risk and also influential work by Ulrich Beck (1992), who argues that social movements once organized around issues of wealth distribution and class division have now shifted to a complex and socially unstable set of alliances focused on risk. The multiplication of terminology threatens to undermine effective communication at the outset.

Some of the confusion is associated with the concept of "social construction" itself. In most usage, social construction is an ontological classification, often introduced to assert the role of social process in establishing the meaning of a term. It has been used felicitously in connection with race, which was once believed to indicate a natural or biologically grounded classification that could be applied to human beings but is now generally regarded as having biological relevance solely as a statistical property of populations. Both specific races (Black, White) as well as the racial identity of individuals are characterized as social constructions: They reflect patterns of linguistic use that are systematically tied to forms of behavior. In that sense, race is a cultural reality that must be dealt with. Yet it lacks support from any scientifically rigorous account of the individual human organism (Harris, 1999; Jackson, 1989). This usage of social construction implies that while some terms lack an objective or scientifically certifiable referent, others do not (Hacking, 1999). Thus we might say that Mt. Fuji is not a social construction. Mt. Fuji was constructed by a series of volcanic flows over geologic time. There are, however, alternative ways to understand the origins of Mt. Fuji. It is believed by some to have arisen suddenly in around 285 BCE and by others to be the home of a fire deity. But none of these views imply that Mt. Fuji is a figment of the Japanese imagination or that the name "Mt. Fuji" is an empty term utilized to accomplish social purposes. It is thus possible to hold that although Mt. Fuji itself is not a social construction, all accounts of its origin or construction are.1 Applied to the above discussion of race, such a way of understanding social construction would imply that all theories of race are social constructions and also that race itself is the product of social processes.

In moving to a discussion of risk, we would similarly recognize that any theory of risk is a social construction. Thus, if toxicologists take risk to be the conditional probability or likelihood that a specified pathology will materialize given a specified exposure to a hypothesized causative agent, that theory or characterization of risk is fraught with social construction. It involves the way in which toxicologists as a group use terms such as pathology, probability, and causation and also the methods they deploy to arrive at agreement on presence or absence of a measurable quantity taken to be indicative of risk. Scholars in science studies are keen observers of the social processes that underpin theory construction and acceptance; hence, the observation that theories, paradigms, or characterizations of phenomena are social constructions is a commonplace within science studies. However, Wickson's (2012) characterization suggests that risk itself is a social construction, at least for those theorists who work in the psychometric and cultural theory paradigms. This would imply that as with race, no corresponding phenomenon amenable to analysis using the methods of natural science exists and that risk is merely the upshot of people talking and acting in certain patterned ways.2 The claim that risk itself is a social construction would also imply that toxicologists who believe themselves to be utilizing methods of the natural sciences to determine a meaningful quantitative relationship are as mistaken as those who thought themselves to have identified the biological basis of race.

However, it is doubtful that many practitioners of the psychometric or cultural theory paradigm would make such a claim. The more reasonable view would be that toxicologists are studying phenomena that, like Mt. Fuji, are not social constructions, while psychometric and cultural theory social scientists are studying phenomena that are rich in social construction. All of these phenomena are then dimensions of risk. I believe that such a view is widely (though far from universally) held among risk scholars. There is also no contradiction in toxicologists saying that although risk is not a social construction, their own attempts to understand and characterize it are. Many scientists understand the inherent fallibility of science in just such terms. Nor is there contradiction in asserting that a given social construction (e.g., theory, analysis, or account) is the best available, though defending such a claim will depend on the purposes for which the account is either intended or being utilized. Nevertheless, there continue to be some risk researchers in the natural sciences who would insist that social phenomena have nothing whatsoever to do with risk, and there are social scientists who elide their accurate observations on the social construction of various theories or viewpoints with the stronger and unsupportable claim that risk itself is a social construction.

In asserting that the superiority of any given theory or approach to risk depends on the purposes to which it will be put, I bring the role of value judgments to the foreground. Such a move is potentially contentious. Some theorists have argued that the scientific analysis of risk should be "value free," while acknowledging that any attempt to manage risks will inevitably involve a decision maker in value judgments (Cross, 1998; Rowe, 1977). This view appears to assert the universal superiority of a "scientific analysis" irrespective of the managerial purpose to which it will be put. Others have countered that any conceptualization of risk will involve at least the value judgment that possible outcomes are regarded as adverse. On the latter view, people do not discuss the risk of happiness and satisfaction unless they are speculating on the possibility that these situations might have some unnoticed downside potential (Cranor, 1997; Hansson, 1996; Rayner & Cantor, 1987; Schulze & Kneese, 1981). Indeed, scholars who were convinced that risks were objectively measurable accused the social constructionists of "relativism" or of allowing risk assessment to be done by popular vote. By the end of the 1990s, scholars of risk were writing about "the risk wars," implying that the two schools of thought were antagonistic toward one another and founded on wholly distinct concepts of risk. What is more, it was experts in the social constructionist school of thought who were most responsible for bringing communication studies to the forefront, especially through the so-called Orange Book. On one hand, this National Research Council report, titled Understanding Risk, emphasized a number of ways in which lay publics faced challenges in understanding the quantifications of risk typical of the technical community. On the other, it took the technical community to task for neglecting legitimate concerns of lay publics that were not easily incorporated into natural scientific approaches to risk (Slovic, 1999).

However, it is likely that the key issues that appear to be dividing people in the risk wars are less intractable than they are sometimes made to appear. For example, in many areas of risk assessment relevant to public health, the value judgment implied by taking human mortality and morbidity to be a bad thing is utterly uncontroversial. To point out that it is nonetheless a value judgment is not to imply that it should be debated. Furthermore, advocates of a so-called value-free judgment are clearly correct to insist on the relevance of science to both the identification of hazards and the derivation of probabilities to estimate the likelihood that harm from hazards will actually materialize. The scientific analysis of these elements should be shielded from forces that would bias the analysis toward favored outcomes. Progress toward recognizing that the word "should" in the previous sentence expresses a value that is critical to risk analysis has been made by drawing a distinction between social values, which express preferences toward specific social outcomes, and epistemic values, which specify the norms for scientific inquiry (Ben-Ari & Or-Chen, 2009). The rough distinction noted by Bradbury (1989) and Wickson (2012) continues to be significant, so long as it is not overinterpreted.

It is particularly important to notice how advocates of the value-free perspective have exhibited ways of speaking that make effective communication in nontechnical situations difficult if not impossible (Vaganov & Yim, 2000). Their tendency to insist that risk, properly understood, must always be understood as a function of hazard and exposure is a case in point. Within the context of scientific risk assessment, clear definitions of hazard, or the potential for harm that may be associated with a phenomenon or activity, and exposure, or the factors that contribute to the materialization of this harm, are essential. Once characterizations of hazard and exposure are available, it is possible to interpret the risk associated with a possible event e according to the following broadly specifiable formula:

Risk_e = f(v_e, p_e),

where v_e is the harm or harmful outcome (often specified simply in terms of morbidity and mortality) and p_e is the probability that the event occurs, given scientifically investigable conditions of exposure. In this formula, f can incorporate a number of complex mathematical concepts. In the simple case of quantifying the risk of losing a standard coin flip, v_e is the amount wagered, p_e is the probability of losing (i.e., .5), and f is multiplication, so the expected adverse value (i.e., risk) equals one half of −v.3
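To make the arithmetic concrete, here is a minimal sketch in Python of the coin-flip case just described; the function name and the values are illustrative conveniences of my own, not part of any standard risk-analysis library.

```python
def expected_value(outcome: float, probability: float) -> float:
    """The simplest instance of Risk_e = f(v_e, p_e): f is multiplication."""
    return outcome * probability

wager = 10.0
# Losing the flip costs -wager with probability .5 ...
risk = expected_value(-wager, 0.5)    # -5.0: "one half of -v"
# ... while winning pays +wager with the same probability (see note 3),
upside = expected_value(+wager, 0.5)  # +5.0
# ... so the total expected value of a standard coin flip is zero.
print(risk, upside, risk + upside)
```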

The simple coin flip illustrates the communicative challenge. If you ask someone what they think is the risk of a coin flip, they might well respond by saying that it is what they stand to lose (i.e., v, the amount wagered). But this quantity does not reflect the fact that their chance of losing is .5. A representation of the risk of a coin flip is thus not equivalent to whatever one stands to lose. When activities become complex and the calculation of probabilities becomes embroiled in uncertainty, this basic communication challenge is heightened. Scientists who have undertaken painstaking work to relate hazard and exposure are, in one sense, understandably frustrated when the word "risk" is used in popular contexts in ways that connote the mere potential for harm, with little or no recognition of its likelihood. What they do not seem to realize is that the function f is in fact only one of the ways to conceptualize risk. While the social theorists noted by Wickson (2012) have emphasized the way that gender, class membership, and political/cultural orientations lead people to evaluate risk in ways that are poorly represented by the function f, I will emphasize the way in which this function fails to satisfy the elementary grammar of risk as the word is used in ordinary language.

The ordinary language grammar of risk is attentive to utterances or written expressions from nontechnical contexts. Analysis of this grammar can proceed by examining representative sentences in which the word "risk" appears. A fully satisfactory account of elementary grammar would yield an expression that could be substituted for the word "risk" in an ordinary English sentence without altering its grammatical validity (e.g., the sentence would still be a grammatically proper English sentence) and without substantially altering its meaning. Many (perhaps most) English words lack a fully satisfactory account in this sense because they are vague and ambiguous or depend on context or sentence placement for their syntax. Words (e.g., risk) that can function as both nouns and verbs are especially subject to this kind of ambiguity. Yet, as I will show, the relationship between verb and noun form may be important for the way that a given word is associated with normative or ethical content.

The verb form of the word risk indicates a weak linkage to action and responsibility that is entirely absent in f(v_e, p_e). When used as a verb, the word "risk" implies action and at least the capability of intentionality or choice on the part of the subject in a risk sentence. People risk, animals and corporations risk. Rocks, trees, and the solar system do not risk, though they certainly can, with some probability p, cause events to occur (Thompson, 2007). If the subject of a risk sentence lacks intentionality, the sentence cannot be readily parsed:

That rock risks coming down the hill in a landslide.
The tree risked being blown over by the hurricane.
The solar system will risk destruction after the death of the sun.

These sentences are examples of jabberwocky. They are strange and jarring, and they invite poetic or metaphorical readings. Rocks, trees, and the solar system can be placed at risk, but the point here is to notice a peculiarity of grammar that occurs when the word "risk" is used as a verb (which it is not in the expression "placed at risk"). If the noun, verb, and adjectival forms all contribute to our ordinary language concept of risk, the "take-home" message is simply this: There is at least a weak sense in which use of the word risk is more closely tied to agency and intentionality than it is to things that merely happen through no one's action. I have argued that this grammatical linkage between risk and agency is most evident when a communication notes the mere possibility of a hazard in order to direct subsequent discourse toward consideration of what to do about it, or who should act (Thompson, 1999; Thompson & Dean, 1996). However, the scientific risk analysts' emphasis on statistics and probability effectively defines risk as a form of quantitative association. This is much more closely tied to the ordinary language concept of causality than it is to action.

A philosophical analysis of grammar can be supported or controverted by empirical studies of lay discourse. Only recently have empirical studies on the way nonspecialists talk about risk begun to appear. "From the frame semantics perspective, linguists found that at its core, as both a noun and a verb, 'risk' emphasized actions, agents or protagonists, and bad outcomes such as loss of a valuable asset" (Hamilton, Adolphs, & Nerlich, 2007, p. 178). This quantitative study of written texts notes similarity between the core properties of the word "risk" when used as either a noun or a verb. The authors also remark on the "interesting" fact that the word is used as a noun much more frequently in scholarly databases than in databases including usage from nonscholarly contexts. This quantitative study supports a qualitative metaethics analysis linking the word to contexts in which agency is present, as distinct from natural hazards or random events. This finding contradicts the view that risk is always adequately characterized by a formula such as Risk_e = f(v_e, p_e). The probability of a harmful outcome is connected to causality and hence to the predictability of events, but it does not convey the intentionality we associate with action. The "interesting" observation that scholarly databases exhibit an oversampling of the noun form also suggests that scientists working on risk diverge systematically from the ordinary grammar of the word "risk" in just this way. As such, I interpret these findings as supporting my nonempirical, philosophical analysis (Thompson, 1999, 2007; Thompson & Dean, 1996).

These metaethical considerations suggest the hypothesis that there is a flow typical of risk discourse in which early messages bring risk to the attention of the audience and are interpreted according to what I have called the "act-classifying" sense of the word. On this interpretation, a key conversational function of the word "risk" is to move the topic under discussion from the category of the unexceptional and quotidian (things that are generally not worth talking about) into the category of those topics, events, or activities that call for action by someone, often the person to whom the communication is directed. Once the topic has been accepted as one calling for action, it may become pertinent to consider the details of probability more carefully. In these contexts, an "event-predicting" sense of the word "risk" that is quite consistent with the Risk_e = f(v_e, p_e) formula preferred by experts may take over. It is not as if ordinary people never think or talk about risk in the same way as experts. Rather, we should expect that any message that interrupts the normal flow of events to raise the subject of risk will be interpreted as a signal that some sort of action needs to be taken (Thompson, 1999, 2010). Further empirical studies by sociologists were not designed to investigate this hypothesis, but the data presented are consistent with it (see Tulloch & Lupton, 2003).

This hypothesis suggests that early-stage discussions of risk organize thoughts and collaborative action toward moral and prudential responsibilities. Other work from risk communication studies provides indirect support for this view. Risk communication scholars working to effect behavior change have struggled with problems that arise when audiences react to messages fearfully or take them to threaten their self-concept (Covello & Sandman, 2001; Lapinski & Boster, 2001). Kim Witte has proposed a model that emphasizes attention to efficacy, that is, the ability of the recipient of a communication message to take action to remediate or otherwise address the risk (Cho & Witte, 2005; Witte & Allen, 2000). An ordinary language metaethics suggests that participants expect a conversation on risks to involve some point of action that they or others should initiate. There would be no point to a risk communication focus on, for example, background rates of radiation exposure or the eventual death of the sun.

In fact, the history of at least some efforts to communicate about risk shows that communicators err when they raise the topic of risk (even implicitly) only for the purpose of reassuring people that everything is just fine, that no action need be taken. Risk messaging on foods containing genetically engineered ingredients was calculated to assure consumers that these foods were "substantially equivalent" to other items on their grocery shelves. The message that many took from this messaging was quite the opposite (Katz, 2001). The contrast between a risk communication effort deliberately structured to induce behavior change and one developed simply to inform suggests another way in which ethics (p.) inform risk communication. This second point can be illustrated by noting a widely recognized philosophical schism in ethical theory.

Ethical Theory and Risk

Philosophers have developed many approaches to the theorization of ethical conduct, and even a cursory development of just one ethical theory is beyond the scope of this article. However, two patterns of thought were especially influential throughout the 20th century, and a brief account of the way in which these two contenders opposed one another is useful to understanding the ethics of risk. Utilitarian ethical theories define right action according to the criterion of the utilitarian maxim: Do that which produces the greatest good for the greatest number of affected parties. Kantian, Neo-Kantian, or deontological ethical theories (henceforth simply Kantian) define right action according to the categorical imperative: Act only according to that rule whereby you can, at the same time, will that it should become a universal law. Both approaches are discussed briefly below. Both traditions have rich literatures of philosophical development and both are beset by numerous difficulties. The focus here concerns the way that risk is both implicated within and addressed by each approach. The development of these theories that follows should not be regarded as a representative summary of their principal claims or intent, nor should this section be read as an overview of ethical theory in general. Given the orientation to risk, it is useful to notice that both ethical theories assume that the function of an ethical theory is to dictate which of numerous possible actions is ethically correct. They are thus particularly suited to the situation of choice in which a decision maker explicitly and deliberatively considers what should be done.

In definitive formulations of utilitarianism developed by Jeremy Bentham and John Stuart Mill, ethical decision making is portrayed as a problem in evaluating alternative possible actions in terms of their expected benefit and harm: An ethical decision maker treats choice as an optimization problem where the worth of decision outcomes to all affected parties is the quantity for which an optimal value is sought. This very general specification leaves many questions entirely open. What counts as a benefit or a harm? How should trade-offs between benefit to one party and harm to another party be reflected in the optimization process? Who counts as an affected party? For example, Bentham is often noted for his opinion that decisions should take account of their impact on nonhuman animals (Derrida, 2006/2008; Singer, 2002). What is relevant in the present context is that ethical conduct is a function of the expected value that can be assigned to the consequences of each decision option, once these additional ethical questions have been resolved.
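As an illustration only, the optimization framing just described can be put into schematic form. The option names, parties, and numbers below are invented for the example, and the sketch leaves the open questions (what counts as harm, who counts as an affected party) exactly where the text leaves them.

```python
# A schematic utilitarian choice: for each option, sum the (signed) worth of
# its outcome to every affected party, then select the option with the
# greatest total. All names and values here are hypothetical.
options = {
    "option_a": {"party_1": 5.0, "party_2": -2.0},  # concentrated benefit, some harm
    "option_b": {"party_1": 2.0, "party_2": 2.0},   # smaller but shared benefit
}

def total_worth(outcomes: dict[str, float]) -> float:
    """Aggregate worth across all affected parties (the utilitarian quantity)."""
    return sum(outcomes.values())

best = max(options, key=lambda name: total_worth(options[name]))
print(best)  # "option_b": on these numbers the shared benefit wins the trade-off
```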

Bentham was clearly aware that the consequences of a decision can rarely be predicted with certainty. His approach was consistent with a long tradition of decision analysts who had studied gambling problems. The solution is to reflect both benefit and harm as quantities of worth or value that include the probability that the beneficial or harmful outcomes will actually materialize (Bentham, 1789/1948). Thus, Bentham's approach prefigures that of 20th-century decision theorists who characterize decision making as a problem of making the best available trade-off between risk and benefit and who presume that a risk analysis advises the decision maker of the value (or harm) associated with potential hazards and also the likelihood that the hazards will actually occur. Much of present-day risk analysis operates under the implicit assumption that decision makers understand their task in the terms that Bentham specified more than 200 years ago. The formulaic representation Risk_e = f(v_e, p_e) (discussed above) is particularly well adapted to the utilitarian approach to ethics.

In contrast to the utilitarian approach, a Kantian stresses the way in which any decision maker will be making a choice according to some principle or rule. Immanuel Kant (1785/2002) provides an approach for testing decision rules according to the aforementioned categorical imperative. Decision makers are instructed to ask themselves whether they could endorse the rule in question as a universal law, as a principle that would be used by any decision maker to determine relevantly similar cases. One point of this question is to make the decision maker aware of possible sources of favoritism or bias. In particular, asking whether one would be willing to have others make decisions according to a given rule should sensitize a decision maker to whether or not he or she would consider the decision to have been ethically correct even in the position of an affected party. Hence, some have suggested that the overall thrust of Kantian ethical theory is to promote fairness as an overriding standard for ethical decision making (Rawls, 1980), while others have suggested that the categorical imperative is roughly equivalent to the Golden Rule: Do unto others as you would have them do unto you (Hirst, 1934).

Kantian ethical theory can be made more immediately applicable to practical decision making by emphasizing the way the categorical imperative can be linked to freedom and rights. Philosophers and political theorists have spilled much ink making this connection (see Guyer, 1995; O'Neill, 1991), but the detailed philosophical debate is not germane to the present discussion. Intuitively, application of the categorical imperative suggests that an agent should be particularly sensitive to the impact of his or her action on the freedom of others. If other parties remain free to act according to their own lights, it is often plausible to presume that the decision under review has little or no ethical significance from a Kantian perspective. For many who take this approach, actions that affect only the agent are not strictly moral at all but should be considered purely prudential (Vaccari, 2008). This suggests that ethical questions do not arise in cases where agents themselves will bear all the risk accruing from their decision. From the Kantian perspective, only situations in which risks are imposed on others take on ethical significance.


From this perspective, the question of whether risk-benefit optimization procedures (e.g., the utilitarian maxim) can pass the test set by the categorical imperative is crucial. Abstract philosophical debates on this question may be set aside in the present context, for real-life cases of risk-based decision making have provoked widespread consensus that it cannot. In the 1930s, the U.S. Public Health Service conducted extensive research on syphilis, including an observational study of untreated syphilis in Macon County, Alabama. The Tuskegee Study continued into the 1970s, well after penicillin had been recognized as an effective treatment for syphilis. The researchers' rationale for allowing the study to continue was influenced by multiple factors (Rothman, 1982), but the National Institutes of Health report on the incident (henceforth, "The Belmont Report") stressed the way that a narrowly utilitarian approach to the ethical evaluation of medical research could lead researchers astray (Jones, 1993).

A utilitarian evaluating the Tuskegee Study might have reasoned that benefits from continuing to observe the progress of untreated disease would accrue in the form of a better clinical understanding of the disease and its effects and that these benefits would offset the risk of continuing to observe untreated disease to the research subjects enrolled in the study. Nazi scientists also rationalized horrific research on prisoners being held in concentration camps, apparently on the basis of a similar application of the utilitarian maxim. However, if there is any instance in which the test suggested by the categorical imperative would suggest that utilitarian reasoning cannot be universalized, the case of the Nazi doctors would qualify (Macklin, 1992). Following the Belmont Report, the principle of informed consent has been the standard for ethical acceptability of risk to human subjects in research settings. Succinctly, research that imposes risk on human subjects is ethically acceptable only if subjects have been fully informed of all risks and researchers have secured freely given consent.4

Optimization Versus Consent in Risk Communication

One important strand of research on risk communication involves problems in which communicators hope to induce behavioral change among individuals and social groups in light of research on risk factors that contribute to mortality and morbidity (Fan & Holway, 1994; Yanovitzky & Stryker, 2000). The presumption behind these efforts is that target groups do not know that their behavior is risky and that an effective information campaign will induce voluntary behavioral change once people have become aware of the relationship between their conduct and their exposure to hazards. A 1977 report from Willard Cates, David Grimes, Howard Ory, and Carl Tyler described an apparently successful risk communication effort in the public media to advise women of the risks associated with leaving an IUD (intrauterine device) in place after conception. Risk communication efforts associated with smoking, HIV/AIDS, and numerous other behaviors have had mixed results in changing behavior.

The evaluation of these cases from the perspective of competing ethical theories illustrates the difference between the respective ethical principles as well as the reason why this difference may not matter much for some paradigmatic cases of risk communication. From a utilitarian perspective, the criterion for ethical management of risk is to achieve optimal trade-offs between beneficial and risky activities. Since the benefits from activities that expose people to the hazards of IUDs, smoking, and HIV/AIDS appear trivial in comparison to the harm that prevails when risk is realized, application of the utilitarian maxim suggests that behavior change is strongly indicated. If so, the standard for ethical success of a risk communication would appear to reside in its impact on behavioral change. The principle of informed consent, in contrast, requires only that people actually know what the relevant risks are and not that they also engage in behavior change. An Environmental Protection Agency official appeared to endorse this view when he wrote,

Success in risk communication is not to be measured by whether the public chooses the set of outcomes that minimizes risk as estimated by the experts. It is achieved instead when those outcomes are knowingly chosen by a well-informed public. (Russell, 1987, p. 21)

Utilitarian and Kantian ethical theories thus imply different ethical standards for risk communication. For a utilitarian, risk communication should facilitate choices that optimize the expected value of outcomes, and this may not require that decision makers have accurate or detailed knowledge of risks. For a Kantian, adequate respect for the autonomy of decision makers requires that they be placed in a position of having both knowledge about risks to which they will be exposed and the ability to give or withhold consent to that exposure. But this does not imply that utilitarians and Kantians will disagree in their ethical assessment of every risk communication strategy. On one hand, if there are truly negligible benefits associated with a risky activity, as would seem to be the case in failing to remove an IUD, informed consent can be expected to produce the behavioral change indicated by the utilitarian maxim. This is a case in which any reasonable person will almost certainly respond to a risk communication message with behavioral change. There is no reason to expect divergence between what individuals choose and what experts would regard as optimizing behavior. On the other hand, when there is little chance that interference in individual freedom will successfully bring about optimal results, it is unlikely that a utilitarian using the optimizing approach will recommend actions that would violate the categorical imperative. In the case of sexual behavior, for example, the long history of failed efforts to regulate behavior provides a reason not to recommend risk management strategies that would diverge from what is required by a principle of informed consent: that is, providing the information on risk and simply hoping for behavioral change. The theoretical difference between optimizing and informed consent makes little practical difference in either case.

However, there are risk policy domains where ethically based debates can rage. The management of risks from traffic accidents illustrates the point. From a utilitarian perspective, mandatory precautions such as seat belt and motorcycle helmet laws are justifiable if one assumes that data on reduction of mortality and morbidity are accurate (Hauser, 1974; Watson, Zador, & Wilks, 1981; Evans, 1986). From a perspective that stresses individual informed consent, such laws intrude on the freedom of an individual decision maker (Irwin, 1987; Rollin, 2006). As in the case of sexual activity, decisions about whether to use seat belts or helmets do not directly influence hazards for anyone other than the decision maker. Thus, from the standpoint of informed consent, it is reasonable to provide drivers with information, but there would be no ethical imperative to ensure that they use information in the manner that a risk optimization paradigm would suggest. Literature in risk communication implying that efforts in risk communication should induce behavioral change would thus appear to have underlying utilitarian commitments.

But it is when an action imposes risk on someone else that the considerations that arose in connection with the Belmont Report become most relevant. Clearly, many activities in contemporary society impose risk on others, yet provide little or no opportunity for giving or withholding consent to the parties exposed to hazards. Air and water pollution from industrial manufacturing, pesticides, and a host of other industrial activities fall into this category. Decision making on the regulation of these activities falls to public authorities. Debate over this decision making mirrors classic philosophical debates over whether public policies should aim to optimize risk and benefit, as utilitarianism claims, or whether they should strive to protect affected parties from intrusion into their sphere of personal freedom. An extreme interpretation of the Kantian view might hold that there are no cases in which environmental risks could be ethically justified, short of an effort that actually secures consent from affected parties (Machan, 1984). More realistically, the view might be that our political system reaches a compromise in which willingness of most citizens to agree to accept risks in exchange for the benefits of the economic activity derived from polluting activity reflects a kind of implied consent (Killingsworth & Palmer, 1992). In either case, the function of risk communication resides in the need for those who will bear risks to have a clear understanding of them, rather than in whether or not their behavior produces optimal risk-benefit trade-offs.

In contrast to this perspective, a great deal of the scholarly literature on environmental risks appears to simply assume that the utilitarian viewpoint is ethically correct. On this view, what matters ethically is whether optimal trade-offs are made (Cross, 1998). Public opinion figures indirectly in the process of reaching this optimum, because it is widely believed that irrational attitudes, heuristics, and biases; misperceptions; and the NIMBY (not in my back yard) syndrome can create political roadblocks to the implementation of those policies that would, in fact, most fully satisfy the optimizing goals of the utilitarian maxim (Lewis, 1990; Starr & Whipple, 1980). For a thoroughgoing utilitarian, the goal of risk communication is thus unabashedly one of manipulating opinion and behavior so that an optimal balance of risk and benefit can be achieved. This need not imply that the question of whether risks are voluntarily accepted is irrelevant to a utilitarian. Chauncey Starr, one of the founding figures in contemporary risk analysis, argued that standards of risk acceptability for involuntary risks would indeed be much lower than for voluntary risk. But Starr's (1972) approach was thoroughly utilitarian in its ethical commitments. It is also quite possible to reconcile utilitarianism with the need for good faith communication efforts, particularly when nonexperts have information that experts lack (Wynne, 1996). Knowing how people will act in a given situation will generally be important for anyone who hopes to achieve a predetermined outcome. Placing them in a position where they can engage in politics on an informed basis may not be.

Policy change over smoking represents an interesting and illustrative case. Prior to research results demonstrating the hazards to third parties from secondhand smoke, there had been little momentum behind efforts to introduce mandatory laws governing smoking behavior. Although there was a gradual increase in legislation intended to limit or discourage smoking from the early 1970s, during this era smoking was largely conceptualized as an individual behavior. Exposure to secondhand smoke was regarded as an annoyance by nonsmokers but not as an activity that exposed them to risk (Syme & Alcalay, 1982). Once the health effects of exposure to secondhand smoke became well understood, a new rationale emerged for legal remedies (Ezra, 1990; Walsh & Gordon, 1986). In the ethical terminology developed here, it became possible to see smoking as a behavior that imposed risk on nonsmokers, and given that ethical framing of the issue, a morally compelling case for legal bans on smoking in public areas could be mounted by mobilizing rationales derived from Kantian ethical theory. Laws that regulate exposure to secondhand smoke have proliferated in the past quarter century (Eriksen & Cerak, 2008). Interestingly, some analysts of public policy appear so wedded to a utilitarian framework that they attribute this shift in attitudes to a victory of self-interest over ethics (Green & Gerken, 1989).

There have been other cases where the ethical framing for a risk issue has remained contentious. Labeling for food containing ingredients derived from genetically engineered crops (or so-called genetically modified organisms [GMOs]) provides an example. When these foods began to appear on U.S. markets during the late 1990s, officials at the U.S. Food and Drug Administration (FDA) determined that people would derive no benefit from either avoiding GMOs or consuming them nor would they suffer harm from either choice. On the basis of this evaluation, FDA initially imposed high barriers to any mention of whether GMOs were present on product labels. FDA neither required labeling nor permitted most forms of voluntary labeling. This policy was a fairly straightforward application of utilitarian reasoning and was defended as such in analyses that supported the FDA's decision (Vogt & Parrish, 1999). Arguments supporting labels called on citizens' "right to know" and linked values that individual consumers might hold regarding personal risks with religious and other personal freedoms that are relevant to food choice (Thompson, 2002). As such, the "prolabeling" argument appealed to a Kantian approach, which presumed that it was up to each individual consumer to decide whether there is any benefit to be derived from consuming or avoiding GMOs.

In summation, the overall ethical framework in which a given risk problem is conceptualized can play a significant role in shaping the way one would develop an appropriate communication effort. In cases where informed consent has been clearly identified as the appropriate standard, as in developing protocols for the use of human subjects in research contexts, a communication tool should clearly try to minimize its persuasive component. At the same time, some of the most widely studied problem areas involve educational and persuasion efforts where the attempt to achieve behavior change would appear to be ethically well justified. But the role of risk communication in a wide range of policy-relevant cases is far less clear. Is it to replicate the views of people who have studied comparative risks widely in the general populace? Is it to enable the accomplishment of those social goals that would be endorsed by either utilitarian or Kantian ethical theories? Even if one remains agnostic about the answers to such questions, the above analysis suggests that specialists in risk communication would be well advised to examine the relationship between criteria for ethical decision making and the functions of risk communication on a case-by-case basis. At a minimum, simply presuming that the least ethically complex cases of risk communication are typical is unjustified.

Conclusion

Risk analysis and risk communication inevitably involve ethical considerations because to discuss the risks of something implicitly names it as risky. Human beings operate with a broad categorical distinction between those things that deserve attention and those quotidian practices that can be allowed to slide into the background of consciousness. In ordinary language, use of the word "risk" functions to mark this distinction. Hence, to mention risk is to imply that people have reason to avoid or at least to be mindful about the subject at hand. In risk communication, one is thus intimating that the audience has a reason to expect that they could have something to lose with regard to the topic under discussion.

The metaethics of risk shows that risk is at least weakly tied to agency, and agency is tied to responsibility. The technical conception that holds that risk just is a function of the form f(v_e, p_e) fails to take account of these basic points of grammar. At the same time, some advocates of social construction may lay entirely too much stress on agency. Lay speakers can become quite interested in functions of the form f(v_e, p_e), once their attention has been captured by a communication that tells them that there is something to which it is important to pay heed. This suggests that one place for attention to ethics in risk communication is with regard to the timing of a risk message and, furthermore, to the rationale for even undertaking risk communication when there is nothing that an audience can or should do with information about f(v_e, p_e).

The other main point is that risk communications generally presume a utilitarian or Kantian rationale, whether communicators are cognizant of it or not. On one hand, it is striking how little attention is given to ethical theory in the literature of risk communication. For the most part, those scholars who study attitude formation and behavior change in communication practices related to risk display virtually no interest in the ethical question of whether using communication techniques to influence attitudes and change behavior is ethically justified. On the other hand, this fact may not be so striking when one realizes that many of the risk communication efforts that have been mounted clearly are justified and could be shown to be justified using any of the strategies for critical thinking that ethics provides. Nevertheless, I hope that I have shown that such happy conditions of easy justification are not universally the case.

Utilitarian and Kantian moral frameworks present somewhat different ways of thinking ethically about risk and risk communication. They suggest that in some cases, we think that getting trade-offs right is the goal, while in others, we think that the goal of analyzing and then communicating about risk is simply one of placing people in a position where they can make their own decisions. And then there are the cases where these two goals get tangled up, and it is either controversial or perhaps simply confusing as to what we are trying to achieve. On top of this, there are tangles and confusions that arise in connection with the message that a risk communication is expected to contain, regardless of its overarching purpose. Are risk messages simply supposed to tell us how the world works so that we can better estimate the probability of harm or loss? Or are they intended to grab our attention, shake us by the shoulders, and engage us in a deliberative search for what we should do? Although forays into philosophical ethics probably do too little to help would-be risk communicators and scholars of risk communication answer these questions, it may nonetheless be important to shake them a bit by the shoulders and provoke a bit more thoughtful and deliberative inquiry into the nature and function of communicating scientific messages about risk.

Declaration of Conflicting Interests

The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding

The author received no financial support for the research, authorship, and/or publica-tion of this article.

Notes

1. The discipline of philosophy is replete with more finely grained approaches to ontological categories. One view holds that no physical entities exist (phenomenalism), while another holds that no entities of interest in modern science actually exist as objects independent of scientific activity (antirealism). It is thus certainly possible to defend more radical versions of social construction than the position taken here. My view is characteristic of pragmatism, which emphasizes the functional significance that ontological categories have outside the context of pure philosophical theory building.


2. Of course, words are “the upshot of people talking and acting in certain patterned ways.” I assume that social construction is not simply a broad ontological claim about language or abstract ideas. Such an analysis would vitiate the importance of claiming that race is a social construction.

3. The positive outcome (e.g., winning the flip) contributes one half of +v to the expectation, and the negative outcome contributes one half of −v, making the total expected value of a standard coin flip equal to zero (spelled out in the equation following these notes).

4. Many contemporary scholars advocating an application of Kantian principles find fault with the doctrine of informed consent. The development given above is illustrative of the contrast between utilitarian and Kantian ethics. It is not intended to serve as an exhaustive summary of the extensive ethical debates that have followed in the wake of the Belmont Report.
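A one-line restatement of the arithmetic in note 3, with v standing for the stake of a fair coin flip and each outcome carrying probability one half:

\[
E \;=\; \tfrac{1}{2}(+v) \;+\; \tfrac{1}{2}(-v) \;=\; 0
\]

This is just the expected-value form f(v_e, p_e) applied to the flip’s two equiprobable outcomes and summed.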


Bio

Paul B. Thompson occupies the W. K. Kellogg Chair in Agricultural, Food and Community Ethics at Michigan State University. He has contributed over 160 articles and book chapters on ethical issues connected with technological innovation, implementation, and adaptation, many focused on technologies in the food system. The topic of technological risk has been a central focus of his work since he completed a 1980 dissertation on nuclear power plants. His 2007 book Food Biotechnology in Ethical Perspective contains the most fully developed synthesis of his work on risk and risk communication.