
ABSTRACT This paper discusses the distribution of certainty around General Circulation Models (GCMs) – computer models used to project possible global climatic changes due to human emissions of greenhouse gases. It examines the trope of distance underpinning Donald MacKenzie’s concept of ‘certainty trough’, and calls for a more multi-dimensional and dynamic conceptualization of how uncertainty is distributed around technology. The certainty trough describes the level of certainty attached to particular technoscientific constructions as distance increases from the site of knowledge production, and proposes that producers of a given technology and its products are the best judges of their accuracy. Processes and dynamics associated with GCM modeling challenge the simplicity of the certainty trough diagram, mainly because of difficulties with distinguishing between knowledge producers and users, and because GCMs involve multiple sites of production. This case study also challenges the assumption that knowledge producers always are the best judges of the accuracy of their models. Drawing on participant observation and interviews with climate modelers and the atmospheric scientists with whom they interact, the study discusses how modelers, and to some extent knowledge producers in general, are sometimes less able than some users to identify shortcomings of their models.

Keywords atmospheric sciences, certainty trough, general circulation models, global climate change, simulation technology, uncertainty

Seductive Simulations?

Uncertainty Distribution Around Climate Models

Myanna Lahsen

‘[D]istance from the cutting edge of science is the source of what certainty we have’, Harry Collins (1985, quoted in MacKenzie, 1990: 370) has stated, arguing that there is an inverse relationship between the level of certainty attached to any particular scientific construction and proximity to its site of construction. Donald MacKenzie’s (1990) analysis of antiballistic missile technology extends Collins’ (1985: 145) slogan that ‘distance lends enchantment’ to technology. MacKenzie (1990: 370–71) proposes a distinction between two ‘quite different’ kinds of uncertainty about missile accuracy. The first kind of uncertainty is that of the ‘alienated and those committed to an alternative weapon system’. The second, ‘more private and more limited’ form of uncertainty is that of ‘those closest to the heart of the production of knowledge of accuracy’ (p. 371). Knowledge producers themselves are ‘certainly not critics’ of their own technology, writes MacKenzie; they do believe that missile accuracy – the ability to render missiles accurate in their targeting of enemy sites – is in principle knowable. However, because of their intimate awareness of the ‘human vagaries’ in the production of the missile technology, they are not inclined to defend accuracy as fact (p. 366). Between these two positions are the ‘program loyalists’ and those who simply ‘believe what the brochures tell them’ (p. 371).

MacKenzie represents these divergent positions in what he refers to as a ‘schematic and impressionistic’ diagram in the shape of a trough (Figure 1). The attitude of ‘program loyalists’ is represented by a dip (thus producing a ‘trough’) in the connecting line representing the higher levels of uncertainty found among the two other groups, the knowledge producers and the alienated. MacKenzie (p. 371) suggests that this diagram ‘might describe . . . the distribution of certainty about any established technology’.

After 40 years of development, computerized models of climate dynamics are an established – albeit always evolving – technology. Such models have become a cornerstone for the atmospheric sciences and for the bodies of evidence supporting claims about human-induced climate change. Although model developers and defenders do acknowledge many uncertainties, as MacKenzie’s diagram would predict, I argue that the certainty trough only partly captures the distribution of uncertainty around climate models and their projections of future human-induced climate changes.

MacKenzie may have intended the certainty trough to serve as little more than a heuristic. However, the diagram deserves critical analysis because of the widespread influence of MacKenzie’s work, and of the certainty trough in particular on analyses of the social dimensions of computer models (Shackley & Wynne, 1996; Jasanoff & Wynne, 1998; Smyth, 2001; Jasanoff, 2003).1

FIGURE 1
MacKenzie’s certainty trough (MacKenzie, 1990: 372). [Figure not reproduced: a schematic curve plotting uncertainty (high to low) against three positions – those directly involved in knowledge production; those committed to the technological institution/program but users rather than producers of knowledge; and those alienated from institutions/committed to a different technology – with the dip in the curve at the middle position.]
Source: MacKenzie, 1990. © MIT Press.


When applied to climate simulation technology, MacKenzie’s framework does not capture the complex dynamics shaping when and how climate models are accepted, questioned, or rejected. In what follows, I identify the limited applicability, in this case, of the certainty trough and the trope of distance. I identify four interrelated factors that complicate the certainty trough: (1) General Circulation Models (GCMs) are produced at multiple sites dispersed in time and space; (2) GCM model developers are generally also users, and some other users also may be said to be knowledge producers; (3) identification of some model inaccuracies requires empirical understanding more prevalent among empiricists than modelers; and (4) social and psychological factors can reduce modelers’ ability to retain critical distance from their own creations. The implication of (1), (2), and (3) is that there is no single vantage point from which to best evaluate the performance of a single complex GCM. The last point (4) highlights the need to develop the certainty trough and the concept of distance so that the model also accounts for the influence of professional investments, social networks, and the broader political controversy that surrounds the GCMs and the issue of climate change.

The paper draws from ethnographic fieldwork carried out over 6 years (1994–2000) among US climate scientists at institutions where climate modeling is carried out. The research analyzed the social, cultural, and political dimensions underpinning disputes among scientists regarding the present and future reality of human-induced climate change, and the severity of its impacts. The author was based at the US National Center for Atmospheric Research (NCAR) throughout this time, an institution that hosts numerous important modeling efforts, and traveled to other US modeling institutions around the country to study scientists there. The research involved participant observation supplemented by more than 100 semi-structured interviews with atmospheric scientists, about 15 of whom were climate modelers. Some of the latter were interviewed numerous times. Participant observation over the 6-year period involved conversations with many more modelers, and provided the opportunity to engage and listen in on their formal and informal conversations.

General Circulation Models

Computer simulations are central tools in areas of global change science where wholly empirical methods are infeasible (Edwards, 1996). They are part of a broader trend in science towards simulation technology that allows scientific investigation of, and experimentation with, complex systems without some of the time and access constraints of traditional experimentation. Scientists experiment with modeled earth systems in order to understand how human activities may alter the global climate and its associated natural and social processes.

GCMs are simulations used to model climate dynamics.2 GCMs use computations to simulate complex interactions between the components of the earth system: time-dependent three-dimensional flows of mass, heat, and other fluid properties. The models divide the earth system into three-dimensional grids, mathematically representing the physical movement of gaseous or liquid masses and the transfer, reflection and absorption of energy. They compute these processes at each grid-point at appropriate time intervals and repeat and speed up these processes to simulate future states of the climate system. The more complex GCMs include anthropogenic effects and couple oceanic, atmospheric and land-surface processes.
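To make the grid-and-time-step description above concrete, the following minimal Python sketch advances a single temperature field on a coarse latitude–longitude grid, mixing each box with its neighbors and adding a uniform forcing at every step. It is purely illustrative: the grid size, the diffusion and forcing terms, and the wrap-around boundary treatment are invented for this example and are not taken from any actual GCM, which solves coupled fluid-dynamics equations for many variables at once.

# Illustrative toy only, not GCM code: one field, stepped forward on a grid.
import numpy as np

n_lat, n_lon = 18, 36                  # coarse grid, roughly 10 x 10 degree boxes
temp = np.full((n_lat, n_lon), 288.0)  # initial surface temperature (K)
forcing = 0.01                         # hypothetical warming added each step (K)
diffusion = 0.1                        # mixing coefficient between neighboring boxes

def step(field):
    """Advance the field one time step: mix with neighbors, then add forcing."""
    neighbors = (np.roll(field, 1, axis=0) + np.roll(field, -1, axis=0) +
                 np.roll(field, 1, axis=1) + np.roll(field, -1, axis=1)) / 4.0
    return field + diffusion * (neighbors - field) + forcing

for _ in range(1000):                  # repeat the grid-point computation many times
    temp = step(temp)

print(temp.mean())                     # global-mean temperature after the simulated period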

GCMs are based on the numerical weather prediction computer models that also provide the forecasts and visual displays commonly shown on television weather programs. Climate models can be programmed to provide similar visual displays of projected temperature changes and other climatological variables mapped onto geographical maps. For global GCMs, such maps represent the contours of land masses and oceans covering the entire globe. As in the case of television weather maps, the global climate projections frequently use colors to represent temperature differences, indicating warm temperatures with red and orange, and colder temperatures with shades of blue and green.

Climate modelers come from a variety of disciplines. The majority of them have degrees in atmospheric science, mathematics, and physics. Currently, two dozen or so scientific groups around the world use GCMs to consider the potential consequences of anthropogenic greenhouse gases in the atmosphere. The resources required to run the increasingly complex climate models are so extensive and expensive that relatively few countries and institutions can afford them. Most of these countries (for example, England, Germany, and Japan) have chosen to focus efforts on a single national model. The USA, however, has numerous modeling efforts, with no single national model. Consequently, model groups compete against each other for access to research funds from national agencies such as the US National Science Foundation, the Environmental Protection Agency, the National Oceanic and Atmospheric Administration, and the Department of Energy.

The Epistemology of Models

The biggest problem with models is the fact that they are made by humans who tend to shape or use their models in ways that mirror their own notion of what a desirable outcome would be. (John Firor [1998], Senior Research Associate and former Director of NCAR, Boulder, CO, USA)

In climate modeling, nearly everybody cheats a little. (Kerr, 1994)

Climate models are impressive scientific accomplishments with importance for science and policy-making. They also have important limitations and involve considerable uncertainties. The present discussion focuses on uncertainties about the realism of climate simulations – rather than the models’ significant strengths – in order to highlight features of models that are overlooked when their output is taken at face value.


Climate simulations are based on the assumption that nature can be quantified and that it constitutes a sufficiently deterministic system that therefore, in principle, can be forecast far into the future (Shackley & Wynne, 1996). Whether or not long-term climate predictions are reliable is an open scientific question, however (Somerville, 1996). Some skeptics argue that indeterminacies perturb the climate system to the point of always frustrating attempts at long-term forecasting (Wiin Nielsen, 1987).

Modelers generally agree that the climate system is a chaotic system in both a technical and practical sense, rendering short-term weather patterns unpredictable beyond a few weeks. Nevertheless, they do believe that certain predictions can be made for the climate system, and that the models ‘get the sign right’, meaning that they are accurate in the overall effect they predict: that increased concentrations of greenhouse gases in the atmosphere will result in net warming rather than cooling. Asked about this, one climate modeler simply answered: ‘when you heat a kettle it doesn’t get cold’. By contrast, at least a few atmospheric scientists I interviewed thought it unlikely, but not impossible, that the models might be getting the sign wrong.

Paul Edwards (1999, 2001) has analyzed the blurred and elusive boundaries between models and observational data in global climate research. Edwards describes the uncertain relation between global climate simulations and the data they integrate and against which their accuracy is checked. Some of these uncertainties are quantifiable and manageable by means of empirical and computational improvements, while others represent ‘unquantifiable, irreducible epistemological limits related to inductive reasoning and to the nature of model-based global science’ (Edwards, 1999: 439).

To gauge the realism of long-term GCMs that project climate conditions many years into the future, scientists initially run the models for past periods, in order to compare the simulations against empirical data. This practice – often referred to as ‘historical forecasting’ – faces important challenges as well, due to the lack of independent, reliable, consistent, and global data against which to check the GCM output. In the absence of such data, modelers often gauge any given model’s accuracy by comparing it with other models. However, the different models are generally based on the same equations and assumptions, so that agreement among them may indicate very little about their realism.
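The comparison logic behind ‘historical forecasting’ can be sketched as follows. The series below are synthetic stand-ins (no real observations or model output are used); the point is only to illustrate the two checks the text describes: fit against an observed record, and agreement between models that may share the same assumptions.

# Hedged sketch of a hindcast check; all data are synthetic stand-ins.
import numpy as np

years = np.arange(1900, 2001)
observed = 0.006 * (years - 1900) + np.random.normal(0, 0.1, years.size)  # stand-in record
model_a  = 0.007 * (years - 1900) + np.random.normal(0, 0.1, years.size)  # hindcast from model A
model_b  = 0.007 * (years - 1900) + np.random.normal(0, 0.1, years.size)  # hindcast from model B

rmse = np.sqrt(np.mean((model_a - observed) ** 2))     # fit to the observed record
inter_model = np.corrcoef(model_a, model_b)[0, 1]      # agreement between the two models

print(f"RMSE against observations: {rmse:.2f}")
print(f"Model-model correlation:   {inter_model:.2f}")
# A high model-model correlation says little about realism if the models share
# the same equations and assumptions, as the text notes.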

Model uncertainties are a function of a multiplicity of factors. Among the most important are limited availability and quality of empirical data and imperfect understanding of the processes being modeled. As a result of data limitations, GCMs are used in order to ‘massage’ the very data sets fed into the GCMs in the first place to render them consistent and broadly applicable (Edwards, 1999, 2001). Thus, the GCMs are used to fill in and smooth data sets derived from geographically dispersed measuring stations, to render them fit for the process of producing and validating GCMs. This circularity characterizes processes of model production and validation, which ideally should be kept separate.
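A schematic illustration of this circularity, assuming a hypothetical model ‘first guess’ field and a handful of invented station values: the model field fills the gaps in a sparse observational grid, and the blended product is what later serves to evaluate models of the same family.

# Schematic only: model-derived values fill gaps in a sparse station grid.
import numpy as np

model_field = np.full((18, 36), 288.0)      # model 'first guess' everywhere (K)
station_obs = np.full((18, 36), np.nan)     # sparse station measurements
station_obs[5, 10] = 290.2                  # a few hypothetical stations
station_obs[12, 30] = 285.7

blended = np.where(np.isnan(station_obs), model_field, station_obs)
# 'blended' now mixes observation and model; using it later to validate the
# same model family is the circularity the text points to.
print(blended.mean())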

Unable to calculate atmospheric changes everywhere, because of limited observational data and computer power, GCMs break the atmosphere into a manageable number of blocks (grid boxes) and calculate relevant processes within each block. The region represented by each block ranges from 100 to 500 km2 in GCMs, and commonly includes about 10 km of the atmosphere above Earth. Each block is partitioned into multiple vertical layers starting with the lowest 2 m of the atmosphere, at the earth’s surface, and proceeding above the atmosphere where pressure approaches zero. In the process of dividing atmosphere and oceans into such large three-dimensional grid boxes, GCMs do not resolve climate processes and factors that are smaller than the grid’s scale, even though these factors variously cool and warm the climate system. These unresolved factors include key hydrological processes, such as cloud formations and their movement in the atmosphere, as well as eddy currents, evaporation, and surface exchanges.

Instead of being empirically represented, sub-grid and insufficiently understood phenomena are ‘parameterized’. Rather than actually incorporating such phenomena into the models based on physical measurements, modelers treat them indirectly by seeking to include their estimated climatic effects. A successful parameterization requires understanding of the phenomena being parameterized, but such understanding is often lacking. For example, the intricate microphysical processes that make up clouds are not well understood, nor is the overall climatic effect of clouds. Depending on their composition – which varies in time and space – the feedback processes created by clouds might trap heat around the earth or reflect radiation away from it, thus either warming or cooling the planet. How best to parameterize various processes is a contentious subject among modelers and model analysts.
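As a hedged sketch of what a parameterization can look like in code, the function below estimates a sub-grid quantity (cloud fraction) from a resolved grid-box quantity (relative humidity). The functional form, the threshold rh_critical, and the exponent are invented for illustration and are not taken from any particular model.

# Illustrative parameterization: a sub-grid effect estimated from resolved values.
import numpy as np

def cloud_fraction(relative_humidity, rh_critical=0.8):
    """Estimate grid-box cloud fraction from resolved relative humidity."""
    excess = np.clip((relative_humidity - rh_critical) / (1.0 - rh_critical), 0.0, 1.0)
    return excess ** 2   # tunable shape: one of many defensible choices

print(cloud_fraction(np.array([0.5, 0.85, 0.95, 1.0])))
# Changing rh_critical or the exponent changes the simulated climate, which is
# one reason parameterization choices are contentious among modelers.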

When confronted with limited understanding of how the climate system works, modelers seek to make their models conform to how the climate system is expected to behave. The adjustments may ‘save appearances’ (Baker, 2000) without integrating precise understanding of the causal relationships the models are intended to simulate. For example, a particular tuning or tweaking technique is called ‘flux adjustment’ or ‘flux correction’. It was used in the past for large coupled models at almost all the major international modeling centers. Flux correction has generally been considered a necessary adjustment to get model outputs to conform more closely with reality. In the coupled models (for example, ocean–atmosphere models), the ocean might tend to ‘drift’ away uncontrollably, a consequence of the linear rather than non-linear feedback structure of the model. In addition, large regions of modeled oceans have at times turned into solid ice.
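The logic of flux adjustment can be caricatured in a few lines: a prescribed correction term is added to a (here deliberately biased) air–sea heat flux so that the coupled state stops drifting. All numbers below, including the bias and the conversion factor, are invented for illustration and do not describe any real coupled model.

# Schematic flux-adjustment sketch; values are invented for illustration.
FLUX_CORRECTION = 3.0            # prescribed adjustment (W m^-2), fixed in advance

def coupled_step(sst, use_correction):
    """Advance sea-surface temperature one step under a biased air-sea flux."""
    simulated_flux = -3.0        # hypothetical systematic bias in the coupled flux (W m^-2)
    flux = simulated_flux + (FLUX_CORRECTION if use_correction else 0.0)
    return sst + 0.01 * flux     # crude conversion of flux imbalance into temperature change

drifting, corrected = 288.0, 288.0
for _ in range(500):
    drifting = coupled_step(drifting, use_correction=False)
    corrected = coupled_step(corrected, use_correction=True)

print(drifting, corrected)       # the uncorrected ocean drifts; the corrected one stays near 288 K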

Oreskes et al. (1994) have noted the difficulty of distinguishing what is and is not based on existent and validated knowledge in exploratory climate modeling. They write that when there is a lack of full access in time or space to the phenomena of interest,

. . . there are certain similarities between a work of fiction and a model: [J]ust as we may wonder how much the characters in a novel are drawn from real life and how much is artifice, we might ask the same of a model; How much is based on observation and measurement of accessible phenomena, how much is based on informed judgment, and how much is convenience? (Oreskes et al., 1994)

As is the case of scientific knowledge generally, models cannot be verified in the sense of having their truth status confirmed with certainty (Oreskes et al., 1994; Shackley & Wynne, 1995; Norton & Suppe, 2001).3 Identifying model errors is particularly difficult in the case of simulations of complex and poorly understood systems such as the earth’s atmospheric system, simulations that sometimes extend hundreds of years into the future. Whereas the accuracy of weather forecasts is established within days, climate forecasts may only be proven wrong in decades or centuries. Moreover, though computer resources have greatly expanded in the last few years, at the time of the research reported here, modelers lacked sufficient computer capacity and time to perform a large number of slightly varied runs of the same complex GCM model, which was required for the creation of error bars, and for testing, diagnosis, and documentation of model characteristics (National Research Council, 1998). For related reasons, computer models (model codes) are seldom subjected to peer review (Bankes, 1993) and large-scale model studies are never replicated in their entirety by other scientists, because this would require them to reimplement the identical conceptual models. Replication in science is generally difficult (Collins, 1985; Collins & Pinch, 1993), and in the field of climate modeling, the exact reproduction of a climate model outcome will never happen due to the ‘internal model variability’ that results in chaotic dynamic perturbations. The nearest climate models come to close scrutiny of their subcomponents is in the comparison of international peer-reviewed studies. This process is also the closest the models come to peer review. A kind of replication also occurs when model groups ‘benchmark’ a code to ensure near-exact (replicable) conditions. This practice is performed when model developers integrate code developed by other groups on other platforms (that is, other computers) with their own platform.
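The ensemble practice mentioned above, running the same model repeatedly from slightly perturbed initial conditions to obtain an error bar, can be sketched with a toy stand-in for internal variability (a chaotic logistic map rather than a climate model); the perturbation size and the number of members below are arbitrary.

# Hedged sketch of an ensemble 'error bar'; the logistic map stands in for a
# model with chaotic internal variability, it is not a climate model.
import numpy as np

def toy_run(x0, steps=100, r=3.9):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)    # chaotic iteration: tiny initial differences grow
    return x

rng = np.random.default_rng(0)
members = [toy_run(0.5 + 1e-6 * rng.standard_normal()) for _ in range(20)]

print(f"ensemble mean:   {np.mean(members):.3f}")
print(f"ensemble spread: {np.std(members):.3f}")   # a crude 'error bar'
# Doing this with a full GCM requires many complete runs, which is the
# computational cost the text says was out of reach at the time.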

The Distribution of Uncertainty Around General Circulation Models

In his historical study of physics, Peter Galison observes that ‘the computer began as a “tool”, an object for the manipulation of machines, objects, and equations . . . [b]ut bit by bit . . . came to stand not for a tool, but for nature’ (Galison, 1997: 777). His account evokes theoretical discussions of the role of simulations in postmodernity (Poster, 1988; Baudrillard, 1993). Galison does not discuss variations in levels of certainty attributed to simulation technology and output, a challenge taken up by Simon Shackley and Brian Wynne (1995, 1996) in their analysis of climate models and their interface with policy.

General Circulation Model Users

Shackley and Wynne draw from MacKenzie’s certainty trough to explain the distribution of uncertainty around GCMs. They credit modelers with being more aware of their own models’ uncertainties compared to users, and as inclined to freely discuss these uncertainties, ‘at least among themselves’ (Shackley & Wynne, 1996). Shackley and Wynne suggest that awareness of flaws, limitations, and uncertainties associated with GCMs and their output diminishes as these technologies travel from the production site to other areas of science and society where they are used. Scientists (‘impacters’) who use GCM output as input in other models that simulate socioeconomic and environmental impacts of GCMs’ projections of future climate change serve as a case in point. Shackley and Wynne describe the impacters in terms of MacKenzie’s certainty trough, suggesting that their location further from the site of GCM construction renders them less aware than modelers of the uncertainties associated with the output.

Shackley and Wynne’s representation of the distribution of uncertainty resonates with the perspective of model developers. In interviews and conversations during my ethnographic research, modelers typically identified the problem of users’ misuse of their model output, suggesting that the latter interpret the results too uncritically. Model developers profess that they, as one of them put it, take the models ‘seriously but not literally’ (Somerville, 1996). Some among them even publicly criticize the most complex modeling efforts. For instance, veteran US-based climate modeler Syukuro Manabe told The New York Times that the models have ‘gone too far’ and that ‘[p]eople are mixing up qualitative realism with quantitative realism’ (Revkin, 2001). His assertion supports the claim that modelers are intimately aware of the limitations of the models they themselves help produce, but leaves ambiguous exactly who the ‘people’ are to whom he refers: do they include climate modelers or only users – and which types of users, with what kinds of relationship to model development?

GCM developers’ discourses support Shackley and Wynne’s analysis of the distribution of certainty. In what follows, however, I identify limitations in the applicability of the certainty trough and the trope of distance when applied to simulation technology.

Several important dimensions of climate modeling complicate the applicability of the certainty trough in particular, and the trope of distance in general. First, model developers typically are also model users. Because of the complexity of the models and of the phenomena they seek to represent, GCM model developers build only parts of a model, integrating sub-models and representational schemes (‘parameters’) developed by other modeling groups. Even scientists (‘model users’) who are not primarily model developers typically modify the models they have obtained from elsewhere. Moreover, much research in the atmospheric sciences today consists of modifying a subset of variables in models developed elsewhere. This complicates clear-cut distinctions between users and producers of models.

Second, the difficulty with distinguishing developers from users also complicates clear identification of the exact site of production; any complex GCM involves multiple sites of production. Any single model developer acts at a distance from at least some of the multiple sites in which models and their components are developed.

Third, increased specialization has reduced the amount of time model developers have to study the atmosphere by means of empirical data. Model accuracy is gauged by checking models against empirical knowledge of how the real atmosphere and larger Earth system behaves. For this reason, some empiricists are perhaps in the best position to identify some model inaccuracies. These empiricists may also be described as users and even, in some respects, as alienated from the GCMs.

Fourth, climate modelers’ psychological and social investments in models and the social worlds of which they are a part can at times reduce their critical distance from their own creations. Modelers sometimes identify with their own models and become invested in their projections, which in turn can reduce sensitivity to their inaccuracy. Such personal and professional investments are not unique to the field of modeling.

Because of the above factors, model producers are not always willing or able to recognize weaknesses in their own models, contrary to what is suggested by the certainty trough and by analyses of climate models drawing on that framework. Below, I argue that some users – the empiricists who help validate the models – may be better positioned to identify inaccuracies than are model developers.

Multiple Sites of Production; Integration of Model Development and Model Use

The certainty trough best describes innovations with unitary functions and unitary sites of production. It applies poorly to GCMs because their production is not carried out at a single site and because GCM creators are also commonly model users.

In the first decades of climate modeling (from the 1960s until the 1980s), individual model developers constructed their models from scratch. Today, no scientist single-handedly develops a complex GCM model from the bottom up. In any given modeling institution, there is typically a division of labor. For instance, a coupled ocean–atmosphere model typically involves at least three groups of scientists: one group of scientists focuses on the development of an atmospheric model, another group develops an ocean model, and a third group couples the two together. Moreover, developers of each of the sub-models often build upon and integrate elements from other sub-models (for example, convection schemes) developed by other groups. As a result, no single person deeply knows, or is ‘close to’, all aspects of a given GCM.

Modelers’ empirical knowledge of the physical atmosphere is limited by the fact that they have to devote the vast majority of their time to studying their models rather than independent data sets and actual weather phenomena. MacKenzie’s diagram is too simple conceptually to account for the fact that complex GCMs involve a multiplicity of production sites and that model developers also are model users. Especially in the context of ‘big science’ (Galison & Hevly, 1992; Gibbons et al., 1994), actors are not easily captured by singular and unchanging labels (‘knowledge producers’, ‘users’, ‘alienated’) or characteristics (as knowledgeable, under-critical, or over-critical). Since producer/users vary in their familiarity with different parts of the same overall model, no single person intimately knows all dimensions of any given GCM, or the associated uncertainties. In other words, the overall distribution of certainty varies depending on the sub-model or issue in focus. Moreover, as I discuss in the sections that follow, there may be variation in the distribution of certainty through time: the same person may think differently of the same technology at different moments in time.

Knowledge Producers – Are They the Most Reliable Evaluators of Their Own Technology?

Even disregarding the fact that model developers often are users as well and work at some distance from a given model’s multiple sites of production, the certainty trough does not account for the possibility that knowledge producers may not be the most reliable evaluators of their own work. MacKenzie (1990: 366) indicates some awareness of this when he acknowledges that knowledge producers ‘certainly [are] not critics’ of their own technology.

However, more subtle indications of knowledge producers’ limitations may be overshadowed by MacKenzie’s description of them as being intimately aware of the shortcomings of the products of their own labor. Shackley and Wynne reinforce this view when they draw upon MacKenzie’s certainty trough to discuss impacters’ less critical understanding of the GCMs, while suggesting that modelers openly discuss their models’ shortcomings among themselves, and also when proposing that modelers’ tendency to downplay uncertainty is a function of strategic choice.

According to Shackley and Wynne, modelers sometimes deliberately present their models in ways that suggest and encourage exaggerated faith in their accuracy. The authors identify a duality in climate modelers’ discourse, observing that they shift between strong claims to scientific authority (models as ‘truth machines’) and more modest claims about models as aids to thinking about the world (models as ‘heuristics’) (Shackley & Wynne, 1995, 1996). Though modelers lack a conceptual basis for knowing whether the long-term climate is predictable, their discourse often moves from describing long-term global climate predictions as being possible in principle, but presently unrealized and uncertain, to suggesting that their models are in fact predictive. My interviews documented this tendency with testimony from modelers themselves, confirming my own observations.

Such oscillation may signal a need for a more radical reconceptualization, but Shackley and Wynne’s distinction between how modelers speak among themselves and how they speak to external audiences preserves the certainty trough, as does their portrayal of modelers’ shifting representations as a function of strategic choice. They suggest that modelers – keen to preserve the authority of their models – deliberately present and encourage interpretations of models as ‘truth machines’ when speaking to external audiences.

Shackley and Wynne thus identify an important aspect of modelers’ public communications. Like scientists in other fields, modelers might ‘oversell’ their products (as acknowledged in quotes presented below), because of funding considerations. In a highly competitive funding environment they have an interest in presenting the models in a positive light.

The centrality of climate models in politics can also shape how modelers and others who promote concern about climate change present them. GCMs figure centrally in heated political controversies about the reality of climate change, the impact of human activities, and competing policy options. In this context, caveats, qualifications, and other acknowledgements of model limitations can become fodder for the anti-environmental movement (Gelbspan, 1995, 1997; Helvarg, 1994; Lahsen, 1998b, 2005). Media-propelled political campaigns have erupted regularly since the early 1990s, often with help from public relations companies, politically conservative think tanks, and industry groups with interests in continued high-level dependence on fossil fuels. These organized opponents attack the scientific evidence of human-induced climate change – and hence climate models – to undermine policy measures designed to prevent or mitigate the problem (Lahsen, 1998b).

In such a charged political context, modelers learn to exercise care in how they present their models in public forums. The need for such care is sometimes impressed explicitly upon them by scientists who have experience in national and international climate politics. Speaking to a full room of NCAR scientists in 1994, a prominent scientist and frequent governmental advisor on global change warned an audience mostly made up of atmospheric scientists to be cautious about public expressions of reservations about the models. ‘Choose carefully your adjectives to describe the models’, he said, ‘Confidence or lack of confidence in the models is the deciding factor in whether or not there will be policy response on behalf of climate change.’ While such explicit and public references to the political impact of the science are rare (I only encountered this one instance during my fieldwork), a similar lesson is communicated in more informal and subtle ways. It is also impressed on many who witness fellow atmospheric scientists being subjected to what they perceive as unfair attacks in media-driven public relations campaigns, such as the case of the Lawrence Livermore National Laboratory (Livermore, CA) modeler Benjamin Santer, in the controversy over the 1995 Intergovernmental Panel on Climate Change (IPCC) report (Lahsen, 1998b).

It is thus correct to distinguish, as Shackley and Wynne do, between how modelers speak among themselves and how they speak to ‘external audiences’. Climate modelers and advisory scientists use strong claims – invoking models as ‘truth machines’ and downplaying uncertainty – in communications directed ‘outside’ of the modeling community, and such discourse does not necessarily reflect their more private discussions.

But is it accurate to suggest, as MacKenzie, Shackley, and Wynne appear to do, that the highest level of objectivity about a given technology is to be found among those who produced it? Shackley & Wynne (1996) write that ‘at least among themselves’, model producers typically acknowledge many uncertainties and indeterminacies in their models. Knowing exactly how modelers speak among themselves is complicated by an ‘outsider’s’ difficulty in directly observing unmediated, informal communication among modelers. However, my participant observation and interview data suggest that model producers are not always inclined, nor perhaps able, to recognize uncertainties and limitations with their own models.

With its trope of distance, the certainty trough may inadvertently reinforce – and perhaps draw persuasive power from – a common, idealized construct of scientists as autonomous knowers of truth. Shackley and Wynne use comparative and qualifying terms such as ‘likely’, ‘more’ (for example, ‘more aware’) and ‘quite’ (‘modelers are quite prepared to discuss model deficiencies’ [1995: 118]), ‘typically’ (‘Typically, the producers of knowledge acknowledge its many uncertainties . . . ’ [1996: 277]), ‘may’ (‘so practitioners may attribute greater certainty to knowledge from another specialty than the practitioners in the first specialty’ [1995: 114]). Nevertheless, the trope of distance embedded in their analysis suggests that, at sites where models are developed, people involved in their production know the models’ limitations in a broadly consistent and reliable way, even though the latter may be hidden or ignored when the models ‘travel’ out into the world.

Yet, as discussed earlier, GCMs are developed and modified at a multiplicity of sites dispersed in time and space; they are in fact composites of multiple production processes. As a result, any given part of a larger GCM is transparent only to some of the persons who helped develop it. No single person has deep knowledge of all of the subcomponents and assumptions that make up a given GCM. To function properly, it would seem that the certainty trough should be applied not to a single overall GCM, but to each of the innumerable subcomponents that comprise it.

Similar to Shackley and Wynne’s portrayal of modelers as being able to identify and discuss model weaknesses among themselves, Deborah Dowling’s (1999) construct of the ‘competent professional’ arguably serves to preserve an idealized image of how scientists use and relate to simulations. On the basis of interviews with scientists in a variety of fields, Dowling describes how the ‘competent professional’ relates to the simulation technology they develop and use: they have a ‘clear, analytic grasp’ (p. 266) of the mathematics built into the program; they ‘get into the code’ and ‘know the strengths, the approximations, the difficult points, and the pits you can fall into’ (ibid.). In the words of a pharmaceutical chemist quoted by Dowling, responsible users try to ‘break down that sense that, because this number came out on the printed form, it has got to be right’ (Dowling, 1999: 266). Dowling does not discuss the possibility that competent scientists at times may fail to live up to this standard. Her account of modelers’ relation to their own models recalls Robert Merton’s norms of science (Merton, 1973 [1942]), which have been criticized as ideals serving an ideology that promotes scientists’ self-interest (Mulkay, 1976).

MacKenzie may also verge on idealizing knowledge producers in his contrast between them, on the one hand, and users and critics on the other. Users’ and critics’ credibility is undermined in contrast to that of knowledge producers. In the certainty trough diagram, users are presented as under-critical and critics as over-critical, in contrast to producers, who have detailed understanding of the technology’s strengths and weaknesses. In this framework, which valorizes closeness to the site of knowledge production, the critics’ relatively greater distance from the site of knowledge production also weakens their credibility. MacKenzie portrays ‘users’ as less knowledgeable than producers, and lumps them with persons who uncritically believe the most idealized representations of the models (those who simply ‘believe the brochures’).

MacKenzie notes that users sometimes consider knowledge producers to be ‘liars’ and demand that their results be subjected to elaborate and expensive independent testing (MacKenzie, 1990: 373). This suggests that at least some users maintain critical distance from the technology in question, and that a great deal of heterogeneity is lost when such users are lumped together with others who are uncritically committed to the technological program. The possibility of critical distance on the part of (some) users is not well reflected in the certainty trough diagram. In contrast to the critics’ ‘alienation’ from the technology in question, and their ‘commitment’ to alternative technology, users are assumed to be predisposed to accept the technology, because of their ‘loyalty’ to the program. MacKenzie appears to imply that their judgment is based relatively more on external interests and possibly emotional factors, as opposed to intimate, technical, and objective understanding. MacKenzie’s description would seem to imply that knowledge producers, by contrast, are not similarly swayed by external and subjective factors.

By contrast, my case study indicates the extent to which the certainty trough overlooks variation over time in knowledge producers’ willingness and ability to recognize inaccuracies in their own models. In what follows, I present interview data suggesting that modelers do not necessarily like to discuss their models’ weaknesses even among themselves, and that in some respects they may be less inclined and less able to identify them than are some other scientists. I also present evidence of greater heterogeneity in the distribution of certainty within the categories of ‘knowledge producers’ and ‘users’ than the certainty trough assumes.

The Power of Simulations

During modelers’ presentations to fellow atmospheric scientists that I attended during my years at NCAR, I regularly saw confusion arise in the audience because it was unclear whether overhead charts and figures were based on observations or simulations. In one such presentation about the role of clouds in the climate system, observational data were compared against model simulations. I grew confused as to which charts represented empirical data and which represented simulations. I realized that I was not alone in my confusion when scientists in the audience stopped the presenter to ask for clarification as to whether the overhead figures were based on observations or model extrapolations. The presenter specified that the figures were based on models, and then continued his presentation.

Sometimes such confusion resulted from imprecise communication on the part of modelers. It is understandable that modelers easily forget to preface each of their representations with the words ‘simulated’ or ‘modeled’ (‘the simulated ocean’, ‘the modeled ocean–atmosphere dynamic’, and so on). At other times, modelers may have been strategic when alternating between speaking of their models as heuristics and presenting them as ‘truth machines’. However, the oscillation also may reflect how some modelers think and feel about their models at particular moments when they fail to maintain sufficient critical distance. In interviews, modelers indicated that they have to be continually mindful to maintain critical distance from their own models. For example:

Interviewer: Do modelers come to think of their models as reality?

Modeler A: Yes! Yes. You have to constantly be careful about that [laughs].

He described how it happens that modelers can come to forget known and potential errors:

You spend a lot of time working on something, and you are really trying to do the best job you can of simulating what happens in the real world. It is easy to get caught up in it; you start to believe that what happens in your model must be what happens in the real world. And often that is not true . . . The danger is that you begin to lose some objectivity on the response of the model [and] begin to believe that the model really works like the real world . . . then you begin to take too seriously how it responds to a change in forcing. Going back to trace gases, CO2 models – or an ozone change in the stratosphere: if you really believe your model is so wonderful, then the danger is that it’s very tempting to believe that the way it responds to a change in forcing must be right. [Emphasis added]

This modeler articulates that the persuasive power of the simulations can affect the very process of creating them: modelers are at times tempted to ‘get caught up in’ their own creations and to ‘start to believe’ them, to the point of losing awareness about potential inaccuracies. Erroneous assumptions and questionable interpretations of model accuracy can, in turn, be sustained by the difficulty of validating the models in the absence of consistent and independent data sets.

Critical distance is also difficult to maintain when scientists spend the vast majority of their time producing and studying simulations, rather than less mediated empirical representations. Noting that he and fellow modelers spend 90% of their time studying simulations rather than empirical evidence, a modeler explained the difficulty of distinguishing a model from nature:

Modeler B: Well, just in the words that you use. You start referring to your simulated ocean as ‘the ocean’ – you know, ‘the ocean gets warm’, ‘the ocean gets salty’. And you don’t really mean the ocean, you mean your modeled ocean. Yeah! If you step away from your model you realize ‘this is just my model’. But [because we spend 90% of our time studying our models] there is a tendency to forget that just because your model says x, y, or z doesn’t mean that that’s going to happen in the real world.

This modeler suggests that modelers may talk about their models in ways they don’t really mean (‘you don’t really mean the ocean, you mean your modeled ocean . . . ’). However, in the sentence that immediately follows, he implies that modelers sometimes actually come to think about their models as truth-machines (they ‘forget to step away from their models to realize that it is just a model’; they have a ‘tendency to forget’).

The following interview extract arguably reflects such an instance of forgetting. This modeler had sought to model the effects of the possible ‘surprise’ event of a change in the ocean’s climate-maintaining thermohaline circulation. On the basis of his simulation he concluded that the widely theorized change in the ocean’s circulation due to warmer global temperatures is not likely to be catastrophic:

Modeler C: One of the surprises that people have been worrying about is whether the thermohaline circulation of the oceans [the big pump that could change the Gulf Stream] shuts off . . . . If the models are correct, the effect even of something like that is not as catastrophic as what most people think. You have to do something really nasty to [seriously perturb the system] . . . The reality is, it really is an ocean thing, it is basically an ocean phenomenon; it really doesn’t touch land very much.

Interviewer: But wouldn’t it change the Gulf Stream and therefore . . . ?

Modeler C: Yes, look right here [shows me the model output, which looks like a map]. If the model is right. [Slight pause] I put that caveat in at the beginning [laughs]. But right there is the picture.

Modeler C struggles to not speak of his model as a ‘truth machine’, but lapses before catching himself when presented with a question. Though he starts off indicating that the models could be wrong (‘if the models are correct’), he soon treats the model as a truth machine, referring to the modeled phenomena as reliable predictions of future reality (‘The reality is, it really is an ocean thing’). Catching himself, he then refers back to the caveat, followed by a little laugh.

The following modeler suggests that the above tendencies are pervasive in the field of climate modeling:

Modeler D: There are many ways to use models, and some of them I don’t approve of. [Pause] It is easy to get a bad name as a modeler, among both theoreticians and observational people, by running experiments and seeing something in the model and publishing the result. And pretending to believe what your model gives – or, even, really believing it! [small laugh] – is the first major mistake. If you don’t keep the attitude that it’s just a model, and that it’s not reality . . . I mean, mostly people that are involved in this field really have that, they have the overtone that it is.

Interviewer: They do tend to think that their model is the reality?

Modeler D: Or even if they don’t think that, they tend to oversell it, regardless.

Interviewer: And why do they oversell it?

Modeler D: Because people get wrapped up in what they have done. You know, I spent years building this model and then I ran these experiments, and the tendency is to think: ‘there must be something here’. And then they start showing you all the wonderful things they have done . . . And you have to be very careful about that.

Confirming Shackley and Wynne’s argument, Modeler D suggests that modelers sometimes ‘oversell’ their models, strategically associating them with more certainty than is warranted. However, echoing others quoted earlier, Modeler D also suggests that modelers sometimes lose critical distance from their own models and come to think of them as reliable representations of reality.

The increasingly realistic appearance of ever-more comprehensive simulations may increase the temptation to think of them as ‘truth-machines’. As Shackley et al. (1998) have noted, there is a tendency among modelers to give greater credence to models the more comprehensive and detailed they are, a tendency they identify as cultural in nature because of a common trade-off between comprehensiveness and error range. As GCMs incorporate ever more details – even things such as dust and vegetation – the models increasingly appear like the real world, but the addition of each variable increases the error range (Syukuro Manabe, quoted in Revkin, 2001).

Some may resist this temptation better than others, and even the same modeler may resist it at some moments better than at others. Modelers’ statements suggest that the level of certainty any given modeler attaches to his or her model varies in time; they do not necessarily have a consistent, ‘healthy skepticism’. At the ‘inner core’ of knowledge production – among modelers at the individual and the group level – awareness of uncertainties thus can wax and wane depending on the situation.

This confusion of simulations with real data within the atmospheric sciences may be part of a more general phenomenon: similar conflation of simulations with ‘observations’, ‘samples’, and ‘data’ has been identified in studies of scientists in other fields of research (Dowling, 1999). Simulation techniques may especially encourage such conflation, however. For example, Stefan Helmreich’s ethnographic study of artificial life simulators (1998) revealed the powerful effect of simulations on the imagination of their creators and users. Similar to global climate simulations, visual simulations of artificial life afford a ‘god’s eye view’ that contributes to the illusion that they are empirical results. These simulators conflate their artificial worlds with real life, Helmreich suggests: the researchers recognize that the life they simulate is not carbon-based, but they nevertheless consider it to be an alternative, electron-based kind of life, insisting that their admittedly artificial worlds are real life-forms in the sense that they are composed of real matter and energy. It would seem reasonable to assume that attempts to simulate the real world only intensify such conflation.

Downplaying/Ignoring Model Uncertainties and Limitations among Modelers: The Role of Emotional Attachment and Social Worlds

Modelers’ professional and emotional investment in their own models reduces their inclination and ability to maintain critical awareness about the uncertainties and inaccuracies of their own simulations. Shackley and Wynne suggest that modelers talk freely about their models’ shortcomings among themselves. However, the following researcher identified a general reluctance on the part of modelers to discuss their models’ weaknesses, even among themselves:

Modeler E: What I try to do [when presenting my model results to other modelers] . . . is that I say ‘this is what is wrong in my model, and I think this is the same in all models, and I think it is because of the way we’re resolving the equations, that we have these systematic problems’. And it often gets you in trouble with the other people doing the modeling. But it rarely gets you in trouble with people who are interested in the real world. They are much more receptive to that, typically, than they are if you say ‘here, this is my result, doesn’t this look like the real world?’ And ‘this looks like the real world, and everything is wonderful’.

Interviewer: Why do you get in trouble with modelers with that?

Modeler E: Because . . . when I present it, I say ‘this model is at least as good as everyone else’s, and these problems are there and they are in everybody else’s models too.’ They often don’t like that, even if I am not singling out a particular model, which I have done on occasion [smiles] – not necessarily as being worse than mine but as having the same flaws. Not when they are trying to sell some point of view and I go in there saying ‘Hey, this is where I go wrong [in my model], and you are doing the same thing! And you can’t be doing any better than that because I know that this isn’t a coding error problem’ [laughs].


This modeler’s account echoed statements I encountered in other contexts, from scientists who likewise identified a disinclination on the part of modelers to highlight, discuss, and sometimes even perceive problems in their model output.

In a paper consisting of an imaginary dialogue about global environmental science and policy between two sirens of Greek mythology, Shackley & Darier (1998) suggest that when modelers discursively associate their models with unwarranted or unproven levels of accuracy, this may reflect how they also think of their models. In response to the first siren’s suggestion that such discourses are strategic in nature and intended to seduce outside audiences, the second siren argues that modelers have ‘(self)seducing powers’. She notes that modelers ‘trust’ their models, that they have some degree of ‘genuine confidence, maybe over-confidence’ in their quantitative projections. She comments: ‘It is not simply “calculating seduction” but a sincere act of faith!’

These claims about modelers’ relation to their own creations remain at the level of speculation in Shackley and Darier’s text. They are backed up by a reference to the paper by Risbey et al. (1996) on integrated assessment (IA) modeling. Risbey et al. do not discuss modelers’ relation to the models in detail but note that IA modelers do not always ‘bear in mind’ the distinction between models as heuristics versus truth machines, and that they can fail to ‘maintain familiarity’ with the limitations of their models (1996: 331). Despite such limited documentation, it would seem that these analysts of model communities have witnessed dynamics of the sort described here.

Modeler E, in the excerpt quoted above, distinguished some modelers from ‘people who are interested in the real world’. He thus implied that modelers sometimes become so involved in their models that they lose sight of, or interest in, the real world, ignoring the importance of knowing how the models diverge from it.

Recognition of this tendency may be reflected in modelers’ jokes among themselves. For example, one group joked about a ‘dream button’ allowing them – Star Wars style – to blow up a satellite when its data did not support their model output. They then jokingly discussed a second-best option of inserting their model’s output straight into the satellite data output.

These psychological and social dimensions of modelers’ investment in the accuracy of their models, and the seductive power of simulations in general, are not accounted for by the certainty trough, or, more generally, by the trope of distance. The evidence from the present study suggests a need to revise the certainty trough to indicate greater certainty among climate modelers. Better yet, assuming it could be represented visually, the curve indicating knowledge producers’ awareness of uncertainty ought to vary depending on context, as their acknowledgement of uncertainty and inaccuracy waxes and wanes. As I suggest in what follows, levels of certainty among users also vary: some users are sometimes more aware than producers are of model inaccuracies.

General Circulation Model Users and Critics

Shackley & Wynne (1995: 114) suggest that 'practitioners may attribute greater certainty to knowledge from another speciality than the practitioners in the first speciality would attribute to it themselves.' This is undoubtedly an important dynamic in the distribution of certainty, but it may be the case that practitioners in neighboring fields – or practitioners who are in the same field but have different interests and rely on different methods – sometimes see limitations in the models that the modelers themselves cannot or will not see.

The complexity of GCMs undermines modelers' ability to gauge the validity of their own models. Computer models have grown so complex and scientists so specialized that modelers spend little time checking their own models against available observations. This weakens their ability to identify where their models fail to accurately represent empirical evidence. A physicist interviewed in a study on computers and identities confirmed that scientists today can come to know more about computer models than about actual biogeophysical dynamics that their computers represent:

My students know more and more about computer reality, but less and less about the real world. And they no longer even really know about computer reality, because the simulations have become so complex that people don't build them anymore. They just buy them and can't get beneath the surface. If the assumptions behind some simulation were flawed, my students wouldn't even know where or how to look for the problem. So I am afraid that where we are going here is towards Physics: The Movie. (Turkle, 1984: 66)4

Because modelers' limited empirical knowledge of the atmospheric system reduces their ability to identify shortcomings in their models, more empirically inclined scientists are brought in to help evaluate the models' performance. Some of these empirical scientists, who refer to themselves as 'close' or 'feedback' users, are better able than the knowledge producers to judge a given GCM's accuracy, at least as far as identifying gaps between the simulations and observations is concerned.

Modeler E noted that theoreticians and empiricists often criticize modelers for claiming unwarranted levels of accuracy, to the point of conflating their models with reality. My fieldwork revealed that such criticisms circulate widely among atmospheric scientists. Sometimes such criticisms portray modelers as motivated by a need to secure funding for their research, but they also suggest that modelers have genuine difficulty with gaining critical distance from their models' strengths and weaknesses. Moreover, critics fault modelers for lacking empirical understanding of how the atmosphere works ('Modelers don't know anything about the atmosphere').

Presentations at NCAR provide a forum for witnessing the tensions that exist between modelers and empiricists. Following modelers' presentations of their work, empiricists frequently feel a need to emphasize the difference between model output and observational data. In interviews, empiricists often voice criticisms along the lines of this one expressed by a meteorologist: 'I joke about modelers: they have a charming and naive faith in their models.'

Such comments were especially common among empirical meteorologists trained in synoptic weather forecasting techniques, who conduct empirical research on a regional or local scale. They have not been centrally involved in the process of model development and validation, and thus may fall within MacKenzie's category of the 'alienated'. These empiricists trained in synoptic methods are particularly inclined to criticize GCMs. Such criticism may have to do with the fact that there is considerable resentment among various subgroups of atmospheric scientists about the increased use of simulation techniques, and such resentment may be echoed in other sciences in which simulations are ascendant (Lahsen, 1998a). Moreover, models and simulations are 'misfits that do not sit comfortably in established categories' (Sismondo, 1999: 253). They exist in an ambiguous space between theory and experiment and occupy an ambiguous social position in science as well.

Synoptically trained empirical meteorologists have particular motivation to resent models. Their methods and lines of work were in important part replaced by global numerical models. The applied aspect of these meteorologists' work was thus being taken over by numerical weather forecasting, pushing them in the direction of basic research. This placed them at a competitive disadvantage when national funding priorities changed in favor of research with obvious social benefits, whereas GCM modeling seemed relevant to predicting future manifestations of human-induced climate change. The environmental concern about human-induced climate change, and the associated politics, also favored the GCMs and those working with them. Their comments should be understood as potentially interested instances of boundary-work (Gieryn, 1995) whereby they, as a competing scientific group, seek to promote the authority of their own lines of research in competition with GCMs.

The emergence of numerical techniques also represented a loss in epistemic status as well as funding for the empirical meteorologists. So-called 'objective' numerical methods resulted in the demotion and relabeling of their more qualitative approach as 'subjective', an unattractive label in the context of a cultural preference for 'hard' science within the scientific community. As a meteorologist jokingly said to me when expressing resentment at the label: 'so I change it. I refer to [synoptic analyses] saying: "these are the intelligent analyses, and those [GCM studies] are artificially intelligent."'

Compared with modelers, such empirical research meteorologists with a background in weather forecasting are part of a different social world; these two groups partake in different, albeit overlapping, social networks defined by different scientific orientations and cultural norms. The empiricists are less committed to GCMs or to the theory of human-induced climate change.5 They manifest skepticism about numerical forecasts in general, especially beyond a period of 10 days or so. These attitudes are part of a common culture among weather forecasters, including scientists who may only have done forecasting early in their careers. This culture also involves humility about the accuracy of forecasts of atmospheric conditions, which they trace to experiences of regularly seeing synoptic and numerical weather forecasts proven wrong. In my judgment, many of these meteorologists are rightly classified as 'alienated' with regard to GCMs.

Nevertheless, they may have important insight into model inaccuracies, both for technical reasons (a function of their expertise) and for social and psychological reasons (a function of their lesser investment in the GCM enterprise). As suggested by Modeler D above, non-modelers may at times be better able to keep uncertainties in mind than the modelers themselves. The fact that some of them are engaged in the process of validating GCMs also indicates that the empiricists in some respects are in a privileged position to discuss inaccuracies; sometimes they become quite knowledgeable about GCMs. Those who help with validating GCMs are not necessarily alienated from the technology, and may have a distinct investment in the models. However, at least some of these users interact closely with empirical meteorologists trained in weather forecasting, and in my judgment they appear to be influenced by the forecasters' 'alienated' attitude.

If some of these empirical meteorologists and self-identified users are properly classified as alienated, this also confounds the axis of distance in the certainty trough diagram. Some empirical research meteorologists are alienated and committed to alternative technology (for example, empirical 'synoptic' methods), but they are not necessarily more distant from the site of knowledge production than are many users.

Moreover, the attitude of the 'alienated' towards the models may be ambivalent rather than consistently critical. They criticize GCMs, and resent the way institutional funding has been drawn away from synoptic approaches.6 At the time of my fieldwork, such atmospheric empiricists complained about insufficient funding even for validation of the GCMs with empirical data, a complaint supported by analysts (Risbey et al., 1996). However, I found that even the meteorologists whose empirical research competed with the numerical models were, overall, inclined to acknowledge the value of GCMs. They considered GCM modeling an important contribution to science. This underscores once again the weakness of the certainty trough when it comes to capturing ambivalence and complexity in the level of commitment to a given technology.

Compared with modelers, the empiricists may at times be in a better position to identify model shortcomings because of their deeper knowledge of empirical processes simulated by the GCMs, and because of their lower psychological and professional investment in the models. This is not to suggest that empiricists are invariably better identifiers of truth. Indeed, their expertise, methodologies, and commitments are also limited. Rather, the point is that scientists tend to become emotionally involved with their own creations. Simulation of complex, uncertain, and inaccessible phenomena leaves considerable room for emotional involvement to undermine the ability to recognize weaknesses and uncertainties.

Empiricists complain that model developers often freeze others out and tend to be resistant to critical input. At least at the time of my fieldwork, close users and potential close users at NCAR (mostly synoptically trained meteorologists who would like to have a chance to validate the models) complained that modelers had a 'fortress mentality'. In the words of one such user I interviewed, the model developers had 'built themselves into a shell into which external ideas do not enter'. His criticism suggests that users who were more removed from the sites of GCM development sometimes have knowledge of model limitations that modelers themselves are unwilling, and perhaps unable, to countenance. A model developer acknowledged this tendency and explained it as follows:

Modeler F: There will always be a tension there. Look at it this way: I spent ten years building a model and then somebody will come in and say 'well, that's wrong and that's wrong and that's wrong'. Well, fine! And then they say, 'well, fix it!' [And my response to them is:] 'you fix it! [laughs] I mean, if I knew how to fix it, I would have done it right in the first place!!! [Laughs] And what is more, I don't like you anymore – all you do is you come in and tell me what is wrong with my model! Go away!' [laughter]. I mean, this is the field.

Modeler F's acknowledgement of inaccuracies in his model is implied in his comment that he would have improved the model if he knew how. Nevertheless, the interview excerpt is another indication of modelers' emotional investment in their models. Modeler F acknowledges a common tendency on the part of modelers to distance themselves from criticism and to react personally to it ('I don't like you anymore'; 'go away!'). One might dismiss the seriousness of this account because of the joking manner in which Modeler F expresses it. Yet anthropologists identify humor as a common marker of cultural sensitivity and charged emotions. In this context, it is interesting to note other instances (such as Modelers A and C, above) in which modelers laughed when discussing their relationship to their own models, particularly the temptation to perceive them as truth-machines.

Aside from the stakes involved with environmental policy and with modeling groups' need to sustain funding for their work, it is important to recognize model developers' stake in their own creations. Building a good, complex GCM is a daunting task that requires years and even decades of dedication. As a result, model developers' entire professional careers are on the line; the performance of their model will reflect on their careers as a whole. It is thus understandable that they also are sensitive to criticisms and at times may tend to give their models the benefit of the doubt. During my fieldwork, I witnessed prolonged and acrimonious fights in which model developers defended their models and engaged in serious conflicts with colleagues within the larger modeling group who rejected them in favor of other models they deemed more accurate.

The point to stress here is that modelers' desire to produce accurate simulations sometimes weakens their willingness and ability to identify weaknesses and uncertainties. Aside from the general phenomenon in science of competition and conflict, the excerpts presented above reveal emotional aspects of modeling not captured by the certainty trough. Modelers' careers and identities become intertwined with, and partly dependent on, the quality of their models, to the point that they sometimes may be tempted to deny it when their models diverge from reality.

Revising the Certainty Trough

The certainty trough may account for the distribution of certainty at a broad, general scale. Generally speaking, atmospheric scientists are better judges than, for example, policy-makers of the accuracy of model output. However, the distribution of certainty about GCM output within the atmospheric sciences reveals complications in the categories of 'knowledge producers' and 'users', and the privileged vantage point from which model accuracies may be gauged proves to be elusive.

Model developers' knowledge of their models' inaccuracies is enhanced by their participation in the construction process. However, developers are not deeply knowledgeable about all dimensions of their models because of the models' complex, coupled nature. Similarly, the empirical training of some atmospheric scientists – scientists who may be described as users – limits their ability to gauge GCM accuracies in some respects while enhancing it in other respects; and, generally, they may have a better basis than the less empirically oriented modelers for evaluating the accuracy of at least some aspects of the models.

Professional and emotional investment adds another layer of complexity. Model developers have a professional stake in the credibility of the models to which they devote a large part of their careers. These scientists are likely to give their models the benefit of the doubt when confronted with some areas of uncertainty. By contrast, some of the empirically trained atmospheric scientists, who are less invested in the success of the models, may be less inclined to give them the benefit of the doubt, maintaining a more critical understanding of their accuracy.

The distinction between modelers' public and non-public representations of models reflects modelers' investment in the perceived accuracy of their models. The distinction implies that in their interaction with external audiences, modelers at times downplay model inaccuracies because they are interested in securing their authority. I argue that this framework needs to be stretched further to account for limitations in modelers' ability to identify such inaccuracies, limitations that may arise from a combination of psychological, social, and political factors. Attitudes towards technology are affected by these factors, which introduce (lack of) distance of a different kind than what the certainty trough highlights. The notion of 'alienation' in MacKenzie's framework may hint at the impact of emotional commitments on perceptions of accuracy, but such an interpretation is somewhat undercut by the Marxist conception (alienation from the means of production) implied by MacKenzie's usage.

In light of the above, one might revise the certainty trough diagram as shown in Figure 2.

However, even the revised diagram fails to capture the complexity of the dynamics involved. To account for the complexities discussed in this paper, the certainty trough diagram would have to do the following:

• recognize that any given model involves multiple sites of production;

• represent greater heterogeneity within the general categories of knowledge producers and users;

• reflect the shifting roles of individual actors involved: model developers are also model users in some ways, and users contribute to the production of the technology;

• show that users in some instances may be closely interlinked with 'alienated' groups and share in some of those groups' criticisms (for example, share some resentment toward funding patterns that privilege GCM approaches over more empirical lines of research);

• acknowledge that some users may be better positioned to identify some model inaccuracies than producers of the models in question;

• show oscillation through time in the level of uncertainty and skepticism acknowledged by producers and users of models (a purely illustrative sketch of such shifting, context-dependent curves follows this list).
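
To make the contrast with a single static curve concrete, the following sketch is offered purely as an illustration; it is not part of the original analysis, and every number in it is a made-up, qualitative value. It plots MacKenzie's canonical trough alongside two hypothetical, context-dependent variants: a 'backstage' curve on which producers acknowledge more uncertainty, and a 'public' curve on which they acknowledge less. The use of Python and matplotlib is likewise an assumption of convenience; the point is only that acknowledged uncertainty behaves as a family of shifting curves rather than a single line.

```python
# Purely illustrative sketch: arbitrary, qualitative values, not data from the study.
import numpy as np
import matplotlib.pyplot as plt

# Three coarse social positions along MacKenzie's horizontal axis.
positions = [
    "Knowledge\nproducers",
    "Committed\nusers",
    "Alienated /\nother technology",
]
x = np.arange(len(positions))

# MacKenzie's canonical trough (illustrative heights only).
canonical = np.array([0.5, 0.2, 0.9])

# Hypothetical context-dependent variants suggested by the fieldwork:
# producers' acknowledged uncertainty waxes backstage and wanes in public,
# while some close users sit higher than the trough implies.
backstage = np.array([0.8, 0.45, 0.85])
public = np.array([0.3, 0.2, 0.9])

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, canonical, "o-", label="Certainty trough (MacKenzie, 1990)")
ax.plot(x, backstage, "s--", label="Backstage talk (illustrative)")
ax.plot(x, public, "^--", label="Public talk (illustrative)")
ax.set_xticks(x)
ax.set_xticklabels(positions)
ax.set_ylabel("Acknowledged uncertainty (arbitrary units)")
ax.set_ylim(0, 1)
ax.legend(frameon=False)
fig.tight_layout()
plt.show()
```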

FIGURE 2
Revised certainty trough. The dashed line (------) indicates uncertainty levels not accounted for in MacKenzie (1990). [Figure: the vertical axis shows uncertainty, from Low to High; the horizontal axis runs from 'Directly involved in knowledge production', through 'Committed to technological institution/program but users rather than producers of knowledge', to 'Alienated from institutions/committed to different technology'.]
Source: MacKenzie, 1990. © MIT Press.

A single, visual diagram seems unable adequately to represent the complex ways in which political, professional, and psychological investment in the models, as well as variations in time and context, affect awareness of inaccuracies and uncertainties. Adequate representation of the complexity would require a more dynamic model capable of showing multiple dimensions and variations through time. It would also have to account for the different combinations of socio-cultural influence – the different social worlds – that shape different actors' relationships to the technology. Modelers and empirical atmospheric scientists are part of different, albeit partly overlapping, social worlds involving different inclinations to question the models' scientific and political value as well as their accuracy. As I have suggested, these actors relate differently to the climate models for reasons not captured by the certainty trough, including reasons unrelated to distance. Other, more obviously social and psychological factors come into play, complicating the meaning of 'distance' and 'proximity'. When we take these other factors into account, it appears that distance lends enchantment, but proximity lends enchantment as well.

Notes
This research was made possible thanks to financial support from the National Science Foundation's Ethics and Values in Science Program, the Environmental Protection Agency's 'STAR' Fellowship Program, and a Postdoctoral Fellowship in the National Center for Atmospheric Research's Advanced Study Program. I want to thank the atmospheric scientists who were willing to talk with me and have me in their midst for the duration of this research. Special thanks to Richard Somerville, Anji Seth, John Firor, and Norman Miller, who also helped strengthen the technical accuracy of this paper. I am also grateful for detailed, insightful comments provided by Sheila Jasanoff, Michael M.J. Fischer, Stefan Helmreich, Roger Pielke Jr, Radford Byerly, Robert Frosch, two anonymous reviewers for Social Studies of Science, and, last but not least, Social Studies of Science Collaborating Editor Sergio Sismondo.

1. The cited works explicitly discuss and integrate MacKenzie's certainty trough. More general influence of MacKenzie's work is reflected in the significant number of citations it has received, especially in the field of science studies (see for instance the Web of Science database, < www.isinet.com/products/citation/wos/ > ).

2. For simplicity, I will use the terms 'models' and 'climate models' rather than specify in each instance that I am referring to 'GCM modeling'. It should be made clear, however, that scientists of all disciplines and practices use models, and that a wide range of models exists. For example, a mouse may serve as a 'model' in medical research.

3. Stephen D. Norton and Frederick Suppe (2001) criticize the analysis of Oreskes et al. (1994) for singling out models for criticism, when such criticism applies to scientific knowledge in general. They note that scientific claims never can be established with absolute certainty and that Oreskes et al. operate with a criterion of certainty that is 'epistemologically irrelevant to actual scientific knowledge acquisition' (Norton & Suppe, 2001: 103). Norton and Suppe note that models are useful precisely because they simplify the otherwise baffling complexity of the phenomena modeled. As atmospheric scientist Kevin Trenberth has noted, 'All models are of course wrong because, by design, they depict a simplified view of the system being modeled' (Trenberth, 1997).

4. This physicist thus also refers to the common practice of obtaining and using models developed by others. As noted above, atmospheric scientists often modify models obtained from elsewhere, complicating clear identification of users versus producers, as well as the exact site of production.

5. I base this on my years of fieldwork among such meteorologists. This claim is also supported by the proportionally large representation of synoptically trained meteorologists among signatories of petitions designed to weaken policy action on behalf of human-induced climate change. See for instance S. Fred Singer's 'Leipzig Declaration' (Olinger, 1996). See also < www.sepp.org/pressrel/meteorLD.html > .

6. One veteran synoptic research meteorologist expressed his exasperation in a 1999 letter to the Under Secretary of Commerce for Oceans and Atmosphere and Administrator of the National Oceanic and Atmospheric Administration (NOAA). Noting his significant record in a particular area of weather forecasting, he criticized NOAA's bias towards numerical climate prediction methodology. He suggested that rival climate forecast methodologies were being squashed to 'prevent embarrassing comparisons'. In an interview with me, he claimed to speak for many colleagues trained in synoptic methods, and my subsequent interviews among synopticians confirmed his claim.

References
Baker, Victor R. (2000) '"Saving the Appearances" of Beach Behavior', Journal of Coastal Research 16(1): iii–iv.
Bankes, Steve (1993) 'Exploratory Modeling for Policy Analysis', Operations Research 41(3) (May/June): 435–49.
Baudrillard, Jean (1993) 'Simulacra and Simulations: Disneyland', in C. Lemert (ed.), Social Theory: The Multicultural and Classic Readings (Boulder, CO, San Francisco, CA & Oxford: Westview Press): 524–29.
Collins, Harry (1985) Changing Order: Replication and Induction in Scientific Practice (Chicago, IL & London: University of Chicago Press).
Collins, Harry & Trevor Pinch (1993) The Golem: What Everyone Should Know About Science (Cambridge: Cambridge University Press).
Dowling, Deborah (1999) 'Experimenting on Theories', Science in Context 12(2): 261–73.
Edwards, Paul N. (1996) 'Global Comprehensive Models in Politics and Policymaking', Climatic Change 32(2): 149–61.
Edwards, Paul N. (1999) 'Data-Laden Models, Model-Filtered Data: Uncertainty and Politics in Global Climate Science', Science as Culture 8(4): 437–72.
Edwards, Paul N. (2001) 'Representing the Global Atmosphere: Computer Models, Data, and Knowledge About Climate Change', in C.A. Miller & P.N. Edwards (eds), Changing the Atmosphere: Expert Knowledge and Environmental Governance (Cambridge, MA: MIT Press): 31–65.
Firor, John (1998) 'Human Motives Sometimes Mar Models', Boulder Daily Camera (25 October): 12F.
Galison, Peter (1997) Image and Logic: A Material Culture of Microphysics (Chicago, IL: University of Chicago Press).
Galison, Peter & Bruce Hevly (eds) (1992) Big Science: The Growth of Large-Scale Research (Stanford, CA: Stanford University Press).
Gelbspan, Ross (1995) 'The Heat is On', Harper's Magazine (December): 31–37.
Gelbspan, Ross (1997) The Heat is On: The High Stakes Battle Over Earth's Threatened Climate (New York: Addison-Wesley Publishing Company, Inc.).
Gibbons, Michael, Camille Limoges, Helga Nowotny, Simon Schwartzman, Peter Scott & Martin Trow (1994) The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies (Thousand Oaks, CA: SAGE Publications).
Gieryn, Thomas (1995) 'Boundaries of Science', in S. Jasanoff, G.E. Markle, J.C. Petersen & T. Pinch (eds), Handbook of Science and Technology Studies (Thousand Oaks, CA, London & New Delhi: SAGE Publications): 393–443.
Helmreich, Stefan (1998) Silicon Second Nature: Culturing Artificial Life in a Digital World (Berkeley, CA: University of California Press).
Helvarg, David (1994) The War Against the Greens: The 'Wise-Use' Movement, the New Right, and Anti-Environmental Violence (San Francisco, CA: Sierra Club Books).
Jasanoff, Sheila (2003) '(No?) Accounting for Expertise', Science and Public Policy 30(3): 157–62.
Jasanoff, Sheila & Brian Wynne (1998) 'Science and Decisionmaking', in S. Rayner & E.L. Malone (eds), Human Choice and Climate Change, Vol. 1 (Columbus, OH: Battelle Press): 1–87.
Kerr, Richard A. (1994) 'Climate Modeling's Fudge Factor Comes Under Fire', Science 265(9) (9 September): 1528.
Lahsen, Myanna (1998a) Climate Rhetoric: Constructions of Climate Science in the Age of Environmentalism, PhD Thesis, Rice University, Houston, TX (published by UMI Dissertation Services, A Bell & Howell Company).
Lahsen, Myanna (1998b) 'The Detection and Attribution of Conspiracies: The Controversy Over Chapter 8', in George E. Marcus (ed.), Paranoia Within Reason: A Casebook on Conspiracy as Explanation (Chicago, IL: University of Chicago Press).
Lahsen, Myanna H. (2005) 'Technocracy, Democracy and U.S. Climate Science Politics: The Need for Demarcations', Science, Technology, & Human Values 30(1): 137–69.
MacKenzie, Donald (1990) Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance (Cambridge, MA: MIT Press).
Merton, Robert K. (1973 [1942]) 'The Normative Structure of Science', in N.W. Storer (ed.), The Sociology of Science: Theoretical and Empirical Investigations (Chicago, IL: University of Chicago Press).
Mulkay, Michael (1976) 'Norms and Ideology in Science', Social Science Information 15: 637–56.
National Research Council (1998) Capacity of U.S. Climate Modeling to Support Climate Change Assessment (Washington, DC: National Academy Press).
Norton, Stephen D. & Frederick Suppe (2001) 'Why Atmospheric Modeling is Good Science', in C.A. Miller & P.N. Edwards (eds), Changing the Atmosphere: Expert Knowledge and Environmental Governance (Cambridge, MA: MIT Press): 67–105.
Olinger, David (1996) 'Cool to the Warnings of Global Warming's Dangers', St. Petersburg Times, South Pinellas Edition (29 July): 1A.
Oreskes, Naomi, Kristin Shrader-Frechette & Kenneth Belitz (1994) 'Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences', Science 263(4): 641–46.
Poster, Mark (1988) Jean Baudrillard: Selected Writings (Stanford, CA: Stanford University Press).
Revkin, Andrew C. (2001) 'Climate Research: The Devil is in the Details', The New York Times (3 July) ( < www.nytimes.com/2001/07/03/science > ).
Risbey, James, Milind Kandlikar & Anand Patwardhan (1996) 'Assessing Integrated Assessments', Climatic Change 34: 327–36.
Shackley, Simon & Eric Darier (1998) 'Seduction of the Sirens: Global Climate Change and Modelling', Science and Public Policy 25(5): 313–25.
Shackley, Simon & Brian Wynne (1995) 'Integrating Knowledges for Climate Change: Pyramids, Nets and Uncertainties', Global Environmental Change 5(2): 113–26.
Shackley, Simon & Brian Wynne (1996) 'Representing Uncertainty in Global Climate Change Science and Policy: Boundary-Ordering Devices and Authority', Science, Technology, & Human Values 21(3): 275–302.
Shackley, Simon, Peter Young, Stuart Parkinson & Brian Wynne (1998) 'Uncertainty, Complexity and Concepts of Good Science in Climate Change Modelling: Are GCMs the Best Tools?', Climatic Change 38: 159–205.
Sismondo, Sergio (1999) 'Models, Simulations, and Their Objects', Science in Context 12(2): 247–60.
Smyth, Mary M. (2001) 'Certainty and Uncertainty Sciences: Making the Boundaries of Psychology in Introductory Textbooks', Social Studies of Science 31(3): 389–416.
Somerville, Richard C.J. (1996) The Forgiving Air (Berkeley, CA: University of California Press).
Trenberth, Kevin E. (1997) 'The Use and Abuse of Climate Models', Nature 386 (13 March): 131–33.
Turkle, Sherry (1984) The Second Self (New York: Simon & Schuster).
Wiin Nielsen, Aksel C. (1987) Forudsigelighed: Om Graenserne for Videnskab [Predictability: On the Limits of Science] (Denmark: Munksgaard).

Myanna Lahsen is a Research Scientist in the Center for Science and Technology Policy Research (CIRES) at the University of Colorado at Boulder. Her most recent publications include: 'Technocracy, Democracy and U.S. Climate Science Politics: The Need for Demarcations', Science, Technology, & Human Values (In the Press) and 'Transnational Locals: Brazilian Scientists in the Climate Regime', book chapter in Earthly Politics, Worldly Knowledge: Local and Global in Environmental Politics, edited by Sheila Jasanoff and Marybeth Long-Martello (MIT Press, 2004). Her research analyses how international scientific, environmental, and development projects relate to national cultures of knowledge, practice and policy-making in Brazil, attending to the role of science in geopolitics (and vice versa), 'North–South' relations, and the impact of transnational social formations and capacity building in the developing world.

Address: Center for Science and Technology Policy Research (CIRES), University of Colorado, Campus Box 488, 1333 Grandview Avenue, Boulder, CO 80309-0488, USA; email: [email protected]
