Perceptual symbol systems

Lawrence W. Barsalou
Department of Psychology, Emory University, Atlanta, GA 30322
[email protected]  userwww.service.emory.edu/~barsalou/

Behavioral and Brain Sciences (1999) 22, 577–660. Printed in the United States of America. © 1999 Cambridge University Press.

Abstract: Prior to the twentieth century, theories of knowledge were inherently perceptual. Since then, developments in logic, statistics, and programming languages have inspired amodal theories that rest on principles fundamentally different from those underlying perception. In addition, perceptual approaches have become widely viewed as untenable because they are assumed to implement recording systems, not conceptual systems. A perceptual theory of knowledge is developed here in the context of current cognitive science and neuroscience. During perceptual experience, association areas in the brain capture bottom-up patterns of activation in sensory-motor areas. Later, in a top-down manner, association areas partially reactivate sensory-motor areas to implement perceptual symbols. The storage and reactivation of perceptual symbols operates at the level of perceptual components – not at the level of holistic perceptual experiences. Through the use of selective attention, schematic representations of perceptual components are extracted from experience and stored in memory (e.g., individual memories of green, purr, hot). As memories of the same component become organized around a common frame, they implement a simulator that produces limitless simulations of the component (e.g., simulations of purr). Not only do such simulators develop for aspects of sensory experience, they also develop for aspects of proprioception (e.g., lift, run) and introspection (e.g., compare, memory, happy, hungry). Once established, these simulators implement a basic conceptual system that represents types, supports categorization, and produces categorical inferences. These simulators further support productivity, propositions, and abstract concepts, thereby implementing a fully functional conceptual system. Productivity results from integrating simulators combinatorially and recursively to produce complex simulations. Propositions result from binding simulators to perceived individuals to represent type-token relations. Abstract concepts are grounded in complex simulations of combined physical and introspective events. Thus, a perceptual theory of knowledge can implement a fully functional conceptual system while avoiding problems associated with amodal symbol systems. Implications for cognition, neuroscience, evolution, development, and artificial intelligence are explored.

Keywords: analogue processing; categories; concepts; frames; imagery; images; knowledge; perception; representation; sensory-motor representations; simulation; symbol grounding; symbol systems

Lawrence Barsalou is Professor of Psychology at Emory University, having previously held positions at the Georgia Institute of Technology and the University of Chicago. He is the author of over 50 scientific publications in cognitive psychology and cognitive science. His research addresses the nature of human knowledge and its roles in language, memory, and thought, focusing specifically on the perceptual bases of knowledge, the structural properties of concepts, the dynamic representation of concepts, the construction of categories to achieve goals, and the situated character of conceptualizations. He is a Fellow of the American Psychological Society and of the American Psychological Association, he served as Associate Editor for the Journal of Experimental Psychology: Learning, Memory, and Cognition, and he is currently on the governing board of the Cognitive Science Society.


The habit of abstract pursuits makes learned men much inferior to the average in the power of visualization, and much more exclusively occupied with words in their “thinking.”

Bertrand Russell (1919b)

1. Introduction

For the last several decades, the fields of cognition and perception have diverged. Researchers in these two areas know ever less about each other’s work, and their discoveries have had diminishing influence on each other. In many universities, researchers in these two areas are in different programs, and sometimes in different departments, buildings, and university divisions. One might conclude from this lack of contact that perception and cognition reflect independent or modular systems in the brain. Perceptual systems pick up information from the environment and pass it on to separate systems that support the various cognitive functions, such as language, memory, and thought. I will argue that this view is fundamentally wrong. Instead, cognition is inherently perceptual, sharing systems with perception at both the cognitive and the neural levels. I will further suggest that the divergence between cognition and perception reflects the widespread assumption that cognitive representations are inherently nonperceptual, or what I will call amodal.

1.1. Grounding cognition in perception

In contrast to modern views, it is relatively straightforward to imagine how cognition could be inherently perceptual. As Figure 1 illustrates, this view begins by assuming that perceptual states arise in sensory-motor systems. As discussed in more detail later (sect. 2.1), a perceptual state can contain two components: an unconscious neural representation of physical input, and an optional conscious experience. Once a perceptual state arises, a subset of it is extracted via selective attention and stored permanently in long-term memory. On later retrievals, this perceptual memory can function symbolically, standing for referents in the world, and entering into symbol manipulation. As collections of perceptual symbols develop, they constitute the representations that underlie cognition.

Perceptual symbols are modal and analogical. They are modal because they are represented in the same systems as the perceptual states that produced them. The neural systems that represent color in perception, for example, also represent the colors of objects in perceptual symbols, at least to a significant extent. On this view, a common representational system underlies perception and cognition, not independent systems. Because perceptual symbols are modal, they are also analogical. The structure of a perceptual symbol corresponds, at least somewhat, to the perceptual state that produced it.1

Given how reasonable this perceptually based view of cognition might seem, why has it not enjoyed widespread acceptance? Why is it not in serious contention as a theory of representation? Actually, this view dominated theories of mind for most of recorded history. For more than 2,000 years, theorists viewed higher cognition as inherently perceptual. Since Aristotle (4th century BC/1961) and Epicurus (4th century BC/1994), theorists saw the representations that underlie cognition as imagistic. British empiricists such as Locke (1690/1959), Berkeley (1710/1982), and Hume (1739/1978) certainly viewed cognition in this manner. Images likewise played a central role in the theories of later nativists such as Kant (1787/1965) and Reid (1764/1970; 1785/1969). Even recent philosophers such as Russell (1919b) and Price (1953) have incorporated images centrally into their theories. Until the early twentieth century, nearly all theorists assumed that knowledge had a strong perceptual character.

After being widely accepted for two millennia, this view withered with mentalism in the early twentieth century. At that time, behaviorists and ordinary language philosophers successfully banished mental states from consideration in much of the scientific community, arguing that they were unscientific and led to confused views of human nature (e.g., Ryle 1949; Watson 1913; Wittgenstein 1953). Because perceptual theories of mind had dominated mentalism to that point, attacks on mentalism often included a critique of images. The goal of these attacks was not to exclude images from mentalism, however, but to eliminate mentalism altogether. As a result, image-based theories of cognition disappeared with theories of cognition.

1.2. Amodal symbol systems

Following the cognitive revolution in the mid-twentieth century, theorists developed radically new approaches to representation. In contrast to pre-twentieth century thinking, modern cognitive scientists began working with representational schemes that were inherently nonperceptual. To a large extent, this shift reflected major developments outside cognitive science in logic, statistics, and computer science. Formalisms such as predicate calculus, probability theory, and programming languages became widely known and inspired technical developments everywhere. In cognitive science, they inspired many new representational languages, most of which are still in widespread use today (e.g., feature lists, frames, schemata, semantic nets, procedural semantics, production systems, connectionism).

These new representational schemes differed from earlier ones in their relation to perception. Whereas earlier schemes assumed that cognitive representations utilize perceptual representations (Fig. 1), the newer schemes assumed that cognitive and perceptual representations constitute separate systems that work according to different principles. Figure 2 illustrates this assumption. As in the framework for perceptual symbol systems in Figure 1, perceptual states arise in sensory-motor systems. However, the next step differs critically. Rather than extracting a subset of a perceptual state and storing it for later use as a symbol, an amodal symbol system transduces a subset of a perceptual state into a completely new representation language that is inherently nonperceptual.

As amodal symbols become transduced from perceptual states, they enter into larger representational structures, such as feature lists, frames, schemata, semantic networks, and production systems. These structures in turn constitute a fully functional symbolic system with a combinatorial syntax and semantics, which supports all of the higher cognitive functions, including memory, knowledge, language, and thought. For general treatments of this approach, see Dennett (1969), Newell and Simon (1972), Fodor (1975), Pylyshyn (1984), and Haugeland (1985). For reviews of specific theories in psychology, see E. Smith and Medin (1981), Rumelhart and Norman (1988), and Barsalou and Hale (1993).

It is essential to see that the symbols in these systems are amodal and arbitrary. They are amodal because their internal structures bear no correspondence to the perceptual states that produced them. The amodal symbols that represent the colors of objects in their absence reside in a different neural system from the representations of these colors during perception itself. In addition, these two systems use different representational schemes and operate according to different principles.

Because the symbols in these symbol systems are amodal, they are linked arbitrarily to the perceptual states that produce them. Similarly to how words typically have arbitrary relations to entities in the world, amodal symbols have arbitrary relations to perceptual states. Just as the word “chair” has no systematic similarity to physical chairs, the amodal symbol for chair has no systematic similarity to perceived chairs. As a consequence, similarities between amodal symbols are not related systematically to similarities between their perceptual states, which is again analogous to how similarities between words are not related systematically to similarities between their referents. Just as the words “blue” and “green” are not necessarily more similar than the words “blue” and “red,” the amodal symbols for blue and green are not necessarily more similar than the amodal symbols for blue and red.2

Figure 1. The basic assumption underlying perceptual symbol systems: Subsets of perceptual states in sensory-motor systems are extracted and stored in long-term memory to function as symbols. As a result, the internal structure of these symbols is modal, and they are analogically related to the perceptual states that produced them.

Amodal symbols bear an important relation to words and language. Theorists typically use linguistic forms to represent amodal symbols. In feature lists, words represent features, as in:

CHAIR (1)
    seat
    back
    legs

Similarly in schemata, frames, and predicate calculus expressions, words represent relations, arguments, and values, as in:

EAT (2)
    Agent = horse
    Object = hay

Although theorists generally assume that words do not literally constitute the content of these representations, it is assumed that close amodal counterparts of words do. Although the word “horse” does not represent the value of Agent for EAT in (2), an amodal symbol that closely parallels this word does. Thus, symbolic thought is assumed to be analogous in many important ways to language. Just as language processing involves the sequential processing of words in a sentence, so conceptual processing is assumed to involve the sequential processing of amodal symbols in list-like or sentence-like structures (e.g., Fodor & Pylyshyn 1988).
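To make the notational point concrete, the following sketch (not part of the target article) transcribes the amodal structures in (1) and (2) into ordinary Python data structures; every name in it is an illustrative assumption. The point is simply that the symbols are arbitrary tokens whose internal structure bears no resemblance to perceived chairs, horses, or hay.

```python
# Hypothetical transcription of the amodal structures in (1) and (2).
# "CHAIR", "seat", "EAT", etc. are arbitrary strings: nothing about the
# tokens themselves resembles perceived chairs or eating events.

chair_features = {
    "CHAIR": ["seat", "back", "legs"],   # feature list, as in (1)
}

eat_frame = {
    "relation": "EAT",                   # frame / predicate, as in (2)
    "Agent": "horse",
    "Object": "hay",
}

# Similarity between such symbols is unrelated to perceptual similarity:
# as strings, "blue" and "green" are no more alike than "blue" and "red".
print(chair_features, eat_frame)
```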

It is important to see that this emphasis on amodal and arbitrary symbols also exists in some, but not all, connectionist schemes for representing knowledge (e.g., McClelland et al. 1986; Rumelhart et al. 1986). Consider a feedforward network with backpropagation. The input units in the first layer constitute a simple perceptual system that codes the perceived features of presented entities. In contrast, the internal layer of hidden units is often interpreted as a simple conceptual system, with a pattern of activation providing the conceptual representation of an input pattern. Most importantly, the relation between a conceptual representation and its perceptual input is arbitrary for technical reasons. Prior to learning, the starting weights on the connections between the input units and the hidden units are set to small random values (if the values were all 0, the system could not learn). As a result, the conceptual representations that develop through learning are related arbitrarily to the perceptual states that activate them. With different starting weights, arbitrarily different conceptual states correspond to the same perceptual states. Even though connectionist schemes for representation differ in important ways from more traditional schemes, they often share the critical assumption that cognitive representations are amodal and arbitrary.
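The role of random starting weights can be illustrated with a toy sketch, assuming nothing beyond NumPy; it is not a model taken from the connectionist literature cited above. The same “perceptual” input vector yields arbitrarily different hidden “conceptual” patterns under different random initializations.

```python
# Minimal sketch: one hidden layer whose "conceptual" pattern for the same
# perceptual input depends arbitrarily on the random starting weights.
import numpy as np

def hidden_pattern(perceptual_input, seed, n_hidden=4):
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_hidden, perceptual_input.size))
    return 1.0 / (1.0 + np.exp(-W @ perceptual_input))   # sigmoid hidden units

x = np.array([1.0, 0.0, 1.0, 1.0, 0.0])    # the same "perceived features"
print(hidden_pattern(x, seed=0))           # one conceptual representation
print(hidden_pattern(x, seed=1))           # an arbitrarily different one
```

Training would push both networks toward the same input-output mapping, but the internal patterns would remain arbitrarily related to the perceptual states that activate them.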

Connectionist representational schemes need not necessarily work this way. If the same associative network represents information in both perception and cognition, it grounds knowledge in perception and is not amodal (e.g., Pulvermüller 1999). As described later (sects. 2.2.1, 2.5), shared associative networks provide a natural way to view the representation of perceptual symbols.

1.2.1. Strengths. Amodal symbol systems have many powerful and important properties that any fully functional conceptual system must exhibit. These include the ability to represent types and tokens, to produce categorical inferences, to combine symbols productively, to represent propositions, and to represent abstract concepts. Amodal symbol systems have played the critical role of making these properties central to theories of human cognition, making it clear that any viable theory must account for them.

1.2.2. Problems. It has been less widely acknowledged that amodal symbol systems face many unresolved problems. First, there is little direct empirical evidence that amodal symbols exist. Using picture and word processing tasks, some researchers have explicitly tested the hypothesis that conceptual symbols are amodal (e.g., Snodgrass 1984; Theios & Amrhein 1989). However, a comprehensive review of this work concluded that conceptual symbols have a perceptual character (Glaser 1992; also see Seifert 1997). More recently, researchers have suggested that amodal vectors derived from linguistic context underlie semantic processing (Burgess & Lund 1997; Landauer & Dumais 1997). However, Glenberg et al. (1998b) provide strong evidence against these views, suggesting instead that affordances derived from sensory-motor simulations are essential to semantic processing.

Findings from neuroscience also challenge amodal symbols. Much research has established that categorical knowledge is grounded in sensory-motor regions of the brain (for reviews see Damasio 1989; Gainotti et al. 1995; Pulvermüller 1999; also see sect. 2.3). Damage to a particular sensory-motor region disrupts the conceptual processing of categories that use this region to perceive physical exemplars. For example, damage to the visual system disrupts the conceptual processing of categories whose exemplars are primarily processed visually, such as birds. These findings strongly suggest that categorical knowledge is not amodal.3


Figure 2. The basic assumption underlying amodal symbol systems: Perceptual states are transduced into a completely new representational system that describes these states amodally. As a result, the internal structure of these symbols is unrelated to the perceptual states that produced them, with conventional associations establishing reference instead.

In general, the primary evidence for amodal symbols is indirect. Because amodal symbols can implement conceptual systems, they receive indirect support through their instrumental roles in these accounts. Notably, however, amodal symbols have not fared well in implementing all computational functions. In particular, they have encountered difficulty in representing spatio-temporal knowledge, because the computational systems that result are cumbersome, brittle, and intractable (e.g., Clark 1997; Glasgow 1993; McDermott 1987; Winograd & Flores 1987). Although amodal symbol systems do implement some computational functions elegantly and naturally, their inadequacies in implementing others are not encouraging.

Another shortcoming of amodal symbol systems is their failure to provide a satisfactory account of the transduction process that maps perceptual states into amodal symbols (Fig. 2). The lack of an account for such a critical process should give one pause in adopting this general framework. If we cannot explain how these symbols arise in the cognitive system, why should we be confident that they exist? Perhaps even more serious is the complete lack of cognitive and neural evidence that such a transduction process actually exists in the brain.

A related shortcoming is the symbol grounding problem (Harnad 1987; 1990; Newton 1996; Searle 1980), which is the converse of the transduction problem. Just as we have no account of how perceptual states become mapped to amodal symbols during transduction, neither do we have an account of how amodal symbols become mapped back to perceptual states and entities in the world. Although amodal theories often stress the importance of symbol interpretation, they fail to provide compelling accounts of the interpretive scheme that guides reference. Without such an account, we should again have misgivings about the viability of this approach.4

A related problem concerns how an amodal system implements comprehension in the absence of physical referents. Imagine that amodal symbols are manipulated to envision a future event. If nothing in the perceived environment grounds these symbols, how does the system understand its reasoning? Because the processing of amodal symbols is usually assumed to be entirely syntactic (based on form and not meaning), how could such a system have any sense of what its computations are about? It is often argued that amodal symbols acquire meaning from associated symbols, but without ultimately grounding terminal symbols, the problem remains unsolved. Certainly people have the experience of comprehension in such situations.

One solution is to postulate mediating perceptual representations (e.g., Harnad 1987; Höffding 1891; Neisser 1967). According to this account, every amodal symbol is associated with corresponding perceptual states in long-term memory. For example, the amodal symbol for dog is associated with perceptual memories of dogs. During transduction, the perception of a dog activates these perceptual memories, which activate the amodal symbol for dog. During symbol grounding, the activation of the amodal symbol in turn activates associated perceptual memories, which ground comprehension. Problematically, though, perceptual memories are doing all of the work, and the amodal symbols are redundant. Why couldn’t the system simply use its perceptual representations of dogs alone to represent dog, both during categorization and reasoning?

The obvious response from the amodal perspective is that amodal symbols perform additional work that these perceptual representations cannot perform. As we shall see, however, perceptual representations can play the critical symbolic functions that amodal symbols play in traditional systems, so that amodal symbols become redundant. If we have no direct evidence for amodal symbols, as noted earlier, then why postulate them?

Finally, amodal symbol systems are too powerful. They can explain any finding post hoc (Anderson 1978), but often without providing much illumination. Besides being unfalsifiable, these systems often fail to make strong a priori predictions about cognitive phenomena, especially those of a perceptual nature. For example, amodal theories do not naturally predict distance and orientation effects in scanning and rotation (Finke 1989; Kosslyn 1980), although they can explain them post hoc. Such accounts are not particularly impressive, though, because they are unconstrained and offer little insight into the phenomena.

1.2.3. Theory evaluation. Much has been made about the ability of amodal theories to explain any imagery phenomenon (e.g., Anderson 1978). However, this ability must be put into perspective. If perceptual theories predict these effects a priori, whereas amodal theories explain them post hoc, why should this be viewed as a tie? From the perspective of inferential statistics, Bayesian reasoning, and philosophy of science, post hoc accounts should be viewed with great caution. If a priori prediction is favored over post hoc prediction in these other areas, why should it not be favored here? Clearly, greater credence must be given to a theory whose falsifiable, a priori predictions are supported than to a theory that does not predict these findings a priori, and that accounts for them post hoc only because of its unfalsifiable explanatory power.

Furthermore, the assessment of scientific theories depends on many other factors besides the ability to fit data. As philosophers of science often note, theories must also be evaluated on falsifiability, parsimony, the ability to produce provocative hypotheses that push a science forward, the existence of direct evidence for their constructs, freedom from conceptual problems in their apparatus, and integrability with theory in neighboring fields. As we have seen, amodal theories suffer problems in all these regards. They are unfalsifiable, they are not parsimonious, they lack direct support, they suffer conceptual problems such as transduction and symbol grounding, and it is not clear how to integrate them with theory in neighboring fields, such as perception and neuroscience. For all of these reasons, we should view amodal theories with caution and skepticism, and we should be open to alternatives.

1.3. The current status of perceptual symbol systems

The reemergence of cognition in the mid-twentieth century did not bring a reemergence of perceptually based cognition. As we have seen, representational schemes moved in a nonperceptual direction. Furthermore, theorists were initially hostile to imbuing modern cognitive theories with any perceptual character whatsoever. When Shepard and Metzler (1971) offered initial evidence for image-like representations in working memory (not long-term memory!), they encountered considerable resistance (e.g., Anderson 1978; Pylyshyn 1973; 1981). [See also Pylyshyn: “Computational Models and Empirical Constraints” BBS 1(1) 1978; “Computation and Cognition” BBS 3(1) 1980; “Is Vision Continuous with Cognition?” BBS 22(3) 1999.] When Kosslyn (1980) presented his theory of imagery, he argued adamantly that permanent representations in long-term memory are amodal, with perceptual images existing only temporarily in working memory (see also Kosslyn 1976).

The reasons for this resistance are not entirely clear. One factor could be lingering paranoia arising from the attacks of behaviorists and ordinary language philosophers. Another factor could be more recent criticisms of imagery in philosophy, some of which will be addressed later (e.g., Dennett 1969; Fodor 1975; Geach 1957). Perhaps the most serious factor has been uncharitable characterizations of perceptual cognition that fail to appreciate its potential. Critics often base their attacks on weak formulations of the perceptual approach and underestimate earlier theorists. As a result, perceptual theories of knowledge are widely misunderstood.

Consider some of the more common misunderstandings: perceptual theories of knowledge are generally believed to contain holistic representations instead of componential representations that exhibit productivity. These theories are widely believed to contain only conscious mental images, not unconscious representations. The representations in these theories are often assumed to arise only in the sensory modalities, not in other modalities of experience, such as proprioception and introspection. These theories are typically viewed as containing only static representations, not dynamic ones. These theories are generally construed as failing to support the propositions that underlie description and interpretation. And these theories are often assumed to include only empirical collections of sense data, not genetically constrained mechanisms.

Careful readings of earlier thinkers, however, indicate that perceptual theories of knowledge often go considerably beyond this simplistic stereotype. Many philosophers, for example, have assumed that perceptual representations are componential and produce representations productively (e.g., Locke, Russell, Price). Many have assumed that unconscious representations, then referred to as “dispositions” and “schemata,” produce conscious images (e.g., Locke, Kant, Reid, Price). Many have assumed that images can reflect nonsensory experience, most importantly introspection or “reflection” (e.g., Locke, Hume, Kant, Reid). Many have assumed that images can support the type-token mappings that underlie propositions (e.g., Locke, Reid, Russell, Price). Many have assumed that native mechanisms interpret and organize images (e.g., Kant, Reid). All have assumed that images can be dynamic, not just static, representing events as well as snapshots of time.

As these examples suggest, perceptual theories of knowledge should be judged on the basis of their strongest members, not their weakest. My intent here is to develop a powerful theory of perceptual symbols in the contexts of cognitive science and neuroscience. As we shall see, this type of theory can exhibit the strengths of amodal symbol systems while avoiding their problems.

More and more researchers are developing perceptual theories of cognition. In linguistics, cognitive linguists have made the perceptual character of knowledge a central assumption of their theories (e.g., Fauconnier 1985; 1997; Jackendoff 1987; Johnson 1987; Lakoff 1987; 1988; Lakoff & Johnson 1980; Lakoff & Turner 1989; Langacker 1986; 1987; 1991; 1997; Sweetser 1990; Talmy 1983; 1988; Turner 1996). In psychology, these researchers include Paivio (1971; 1986), Miller and Johnson-Laird (1976), Huttenlocher (1973; 1976), Shannon (1987), J. Mandler (1992), Tomasello (1992), L. Smith (L. Smith & Heise 1992; L. Smith et al. 1992; Jones & L. Smith 1993), Gibbs (1994), Glenberg (1997), Goldstone (Goldstone 1994; Goldstone & Barsalou 1998), Wu (1995), Solomon (1997), MacWhinney (1998), and myself (Barsalou 1993; Barsalou & Prinz 1997; Barsalou et al. 1993; in press). In philosophy, these researchers include Barwise and Etchemendy (1990; 1991), Nersessian (1992), Peacocke (1992), Thagard (1992), Davies and Stone (1995), Heal (1996), Newton (1996), Clark (1997), and Prinz (1997; Prinz & Barsalou, in press a). In artificial intelligence, Glasgow (1993) has shown that perceptual representations can increase computational power substantially, and other researchers are grounding machine symbols in sensory-motor events (e.g., Bailey et al. 1997; Cohen et al. 1997; Rosenstein & Cohen 1998). Many additional researchers have considered the role of perceptual representations in imagery (e.g., Farah 1995; Finke 1989; Kosslyn 1994; Shepard & Cooper 1982; Tye 1991), but the focus here is on perceptual representations in long-term knowledge.

1.4. Recording systems versus conceptual systems

It is widely believed that perceptually based theories of knowledge do not have sufficient expressive power to implement a fully functional conceptual system. As described earlier (sect. 1.2.1), a fully functional conceptual system represents both types and tokens, it produces categorical inferences, it combines symbols productively to produce limitless conceptual structures, it produces propositions by binding types to tokens, and it represents abstract concepts. The primary purpose of this target article is to demonstrate that perceptual symbol systems can implement these functions naturally and powerfully.

The distinction between a recording system and a conceptual system is central to this task (Dretske 1995; Haugeland 1991). Perceptually based theories of knowledge are typically construed as recording systems. A recording system captures physical information by creating attenuated (not exact) copies of it, as exemplified by photographs, videotapes, and audiotapes. Notably, a recording system does not interpret what each part of a recording contains – it simply creates an attenuated copy. For example, a photo of a picnic simply records the light present at each point in the scene without interpreting the types of entities present.

In contrast, a conceptual system interprets the entities in a recording. In perceiving a picnic, the human conceptual system might construe perceived individuals as instances of tree, table, watermelon, eat, above, and so forth. To accomplish this, the conceptual system binds specific tokens in perception (i.e., individuals) to knowledge for general types of things in memory (i.e., concepts). Clearly, a system that only records perceptual experience cannot construe individuals in this manner – it only records them in the holistic context of an undifferentiated event.

A conceptual system has other properties as well. First, it is inferential, allowing the cognitive system to go beyond perceptual input. Theorists have argued for years that the primary purpose of concepts is to provide categorical inferences about perceived individuals. Again, this is not something that recording systems accomplish. How does a photo of a dog go beyond what it records to provide inferences about the individual present? Second, a conceptual system is productive in the sense of being able to construct complex concepts from simpler ones. This, too, is not something possible with recording systems. How could a photo of some snow combine with a photo of a ball to form the concept of snowball? Third, a conceptual system supports the formulation of propositions, where a proposition results from binding a concept (type) to an individual (token) in a manner that is true or false. Again, this is something that lies beyond the power of recording systems. How does a photo of a dog implement a binding between a concept and an individual?
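The contrast can be summarized in a brief, purely hypothetical sketch (the data structures and names below are assumptions for exposition, not constructs from the target article): a recording is an uninterpreted copy, whereas a conceptual interpretation binds perceived tokens to types and thereby licenses inferences that go beyond the input.

```python
# A recording system: an attenuated, uninterpreted copy of the input,
# e.g., downsampled light intensities from the picnic scene.
photo = [[0.2, 0.8, 0.4],
         [0.7, 0.1, 0.6]]

# A conceptual system: perceived individuals (tokens) bound to types,
# yielding propositions that can be true or false.
propositions = [
    ("individual-1", "tree"),
    ("individual-2", "watermelon"),
]

# Type knowledge supports categorical inferences about the individuals.
type_inferences = {"watermelon": ["is edible", "has seeds"],
                   "tree": ["provides shade"]}

for token, concept in propositions:
    for inference in type_inferences.get(concept, []):
        print(f"{token} ({concept}) {inference}")   # goes beyond the recording
```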

As long as perceptually based theories of knowledge are viewed as recording systems, they will never be plausible, much less competitive. To become plausible and competitive, a perceptually based theory of knowledge must exhibit the properties of a conceptual system. The primary purpose of this target article is to demonstrate that this is possible.

Of course, it is important to provide empirical support for such a theory as well. Various sources of empirical evidence will be offered throughout the paper, especially in section 4, and further reports of empirical support are forthcoming (Barsalou et al., in press; Solomon & Barsalou 1999a; 1999b; Wu & Barsalou 1999). However, the primary support here will be of a theoretical nature. Because so few theorists currently believe that a perceptually based theory of knowledge could possibly have the requisite theoretical properties, it is essential to demonstrate that it can. Once this has been established, an empirical case can follow.

1.5. Overview

The remainder of this paper presents a theory of perceptual symbols. Section 2 presents six core properties that implement a basic conceptual system: perceptual symbols are neural representations in sensory-motor areas of the brain (sect. 2.1); they represent schematic components of perceptual experience, not entire holistic experiences (sect. 2.2); they are multimodal, arising across the sensory modalities, proprioception, and introspection (sect. 2.3). Related perceptual symbols become integrated into a simulator that produces limitless simulations of a perceptual component (e.g., red, lift, hungry; sect. 2.4). Frames organize the perceptual symbols within a simulator (sect. 2.5), and words associated with simulators provide linguistic control over the construction of simulations (sect. 2.6).

Section 3 presents four further properties, derived from the six core properties, that implement a fully functional conceptual system: simulators can be combined combinatorially and recursively to implement productivity (sect. 3.1); they can become bound to perceived individuals to implement propositions (sect. 3.2). Because perceptual symbols reside in sensory-motor systems, they implement variable embodiment, not functionalism (sect. 3.3). Using complex simulations of combined physical and introspective events, perceptual symbol systems represent abstract concepts (sect. 3.4).

Section 4 sketches implications of this approach. Viewing knowledge as grounded in sensory-motor areas changes how we think about basic cognitive processes, including categorization, concepts, attention, working memory, long-term memory, language, problem solving, decision making, skill, reasoning, and formal symbol manipulation (sect. 4.1). This approach also has implications for evolution and development (sect. 4.2), neuroscience (sect. 4.3), and artificial intelligence (sect. 4.4).

2. Core properties

The properties of this theory will not be characterized formally, nor will they be grounded in specific neural mechanisms. Instead, this formulation of the theory should be viewed as a high-level functional account of how the brain could implement a conceptual system using sensory-motor mechanisms. Once the possibility of such an account has been established, later work can develop computational implementations and ground them more precisely in neural systems.

Because this target article focuses on the high-level architecture of perceptual symbol systems, it leaves many details unspecified. The theory does not specify the features of perception, or why attention focuses on some features but not others. The theory does not address how the cognitive system divides the world into categories, or how abstraction processes establish categorical knowledge. The theory does not explain how the fit between one representation and another is computed, or how constraints control the combination of concepts. Notably, these issues remain largely unresolved in all theories of knowledge – not just perceptual symbol systems – thereby constituting some of the field’s significant challenges. To provide these missing aspects of the theory would exceed the scope of this article, both in length and ambition. Instead, the goal is to formulate the high-level architecture of perceptual symbol systems, which may well provide leverage in resolving these other issues. From here on, footnotes indicate critical aspects of the theory that remain to be developed.

Finally, this target article proposes a theory of knowledge, not a theory of perception. Although the theory relies heavily on perception, it remains largely agnostic about the nature of perceptual mechanisms. Instead, the critical claim is that whatever mechanisms happen to underlie perception, an important subset will underlie knowledge as well.

2.1. Neural representations in sensory-motor systems

Perceptual symbols are not like physical pictures; nor are they mental images or any other form of conscious subjective experience. As natural and traditional as it is to think of perceptual symbols in these ways, this is not the form they take here. Instead, they are records of the neural states that underlie perception. During perception, systems of neurons in sensory-motor regions of the brain capture information about perceived events in the environment and in the body. At this level of perceptual analysis, the information represented is relatively qualitative and functional (e.g., the presence or absence of edges, vertices, colors, spatial relations, movements, pain, heat). The neuroscience literature on sensory-motor systems is replete with accounts of this neural architecture (e.g., Bear et al. 1996; Gazzaniga et al. 1998; Zeki 1993). There is little doubt that the brain uses active configurations of neurons to represent the properties of perceived entities and events.

This basic premise of modern perceptual theory underlies the present theory of perceptual symbol systems: a perceptual symbol is a record of the neural activation that arises during perception. Essentially the same assumption also underlies much current work in imagery: common neural systems underlie imagery and perception (e.g., Crammond 1997; Deschaumes-Molinaro et al. 1992; Farah 1995; Jeannerod 1994; 1995; Kosslyn 1994; Zatorre et al. 1996). The proposal here is stronger, however, further assuming that the neural systems common to imagery and perception underlie conceptual knowledge as well.

This claim by no means implies that identical systems underlie perception, imagery, and knowledge. Obviously, they must differ in important ways. For example, Damasio (1989) suggests that convergence zones integrate information in sensory-motor maps to represent knowledge. More generally, associative areas throughout the brain appear to play this integrative role (Squire et al. 1993). Although mechanisms outside sensory-motor systems enter into conceptual knowledge, perceptual symbols always remain grounded in these systems. Complete transductions never occur whereby amodal representations that lie in associative areas totally replace modal representations. Thus, Damasio (1989) states that convergence zones “are uninformed as to the content of the representations they assist in attempting to reconstruct. The role of convergence zones is to enact formulas for the reconstitution of fragment-based momentary representations of entities or events in sensory and motor cortices” (p. 46).5

2.1.1. Conscious versus unconscious processing. Although neural representations define perceptual symbols, they may produce conscious counterparts on some occasions. On other occasions, however, perceptual symbols function unconsciously, as during preconscious processing and automatized skills. Most importantly, the basic definition of perceptual symbols resides at the neural level: unconscious neural representations – not conscious mental images – constitute the core content of perceptual symbols.6

Both the cognitive and neuroscience literatures support this distinction between unconscious neural representations and optional conscious counterparts. In the cognitive literature, research on preconscious processing indicates that conscious states may not accompany unconscious processing, and that if they do, they follow it (e.g., Marcel 1983a; 1983b; Velmans 1991). Similarly, research on skill acquisition has found that conscious awareness falls away as automaticity develops during skill acquisition, leaving unconscious mechanisms largely in control (e.g., Schneider & Shiffrin 1977; Shiffrin 1988; Shiffrin & Schneider 1977). Researchers have similarly found that conscious experience often fails to reflect the unconscious mechanisms controlling behavior (e.g., Nisbett & Wilson 1977). In the neuroscience literature, research on blindsight indicates that unconscious processing can occur in the absence of conscious visual images (e.g., Cowey & Stoerig 1991; Weiskrantz 1986; see also Campion, Lotto & Smith: “Is Blindsight an Effect of Scattered Light, Spared Cortex, and Near-Threshold Vision?” BBS 2(3) 1983). Similarly, conscious states typically follow unconscious states when processing sensations and initiating actions, rather than preceding them (Dennett & Kinsbourne 1992; Libet 1982; 1985). Furthermore, different neural mechanisms appear responsible for producing conscious and unconscious processing (e.g., Farah & Feinberg 1997; Gazzaniga 1988; Schacter et al. 1988).

Some individuals experience little or no imagery. By distinguishing unconscious perceptual processing from conscious perceptual experience, we can view such individuals as people whose unconscious perceptual processing underlies cognition with little conscious awareness. If human knowledge is inherently perceptual, there is no a priori reason it must be represented consciously.

2.2. Schematic perceptual symbols

A perceptual symbol is not the record of the entire brain state that underlies a perception. Instead, it is only a very small subset that represents a coherent aspect of the state. This is an assumption of many older theories (e.g., Locke 1690/1959), as well as many current ones (e.g., Langacker 1986; J. Mandler 1992; Talmy 1983). Rather than containing an entire holistic representation of a perceptual brain state, a perceptual symbol contains only a schematic aspect.

The schematic nature of perceptual symbols falls out naturally from two attentional assumptions that are nearly axiomatic in cognitive psychology: Selective attention (1) isolates information in perception, and (2) stores the isolated information in long-term memory. First, consider the role of selective attention in isolating features. During a perceptual experience, the cognitive system can focus attention on a meaningful, coherent aspect of perception. On perceiving an array of objects, attention can focus on the shape of one object, filtering out its color, texture, and position, as well as the surrounding objects. From decades of work on attention, we know that people have a sophisticated and flexible ability to focus attention on features (e.g., Norman 1976; Shiffrin 1988; Treisman 1969), as well as on the relations between features (e.g., Treisman 1993). Although nonselected information may not be filtered out completely, there is no doubt that it is filtered to a significant extent (e.g., Garner 1974; 1978; Melara & Marks 1990).7

Once an aspect of perception has been selected, it has a very high likelihood of being stored in long-term memory. On selecting the shape of an object, attention stores information about it. From decades of work on episodic memory, it is clear that where selective attention goes, long-term storage follows, at least to a substantial extent (e.g., Barsalou 1995; F. Craik & Lockhart 1972; Morris et al. 1977; D. Nelson et al. 1979). Research on the acquisition of automaticity likewise shows that selective attention controls storage (Compton 1995; Lassaline & Logan 1993; Logan & Etherton 1994; Logan et al. 1996). Although some nonselected information may be stored, there is no doubt that it is stored to a much lesser extent than selected information. Because selective attention focuses constantly on aspects of experience in this manner, large numbers of schematic representations become stored in memory. As we shall see later, these representations can serve basic symbolic functions. Section 3.1 demonstrates that these representations combine productively to implement compositionality, and section 3.2 demonstrates that they acquire semantic interpretations through the construction of propositions. The use of “perceptual symbols” to this point anticipates these later developments of the theory.

Finally, this symbol formation process should be viewed in terms of the neural representations described in section 2.1. If a configuration of active neurons underlies a perceptual state, selective attention operates on this neural representation, isolating a subset of active neurons. If selective attention focuses on an object’s shape, the neurons representing this shape are selected, and a record of their activation is stored. Such storage could reflect the Hebbian strengthening of connections between active neurons (e.g., Pulvermüller 1999), or the indirect integration of active neurons via an adjacent associative area (e.g., Damasio 1989). Conscious experience may accompany the symbol formation process and may be necessary for this process to occur initially, falling away only as a symbol’s processing becomes automatized with practice. Most fundamentally, however, the symbol formation process selects and stores a subset of the active neurons in a perceptual state.

2.2.1. Perceptual symbols are dynamic, not discrete. Once a perceptual symbol is stored, it does not function rigidly as a discrete symbol. Because a perceptual symbol is an associative pattern of neurons, its subsequent activation has dynamical properties. Rather than being reinstated exactly on later occasions, its activations may vary widely. The subsequent storage of additional perceptual symbols in the same association area may alter connections in the original pattern, causing subsequent activations to differ. Different contexts may distort activations of the original pattern, as connections from contextual features bias activation toward some features in the pattern more than others. In these respects, a perceptual symbol is an attractor in a connectionist network. As the network changes over time, the attractor changes. As the context varies, activation of the attractor covaries. Thus, a perceptual symbol is neither rigid nor discrete.
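To illustrate the attractor idea, here is a minimal Hopfield-style sketch (an expository assumption; the target article does not commit to this particular architecture). A stored pattern plays the role of a perceptual symbol: distorted contextual cues are pulled back toward it, and any later change to the weights would shift the attractor itself.

```python
# Minimal attractor sketch: one stored pattern and sign-threshold updates.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1])        # the stored perceptual symbol
W = np.outer(pattern, pattern).astype(float)     # Hebbian-style weights
np.fill_diagonal(W, 0.0)

def settle(cue, W, steps=5):
    state = cue.astype(float)
    for _ in range(steps):                       # repeated threshold updates
        state = np.sign(W @ state)
    return state

cue_a = np.array([1, -1, 1, -1, -1, 1])          # two distorted contextual cues
cue_b = np.array([1, 1, 1, 1, -1, -1])
print(settle(cue_a, W))                          # both settle into the stored
print(settle(cue_b, W))                          # pattern's basin of attraction
```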

2.2.2. Perceptual symbols are componential, not holistic. Theorists often view perceptual representations as conscious holistic images. This leads to various misunderstandings about perceptual theories of knowledge. One is that it becomes difficult to see how a perceptual representation could be componential. How can one construct a schematic image of a shape without orientation combined holistically? If one imagines a triangle consciously, is orientation not intrinsically required in a holistic image?

It may be true that conscious images must contain certain conjunctions of dimensions. Indeed, it may be difficult or impossible to construct a conscious image that breaks apart certain dimensions, such as shape and orientation. If a perceptual symbol is defined as an unconscious neural representation, however, this is not a problem. The neurons for a particular shape could be active, while no neurons for a particular orientation are. During the unconscious processing of perceptual symbols, the perceptual symbol for a particular shape could represent the shape componentially, while perceptual symbols for other dimensions, such as orientation, remain inactive. The neuroanatomy of vision supports this proposal, given the presence of distinct channels in the visual system that process different dimensions, such as shape, orientation, color, movement, and so forth.

When conscious images are constructed for a perceptual symbol, the activation of other dimensions may often be required. For example, consciously imagining a triangle may require that it have a particular orientation. [See Edelman: “Representation is Representation of Similarities” BBS 21(4) 1998.] However, these conscious representations need not be holistic in the sense of being irreducible to schematic components. For example, Kosslyn and his colleagues have shown that when people construct conscious images, they construct them sequentially, component by component, not holistically in a single step (Kosslyn et al. 1988; Roth & Kosslyn 1988; see also Tye 1993).

2.2.3. Perceptual symbols need not represent specific individuals. Contrary to what some thinkers have argued, perceptual symbols need not represent specific individuals (e.g., Berkeley 1710/1982; Hume 1739/1978). Because of the schematicity assumption and its implications for human memory, we should be surprised if the cognitive system ever contains a complete representation of an individual. Furthermore, because of the extensive forgetting and reconstruction that characterize human memory, we should again be surprised if the cognitive system ever remembers an individual with perfect accuracy, during either conscious or unconscious processing. Typically, partial information is retrieved, and some information may be inaccurate.

As we shall see later, the designation of a perceptual symbol determines whether it represents a specific individual or a kind – the resemblance of a symbol to its referent is not critical (sect. 3.2.8). Suffice it to say for now that the same perceptual symbol can represent a variety of referents, depending on how causal and contextual factors link it to referents in different contexts (e.g., Dretske 1995; Fodor 1975; Goodman 1976; Schwartz 1981). Across different pragmatic contexts, a schematic drawing of a generic skyscraper could stand for the Empire State Building, for skyscrapers in general, or for clothing made in New York City. A drawing of the Empire State Building could likewise stand for any of these referents. Just as different physical replicas can stand for each of these referents in different contexts, perceptual representations of them can do so as well (Price 1953). Thus, the ability of a perceptual symbol to stand for a particular individual need not imply that it must represent an individual.

2.2.4. Perceptual symbols can be indeterminate. Theorists sometimes argue that because perceptual representations are picture-like, they are determinate. It follows that if human conceptualizations are indeterminate, perceptual representations cannot represent them (e.g., Dennett 1969; but see Block 1983). For example, it has been argued that people’s conceptualizations of a tiger are indeterminate in its number of stripes; hence they must not be representing it perceptually. To my knowledge, it has not been verified empirically that people’s conceptualizations of tigers are in fact indeterminate. If this is true, though, a perceptual representation of a tiger’s stripes could be indeterminate in several ways (Schwartz 1981; Tye 1993). For example, the stripes could be blurred in an image, so that they are difficult to count. Or, if a perceptual symbol for stripes had been extracted schematically from the perception of a tiger, it might not contain all of the stripes but only a patch. In later representing the tiger, this free-floating symbol might be retrieved to represent the fact that the tiger was striped, but, because it was only a patch, it would not imply a particular number of stripes in the tiger. If this symbol were used to construct stripes on the surface of a simulated tiger, the tiger would then have a determinate number of stripes, but the number might differ from the original tiger, assuming for any number of reasons that the rendering of the tiger’s surface did not proceed veridically.

The two solutions considered thus far assume conscious perceptual representations of a tiger. Unconscious neural representations provide another solution. It is well known that high-level neurons in perceptual systems can code information qualitatively. For example, a neuron can code the presence of a line without coding its specific length, position, or orientation. Similarly, a neuron can code the spatial frequency of a region independently of its size or location. Imagine that certain neurons in the visual system respond to stripes independently of their number (i.e., detectors for spatial frequency). In perceiving a tiger, if such detectors fire and become stored in a perceptual representation, they code a tiger’s number of stripes indeterminately, because they simply respond to striped patterning and do not capture any particular number of stripes.

Qualitatively oriented neurons provide a perceptual representation system with the potential to represent a wide variety of concepts indeterminately. Consider the representation of triangle. Imagine that certain neurons represent the presence of lines independently of their length, position, and orientation. Further imagine that other neurons represent vertices between pairs of lines independently of the angle between them. Three qualitative detectors for lines, coupled spatially with three qualitative detectors for vertices that join them, could represent a generic triangle. Because all of these detectors are qualitative, the lengths of the lines and the angles between them do not matter; they represent all instances of triangle simultaneously. In this manner, qualitatively specified neurons support perceptual representations that are not only indeterminate but also generic.8
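To make the notion of qualitative detectors concrete, the following minimal sketch (in Python, with all class and function names invented for illustration; it is not part of the target article’s proposal) shows how three line detectors and three vertex detectors, none of which record metric values, jointly accept triangles of any size or shape.

```python
# Illustrative sketch only: qualitative detectors fire on the presence of a
# feature while discarding metric detail (length, position, angle).
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Line:
    a: tuple  # one endpoint as (x, y)
    b: tuple  # the other endpoint as (x, y)

def line_detector(line):
    """Fires whenever a line is present; its length and orientation are ignored."""
    return line.a != line.b

def vertex_detector(line1, line2):
    """Fires whenever two lines share exactly one endpoint; the angle is ignored."""
    return len({line1.a, line1.b} & {line2.a, line2.b}) == 1

def generic_triangle(lines):
    """Three qualitative line detectors plus three qualitative vertex detectors.

    Because no detector stores lengths or angles, the representation is
    indeterminate: triangles of any size or shape satisfy it equally."""
    if len(lines) != 3 or not all(line_detector(l) for l in lines):
        return False
    return all(vertex_detector(l1, l2) for l1, l2 in combinations(lines, 2))

# Two very different specific triangles satisfy the same generic representation.
t1 = [Line((0, 0), (4, 0)), Line((4, 0), (0, 3)), Line((0, 3), (0, 0))]
t2 = [Line((0, 0), (1, 0)), Line((1, 0), (0.2, 0.9)), Line((0.2, 0.9), (0, 0))]
print(generic_triangle(t1), generic_triangle(t2))  # True True
```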

2.3. Multimodal perceptual symbols

The symbol formation process just described in section 2.2 can operate on any aspect of perceived experience. Not only does it operate on vision, it operates on the other four sensory modalities (audition, haptics, olfaction, and gustation), as well as on proprioception and introspection. In any modality, selective attention focuses on aspects of perceived experience and stores records of them in long-term memory, which later function as symbols. As a result, a wide variety of symbols is stored. From audition, people acquire perceptual symbols for speech and the various sounds heard in everyday experience. From touch, people acquire perceptual symbols for textures and temperatures. From proprioception, people acquire perceptual symbols for hand movements and body positions.

Presumably, each type of symbol becomes established in its respective brain area. Visual symbols become established in visual areas, auditory symbols in auditory areas, proprioceptive symbols in motor areas, and so forth. The neuroscience literature on category localization supports this assumption. When a sensory-motor area is damaged, categories that rely on it during the processing of perceived instances exhibit deficits in conceptual processing (e.g., Damasio & Damasio 1994; Gainotti et al. 1995; Pulvermüller 1999; Warrington & Shallice 1984). For example, damage to visual areas disrupts the conceptual processing of categories specified by visual features (e.g., birds). Analogously, damage to motor and somatosensory areas disrupts the conceptual processing of categories specified by motor and somatosensory features (e.g., tools). Recent neuroimaging studies on people with intact brains provide converging evidence (e.g., A. Martin et al. 1995; 1996; Pulvermüller 1999; Rösler et al. 1995). When normal subjects perform conceptual tasks with animals, visual areas are highly active; when they perform conceptual tasks with tools, motor and somatosensory areas are highly active. Analogous findings have also been found for the conceptual processing of color and space (e.g., DeRenzi & Spinnler 1967; Levine et al. 1985; Rösler et al. 1995).

As these findings illustrate, perceptual symbols are multimodal, originating in all modes of perceived experience, and they are distributed widely throughout the modality-specific areas of the brain. It should now be clear that “perceptual” is not being used in its standard sense here. Rather than referring only to the sensory modalities, as it usually does, it refers much more widely to any aspect of perceived experience, including proprioception and introspection.

2.3.1. Introspection. Relative to sensory-motor processing in the brain, introspective processing is poorly understood. Functionally, three types of introspective experience appear especially important: representational states, cognitive operations, and emotional states. Representational states include the representation of an entity or event in its absence, as well as construing a perceived entity as belonging to a category. Cognitive operations include rehearsal, elaboration, search, retrieval, comparison, and transformation. Emotional states include emotions, moods, and affects. In each case, selective attention can focus on an aspect of an introspective state and store it in memory for later use as a symbol. For example, selective attention could focus on the ability to represent something in its absence, filtering out the particular entity or event represented and storing a schematic representation of a representational state. Similarly, selective attention could focus on the process of comparison, filtering out the particular entities compared and storing a schematic representation of the comparison process. During an emotional event, selective attention could focus on emotional feelings, filtering out the specific circumstances leading to the emotion, and storing a schematic representation of the experience’s “hot” components.

Much remains to be learned about the neural bases of introspection, although much is known about the neural bases of emotion (e.g., Damasio 1994; LeDoux 1996). To the extent that introspection requires attention and working memory, the neural systems that underlie them may be central (e.g., Jonides & E. Smith 1997; Posner 1995; Rushworth & Owen 1998). Like sensory-motor systems, introspection may have roots in evolution and genetics. Just as genetically constrained dimensions underlie vision (e.g., color, shape, depth), genetically constrained dimensions may also underlie introspection. Across individuals and cultures, these dimensions may attract selective attention, resulting in the extraction of similar perceptual symbols for introspection. Research on mental verbs in psychology and linguistics suggests what some of these dimensions might be (e.g., Cacciari & Levorato 1994; Levin 1995; Schwanenflugel et al. 1994). For example, Schwanenflugel et al. report that the dimensions of perceptual/conceptual, certain/uncertain, and creative/noncreative organize mental verbs such as see, reason, know, guess, and compare. The fact that the same dimensions arise cross-culturally suggests that different cultures conceptualize introspection similarly (e.g., D’Andrade 1987; Schwanenflugel et al., in press).

2.4. Simulators and simulations

Perceptual symbols do not exist independently of one another in long-term memory. Instead, related symbols become organized into a simulator that allows the cognitive system to construct specific simulations of an entity or event in its absence (analogous to the simulations that underlie mental imagery). Consider the process of storing perceptual symbols while viewing a particular car. As one looks at the car from the side, selective attention focuses on various aspects of its body, such as wheels, doors, and windows. As selective attention focuses on these aspects, the resulting memories are integrated spatially, perhaps using an object-centered reference frame. Similarly, as the perceiver moves to the rear of the car, to the other side, and to the front, stored perceptual records likewise become integrated into this spatially organized system. As the perceiver looks under the hood, peers into the trunk, and climbs inside the passenger area, further records become integrated. As a result of organizing perceptual records spatially, perceivers can later simulate the car in its absence. They can anticipate how the car would look from its side if they were to move around the car in the same direction as before; or they can anticipate how the car would look from the front if they were to go around the car in the opposite direction. Because they have integrated the perceptual information extracted earlier into an organized system, they can later simulate coherent experiences of the object.9

A similar process allows people to simulate event sequences. Imagine that someone presses the gas pedal and hears the engine roar, then lets up and hears the engine idle. Because the perceptual information stored for each subevent is not stored independently but is instead integrated temporally, the perceiver can later simulate this event sequence. Furthermore, the simulated event may contain multimodal aspects of experience, to the extent that they received selective attention. Besides visual information, the event sequence might include the proprioceptive experience of pressing the pedal, the auditory experience of hearing the engine roar, the haptic experience of feeling the car vibrating, and mild excitement about the power experienced.

As described later (sect. 2.5), the perceptual symbols extracted from an entity or event are integrated into a frame that contains perceptual symbols extracted from previous category members. For example, the perceptual symbols extracted from a car are integrated into the frame for car, which contains perceptual symbols extracted from previous instances. After processing many cars, a tremendous amount of multimodal information becomes established that specifies what it is like to experience cars sensorially, proprioceptively, and introspectively. In other words, the frame for car contains extensive multimodal information of what it is like to experience this type of thing.

A frame is never experienced directly in its entirety. Instead, subsets of frame information become active to construct specific simulations in working memory (sects. 2.4.3, 2.5.2). For example, a subset of the car frame might become active to simulate one particular experience of a car. On other occasions, different subsets might become active to simulate other experiences. Thus, a simulator contains two levels of structure: (1) an underlying frame that integrates perceptual symbols across category instances, and (2) the potentially infinite set of simulations that can be constructed from the frame. As we shall see in later sections, these two levels of structure support a wide variety of important conceptual functions.10
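The two-level structure can be pictured with a small data-structure sketch (Python; every name here is invented for illustration and carries no theoretical weight): the frame is a stable long-term store of schematic components, and simulations are partial working-memory constructions drawn from it.

```python
# Illustrative sketch: a simulator = a frame (long-term structure) plus the
# open-ended set of simulations (working-memory constructions) it can produce.
import random

class Simulator:
    def __init__(self, category):
        self.category = category
        # frame: subregion -> list of stored specializations (perceptual symbols)
        self.frame = {}

    def integrate(self, subregion, specialization):
        """Add a schematic perceptual symbol extracted from one category instance."""
        self.frame.setdefault(subregion, []).append(specialization)

    def simulate(self, subregions=None):
        """Construct one partial simulation: a subset of subregions, each filled
        with one of its stored specializations. Different calls yield different
        conceptualizations of the same category."""
        chosen = subregions or random.sample(list(self.frame), k=min(3, len(self.frame)))
        return {r: random.choice(self.frame[r]) for r in chosen if r in self.frame}

car = Simulator("car")
car.integrate("door", "red two-door panel")
car.integrate("door", "gray four-door panel")
car.integrate("wheel", "steel wheel with hubcap")
car.integrate("engine sound", "low idle")

print(car.simulate())                   # one conceptualization of car
print(car.simulate(["door", "wheel"]))  # another, constrained by context
```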

2.4.1. Caveats. Several caveats about simulators must be noted. First, a simulator produces simulations that are always partial and sketchy, never complete. As selective attention extracts perceptual symbols from perception, it never extracts all of the information that is potentially available. As a result, a frame is impoverished relative to the perceptions that produced it, as are the simulations constructed from it.

Second, simulations are likely to be biased and distorted in various ways, rarely, if ever, being completely veridical. The well-known principles of Gestalt organization provide good examples. When a linear series of points is presented visually, an underlying line is perceived. As a result, the stored perceptual information goes beyond what is objectively present, representing a line, not just the points. The process of completion similarly goes beyond the information present. When part of an object’s edge is occluded by a closer object, the edge is stored as complete, even though the entire edge was not perceived. Finally, when an imperfect edge exists on a perceived object, the perceptual system may idealize and store the edge as perfectly straight, because doing so simplifies processing. As a result, the storage of perceptual symbols may be nonveridical, as may the simulations constructed from them. McCloskey (1983) and Pylyshyn (1978) cite further distortions, and Tye (1993) suggests how perceptual simulation can explain them.

Third, a simulator is not simply an empirical collection of sense impressions but goes considerably further. Mechanisms with strong genetic constraints almost certainly play central roles in establishing, maintaining, and running simulators. For example, genetic predispositions that constrain the processing of space, objects, movement, and emotion underlie the storage of perceptual symbols and guide the simulation process (cf. Baillargeon 1995; E. Markman 1989; Spelke et al. 1992). Clearly, however, the full-blown realization of these abilities reflects considerable interaction with the environment (e.g., Elman et al. 1996). Thus, a simulator is both a “rational” and an “empirical” system, reflecting intertwined genetic and experiential histories.

2.4.2. Dispositions, schemata, and mental models. Simulators have important similarities with other constructs. In the philosophical literature, Lockean (1690/1959) dispositions and Kantian (1787/1965) schemata are comparable concepts. Both assume that unconscious generative mechanisms produce specific images of entities and events that go beyond particular entities and events experienced in the past. Similar ideas exist in more recent literatures, including Russell (1919b), Price (1953), and Damasio (1994). In all cases, two levels of structure are proposed: a deep set of generating mechanisms produces an infinite set of surface images, with the former typically being unconscious and the latter conscious.

Mental models are also related to simulators although they are not identical (K. Craik 1943; Gentner & Stevens 1983; Johnson-Laird 1983). Whereas a simulator includes two levels of structure, mental models are roughly equivalent to only the surface level, namely, simulations of specific entities and events. Mental models tend not to address underlying generative mechanisms that produce a family of related simulations.

2.4.3. Concepts, conceptualizations, and categories. According to this theory, the primary goal of human learning is to establish simulators. During childhood, the cognitive system expends much of its resources developing simulators for important types of entities and events. Once individuals can simulate a kind of thing to a culturally acceptable degree, they have an adequate understanding of it. What is deemed a culturally competent grasp of a category may vary, but in general it can be viewed as the ability to simulate the range of multimodal experiences common to the majority of a culture’s members (cf. Romney et al. 1986). Thus, people have a culturally acceptable simulator for chair if they can construct multimodal simulations of the chairs typically encountered in their culture, as well as the activities associated with them.

In this theory, a concept is equivalent to a simulator. It is the knowledge and accompanying processes that allow an individual to represent some kind of entity or event adequately. A given simulator can produce limitless simulations of a kind, with each simulation providing a different conceptualization of it. Whereas a concept represents a kind generally, a conceptualization provides one specific way of thinking about it. For example, the simulator for chair can simulate many different chairs under many different circumstances, each comprising a different conceptualization of the category. For further discussion of this distinction between permanent knowledge of a kind in long-term memory and temporary representations of it in working memory, see Barsalou (1987; 1989; 1993; also see sect. 2.4.5).

Simulators do not arise in a vacuum but develop to track meaningful units in the world. As a result, knowledge can accumulate for each unit over time and support optimal interactions with it (e.g., Barsalou et al. 1993; 1998; Millikan 1998). Meaningful units include important individuals (e.g., family members, friends, personal possessions) and categories (e.g., natural kinds, artifacts, events), where a category is a set of individuals in the environment or introspection. Once a simulator becomes established in memory for a category, it helps identify members of the category on subsequent occasions, and it provides categorical inferences about them, as described next.11

2.4.4. Categorization, categorical inferences, and affordances. Tracking a category successfully requires that its members be categorized correctly when they appear. Viewing concepts as simulators suggests a different way of thinking about categorization. Whereas many theories assume that relatively static, amodal structures determine category membership (e.g., definitions, prototypes, exemplars, theories), simulators suggest a more dynamic, embodied approach: if the simulator for a category can produce a satisfactory simulation of a perceived entity, the entity belongs in the category. If the simulator cannot produce a satisfactory simulation, the entity is not a category member.12

Besides being dynamic, grounding categorization in perceptual symbols has another important feature: the knowledge that determines categorization is represented in roughly the same manner as the perceived entities that must be categorized. For example, the perceptual simulations used to categorize chairs approximate the actual perceptions of chairs. In contrast, amodal theories assume that amodal features in concepts are compared to perceived entities to perform categorization. Whereas amodal theories have to explain how two very different types of representation are compared, perceptual symbol systems simply assume that two similar representations are compared. As a natural side effect of perceiving a category’s members, perceptual knowledge accrues that can be compared directly to perceived entities during categorization.

On this view, categorization depends on both familiar and novel simulations. Each successful categorization stores a simulation of the entity categorized. If the same entity or a highly similar one is encountered later, it is assigned to the category because the perception of it matches an existing simulation in memory. Alternatively, if a novel entity is encountered that fails to match an existing simulation, constructing a novel simulation that matches the entity can establish membership. Explanation-based learning assumes a similar distinction between expertise and creativity in categorization (DeJong & Mooney 1986; T. Mitchell et al. 1986), as do theories of skill acquisition (Anderson 1993; Logan 1988; Newell 1990), although these approaches typically adopt amodal representations.

As an example, imagine that the simulator for triangle constructs three lines and connects their ends uniquely. Following experiences with previous triangles, simulations that match these instances become stored in the simulator. On encountering these triangles later, or highly similar ones, prestored simulations support rapid categorization, thereby implementing expertise. However, a very different triangle, never seen before, can also be categorized if the triangle simulator can construct a simulation of it (cf. Miller & Johnson-Laird 1976).
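The two routes to categorization just described (matching a stored simulation versus constructing a new one) can be sketched as follows; the code is a toy illustration in Python under an assumed similarity measure and construction function, not an implementation of the theory.

```python
# Illustrative sketch: categorization succeeds if a stored simulation matches
# the perceived entity (expertise) or if the simulator can construct a new
# simulation that matches it (creativity).

MATCH_THRESHOLD = 0.8  # assumed criterion for a "satisfactory" simulation

def similarity(simulation, percept):
    """Stand-in for a perceptual comparison; both arguments are feature dicts."""
    shared = sum(1 for k, v in simulation.items() if percept.get(k) == v)
    return shared / max(len(simulation), 1)

def categorize(simulator, percept):
    # Route 1: a previously stored simulation already matches the percept.
    for stored in simulator["stored_simulations"]:
        if similarity(stored, percept) >= MATCH_THRESHOLD:
            return True
    # Route 2: try to construct a novel simulation that matches the percept.
    novel = simulator["construct"](percept)
    if novel is not None and similarity(novel, percept) >= MATCH_THRESHOLD:
        simulator["stored_simulations"].append(novel)  # supports expertise next time
        return True
    return False

# A toy triangle simulator: it can simulate any closed three-sided figure.
triangle = {
    "stored_simulations": [{"sides": 3, "closed": True, "size": "small"}],
    "construct": lambda p: {"sides": 3, "closed": True, "size": p.get("size")}
                 if p.get("sides") == 3 and p.get("closed") else None,
}

print(categorize(triangle, {"sides": 3, "closed": True, "size": "small"}))  # True (familiar)
print(categorize(triangle, {"sides": 3, "closed": True, "size": "huge"}))   # True (novel)
print(categorize(triangle, {"sides": 4, "closed": True, "size": "small"}))  # False
```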

Categorization is not an end in itself but provides access to categorical inferences. Once an entity is categorized, knowledge associated with the category provides predictions about the entity’s structure, history, and behavior, and also suggests ways of interacting with it (e.g., Barsalou 1991; Ross 1996; see also sect. 3.2.2). In this theory, categorical inferences arise through simulation. Because a simulator contains a tremendous amount of multimodal knowledge about a category, it can simulate information that goes beyond that perceived in a categorized entity. On perceiving a computer from the front, the simulator for computer can simulate all sorts of things not perceived, such as the computer’s rear panel and internal components, what the computer will do when turned on, what tasks it can perform, how the keys will feel when pressed, and so forth. Rather than having to learn about the entity from scratch, a perceiver can run simulations that anticipate the entity’s structure and behavior and that suggest ways of interacting with it successfully.

Simulators also produce categorical inferences in the absence of category members. As described later, simulations provide people with a powerful ability to reason about entities and events in their absence (sects. 2.6, 3.1, 3.2, 4.1, 4.2). Simulations of future entities, such as a rental home, allow people to identify preparations that must be made in advance. Simulations of future events, such as asking a favor, allow people to identify optimal strategies for achieving success. To the extent that future category members are similar to previous category members, simulations of previous members provide reasonable inferences about future members.13

Deriving categorical inferences successfully requires that simulations preserve at least some of the affordances present in actual sensory-motor experiences with category members (cf. Gibson 1979; also see S. Edelman 1998). To the extent that simulations capture affordances from perception and action, successful reasoning about physical situations can proceed in their absence (Glenberg 1997; Glenberg et al. 1998b; Newton 1996). Agents can draw inferences that go beyond perceived entities, and they can plan intelligently for the future. While sitting in a restaurant and wanting to hide from someone entering, one could simulate that a newspaper on the table affords covering one’s face completely but that a matchbook does not. As a result of these simulations, the newspaper is selected to achieve this goal rather than the matchbook. Because the simulations captured the physical affordances correctly, the selected strategy works.

2.4.5. Concept stability. Equating concepts with simulators provides a solution to the problem of concept stability. Previous work demonstrates that conceptualizations of a category vary widely between and within individuals (Barsalou 1987; 1989; 1993). If different people conceptualize bird differently on a given occasion, and if the same individual conceptualizes bird differently across occasions, how can stability be achieved for this concept?

One solution is to assume that a common simulator for bird underlies these different conceptualizations, both between and within individuals. First, consider how a simulator produces stability within an individual. If a person’s different simulations of a category arise from the same simulator, then they can all be viewed as instantiating the same concept. Because the same simulator produced all of these simulations, it unifies them. Between individuals, the key issue concerns whether different people acquire similar simulators. A number of factors suggest that they should, including a common cognitive system, common experience with the physical world, and socio-cultural institutions that induce conventions (e.g., Newton 1996; Tomasello et al. 1993). Although two individuals may represent the same category differently on a given occasion, each may have the ability to simulate the other’s conceptualization. In an unpublished study, subjects almost always viewed other subjects’ conceptualizations of a category as correct, even though their individual conceptualizations varied widely. Each subject produced a unique conceptualization but accepted those of other subjects because they could be simulated. Furthermore, common contextual constraints during communication often drive two people’s simulations of a category into similar forms. In another unpublished study, conceptualizations of a category became much more stable both between and within subjects when constructed in a common context. Subjects shared similar simulators that produced similar conceptualizations when constrained adequately.

2.4.6. Cognitive penetration. The notion of a simulator is difficult to reconcile with the view that cognition does not penetrate perception (Fodor 1983). According to the impenetrability hypothesis, the amodal symbol system underlying higher cognition has little or no impact on processing in sensory-motor systems because these systems are modular and therefore impenetrable. In contrast, the construct of a simulator assumes that sensory-motor systems are deeply penetrable. Because perceptual symbols reside in sensory-motor systems, running a simulator involves a partial running of these systems in a top-down manner.

In an insightful BBS review of top-down effects in vision, Pylyshyn (1999) concludes that cognition only produces top-down effects indirectly through attention and decision making – it does not affect the content of vision directly. Contrary to this conclusion, however, much evidence indicates that cognition does affect the content of sensory-motor systems directly. The neuroscience literature on mental imagery demonstrates clearly that cognition establishes content in sensory-motor systems in the absence of physical input. In visual imagery, the primary visual cortex, V1, is often active, along with many other early visual areas (e.g., Kosslyn et al. 1995). In motor imagery, the primary motor cortex, M1, is often active, along with many other early motor areas (e.g., Crammond 1997; Deschaumes-Molinaro et al. 1992; Jeannerod 1994; 1995). Indeed, motor imagery not only activates early motor areas, it also stimulates spinal neurons, produces limb movements, and modulates both respiration and heart rate. When sharpshooters imagine shooting a gun, their entire body behaves similarly to actually doing so. In auditory imagery, activation has not yet been observed in the primary auditory cortex, but activation has been observed in other early auditory areas (e.g., Zatorre et al. 1996). These findings clearly demonstrate that cognition establishes content in sensory-motor systems in the absence of physical input.

A potential response is that mental imagery arises solely within sensory-motor areas – it is not initiated by cognitive areas. In this vein, Pylyshyn (1999) suggests that perceptual modules contain local memory areas that affect the content of perception in a top-down manner. This move undermines the impenetrability thesis, however, at least in its strong form. As a quick perusal of textbooks on cognition and perception reveals, memory is widely viewed as a basic cognitive process – not as a perceptual process. Many researchers would probably agree that once memory is imported into a sensory-motor system, cognition has been imported. Furthermore, to distinguish perceptual memory from cognitive memory, as Pylyshyn does, makes sense only if one assumes that cognition utilizes amodal symbols. Once one adopts the perspective of perceptual symbol systems, there is only perceptual memory, and it constitutes the representational core of cognition. In this spirit, Damasio (1989) argues eloquently that there is no sharp discontinuity between perceptual and cognitive memory. Instead, there is simply a gradient from posterior to anterior association areas in the complexity and specificity of the memories that they activate in sensory-motor areas. On Damasio’s view, memory areas both inside and outside a sensory-motor system control its feature map to implement cognitive representations. In this spirit, the remainder of this target article assumes that top-down cognitive processing includes all memory effects on perceptual content, including memory effects that originate in local association areas.

Nevertheless, Pylyshyn (1999) makes compelling arguments about the resiliency of bottom-up information in face-to-face competition with contradicting top-down information. For example, when staring at the Müller-Lyer illusion, one cannot perceive the horizontal lines as equivalent in length, even though one knows cognitively that they are. Rather than indicating impenetrability, however, this important observation may simply indicate that bottom-up information dominates top-down information when they conflict (except in the degenerate case of psychosis and other hallucinatory states, when top-down information does dominate bottom-up information). Indeed, Marslen-Wilson and Tyler (1980), although taking a nonmodular interactive approach, offer exactly this account of bottom-up dominance in speech recognition. Although top-down processing can penetrate speech processing, it is overridden when bottom-up information conflicts. If semantic and syntactic knowledge predict that “The cowboy climbed into the ———” ends with “saddle,” but the final word is actually “jacuzzi,” then “jacuzzi” overrides “saddle.”

On this view, sensory-motor systems are penetrable but not always. When bottom-up information conflicts with top-down information, the former usually dominates. When bottom-up information is absent, however, top-down information penetrates, as in mental imagery. Perhaps most critically, when bottom-up and top-down information are compatible, top-down processing again penetrates, but in subtle manners that complement bottom-up processing. The next section (sect. 2.4.7) reviews several important phenomena in which bottom-up and top-down processing simultaneously activate sensory-motor representations as they cooperate to perceive physical entities (i.e., implicit memory, filling-in, anticipation, interpretation). Recent work on simultaneous imagery and perception shows clearly that these two processes work well together when compatible (e.g., Craver-Lemley & Reeves 1997; Gilden et al. 1995).

Perhaps the critical issue in this debate concerns the definition of cognition. On Pylyshyn’s view, cognition concerns semantic beliefs about the external world (i.e., the belief that the horizontal lines are the same length in the Müller-Lyer illusion). However, this is a far narrower view of cognition than most cognitive psychologists take, as evidenced by journals and texts in the field. Judging from these sources, cognitive psychologists believe that a much broader array of processes and representations – including memory – constitutes cognition.

Ultimately, as Pylyshyn suggests, identifying the mechanisms that underlie intelligence should be our primary goal, from the most preliminary sensory processes to the most abstract thought processes. Where we actually draw the line between perception and cognition may not be all that important, useful, or meaningful. In this spirit, perceptual symbol systems attempt to characterize the mechanisms that underlie the human conceptual system. As we have seen, the primary thesis is that sensory-motor systems represent not only perceived entities but also conceptualizations of them in their absence. From this perspective, cognition penetrates perception when sensory input is absent, or when top-down inferences are compatible with sensory input.

2.4.7. A family of representational processes. Evolution often capitalizes on existing mechanisms to perform new functions (Gould 1991). Representational mechanisms in sensory-motor regions of the brain may be such mechanisms. Thus far, these representational mechanisms have played three roles in perceptual symbol systems: (1) In perception, they represent physical objects. (2) In imagery, they represent objects in their absence. (3) In conception, they also represent objects in their absence. On this view, conception differs from imagery primarily in the consciousness and specificity of sensory-motor representations, with these representations being more conscious and detailed in imagery than in conception (Solomon & Barsalou 1999a; Wu & Barsalou 1999). Several other cognitive processes also appear to use the same representational mechanisms, including implicit memory, filling-in, anticipation, and interpretation. Whereas perception, imagery, and conception perform either bottom-up or top-down processing exclusively, these other four processes fuse complementary mixtures of bottom-up and top-down processing to construct perceptions.

In implicit memory (i.e., repetition priming), a perceptual memory speeds the perception of a familiar entity (e.g., Roediger & McDermott 1993; Schacter 1995). On seeing a particular chair, for example, a memory is established that speeds perception of the same chair later. Much research demonstrates the strong perceptual character of these memories, with slight deviations in perceptual features eliminating facilitative effects. Furthermore, imagining an entity produces much the same facilitation as perceiving it, suggesting a common representational basis. Perhaps most critically, implicit memory has been localized in sensory-motor areas of the brain, with decreasing brain activity required to perceive a familiar entity (e.g., Buckner et al. 1995). Thus, the representations that underlie implicit memory reside in the same systems that process entities perceptually. When a familiar entity is perceived, implicit memories become fused with bottom-up information to represent it efficiently.

In filling-in, a perceptual memory completes gaps in bottom-up information. Some filling-in phenomena reflect perceptual inferences that are largely independent of memory (for a review, see Pessoa et al. 1998). In the perception of illusory contours, for example, low-level sensory mechanisms infer edges on the basis of perceived vertices (e.g., Kanizsa 1979). However, other filling-in phenomena rely heavily on memory. In the phoneme restoration effect, knowledge of a word creates the conscious perceptual experience of hearing a phoneme where noise exists physically (e.g., Samuel 1981; 1987; Warren 1970). More significantly, phoneme restoration adapts low-level feature detectors much as if physical phonemes had adapted them (Samuel 1997). Thus, when a word is recognized, its memory representation fills in missing phonemes, not only in experience, but also in sensory processing. Such findings strongly suggest that cognitive and perceptual representations reside in a common system, and that they become fused to produce perceptual representations. Knowledge-based filling-in also occurs in vision. For example, knowledge about bodily movements causes apparent motion to deviate from the perceptual principle of minimal distance (Shiffrar & Freyd 1990; 1993). Rather than filling in an arm as taking the shortest path through a torso, perceivers fill it in as taking the longer path around the torso, consistent with bodily experience.

In perceptual anticipation, the cognitive system uses past experience to simulate a perceived entity’s future activity. For example, if an object traveling along a trajectory disappears, perceivers anticipate where it would be if it were still on the trajectory, recognizing it faster at this point than at the point it disappeared, or at any other point in the display (Freyd 1987). Recent findings indicate that knowledge affects the simulation of these trajectories. When subjects believe that an ambiguous object is a rocket, they simulate a different trajectory compared to when they believe it is a steeple (Reed & Vinson 1996). Even infants produce perceptual anticipations in various occlusion tasks (e.g., Baillargeon 1995; Hespos & Rochat 1997). These results further indicate that top-down and bottom-up processes coordinate the construction of useful perceptions.

In interpretation, the conceptual representation of an ambiguous perceptual stimulus biases sensory processing. In audition, when subjects believe that multiple speakers are producing a series of speech sounds, they normalize the sounds differently for each speaker (Magnuson & Nusbaum 1993). In contrast, when subjects believe that only one speaker is producing the same sounds, they treat them instead as differences in the speaker’s emphasis. Thus, each interpretation produces sensory processing that is appropriate for its particular conceptualization of the world. Again, cognition and sensation coordinate to produce meaningful perceptions (see also Nusbaum & Morin 1992; Schwab 1981). Analogous interpretive effects occur in vision. Conceptual interpretations guide computations of figure and ground in early visual processing (Peterson & Gibson 1993; Weisstein & Wong 1986); they affect the selective adaptation of spatial frequency detectors (Weisstein et al. 1972; Weisstein & Harris 1980); and they facilitate edge detection (Weisstein & Harris 1974). Frith and Dolan (1997) report that top-down interpretive processing activates sensory-motor regions in the brain.

In summary, an important family of basic cognitive processes appears to utilize a single mechanism, namely, sensory-motor representations. These processes, although related, vary along a continuum of bottom-up to top-down processing. At one extreme, bottom-up input activates sensory-motor representations in the absence of top-down processing (“pure” perception). At the other extreme, top-down processing activates sensory-motor representations in the absence of bottom-up processing (imagery and conception). In between lie processes that fuse complementary mixtures of bottom-up and top-down processing to coordinate the perception of physical entities (implicit memory, filling-in, anticipation, interpretation).

2.5. Frames

A frame is an integrated system of perceptual symbols that is used to construct specific simulations of a category. Together, a frame and the simulations it produces constitute a simulator. In most theories, frames are amodal, as are the closely related constructs of schemata and scripts (e.g., Minsky 1977; 1985; Rumelhart & Ortony 1978; Schank & Abelson 1977; for reviews, see Barsalou 1992; Barsalou & Hale 1992). As we shall see, however, frames and schemata have natural analogues in perceptual symbol systems. In the account that follows, all aspects require much further development, and many important issues are not addressed. This account is meant to provide only a rough sense of how frames develop in perceptual symbol systems and how they produce specific simulations.

The partial frame for car in Figure 3 illustrates how perceptual symbol systems implement frames. On the perception of a first car (Fig. 3A), the schematic symbol formation process in section 2.2 extracts perceptual symbols for the car’s overall shape and some of its components, and then integrates these symbols into an object-centered reference frame. The representation at the top of Figure 3A represents the approximate volumetric extent of the car, as well as embedded subregions that contain significant components (e.g., the doors and tires). These subregions and their specializations reflect the result of the symbol formation process described earlier. For every subregion that receives selective attention, an approximate delineation of the subregion is stored and then connected to the content information that specializes it. For example, attending to the window and handle of the front door establishes perceptual symbols for these subregions and the content information that specializes them.14

As Figure 3A illustrates, the frame represents spatial and content information separately. At one level, the volumetric regions of the object are represented according to their spatial layout. At another level, the contents of these subregions are represented as specializations. This distinction between levels of representation follows the work of Ungerleider and Mishkin (1982), who identified separate neural pathways for spatial/motor information and object features (also see Milner & Goodale 1995). Whereas the spatial representation establishes the frame’s skeleton, the content specializations flesh it out.

On the perception of a second car, a reminding takes place, retrieving the spatially integrated set of symbols for the first car (Barsalou et al. 1998; Medin & Ross 1989; Millikan 1998; Ross et al. 1990; Spalding & Ross 1994). The retrieved set of symbols guides processing of the second car in a top-down manner, leading to the extraction of perceptual symbols in the same subregions. As Figure 3B illustrates, this might lead to the extraction of content information for the second car’s shape, doors, and wheels, which become connected to the same subregions as the content extracted from the first car. In addition, other subregions of the second car may be processed, establishing new perceptual symbols (e.g., the antenna and gas cap).

Figure 3. (A) An example of establishing an initial frame for car after processing a first instance. (B) The frame’s evolution after processing a second instance. (C) Constructing a simulation of the second instance from the frame in panel B. In Panels A and B, lines with arrows represent excitatory connections, and lines with circles represent inhibitory connections.

Most important, all the information extracted from the two cars becomes integrated into a knowledge structure that constitutes the beginnings of a car frame. When perceptual symbols have been extracted from the same volumetric region for both cars, they both become connected to that subregion in the object-centered reference frame. For example, the perceptual symbols for each car’s doors become associated with the door subregions. As a result, the specialization from either car can be retrieved to specialize a region during a simulation, as may an average superimposition of them.

Figure 3B further illustrates how connections within the frame change as new instances become encoded. First, specializations from the same instance are connected together, providing a mechanism for later reinstating it. Second, these connections become weaker over time, with the dashed connections for the first instance representing weaker connections than those for the second instance. Third, inhibitory connections develop between specializations that compete for the same subregion, providing an additional mechanism for reinstating particular instances. Finally, connections processed repeatedly become stronger, such as the thicker connections between the overall volume and the subregions for the doors and wheels.

Following experiences with many cars, the car frame accrues a tremendous amount of information. For any given subregion, many specializations exist, and default specializations develop, either through the averaging of specializations, or because one specialization develops the strongest association to the subregion. Frames develop similarly for event concepts (e.g., eating), except that subregions exist over time as well as in space (examples will be presented in sect. 3.4 on abstract concepts).
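As a rough computational reading of Figure 3 (assumed names throughout; an illustrative sketch, not the article’s own formalism), a frame can be held as a set of subregions, each with weighted, competing specializations, from which a default emerges with repeated processing.

```python
# Illustrative sketch: a frame stores, for each subregion, competing
# specializations with association strengths; repeated processing strengthens
# a weight, and the strongest specialization serves as the default.
from collections import defaultdict

class Frame:
    def __init__(self):
        # subregion -> {specialization: association strength}
        self.subregions = defaultdict(dict)

    def encode(self, subregion, specialization, increment=1.0):
        """Store a specialization for a subregion, strengthening it on repetition."""
        weights = self.subregions[subregion]
        weights[specialization] = weights.get(specialization, 0.0) + increment

    def default(self, subregion):
        """The specialization with the strongest association wins as default."""
        weights = self.subregions[subregion]
        return max(weights, key=weights.get) if weights else None

car_frame = Frame()
car_frame.encode("wheel", "steel wheel, first car")
car_frame.encode("wheel", "alloy wheel, second car")
car_frame.encode("wheel", "alloy wheel, second car")  # repeated processing
car_frame.encode("door", "two-door panel, first car")

print(car_frame.default("wheel"))  # 'alloy wheel, second car'
print(car_frame.default("door"))   # 'two-door panel, first car'
```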

2.5.1. Basic frame properties. Barsalou (1992) and Barsalou and Hale (1993) propose that four basic properties constitute a frame (and the related construct of a schema): (1) predicates, (2) attribute-value bindings, (3) constraints, and (4) recursion. Barsalou (1993) describes how these properties manifest themselves in perceptual symbol systems.

Predicates are roughly equivalent to unspecialized frames. For example, the predicate CAR(Door = x, Window = y, . . . ) is roughly equivalent to the perceptual frame for car with its subregions unspecialized (e.g., the hierarchical volumetric representation in Fig. 3B).

Attribute-value bindings arise through the specialization of a subregion in a simulation. As different specializations become bound to the same subregion, they establish different “values” of an “attribute” or “slot” (e.g., door and its specializations in Fig. 3B).

Constraints arise from associative connections between specializations that represent individuals and subcategories within a frame (e.g., the connections between the specializations for the second instance of car in Fig. 3B). Thus, activating the second car’s specialization in one subregion activates its associated specializations in other subregions, thereby simulating this specific car. Because constraints can weaken over time and because strong defaults may dominate the activation process, reconstructive error can occur. For example, if the first car’s tires received extensive processing, the strength of their connections to the subregions for the wheels might dominate, even during attempts to simulate the second car.

Recursion arises from establishing a simulation within an existing simulator. As Figures 3A and 3B suggest, the simulator for wheel might initially just simulate schematic circular volumes, with all other information filtered out. As more attention is paid to the wheels on later occasions, however, detail is extracted from their subregions. As a result, simulators for tire and hubcap develop within the simulator for wheel, organizing the various specializations that occur for each of them. Because this process can continue indefinitely, an arbitrarily deep system of simulators develops, bounded only by the perceiver’s motivation and perceptual resolution.

2.5.2. Constructing specific simulations. The cognitive system uses the frame for a category to construct specific simulations (i.e., mental models). As discussed in later sections, such simulations take myriad forms. In general, however, the process takes the form illustrated in Figure 3C. First, the overall volumetric representation for car becomes active, along with a subset of its subregions (e.g., for doors, wheels, antenna, gas cap). If a cursory simulation is being constructed, only the subregions most frequently processed previously are included, with other subregions remaining inactive. This process can then proceed recursively, with further subregions and specializations competing for inclusion. As Figure 3C illustrates, subregions for the doors and windows subsequently become active, followed by their specializations.

In any given subregion, the specialization having the highest association with the subregion, with other active regions, and with other active specializations becomes active. Because this process occurs for each subregion simultaneously, it is highly interactive. The simulation that emerges reflects the strongest attractor in the frame’s state space. If context “clamps” certain aspects of the simulation initially, the constraint satisfaction process may diverge from the frame’s strongest attractor toward a weaker one. If an event is being simulated, subregions and their specializations may change over time as the constraint satisfaction process evolves recurrently.
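One way to picture this constraint-satisfaction process is the toy sketch below (Python; the weight values, the simple excitatory scheme, and all names are assumptions introduced for illustration): specializations compete within each subregion, clamped context fixes some choices, and associative links between co-occurring specializations pull the rest of the simulation toward a consistent instance.

```python
# Illustrative sketch: pick one specialization per subregion by combining its
# base association with the subregion and its excitatory links to the
# specializations already chosen (clamped context is fixed in advance).

base = {  # subregion -> {specialization: association with the subregion}
    "wheel": {"steel wheel (car 1)": 0.9, "alloy wheel (car 2)": 0.6},
    "door":  {"two-door panel (car 1)": 0.5, "four-door panel (car 2)": 0.7},
}

# Excitatory links between specializations from the same instance.
links = {
    ("steel wheel (car 1)", "two-door panel (car 1)"): 0.6,
    ("alloy wheel (car 2)", "four-door panel (car 2)"): 0.6,
}

def support(candidate, chosen):
    """Associative support a candidate receives from already chosen specializations."""
    return sum(links.get((candidate, c), 0.0) + links.get((c, candidate), 0.0)
               for c in chosen.values())

def simulate(clamped=None):
    chosen = dict(clamped or {})  # context can clamp some subregions in advance
    for subregion, options in base.items():
        if subregion in chosen:
            continue
        chosen[subregion] = max(options,
                                key=lambda s: options[s] + support(s, chosen))
    return chosen

print(simulate())                                             # settles on car 1 defaults
print(simulate(clamped={"door": "four-door panel (car 2)"}))  # clamping pulls toward car 2
```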

During a simulation, processing is not limited to the retrieval of frame information but can also include transformations of it. Retrieved information can be enlarged, shrunk, stretched, and reshaped; it can be translated across the simulation spatially or temporally; it can be rotated in any dimension; it can remain fixed while the perspective on it varies; it can be broken into pieces; it can be merged with other information. Other transformations are no doubt possible as well. The imagery literature offers compelling evidence that such transformations are readily available in the cognitive system (e.g., Finke 1989; Kosslyn 1980; Shepard & Cooper 1982), and that these transformations conform closely to perceptual experience (e.g., Freyd 1987; Parsons 1987a; 1987b; Shiffrar & Freyd 1990; 1993).

2.5.3. Framing and background-dependent meaning. As linguists and philosophers have noted for some time, concepts are often specified relative to one another (e.g., Fillmore 1985; Langacker 1986; A. Lehrer 1974; A. Lehrer & Kittay 1992; Quine 1953). Psychologists often make a similar observation that concepts are specified in the context of intuitive theories (e.g., Carey 1985; Keil 1989; Murphy & Medin 1985; Rips 1989; Wellman & Gelman 1988). Framing and background-dependent meaning are two specific ways in which background knowledge specifies concepts. In framing, a focal concept depends on a background concept and cannot be specified independently of it. For example, payment is specified relative to buy, hypotenuse is specified relative to right triangle, and foot is specified relative to leg and body. In background-dependent meaning, the same focal concept changes as its background concept changes. For example, the conceptualization of foot varies as the background concept changes from human to horse to tree. Similarly, the conceptualization of handle varies across shovel, drawer, and car door, as does the conceptualization of red across fire truck, brick, hair, and wine (Halff et al. 1976).

Frames provide the background structures that support framing. Thus, the event frame for buy organizes the background knowledge necessary for understanding payment, and the entity frame for human organizes the background knowledge necessary for understanding foot. When payment or foot is conceptualized, its associated frame produces a simulation that provides the necessary background for understanding it. For example, a simulation of a human body provides one possible framing for a simulation of a foot.

Frames also offer a natural account of background-dependent meaning. Foot, for example, is conceptualized differently when human is simulated in the background than when horse or tree is simulated. Because different perceptual symbols are accessed for foot in the context of different frames, simulations of foot vary widely. Similarly, different conceptualizations of red reflect different perceptual symbols accessed in frames for fire truck, brick, hair, and wine.15

2.6. Linguistic indexing and control

In humans, linguistic symbols develop together with their associated perceptual symbols. Like a perceptual symbol, a linguistic symbol is a schematic memory of a perceived event, where the perceived event is a spoken or a written word. A linguistic symbol is not an amodal symbol, nor does an amodal symbol ever develop in conjunction with it. Instead, a linguistic symbol develops just like a perceptual symbol. As selective attention focuses on spoken and written words, schematic memories extracted from perceptual states become integrated into simulators that later produce simulations of these words in recognition, imagination, and production.

As simulators for words develop in memory, they become associated with simulators for the entities and events to which they refer. Whereas some simulators for words become linked to simulators for entire entities or events, others become linked to subregions and specializations. Whereas “car” becomes linked to the entire simulator for car, “trunk” becomes linked to one of its subregions. Simulators for words also become associated with other aspects of simulations, including surface properties (e.g., “red”), manners (e.g., “rapidly”), relations (e.g., “above”), and so forth. Within the simulator for a concept, large numbers of simulators for words become associated with its various aspects to produce a semantic field that mirrors the underlying conceptual field (Barsalou 1991; 1992; 1993).

Once simulators for words become linked to simulators for concepts, they can control simulations. On recognizing a word, the cognitive system activates the simulator for the associated concept to simulate a possible referent. On parsing the sentences in a text, surface syntax provides instructions for building perceptual simulations (Langacker 1986; 1987; 1991; 1997). As discussed in the next section, the productive nature of language, coupled with the links between linguistic and perceptual simulators, provides a powerful means of constructing simulations that go far beyond an individual’s experience. As people hear or read a text, they use productively formulated sentences to construct a productively formulated simulation that constitutes a semantic interpretation (sect. 4.1.6). Conversely, during language production, the construction of a simulation activates associated words and syntactic patterns, which become candidates for spoken sentences designed to produce a similar simulation in a listener. Thus, linguistic symbols index and control simulations to provide humans with a conceptual ability that is probably the most powerful of any species (Donald 1991; 1993). As MacWhinney (1998) suggests, language allows conversationalists to coordinate simulations from a wide variety of useful perspectives.16
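The indexing relation itself can be sketched briefly (Python; the toy lexicon, the two-word grammar, and all function names are invented placeholders, not a claim about how parsing or lexical access actually proceeds): word simulators point to concept simulators, and a recognized utterance triggers the linked simulators to compose a simulation of a possible referent.

```python
# Illustrative sketch: word simulators index concept simulators; recognizing
# the words in a simple utterance activates the linked simulators, which
# compose a rudimentary simulation of a possible referent.

def simulate_car():
    return {"object": "car", "parts": ["door", "wheel", "trunk"]}

def simulate_red(simulation):
    return {**simulation, "color": "red"}  # specialize a surface property

# Toy lexicon: words linked to whole-entity simulators or to property simulators.
lexicon = {"car": simulate_car, "red": simulate_red}

def interpret(utterance):
    """Build a simulation for utterances of the form '<property> <noun>'."""
    prop, noun = utterance.split()
    return lexicon[prop](lexicon[noun]())

print(interpret("red car"))
# {'object': 'car', 'parts': ['door', 'wheel', 'trunk'], 'color': 'red'}
```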

3. Derived properties

By parsing perception into schematic components and then integrating components across individuals into frames, simulators develop that represent the types of entities and events in experience. The result is a basic conceptual system, not a recording system. Rather than recording holistic representations of perceptual experience, this system establishes knowledge about the categories of individuals that constitute it. Each simulator represents a type, not a token, and the collection of simulators constitutes a basic conceptual system that represents the components of perception and provides categorical inferences about them.

We have yet to characterize a fully functional conceptual system. To do so, we must establish that perceptual symbol systems can implement productivity, propositions, and abstract concepts. The next section demonstrates that these properties follow naturally from the basic system presented thus far and that one additional property – variable embodiment – follows as well.

3.1. Productivity

One of the important lessons we have learned from amodal symbol systems is that a viable theory of knowledge must be productive (e.g., Chomsky 1957; Fodor 1975; Fodor & Pylyshyn 1988). The human cognitive system can produce an infinite number of conceptual and linguistic structures that go far beyond those experienced. No one has experienced a real Cheshire Cat, but it is easy to imagine a cat whose body fades and reappears while its human smile remains.

Productivity is the ability to construct an unlimited number of complex representations from a finite number of symbols using combinatorial and recursive mechanisms. It is fair to say that productivity is not typically recognized as possible within perceptual theories of knowledge, again because these theories are usually construed as recording systems. It is worth noting, however, that certain physical artifacts exhibit a kind of “perceptual productivity,” using combinatorial and recursive procedures on schematic diagrams to construct limitless complex diagrams. In architecture, notations exist for combining primitive schematic diagrams combinatorially and recursively to form complex diagrams of buildings (e.g., W. Mitchell 1990). In electronics, similar notations exist for building complex circuits from primitive schematic components (Haugeland 1991). The fact that physical artifacts can function in this manner suggests that schematic perceptual representations in cognition could behave similarly (Price 1953). As we shall see, if one adopts the core properties of perceptual symbol systems established thus far, productivity follows naturally.

Figure 4 illustrates productivity in perceptual symbol systems. Before exploring productivity in detail, it is first necessary to make several points about the diagrams in Figure 4, as well as those in all later figures. First, diagrams such as the balloon in Figure 4A should not be viewed as literally representing pictures or conscious images. Instead, these theoretical illustrations stand for configurations of neurons that become active in representing the physical information conveyed in these drawings. For example, the balloon in Figure 4A is a theoretical notation that refers to configurations of neurons active in perceiving balloons.

Second, each of these drawings stands metonymically for a simulator. Rather than standing for only one projection of an object (e.g., the balloon in Fig. 4A), these drawings stand for a simulator capable of producing countless projections of the instance shown, as well as of an unlimited number of other instances.

Third, the diagrams in Figure 4B stand for simulators of spatial relations that result from the symbol formation process described earlier. During the perception of a balloon above a cloud, for example, selective attention focuses on the occupied regions of space, filtering out the entities in them. As a result, a schematic representation of above develops that contains two schematic regions of space within one of several possible reference frames (not shown in Fig. 4). Following the similar extraction of information on other occasions, a simulator develops that can render many different above relations. For example, specific simulations may vary in the vertical distance between the two regions, in their horizontal offset, and so forth. Finally, the thicker boundary for a given region in a spatial relation indicates that selective attention is focusing on it.17 Thus, above and below involve the same representation of space, but with a different distribution of attention over it. As much research illustrates, an adequate treatment of spatial concepts requires further analysis and detail than provided here (e.g., Herskovits 1997; Regier 1996; Talmy 1983). Nevertheless, one can view spatial concepts as simulators that develop through the schematic symbol formation process in section 2.2.

Figure 4C illustrates how the simulators in Figures 4A and 4B combine to produce complex perceptual simulations combinatorially. In the leftmost example, the simulator for above produces a specific simulation of above (i.e., two schematic regions of the same size, close together and in vertical alignment). The simulators for balloon and cloud produce specific simulations that specialize the two regions of the above simulation. The result is a complex simulation in which a balloon is simulated above a cloud. Note that complex simulations of these three categories could take infinitely many forms, including different balloons, clouds, and above relations. The second and third examples in Figure 4C illustrate the combinatorial nature of these simulations, as the various objects in Figure 4A are rotated through each of the regions in above. Because many possible objects could enter into simulations of above, a very large number of such simulations is possible.

A simulation can be constructed recursively by specializing the specialization of a schematic region. In Figure 4D, the lower region of above is specialized recursively with a simulation of left-of, whose regions are then specialized with simulations of jet and cloud. The resulting simulation represents a balloon that is above a jet to the left of a cloud. Because such recursion can occur indefinitely, an infinite number of simulations can be produced in principle.
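To make the combinatorial and recursive character of this process concrete, the toy Python sketch below treats object and relation simulators as simple labels and nested tuples. It is purely illustrative and not an implementation from the present theory; the function simulate and the object list are hypothetical names introduced only for this example.

```python
# Toy sketch (not from the target article): "simulators" for objects and a
# spatial relation compose combinatorially and recursively (cf. Figs. 4C, 4D).
from itertools import product

OBJECTS = ["balloon", "cloud", "jet", "kite"]   # stand-ins for object simulators

def simulate(spec):
    """Render a (possibly nested) specification as a description string."""
    if isinstance(spec, str):                   # a simple object simulation
        return spec
    relation, upper, lower = spec               # a relation with two regions
    return f"({simulate(upper)} {relation} {simulate(lower)})"

# Combinatorial productivity: every object can specialize every region of 'above'.
combinatorial = [("above", a, b) for a, b in product(OBJECTS, OBJECTS) if a != b]
print(len(combinatorial), "two-object 'above' simulations, e.g.:",
      simulate(combinatorial[0]))

# Recursive productivity: a region can itself be specialized by another relation.
nested = ("above", "balloon", ("left-of", "jet", "cloud"))
print(simulate(nested))                         # (balloon above (jet left-of cloud))
```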

3.1.1. Reversing the symbol formation process. Productivity in perceptual symbol systems is approximately the symbol formation process run in reverse. During symbol formation, large amounts of information are filtered out of perceptual representations to form a schematic representation of a selected aspect (sect. 2.2). During productivity, small amounts of the information filtered out are added back. Thus, schematicity makes productivity possible. For example, if a perceptual symbol for ball only represents its shape schematically after color and texture have been filtered out, then information about color and texture can later be added productively. For example, the simulation of a ball could evolve into a blue ball or a smooth yellow ball. Because the symbol formation process similarly establishes schematic representations for colors and textures, these representations can be combined productively with perceptual representations for shapes to produce complex simulations.
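The reversal described here can be sketched under the simplifying assumption that a perceptual symbol behaves like a feature record from which filtered attributes have been dropped; schematize and specialize are hypothetical names for this illustration, not constructs from the article.

```python
# Toy sketch: symbol formation filters detail out; productivity adds detail back in.

def schematize(percept, attend_to):
    """Keep only the attended aspects of a perceived instance (cf. sect. 2.2)."""
    return {k: v for k, v in percept.items() if k in attend_to}

def specialize(schematic_symbol, **added_detail):
    """Productively re-introduce filtered-out information (cf. sect. 3.1.1)."""
    return {**schematic_symbol, **added_detail}

perceived_ball = {"shape": "round", "color": "red", "texture": "fuzzy"}
ball_symbol = schematize(perceived_ball, attend_to={"shape"})   # {'shape': 'round'}

# Later simulations can evolve into a blue ball or a smooth yellow ball.
print(specialize(ball_symbol, color="blue"))
print(specialize(ball_symbol, color="yellow", texture="smooth"))
```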

Figure 4. An example of how perceptual symbols for object categories (A) and spatial relations (B) implement productivity through combinatorial (C) and recursive (D) processing. Boxes with thin solid lines represent simulators; boxes with thick dashed lines represent simulations.

Productivity is not limited to filling in schematic regions but can also result from replacements, transformations, and deletions of existing structure. Imagine that a simulated lamp includes a white cylindrical shade. To represent different lamps, the simulated shade could be replaced with a simulated cardboard box, it could be transformed to a cone shape, or it could be deleted altogether. Such operations appear widely available for constructing simulations, extending productivity further.

Most important, the complementarity of schematization and specialization allows perceptual systems to go beyond a recording system and become a conceptual system. Whereas photos and videos only capture information holistically, a perceptual symbol system extracts particular parts of images schematically and integrates them into simulators. Once simulators exist, they can be combined to construct simulations productively. Such abilities go far beyond the recording abilities of photos and videos; yet, as we have seen, they can be achieved within a perceptual framework.

3.1.2. Productivity in imagination. Productivity can surpass experience in many ways, constituting an important source of creativity (Barsalou & Prinz 1997). For example, one can simulate a chair never encountered, such as a pitted lavender chair. Because perceptual symbols for colors become organized together in a semantic field, as do perceptual symbols for textures, simulations of a chair can cycle through the symbols within each field to try out various combinations (Barsalou 1991; 1992; 1993). During interior decoration, one can combine colors and textures productively with a schematic representation of a chair to see which works best, with the semantic fields for colors and textures providing “palettes” for constructing the possibilities. Because different semantic fields can be explored orthogonally, the resulting simulations take on an analytic combinatorial character.
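The “palette” idea can be illustrated with a minimal sketch that crosses a hypothetical color field with a hypothetical texture field against a schematic chair; the field contents below are illustrative assumptions.

```python
# Toy sketch: semantic fields for color and texture act as palettes that can be
# explored orthogonally against a schematic chair representation.
from itertools import product

color_field = ["lavender", "teal", "ochre"]
texture_field = ["pitted", "smooth", "velvet"]

candidate_chairs = [f"{texture} {color} chair"
                    for color, texture in product(color_field, texture_field)]
print(len(candidate_chairs), "candidate simulations, e.g.,", candidate_chairs[0])
```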

Productive mechanisms can further construct simulations that violate properties of the physical world. For example, one can productively combine the shape of a chair with the simulation of a dog to construct a dog that functions as a chair. By searching through the combinatorial space of possibilities, one can construct many similar simulations, such as Carroll's (1960) flamingos in Alice in Wonderland who form themselves into croquet mallets, and his hedgehogs who form themselves into croquet balls. The space of such possibilities is very large, bounded only by the ranges of animals and artifacts that can be combined productively. When the recursive possibilities are considered, the possibilities become infinite (e.g., a camel taking the form of a carriage, with alligators that form themselves into wheels, with starfish that form themselves into spokes, etc.). Children's books are full of many further examples, such as productively combining various human abilities with various kinds of animals (e.g., dinosaurs that talk, tall birds that build sand castles).

As these examples illustrate, the human conceptual ability transcends experience, combining existing knowledge in new ways. Because perceptual symbols are schematic, they can combine creatively to simulate imaginary entities. Wu and Barsalou (1999) demonstrate that when people form novel concepts productively, perceptual simulation plays a central role.

3.1.3. Constraints and emergent properties. Although the productive potential of perceptual symbols is extensive, it is not a simple process whereby any two perceptual symbols can combine to form a whole that is equal to the sum of its parts. Instead, there are constraints on this process, as well as emergent features (Prinz 1997; Prinz & Barsalou, in press a). Presumably, these constraints and emergent features reflect affordances captured through the schematic symbol formation process (sect. 2.4.4).

Constraints arise when a schematic perceptual symbol cannot be applied to a simulated entity, because the simulation lacks a critical characteristic. For example, it is difficult to transform a simulated watermelon into a running watermelon, because an entity requires legs in order to run. If a simulated entity does not have legs, then simulating it running is difficult. Interestingly, even if an entity has the wrong kind of legs, such as a chair, it can easily be simulated as running, because it has the requisite spatial parts that enable the transformation.
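A crude sketch of such a constraint follows, under the assumption that affordances can be approximated by a lookup of simulated spatial parts; the SIMULATED_PARTS table and the legs criterion are illustrative assumptions, not claims about how affordances are actually captured.

```python
# Toy sketch: a productive transformation applies only if the simulated entity
# affords it; here the affordance check is a crude parts lookup.
SIMULATED_PARTS = {
    "watermelon": {"rind", "flesh"},
    "chair":      {"legs", "seat", "back"},
    "dog":        {"legs", "head", "tail"},
}

def can_simulate_running(entity):
    """Running requires leg-like spatial parts in this simplified check."""
    return "legs" in SIMULATED_PARTS.get(entity, set())

for entity in ("watermelon", "chair", "dog"):
    verdict = "can" if can_simulate_running(entity) else "cannot easily"
    print(f"A simulated {entity} {verdict} be transformed into a running {entity}.")
```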

Emergent properties arise frequently during the productive construction of simulations. As Langacker (1987) observes, combining animate agents productively with running produces emergent properties, with the methods and manners of running varying across humans, birds, horses, crabs, and spiders. Although emergent properties may often reflect perceptual memories of familiar animals running (cf. Freyd 1987; Parsons 1987a; 1987b; Reed & Vinson 1996; Shiffrar & Freyd 1990; 1993), they may also arise from spatiotemporal properties of the simulation process. For example, when one imagines a chair versus a sofa running, different simulations result. Because people have probably never seen a chair or a sofa run, these different simulations probably reflect the different lengths of the legs on chairs and sofas, the different distances between their legs, and the different volumes above them. Wu and Barsalou (1999) show that emergent properties arise during productive conceptualization as a result of perceptual simulation.18

3.1.4. Linguistic control of productivity. A foundational principle in Langacker's (1986; 1987; 1991; 1997) theory of language is that grammar corresponds to conceptual structure. One dimension of this correspondence is that the productive nature of grammar corresponds to the productive nature of conceptualization. The productive combination of adjectives, nouns, verbs, and other linguistic elements corresponds to the productive combination of perceptual symbols for properties, entities, processes, and other conceptual elements.

This correspondence provides humans with a powerful ability to control each other's simulations in the absence of the actual referents (Donald 1991; 1993; Tomasello et al. 1993). Without this productive ability, people could only refer to mutually known referents associated with nonproductive linguistic signs. In contrast, the productive ability to construct simulations through language allows people to induce shared simulations of nonexperienced entities and events. Past events experienced by one person can be conveyed to a hearer who has not experienced them, thereby extending the hearer's realm of knowledge indirectly. Future events can be explored together during planning, decision making, and problem solving as groups of individuals converge on solutions. Because groups can discuss past and future events, greater teamwork becomes possible, as does more extensive evaluation of possibilities. The productive control of conceptualization through language appears central to defining what is uniquely human.


3.2. Propositions

Another important lesson that we have learned from amodal symbol systems is that a viable theory of knowledge must implement propositions that describe and interpret situations (e.g., Anderson & Bower 1973; Goodman 1976; Kintsch 1974; Norman et al. 1975; Pylyshyn 1973; 1978; 1981; 1984). A given situation is capable of being construed in an infinite number of ways by an infinite number of propositions. Imagine being in a grocery store. There are limitless ways to describe what is present, including (in amodal form):

CONTAINS (grocery store, apples) (3)
ABOVE (ceiling, floor)

As these examples illustrate, different construals of the situation result from selecting different aspects of the situation and representing them in propositions. Because an infinite number of aspects can be propositionalized, selecting the propositions to represent a situation is an act of creativity (Barsalou & Prinz 1997). Different construals can also result from construing the same aspects of a situation in different ways, as in:

ABOVE (ceiling, floor) (4)
BELOW (floor, ceiling)

Bringing different concepts to bear on the same aspects of a situation extends the creative construction of propositions further.

Construals of situations can be arbitrarily complex, resulting from the ability to embed propositions hierarchically, as in:

CAUSE (HUNGRY (shopper), BUY (shopper, groceries)) (5)

The productive properties of amodal symbols are central to constructing complex propositions.

Not all construals of a situation are true. When a construal fails to describe a situation accurately, it constitutes a false proposition, as in:

CONTAINS (grocery store, mountains) (6)

Similarly, true and false propositions can be negative, as in the true proposition:

NOT (CONTAINS (grocery store, mountains)) (7)

Thus, propositions can construe situations falsely, and they can indicate negative states.

Finally, propositions represent the gist of comprehension. Comprehenders forget the surface forms of sentences rapidly but remember the conceptual gist for a long time (e.g., Sachs 1967; 1974). Soon after hearing “Marshall gave Rick a watch,” listeners would probably be unable to specify whether they had heard this sentence as opposed to “Rick received a watch from Marshall.” However, listeners would correctly remember that it was Marshall who gave Rick the watch and not vice versa, because they had stored the proposition:

GIVE (Agent = marshall, Recipient = rick, Object = watch) (8)

Thus, propositions capture conceptualizations that can be paraphrased in many ways.
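For readers who prefer a concrete rendering of the amodal format under discussion, the toy sketch below treats proposition (8) as a labeled role structure onto which both surface sentences map; the dictionary encoding is an illustrative assumption, not a proposal from the article.

```python
# Toy sketch: an amodal proposition as a predicate with labeled roles.
# Two different surface sentences map onto the same stored gist.
give_proposition = {
    "predicate": "GIVE",
    "Agent": "marshall",
    "Recipient": "rick",
    "Object": "watch",
}

surface_forms = [
    "Marshall gave Rick a watch.",
    "Rick received a watch from Marshall.",
]

# Both sentences are paraphrases of the same proposition, so only the gist
# (who gave what to whom) is retained; the surface form is not.
for sentence in surface_forms:
    print(sentence, "->", give_proposition)
```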

Basically, propositions involve bringing knowledge to bear on perception, establishing type-token relations between concepts in knowledge and individuals in the perceived world. This requires a conceptual system that can combine types (concepts) productively to form hierarchical structures, and that can then map these structures onto individuals in the world. It is fair to say that this ability is not usually recognized as possible in perceptual theories of knowledge, again because they are widely construed as recording systems. Indeed, this belief is so widespread that the term “propositional” is reserved solely for nonperceptual theories of knowledge. As we shall see, however, if one adopts the core properties of perceptual symbol systems, the important properties of propositions follow naturally. Because perceptual symbol systems have the same potential to implement propositions, they too are propositional systems.19

3.2.1. Type-token mappings. To see how perceptual symbol systems implement type-token mappings, consider Figure 5A. The large panel with a thick solid border stands for a perceived scene that contains several individual entities. On the far left, the schematic drawing of a jet in a thin solid border stands for the simulator that underlies the concept jet. The other schematic drawing of a jet in a thick dashed border represents a specific simulation that provides a good fit of the perceived jet in the scene. Again, such drawings are theoretical notations that should not be viewed as literal images. The line from the simulator to the simulation stands for producing the simulation from the simulator. The line from the simulation to the perceived individual stands for fusing the simulation with the individual in perception.


Figure 5. (A) Examples of how perceptual symbol systems represent true propositions by fusing perceived entities with simulations constructed from simulators. (B) Example of a complex hierarchical proposition. (C) Example of an alternative interpretation of the same aspects of the scene in panel B. Boxes with thin solid lines represent simulators; boxes with thick dashed lines represent simulations; boxes with thick solid lines represent perceived situations.

The activation of the simulator for jet in Figure 5A results from attending to the leftmost individual in the scene. As visual information is picked up from the individual, it projects in parallel onto simulators in memory. A simulator becomes increasingly active if (1) its frame contains an existing simulation of the individual, or if (2) it can produce a novel simulation that provides a good fit (sect. 2.4.4). The simulation that best fits the individual eventually controls processing.20 Because the simulation and the perception are represented in a common perceptual system, the final representation is a fusion of the two. Rather than being a pure bottom-up representation, perception of the individual includes top-down information from the simulation (sect. 2.4.6). The constructs of meshing, blending, perceptual hypotheses, and perceptual anticipation are closely related (Fauconnier & Turner 1998; Glenberg 1997; Kosslyn 1994; Neisser 1976).

Binding a simulator successfully with a perceived individual via a simulation constitutes a type-token mapping. The simulator is a type that construes a perceived token as having the properties associated with the type. Most importantly, this type-token mapping implicitly constitutes a proposition, namely, the one that underlies “It is true that the perceived individual is a jet.” In this manner, perceptual symbol systems establish simple propositions.
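A minimal sketch of this binding process follows, under the assumption that simulators and perceived tokens can be caricatured as feature sets and that fit is simple overlap; SIMULATORS, best_binding, and the threshold are hypothetical names and parameters, not mechanisms from the article.

```python
# Toy sketch: binding a simulator to a perceived individual via a simulation.
SIMULATORS = {
    "jet":     {"wings", "fuselage", "engines", "tail"},
    "balloon": {"round envelope", "basket", "tether"},
}

def best_binding(perceived_features, threshold=0.5):
    """Return the simulator whose simulation best fits the perceived token."""
    def fit(frame):
        return len(frame & perceived_features) / len(frame)
    concept, score = max(((c, fit(f)) for c, f in SIMULATORS.items()),
                         key=lambda pair: pair[1])
    return concept if score >= threshold else None

token = {"wings", "fuselage", "engines"}      # the leftmost individual in the scene
concept = best_binding(token)
if concept:
    # The successful type-token mapping implicitly constitutes the proposition
    # "It is true that the perceived individual is a jet."
    print(f"It is true that the perceived individual is a {concept}.")
```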

3.2.2. Categorical inferences. Binding a simulator to a perceived individual allows perceivers to draw a wide variety of categorical inferences (sects. 2.4.4, 2.4.7). If the jet in Figure 5A should become occluded, or if the perceivers should turn away, a simulation of its trajectory could predict where it will reappear later. Simulating the jet might further suggest many aspects of it that are not visible. For example, the simulation might suggest that the jet contains pilots, passengers, luggage, and fuel. Similarly, it might suggest that the jet is likely to fly horizontally, not vertically, and is eventually likely to decrease altitude, put down its wheels, and land at an airport. From bringing the multimodal simulator for jet to bear on the perceived individual, a wealth of top-down inference becomes available. Finally, the binding process updates the simulator for jet. As selective attention extracts perceptual symbols from the current individual and uses them to construct a simulation, they become integrated into the underlying frame that helped produce it (as illustrated in Fig. 3).

3.2.3. False and negative propositions. So far, we have only considered true propositions, namely, successful bindings between simulators and perceived individuals. Not all attempted bindings succeed, however, and when they do not, they constitute false propositions. Similarly, negative propositions occur when the explicitly noted absence of something in a simulation corresponds to an analogous absence in a perceived situation. Because false and negative propositions receive detailed treatment in section 3.4 on abstract concepts, further discussion is deferred until then.

3.2.4. Multiple construals. Because a given situation can be construed in infinite ways, propositional construal is creative. One way of producing infinite propositions is by selecting different aspects of the situation to construe. Figure 5A illustrates how perceptual symbol systems implement this ability. If perceivers were to focus attention on the uppermost individual in the scene, rather than on the leftmost individual, they would construe the scene differently. Rather than bringing the simulator for jet to bear, they would bring the simulator for balloon to bear. The successful mapping that results represents a different proposition, namely, the one that underlies “It is true that the perceived individual is a balloon.” Because infinitely many aspects of the scene can be selected and bound to simulators, an infinite number of propositions describing the scene are possible.

3.2.5. Productively produced hierarchical propositions. Perceptual symbol systems readily implement complex hierarchical propositions. In Figure 5B, a hierarchical simulation of a balloon above a cloud is constructed productively from the simulators for balloon, cloud, and above (sect. 3.1). This hierarchical simulation in turn is fused successfully with individuals in the perceived scene and the regions they occupy. The result is the representation of the hierarchical proposition that underlies “It is true that there is a balloon above a cloud.”

3.2.6. Alternative interpretations. Perceptual symbol systems also readily represent alternative interpretations of the same individuals in a scene. As Figure 5C illustrates, the simulator for below can be mapped into the same aspects of the perceived situation as the simulator for above (Fig. 5B). Because the same spatial configuration of regions satisfies both, either can represent a true proposition about the scene, differing only in where attention is focused (i.e., the upper or lower region). In this manner, and in many others as well, perceptual symbols support different interpretations of the same information. To the extent that different simulations can be fit successfully to the same perceived information, different interpretations result.

3.2.7. Gist. A perceptual simulation represents a gist that can be paraphrased in multiple ways. Imagine that someone hears the sentence, “The balloon is above the cloud.” To represent the sentence's meaning, the listener might construct a simulation that focuses attention on the upper region of above, as in Figure 5B. When later trying to remember the sentence, however, the listener might construct a simulation that has lost information about where attention resided. As a result, it is impossible to specify whether the earlier sentence had been “The balloon is above the cloud” or “The cloud is below the balloon.” As information becomes lost from the memory of a simulation, paraphrases become increasingly likely. Furthermore, because different simulators can often be mapped into the remaining information, additional paraphrases become possible.

In summary, perceptual symbol systems readily implement all the fundamental properties of propositions reviewed earlier for amodal symbol systems. Perceptual symbol systems produce type-token mappings that provide a wealth of inferences about construed individuals. They produce alternative construals of the same scene, either by selecting different aspects of the scene to simulate, or by simulating the same aspects in different ways. They represent complex hierarchical propositions by constructing simulations productively and then mapping these complex simulations into scenes. They represent the gist of sentences with simulations that can be paraphrased later with different utterances. Finally, as described later, they represent false and negative propositions through failed and absent mappings between simulations and scenes. For all of these reasons, perceptual symbol systems are propositional systems.

3.2.8. Intentionality. Propositions exhibit intentionality, namely, the ability of a mental state to be about something (e.g., K. Lehrer 1989). In Figure 5A, for example, the simulation for jet has intentionality, because it is about the specific jet in the scene. A major problem for any theory of representation is specifying how mental states come to be about something (or nothing at all). In particular, perceptually based theories of knowledge are often criticized because they seem to imply that the content of an image determines its intentionality (or referent). Thus, if someone simulates a jet, the simulation is assumed to be about a jet just like the one simulated.

The content of a symbol does not specify its intentionality, being neither necessary nor sufficient for establishing reference (e.g., Goodman 1976; also see Bloom & Markson 1998). For example, constructing an exact replica of the Empire State Building is not sufficient for establishing reference from the replica to the building (Schwartz 1981). If this replica were placed on a map of the United States at the location of New York City, it could stand for the entire city of New York, not just for the Empire State Building. Similarly, it is not necessary for a symbol that stands for the Empire State Building to bear any similarity to its referent. If the City of New York had a unique, randomly selected number for each of its buildings, the one for the Empire State Building would constitute an arbitrary symbol for it. As these examples illustrate, the degree to which a symbol's content resembles its referent is neither sufficient nor necessary for establishing reference.

There is good reason to believe that perceptual representations can and do have intentionality. Pictures, physical replicas, and movies often refer clearly to specific entities and events in the world (e.g., Goodman 1976; Price 1953). Just because the content of a perceptual representation is not the only factor in establishing reference, it does not follow that a perceptual representation cannot have reference and thereby function symbolically.

It is also widely believed that factors external to a symbol's content play important roles in establishing its intentionality (e.g., Dretske 1995). A wide variety of external factors is involved, many of them well documented in the literature. For example, definite descriptions often help establish the reference of a symbol (Russell 1919a), as in:

“The computer on my office desk is broken.” (9)

On hearing this sentence, imagine that a listener constructs a perceptual simulation of a computer. The content of this simulation is not sufficient to specify its reference, given that it could refer to many computers. However, when conjoined productively with the simulation that represents the region “on my office desk,” the complex simulation that results establishes reference successfully, assuming that only one computer resides in this region. As this example illustrates, definite descriptions offer one important type of mechanism external to a symbol's content that helps establish its reference, but they are by no means the only way of accomplishing this. Many other important mechanisms exist as well. However, definite descriptions illustrate the importance of external relations that go beyond a symbol's content in establishing its intentionality.

External mechanisms such as definite descriptions can establish reference for either perceptual or amodal symbols (Goodman 1976); because both types of symbols require such mechanisms, neither has an advantage in this regard. Where perceptual symbols do have an advantage is in the ability of their content to play a heuristic role in establishing reference. Although perceptual content is rarely definitive for intentionality, it may often provide a major source of constraint and assistance in determining what a symbol is about. In contrast, because the content of an amodal symbol bears no resemblance to its referents, it cannot provide any such guidance.

To see how the content of a perceptual symbol can play a heuristic role in establishing reference, consider again the example in sentence (9). As we saw, the perceptual content of a simulated computer is not sufficient for establishing the speaker's reference, because it could be mapped to many potential computers in the world, as well as to many other entities. Instead, the simulation that represents “on my office desk” was required to fix the reference. Importantly, however, simulating the top of the desk alone is not sufficient to establish reference if there are other things on the desk besides the computer. Although this simulation restricts the range of potential reference to entities on the speaker's desk, it does not specify a unique referent among them. However, the high resemblance of a simulated computer to one of these entities finalizes the act of reference, specifying the single most similar individual, namely, the computer. In this manner, the content of the perceptual symbol for computer plays an important – although by no means complete – role in establishing reference. Both components of the definite description – simulating the top of the speaker's desk and simulating a computer – are required to refer successfully (Barsalou et al. 1993).21
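The joint contribution of the region constraint and content similarity can be sketched as follows; the SCENE data, the overlap measure, and resolve_reference are illustrative assumptions rather than a model of reference resolution.

```python
# Toy sketch: resolving "the computer on my office desk" by combining a region
# constraint (on the desk) with content similarity to a simulated computer.
SCENE = [
    {"name": "computer-1", "location": "office desk",
     "features": {"screen", "keyboard", "case"}},
    {"name": "coffee mug", "location": "office desk",
     "features": {"handle", "ceramic"}},
    {"name": "computer-2", "location": "kitchen table",
     "features": {"screen", "keyboard", "case"}},
]

simulated_computer = {"screen", "keyboard", "case"}

def resolve_reference(region, simulation, scene=SCENE):
    """Restrict candidates by region, then pick the most similar individual."""
    candidates = [e for e in scene if e["location"] == region]
    return max(candidates, key=lambda e: len(e["features"] & simulation))

referent = resolve_reference("office desk", simulated_computer)
print("Referent:", referent["name"])   # computer-1, not the mug or the other computer
```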

The perceptual symbol that comes to stand for a referent establishes sense as well as reference (Frege 1892/1952). As we saw in Figures 5B and 5C, two different propositions can have the same reference but different interpretations or senses (i.e., a balloon above a cloud versus a cloud below a balloon). Because a perceptual symbol contains content, this content represents the symbol's referents accordingly. As different content becomes active to represent the same referent, it becomes fused with the perceived referent to establish different interpretations (sects. 3.2.2, 3.2.4).22

3.2.9. The construal of perceptual symbols. The psychological and philosophical literatures raise important problems about the construal of images. These problems do not concern propositional construal per se, because they do not bear directly on the problem of mapping types to tokens. Instead, they concern the construal of images in the absence of reference.

One such problem is the ambiguity of images (Chambers & Reisberg 1992). Consider a mental image of a circular shape. How do we know whether this image represents a pie, a wheel, or a coin? One solution is to assume that a circular shape would rarely, if ever, be imaged in isolation, but would instead be part of a simulation constructed from a simulator (sect. 2.4). Approximately the same schematic circular shape might be produced during the simulation of a pie, a wheel, or a coin. As discussed for background-dependent meaning (sect. 2.5.3), however, this shape would be framed in the context of the particular simulator that produced it, thereby giving it a clear interpretation. Although the shape might be ambiguous in isolation, when linked historically to a particular simulator over the course of a simulation, it is not.

Conversely, another problem of image construal concerns how radically different images can be construed as images of the same thing. Consider Schwartz's (1981) example of a circle. Normally, a circle in full view looks round. If its lower half is occluded, however, it looks like a semicircle. If partially occluded by visual clutter, it looks like a circular arrangement of line segments. If rotated 45° around its vertical axis, it looks like an oval. If rotated 90°, it looks like a straight vertical line. How do we know that these five different images all represent the same entity? Again, the construct of a simulator provides a solution. If the simulator for circle constructs an instance and then transforms it into different surface images, the transformational history of this particular circle links all these images so that they are construed as the same individual. As described for concept stability (sect. 2.4.5), a simulator unifies all the simulations it produces into a single concept. The example here is a similar but more specific case where a simulator unifies all the perspectives and conditions under which a given individual can be simulated.

3.3. Variable embodiment

Variable embodiment is the idea that a symbol's meaning reflects the physical system in which it is represented (Clark 1997; Damasio 1994; Glenberg 1997; Johnson 1987; Lakoff 1987; Lakoff & Johnson 1980; Newton 1996). As different intelligent systems vary physically, their symbol systems are likely to vary as well. Unlike productivity and propositional construal, variable embodiment does not exist in amodal symbol systems. It exists only in perceptual symbol systems, arising from their first two core properties: neural representations in perceptual systems (sect. 2.1), and schematic symbol formation (sect. 2.2). Before considering variable embodiment in perceptual symbol systems, it is useful to consider its absence in amodal symbol systems.

According to the functionalist perspective that has dominated modern cognitive science, the symbol system underlying human intelligence can be disembodied. Once we characterize the computational properties of this system successfully, we can implement it in other physical systems such as computers. Thus, functionalism implies that the computational system underlying human intelligence can be understood independently of the human body. It further implies that the same basic symbol system operates in all normal humans, independent of their biological idiosyncrasies. Just as computer software can be characterized independently of the particular hardware that implements it, the human symbol system can be characterized independently of the biological system that embodies it. For accounts of this view, see Putnam (1960), Fodor (1975), and Pylyshyn (1984); for critiques, see Churchland (1986) and G. Edelman (1992).

The discrete referential nature of amodal symbols lies at the heart of modern functionalism. To see this, consider the transformations of the word “CUP” in Figure 6 and their lack of implications for reference. As Figure 6 illustrates, the word “CUP” can increase in size, it can rotate 45° counterclockwise, and the loop of the P can become separated from its vertical stroke. In each case, the transformation implies nothing new about its referent. If “CUP” originally referred to the referent on the left of Figure 6, its reference does not change across these transformations. Making “CUP” larger does not mean that its referent now appears larger. Rotating “CUP” 45° counterclockwise does not imply that its referent tipped. Separating the loop of the P horizontally from its vertical stroke does not imply that the handle of the cup has now broken off. Because words bear no structural relations to their referents, structural changes in a word have no implications for analogous changes in reference. As long as the conventional link between a word and its referent remains intact, the word refers to the referent discretely in exactly the same way across transformations.

Because amodal symbols refer in essentially the same way as words do, they also refer discretely. Changes in their form have no implications for their meaning. It is this discrete property of amodal symbols that makes functionalism possible. Regardless of how an amodal symbol is realized physically, it serves the same computational function. Physical variability in the form of the symbol is irrelevant, as long as it maintains the same conventional links to the world, and the same syntactic relations to other amodal symbols.

Perceptual symbols differ fundamentally. Unlike an amodal symbol, variations in the form of a perceptual symbol can have semantic implications (cf. Goodman 1976). As Figure 6 illustrates, the schematic perceptual symbol for a cup can increase in size, it can rotate 45° counterclockwise, and the handle can become separated from the cup. In each case, the transformation implies a change in the referent (Fig. 6). Increasing the size of the perceptual symbol implies that the referent appears larger, perhaps because the perceiver has moved closer to it. Rotating the perceptual symbol 45° counterclockwise implies that the cup has tipped analogously. Separating the handle from the cup in the perceptual symbol implies that the handle has become detached from the referent. Because perceptual symbols bear structural relations to their referents, structural changes in a symbol imply structural changes in its referent, at least under many conditions.23
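The contrast between arbitrary and analogical reference can be caricatured in a few lines; rotate_word and rotate_percept are hypothetical illustrations of the point, not mechanisms proposed in the article.

```python
# Toy sketch: arbitrary vs. analogical reference under transformation.
amodal_symbol = "CUP"                        # refers by arbitrary convention
perceptual_cup = {"size": 1.0, "tilt_deg": 0, "handle_attached": True}

def rotate_word(word, degrees):
    """A rotated word token still refers to exactly the same referent state."""
    return word                               # nothing about the referent changes

def rotate_percept(symbol, degrees):
    """Rotating the perceptual symbol implies the referent has tipped analogously."""
    return {**symbol, "tilt_deg": symbol["tilt_deg"] + degrees}

print(rotate_word(amodal_symbol, 45))         # CUP -> referent unchanged
print(rotate_percept(perceptual_cup, 45))     # tilt_deg: 45 -> the cup has tipped
```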

The analogically referring nature of perceptual symbols makes their embodiment critical. As the content of a symbol varies, its reference may vary as well. If different intelligent systems have different perceptual systems, the conceptual systems that develop through the schematic symbol formation process may also differ. Because their symbols contain different perceptual content, they may refer to different structure in the world and may not be functionally equivalent.

3.3.1. Adaptive roles of variable embodiment. Variable embodiment allows individuals to adapt the perceptual symbols in their conceptual systems to specific environments. Imagine that different individuals consume somewhat different varieties of the same plants because they live in different locales. Through perceiving their respective foods, different individuals develop somewhat different perceptual symbols to represent them. As a result, somewhat different conceptual systems develop through the schematic symbol formation process, each tuned optimally to its typical referents.24

Variable embodiment performs a second useful function, ensuring that different individuals match perceptual symbols optimally to perception during categorization (sect. 2.4.4). Consider the perception of color. Different individuals from the same culture differ in the detailed psychophysical structure of their color categories (Shevell & He 1995; V. Smith et al. 1976). Even individuals with normal color vision discriminate the same colors in somewhat different ways, because of subtle differences in their perceptual systems. As a result, when the schematic symbol formation process establishes the ability to simulate colors, somewhat different simulators arise in different individuals. Because each simulator is grounded in a subtly different implementation of the same perceptual system, it represents the same symbols in subtly different manners. Most important, each individual's simulator for color optimally matches their symbols for colors to their perceptions of colors. As different individuals search for bananas at the grocery store, they can simulate the particular representation of yellow that each is likely to perceive on encountering physical instances.

Because humans vary on all phenotypic traits to some extent, they are likely to vary on all the perceptual discriminations that could be extracted to form perceptual symbols, not just color. Similar variability should arise in the perception of shape, texture, space, pitch, taste, smell, movement, introspection, and so forth. If so, then variable embodiment allows the human conceptual system to adapt itself naturally to variability in perceptual systems. In contrast, such adaptability is not in the spirit of functionalism. Because functionalism rests on amodal symbols that bear no structural relation to their referents, it neither anticipates nor accommodates individual variability in perception.

3.3.2. Variable embodiment in conceptual variability and stability. Earlier we saw that conceptual variability arises out of simulators (sect. 2.4.5). When the simulator for a category produces different simulations, conceptual variability arises both between and within individuals. Variable embodiment provides a second source of variability. Different individuals represent the same concept differently because their perceptual symbols become tuned to somewhat different physical environments and develop in somewhat different perceptual systems.

We also saw earlier that simulators provide conceptual stability (sects. 2.4.5, 3.2.9). When different simulations can be traced back to the same simulator, they become unified as instances of the same concept. Embodiment similarly provides stability. Although perceptual systems produce variable embodiment across different individuals, they also produce shared embodiment at a more general level. Because most humans have roughly the same mechanisms for perceiving color, they have roughly the same conceptual representations of it. Although perceptual systems induce idiosyncrasies in perception and conception, they also induce basic commonalities (Newton 1996).

3.4. Abstract concepts

Representing abstract concepts poses a classic challenge for perceptual theories of knowledge. Although representing concrete objects has sometimes seemed feasible, representing events, mental states, social institutions, and other abstract concepts has often seemed problematic.

Figure 6. An example of how transforming a word or an amodal symbol fails to produce an analogous transformation in reference, whereas transforming a perceptual simulation does.

3.4.1. Representing abstract concepts metaphorically. Cognitive linguists have suggested that metaphor provides a perceptually based solution to the representation of abstract concepts (e.g., Gibbs 1994; Johnson 1987; Lakoff 1987; Lakoff & Johnson 1980; Lakoff & Turner 1989; Turner 1996). For example, Lakoff and Johnson (1980) suggest that the concrete domain of liquid exploding from a container represents the abstract concept of anger. Although metaphor most certainly plays a major role in elaborating and construing abstract concepts, it is not sufficient for representing them (Barsalou et al. 1993; Murphy 1996). Instead, a direct, nonmetaphorical representation of an abstract domain is essential for two reasons. First, it constitutes the most basic understanding of the domain. Knowing only that anger is like liquid exploding from a container hardly constitutes an adequate concept. If this is all that people know, they are far from having an adequate understanding of anger. Second, a direct representation of an abstract domain is necessary to guide the mapping of a concrete domain into it. A concrete domain cannot be mapped systematically into an abstract domain that has no content.

As research on emotion shows (e.g., G. Mandler 1975; Shaver et al. 1987; Stein & Levine 1990), people have direct knowledge about anger that arises from three sources of experience. First, anger involves the appraisal of an initiating event, specifically, the perception that an agent's goal has been blocked. Second, anger involves the experience of intense affective states. Third, anger often involves behavioral responses, such as expressing disapproval, seeking revenge, and removing an obstacle. As people experience each of these three aspects of anger, they develop knowledge directly through the schematic symbol formation process (sect. 2.2).

Although people may understand anger metaphorically at times, such understanding elaborates and extends the direct concept. Furthermore, metaphorical language may often indicate polysemy rather than metaphorical conceptualization (Barsalou et al. 1993). For example, when someone says, “John exploded in anger,” the word “explode” may function polysemously. “Explode” may have one sense associated with heated liquid exploding in containers, and another associated with the rapid onset of angry behavior. Rather than activating conceptual knowledge for liquid exploding from a container, “explode” may simply activate a perceptual simulation of angry behavior directly. As in the direct interpretation of indirect speech acts that bypass literal interpretation (Gibbs 1983; 1994), familiar metaphors may produce direct interpretations that bypass metaphorical mappings. Just as “Can you pass the salt?” bypasses its literal meaning to arrive directly at a pragmatic request, so can “explode” bypass its concrete sense to arrive directly at its introspective sense. Although novel metaphors may typically require a metaphorical mapping, familiar metaphors may bypass this process through polysemy.

3.4.2. Representing abstract concepts directly with perceptual symbols. Ideally, it should be shown that perceptual symbol systems can represent all abstract concepts directly. Such an analysis is not feasible, however, given the large number of abstract concepts. An alternative strategy is to select quintessential abstract concepts and show that perceptual accounts are possible. The next two subsections provide perceptual accounts of two such concepts, truth and disjunction, as well as related concepts in their semantic fields. If challenging abstract concepts like these can be represented perceptually, we have good reason to believe that other abstract concepts are also tractable.

In developing perceptual accounts of these abstract concepts and others, a general trend has emerged. Across these concepts, three mechanisms appear central to their representation. First, an abstract concept is framed against the background of a simulated event sequence. Rather than being represented out of context in a single time slice, an abstract concept is represented in the context of a larger body of temporally extended knowledge (Barwise & Perry 1983; Fillmore 1985; Langacker 1986; Newton 1996; Yeh & Barsalou 1999a; 1999b). As we saw earlier in the sections on simulators (sect. 2.4) and framing (sect. 2.5.3), it is possible to simulate event sequences perceptually, and it is possible for a simulator to frame more specific concepts. Thus, perceptual symbol systems can implement the framing that underlies abstract concepts.

Second, selective attention highlights the core content of an abstract concept against its event background (Langacker 1986; Talmy 1983). An abstract concept is not the entire event simulation that frames it but is a focal part of it. As we saw earlier in the section on schematic symbol formation (sect. 2.2), it is possible to focus attention on a part of a perceptual simulation analytically. In this way, perceptual symbol systems capture the focusing that underlies abstract concepts.

Third, perceptual symbols for introspective states are central to the representation of abstract concepts. If one limits perceptual symbols to those that are extracted from perception of the external world, the representation of abstract concepts is impossible. As we saw earlier in the section on multimodal symbols, the same symbol formation process that operates on the physical world can also operate on introspective and proprioceptive events. As a result, the introspective symbols essential to many abstract concepts can be represented in a perceptual symbol system. Although many different introspective events enter into the representation of abstract concepts, propositional construal appears particularly important (sect. 3.2). As we shall see, the act of using a mental state to construe the physical world is often central.

Together, these three mechanisms – framing, selectivity, and introspective symbols – provide powerful tools for representing abstract concepts in perceptual symbol systems. As we shall see shortly, they make it possible to formulate accounts of truth, disjunction, and related concepts. This success suggests a conjecture about the representation of abstract concepts: framing, selectivity, and introspective symbols allow a perceptual symbol system to represent any abstract concept. This conjecture in turn suggests a general strategy for discovering these representations. First, identify an event sequence that frames the abstract concept. Second, characterize the multimodal symbols that represent not only the physical events in the sequence but also the introspective and proprioceptive events. Third, identify the focal elements of the simulation that constitute the core representation of the abstract concept against the event background. Finally, repeat the above process for any other event sequences that may be relevant to representing the concept (abstract concepts often refer to multiple events, such as marriage referring to a ceremony, interpersonal relations, domestic activities, etc.). If the conjecture is correct, this strategy should always produce a satisfactory account of an abstract concept.

Of course, the success of this strategy does not entail that people actually represent abstract concepts this way. Such conclusions await empirical assessment and support. However, success is important, because it demonstrates that perceptual symbol systems have the expressive power to represent abstract concepts. In a pilot study with Karen Olseth Solomon, we have obtained preliminary support for this account. Subjects produced more features about event sequences and introspection for abstract concepts, such as truth and magic, than for concrete concepts, such as car and watermelon. Recent neuroscience findings are also consistent with this proposal. As reviewed by Pulvermüller (1999), abstract concepts tend to activate frontal brain regions. Of the many functions proposed for frontal regions, two include the coordination of multimodal information and sequential processing over time. As just described, abstract concepts exhibit both properties. On the one hand, they represent complex configurations of multimodal information that must be coordinated. On the other, they represent event sequences extended in time. Thus, frontal activation is consistent with this proposal.

3.4.3. Representing truth, falsity, negation, and anger. This analysis does not attempt to account for all senses of truth, nor for its formal senses. Only a core sense of people's intuitive concept of truth is addressed to illustrate this approach to representing abstract concepts. In the pilot study with Solomon, subjects described this sense frequently. Figure 7A depicts the simulated event sequence that underlies it. In the first subevent, an agent constructs a perceptual simulation of a balloon above a cloud using the productive mechanisms described earlier. The agent might have constructed this simulation on hearing a speaker say, “There's a balloon above a cloud outside,” or because the agent had seen a balloon above a cloud earlier, or for some other reason. Regardless, at this point in the event sequence, the agent has constructed a simulation. In the second subevent, the agent perceives a physical situation (i.e., the scene inside the thick solid border), and attempts to map the perceptual simulation into it. The agent may attempt this mapping because a speaker purported that the statement producing the simulation was about this particular situation, because the agent remembered that the simulation resulted from perceiving the situation earlier, or for some other reason. The agent then assesses whether the simulation provides an accurate representation of the situation, as described earlier for propositional construal. In this case, it does. Analogous to the simulation, the situation contains a balloon and a cloud, and the balloon is above the cloud. On establishing this successful mapping, the agent might say, “It's true that a balloon is above a cloud,” with “true” being grounded in the mapping.

This account of truth illustrates the three critical mechanisms for representing abstract concepts. First, a simulated event sequence frames the concept. Second, the concept is not the entire simulation but a focal part of it, specifically, the outcome that the simulation provides an accurate construal of the situation. Third, perceptual symbols for introspective events are central to the concept, including those for a perceptual simulation, the process of comparing the perceptual simulation to the perceived situation, and the outcome of establishing a successful mapping between them. After performing this complex event sequence on many occasions, a simulator develops for truth, that is, people learn to simulate the experience of successfully mapping an internal simulation into a perceived scene.25

The concept of falsity is closely related to the concept of truth. The account here addresses only one sense of people's intuitive concept for falsity, although it too is polysemous and has formal interpretations. As Figures 7A and 7B illustrate, very similar event sequences underlie these two concepts. In both, a simulation is constructed initially, followed by the perception of a situation. The two sequences differ only in whether the simulation can or cannot be mapped successfully into the situation. Whereas the mapping succeeds for truth, it fails for falsity. Thus, a speaker, after failing to map the simulation into the situation in Figure 7B, might say, “It is false that there is a balloon above a cloud,” with “false” being grounded in the failed mapping. The slant mark through the line between the simulated balloon and its absence in perception is a theoretical device, not a cognitive entity, that stands for a failed attempt at fusing them.26

Figure 7. (A) Accounting for one sense of truth using perceptual symbols. (B) Accounting for one sense of falsity using perceptual symbols. (C) Accounting for one sense of negation using perceptual symbols. Boxes with thin solid lines represent simulators; boxes with thick dashed lines represent simulations; boxes with thick solid lines represent perceived situations.

The concept of negation is also closely related to the concept of truth. Again, the account here addresses only one sense of people's intuitive concept. As Figure 7C illustrates, negation results from explicitly noting the absence of a binding between a simulator and a simulation. In this particular case, the simulator for balloon is not bound to anything in the upper region of the above simulation. Explicitly noting the absence of a binding becomes part of the simulation, as indicated theoretically by the line with a slant mark through it. When the simulation is compared subsequently to the perceived situation, the absent mapping underlies the negated claim, “It is true that there is not a balloon above a cloud.” Negation can also occur with falsity. If the perceived situation in Figure 7C did contain a balloon above a cloud, the absent mapping would fail to map into the situation, and the perceiver might say, “It is false that there is not a balloon above a cloud.” As these examples illustrate, falsity and negation can be represented with absent mappings between simulators, simulations, and perceived situations. Givón (1978) and Glenberg et al. (1998a) make related proposals.
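The mapping logic behind these three concepts can be summarized in a toy sketch that represents simulations and scenes as relational tuples; construe and the tuple notation are illustrative assumptions standing in for fused perceptual simulations.

```python
# Toy sketch: truth, falsity, and negation as outcomes of attempting to map a
# simulation into a perceived situation (cf. Figs. 7A-7C).

def construe(simulation, scene):
    """Return the grounded claim for a (possibly negated) simulated relation."""
    negated, relation = simulation
    mapped = relation in scene                   # does the simulation fuse with the scene?
    if not negated:
        return "true" if mapped else "false"     # Fig. 7A vs. Fig. 7B
    return "true" if not mapped else "false"     # Fig. 7C: explicitly noted absence

scene_with_balloon = {("above", "balloon", "cloud")}
scene_without = set()

print(construe((False, ("above", "balloon", "cloud")), scene_with_balloon))  # true
print(construe((False, ("above", "balloon", "cloud")), scene_without))       # false
print(construe((True,  ("above", "balloon", "cloud")), scene_without))       # true (negation)
```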

The concept of anger also belongs to this semantic field. As discussed earlier (sect. 3.4.1), a core component of anger is the appraisal of a blocked goal. In perceptual symbol systems, a goal is a simulated state of the world that an agent would like to achieve, and a blocked goal is the failed mapping of such a simulation at a time when it is expected to map successfully. Thus, a blocked goal involves an event sequence very similar to the one in Figure 7B for falsity. In both, a simulated situation fails to match a perceived situation. As also described earlier, anger involves other core components, such as affective experience and behavioral responses, which are likely to be included in its simulations. Nevertheless, anger belongs to the same semantic field as truth and falsity by virtue of sharing a common event sequence, namely, assessing the fit of a simulation to a situation. Lie is also related, where a statement induces a simulation purported to be true that is actually not (i.e., the simulation is negative in the liar's simulation but false in the deceived's simulation).

Finally, it is worth noting that the experiential basis for this semantic field probably exists from birth, if not before. Anyone who has raised children knows that an infant has expectations about food, comfort, and sleep. When these expectations are satisfied, the infant is content; when they are not, the infant conveys dissatisfaction. The point is that the infant constantly experiences the event sequence that underlies truth, falsity, negation, anger, and their related concepts. From birth, and perhaps before, the infant simulates its expected experience and assesses whether these simulations map successfully into what actually occurs (cf. Haith 1994). As an infant experiences this event sequence day after day, it learns about satisfied versus failed expectations. If infants have the schematic symbol formation process described earlier, and if they can apply it to their introspective experience, they should acquire implicit understandings of truth, falsity, and negation long before they begin acquiring language. Once the language acquisition process begins, a rich experiential basis supports learning the words in this semantic field.

3.4.4. Representing disjunction and ad hoc categories. Disjunction, like truth, is polysemous and includes formal senses. The analysis here only attempts to account for one intuitive sense. As Figure 8 illustrates, an event sequence is again critical. In the first subevent, an agent perceives a situation and constructs an internal simulation that construes parts of it propositionally. In the second subevent some time later, the agent attempts to remember the earlier situation, searching for the propositions constructed at that time. As the second subevent illustrates, the agent retrieves one proposition for between and two others for tree and barn, which are combined productively. Although the tree on the left and the barn on the right have been remembered, the between proposition implies that there was an entity in the middle, which has been forgotten. Finally, in the third subevent, the agent attempts to reconstruct the entity that could have existed in the middle. Using reconstructive mechanisms (irrelevant to an analysis of disjunction), the agent simulates two possible entities in this region, a horse and a cow, with the intent that they possibly construe the region's original content. The dotted lines are theoretical notations which indicate that the agent is not simulating these two specializations simultaneously but is alternating between them in a single simulated event, while holding specializations of the outer regions constant. During this process, the agent might say, “There was a horse or a cow between the tree and the barn,” with “or” being grounded in the alternating simulations of the middle region.
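A minimal sketch of the alternation that grounds “or” follows, assuming the outer regions are held fixed while the middle region cycles through its candidate specializations; the names and the use of itertools.cycle are illustrative only.

```python
# Toy sketch: "or" grounded in alternating specializations of one schematic
# region, while the outer regions stay fixed (cf. Fig. 8, subevent 3).
from itertools import cycle, islice

left, right = "tree", "barn"
middle_alternatives = ["horse", "cow"]

for middle in islice(cycle(middle_alternatives), 4):
    print(f"simulating: {left} | {middle} | {right}")

print(f"Report: 'There was a {middle_alternatives[0]} or a {middle_alternatives[1]} "
      f"between the {left} and the {right}.'")
```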

Similar to truth and negation, the three mechanisms for representing abstract concepts are present. Again, an event sequence frames the concept, while attention focuses on the core content, namely, the alternating specializations of the middle region. Again, introspective events, such as productive simulation, propositional construal, and alternating specialization, are central.27

Figure 8. Accounting for one sense of disjunction using perceptual symbols. Boxes with thin solid lines represent simulators; boxes with thick dashed lines represent simulations; the box with the thick solid line represents a perceived situation. The dashed lines in Subevent 3 represent alternating simulations of a horse and a cow in the middle region of the simulated event.

The semantic field that contains disjunction also contains ad hoc categories (Barsalou 1983; 1985; 1991). As Barsalou (1991) proposes, ad hoc categories construe entities as playing roles in events. For example, the category of things to stand on to change a light bulb construes a chair as playing the role of allowing someone to reach a light bulb while replacing it. Alternatively, the category of things used to prop a door open construes a chair as playing the role of keeping a door from closing, perhaps to cool off a hot room. Each of these categories is disjunctive, because a wide variety of entities can play the same role in an event.

Figure 8. Accounting for one sense of disjunction using perceptual symbols. Boxes with thin solid lines represent simulators; boxes with thick dashed lines represent simulations; the box with the thick solid line represents a perceived situation. The dashed lines in Subevent 3 represent alternating simulations of a horse and a cow in the middle region of the simulated event.

Figure 9 illustrates how perceptual symbol systems represent ad hoc categories. Figure 9A stands for simulating the event of changing a light bulb, and Figure 9C stands for simulating the event of propping a door open. In each simulation, a schematic region can be specialized disjunctively, thereby forming an ad hoc category. In Figure 9A, a schematic region exists for the entity that is stood on to reach the light bulb. In Figure 9C, a schematic region exists for the entity that props the door open. Each of these schematic regions can be specialized with entities in the perceived situation (Fig. 9B). The schematic region for something to stand on to change a light bulb in Figure 9A can be specialized with the stool, chair, and table, whereas the schematic region for something used to prop a door open in Figure 9C can be specialized with the accordion, the chair, and the iron.28

As Figure 9 further illustrates, the same entity is construed in different ways when it specializes regions in different event simulations. For example, the chair in Figure 9B specializes regions in simulations for changing a light bulb and for propping a door open. As in Figures 5B and 5C, a perceptual symbol system represents different interpretations of the same individual by binding it to different simulations. A wide variety of other role concepts, such as merchandise, food, and crop, can similarly be represented as disjunctive sets that instantiate the same region of a simulation.
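A similarly minimal sketch, again with invented names, shows how the same perceived entity can specialize the role regions of two different event simulations and thereby receive two different construals.

```python
# Two event simulations, each with a schematic role region that can be
# specialized disjunctively, forming an ad hoc category (Figs. 9A and 9C).
change_light_bulb = {"goal": "reach light bulb", "thing_stood_on": None}
prop_door_open    = {"goal": "keep door open",  "thing_propping_door": None}

perceived_entities = ["stool", "chair", "table", "accordion", "iron"]

# Hypothetical candidates for each role; note the chair appears in both.
standing_candidates = [e for e in perceived_entities if e in ("stool", "chair", "table")]
propping_candidates = [e for e in perceived_entities if e in ("accordion", "chair", "iron")]

# Binding the same chair into both simulations construes it once as something
# to stand on and once as a doorstop.
for entity in standing_candidates:
    print({**change_light_bulb, "thing_stood_on": entity})
for entity in propping_candidates:
    print({**prop_door_open, "thing_propping_door": entity})
```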

3.4.5. Summary. Perceptual symbol systems can represent some of the most challenging abstract concepts directly. Although they have not been shown to represent all abstract concepts, their ability to represent some of the most difficult cases is encouraging. Such success suggests that other abstract concepts can be represented similarly by (a) identifying the event sequences that frame them, (b) specifying the physical and introspective events in these sequences, and (c) identifying the focal elements of these simulations. As these examples further illustrate, abstract concepts are perceptual, being grounded in temporally extended simulations of external and internal events. What may make these concepts seem nonperceptual is their heavy reliance on complex configurations of multimodal information distributed over time. This property distinguishes abstract concepts from more concrete concepts, but it does not mean that they are nonperceptual.

4. Implications

A perceptual theory of knowledge is not restricted to a recording system that can only store and process holistic images. Construing perceptual theories this way fails to address the most powerful instances of the class. As we have seen, it is possible to formulate a perceptual theory of knowledge that constitutes a fully functional conceptual system. According to this theory, selective attention extracts components of perceptual experience to establish simulators that function as concepts. Once established, simulators represent types, they produce categorical inferences, they combine productively to form complex simulations never experienced, and they establish propositions that construe individuals in the world. Furthermore, this framework is not limited to the representation of concrete objects but can also represent abstract concepts. A perceptual symbol system can perform all of the important functions that modern cognitive science asks of a theory of knowledge.

Once one begins viewing human knowledge this way, it changes how one views other aspects of cognition. Thinking about memory, language, and thought as grounded in perceptual simulation is very different from thinking about them as grounded in the comparison of feature lists, the search of semantic nets, or the left-to-right processing of predicates. The remaining sections explore how adopting the perspective of perceptual symbol systems affects one's views of cognition, neuroscience, evolution, development, and artificial intelligence.

4.1. Implications for cognition

4.1.1. Perception. As described earlier, cognition diverged from perception in the twentieth century as knowledge came to be viewed as nonperceptual. Conversely, perception diverged from cognition, focusing primarily on bottom-up sensory mechanisms and ignoring top-down effects. From the perspective of perceptual symbol systems, the dissociation between perception and cognition is artificial and invalid. Because perception and cognition share common neural systems, they function simultaneously in the same mechanisms and cannot be divorced. What is learned about one becomes potentially relevant for understanding the other.

This influence is not unidirectional: cognition does not become more perceptual while perception remains unaffected. Perception is not an entirely modular system with cognition lying outside it. Because perception shares systems with cognition, bottom-up activation of perceptual systems engages cognitive processes immediately. As described earlier (sect. 2.4.6), bottom-up information may dominate conflicting top-down information but fuse with consistent top-down information.

Figure 9. Accounting for the ad hoc categories of things to stand on to change a light bulb (A) and things that could prop a door open (C), which construe common entities in the same scene differently (B).

4.1.2. Categorization and concepts. From the perspective of perceptual symbol systems, categorization is propositional construal (sect. 3.2). The perception of an individual activates the best-fitting simulator, thereby establishing a type-token mapping. In the process, the simulator runs a schematic simulation in the same perceptual systems that perceive the individual, with the simulation adding inferred features. Because the simulation and the perception are represented in shared perceptual systems, they become fused together. In contrast, standard views assume that amodal features in a nonperceptual system become active to represent a perceived individual, with the cognitive and perceptual representations remaining separate.
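As a rough illustration of this flow, the following sketch treats simulators as feature sets and fit as simple overlap; both are placeholders for the richer mechanisms assumed in the theory.

```python
# Simulators as named sets of schematic features; a perceived individual
# as the subset of features currently available bottom-up.
simulators = {
    "dog": {"four_legs", "tail", "barks", "fur"},
    "cat": {"four_legs", "tail", "meows", "fur"},
}
perceived = {"four_legs", "tail", "barks"}

def categorize(perceived, simulators):
    """Pick the best-fitting simulator (a type-token mapping) and fuse its
    schematic simulation with the perception, adding inferred features."""
    best = max(simulators, key=lambda name: len(simulators[name] & perceived))
    fusion = simulators[best] | perceived       # simulation fused with perception
    inferred = simulators[best] - perceived     # features added top-down
    return best, fusion, inferred

category, fusion, inferred = categorize(perceived, simulators)
print(category, inferred)   # e.g. 'dog', with a feature such as fur inferred
```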

The concepts that underlie categorization also take on new forms from the perspective of perceptual symbol systems. Rather than being a static amodal structure, a concept is the ability to simulate a kind of thing perceptually. Because simulators partially reproduce perceptual experience, spatiotemporal relations and transformations become central to concepts. Whereas such knowledge is cumbersome and brittle in amodal symbol systems (e.g., Clark 1997; Glasgow 1993; McDermott 1987; Winograd & Flores 1987), it "rides for free" in perceptual symbol systems (Goldstone & Barsalou 1998). Conceptual combination similarly takes on a new character in perceptual symbol systems. Rather than being the set-theoretic combination of amodal features, it becomes the hierarchical construction of complex perceptual simulations.

Perceptual symbol systems also offer a different perspective on the proposal that intuitive theories provide background knowledge about concepts (Murphy & Medin 1985). As we saw earlier (sect. 2.5.3), concepts are often framed in the context of perceptually simulated events. This type of framing may underlie many of the effects attributed to intuitive theories, and it may account for them in a more implicit and less formal manner. Rather than assuming that people use theories to frame concepts, it may be more realistic to assume that people use simulated events from daily experience. For example, an intuitive theory about biology is often assumed to frame concepts of natural kinds. Alternatively, simulations of biological events may frame these concepts and represent implicitly the theoretical principles that underlie them. Specifically, simulations of birth, growth, and mating may frame concepts like animal and represent theoretical principles such as mothers give birth to babies, who grow up to produce their own offspring. Simulations of various transformations may further support inferences that distinguish natural kinds from artifacts, such as the ability of leg to bend for human but not for table. Similarly, simulations of rolling out pizzas versus the minting of quarters may underlie different inferences about variability in diameter (Rips 1989). Event simulations may also represent functions, such as the function of cups being represented in simulations of drinking from them. Whenever an intuitive theory appears necessary to explain a conceptual inference, a simulated event sequence may provide the requisite knowledge.

4.1.3. Attention. Attention is central to schematic symbol formation. By focusing on an aspect of perceptual experience and transferring it to long-term memory, attention overcomes the limits of a recording system. Attention is a key analytic mechanism in parsing experience into the schematic components that ultimately form concepts. Although attention has played important roles in previous theories of concepts (e.g., Nosofsky 1984; Trabasso & Bower 1968), its role here is to create the schematic perceptual symbols that compose simulators.

Once a simulator becomes established, it in turn controls attention (cf. Logan 1995). As a simulator produces a simulation, it controls attention across the simulation. Thus, attention becomes a semantic feature, as when it focuses on the upper versus the lower region of the same spatial representation to distinguish above from below. The control of attention can also contrast a focal concept against a background simulation, as we saw in the sections on framing (sect. 2.5.3) and abstract concepts (sect. 3.4). As these examples illustrate, attention takes on important new roles in the context of perceptual symbol systems.

Perceptual symbol systems also provide natural accounts of traditional attentional phenomena. For example, automatic processing is the running of a highly compiled simulation, whereas strategic processing is the construction of a novel simulation using productive mechanisms. Priming is essentially perceptual anticipation (Glenberg 1997; Neisser 1976), namely, the top-down activation of a simulation that matches a perceptual input and facilitates its processing. In contrast, inhibition is the top-down suppression of a simulator.

4.1.4. Working memory. Current accounts of working memory construe it as a complex set of limited capacity mechanisms, including an executive processor and several modality-specific buffers, such as an articulatory loop and a visual short-term memory (Baddeley 1986). From the perspective of perceptual symbol systems, working memory is the system that runs perceptual simulations. The articulatory loop simulates language just heard or about to be spoken. The visual short-term buffer simulates visual experience just seen or currently imagined. The motor short-term buffer simulates movements just performed or about to be performed. The gustatory short-term buffer simulates tastes just experienced or anticipated. The executive processor simulates the execution of procedures just executed or about to be performed. Not only do these working memory systems operate during perception, movement, and problem solving, they can also be used to simulate these activities offline.

Standard theories of cognition assume that working memory contains perceptual representations, whereas long-term memory contains amodal representations (e.g., Baddeley 1986; Kosslyn 1980). From the perspective of perceptual symbol systems, both systems are inherently perceptual, sharing neural systems with perception. Whereas long-term memory contains simulators, working memory implements specific simulations.

4.1.5. Long-term memory. Encoding information into long-term memory is closely related to categorization, because both involve propositional construal. When an individual is encountered, its categorization into a type-token relation encodes a proposition (sect. 3.2.1). To the extent that this proposition receives processing, it becomes established in long-term memory. The many varieties of elaboration and organization in the encoding literature can all be viewed in this manner. Indeed, various findings suggest that elaboration and organization are inherently perceptual (e.g., Brandimonte et al. 1997; Intraub et al. 1998; M. Martin & Jones 1998). From the perspective of perceptual symbol systems, encoding produces a fusion of bottom-up sensation and top-down simulation (sect. 2.4.7). Under conceptually driven orienting tasks, a fusion contains increased information from simulation; under data-driven orienting tasks, a fusion contains increased information from sensation (cf. Jacoby 1983). Furthermore, rich sensory information suggests that the memory resulted from perception, not imagination (Johnson & Raye 1981).

Once a type-token fusion becomes stored in long-term memory, it can be retrieved on later occasions. Because the fusion is a perceptual representation, its retrieval is essentially an attempt to simulate the original entity or event. Thus, memory retrieval is another form of perceptual simulation (Glenberg 1997; cf. Conway 1990; Kolers & Roediger 1984), with fluent simulations producing attributions of remembrance (Jacoby et al. 1989). Such simulations can become active unconsciously in implicit memory, or they can become active consciously in explicit memory. Reconstructive memory reflects the unbidden contribution of a simulator into the retrieval process. As a memory is retrieved, it produces a simulation of the earlier event. As the simulation becomes active, it may differ somewhat from the original perception, perhaps because of less bottom-up constraint. To the extent that the remembered event's features have become inaccessible, the simulator converges on its default simulation. The result is the wide variety of reconstructive effects reported in the literature.

4.1.6. Language processing. Language comprehension can be viewed as the construction of a perceptual simulation to represent the meaning of an utterance or text. As suggested earlier, the productive properties of language guide this process, with a combination of words providing instructions for building a perceptual simulation (Langacker 1986; 1987; 1991; 1997; MacWhinney 1998). A typical sentence usually contains reference to specific individuals, as well as predications of them. For example, "the cup on Anna's desk is blue" refers to a particular cup on a particular desk, and predicates that it is blue. To construct a simulation of this sentence, the comprehender simulates the individual desk and cup and then specializes the color of the cup. Later sentences update the simulation by changing the individuals present and/or transforming them. Thus, "it contains pens and pencils" adds new individuals to the simulation, inserting them inside the cup. The affordances of a simulation may often produce inferences during comprehension. For example, spatial properties of the pens, pencils, and cup determine that the pens and pencils sit vertically in the cup, leaning slightly against its lip. If the pens and pencils had instead been placed in a drawer, their orientation would have been horizontal. In an amodal representation, such inferences would not be made, or they would require cumbersome logical formulae.
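One way to picture "words as instructions for building a simulation" is the toy construction below; the entities, properties, and helper functions are invented for the example and carry no theoretical weight.

```python
# A simulation as a set of individuals plus their properties and relations.
simulation = {"individuals": {}, "relations": []}

def add_individual(sim, name, **properties):
    sim["individuals"][name] = dict(properties)

def relate(sim, relation, figure, ground):
    sim["relations"].append((relation, figure, ground))

# "The cup on Anna's desk is blue."
add_individual(simulation, "desk", owner="Anna")
add_individual(simulation, "cup", color="blue")
relate(simulation, "on", "cup", "desk")

# "It contains pens and pencils." -- a later sentence updates the simulation.
add_individual(simulation, "pens", orientation="vertical")     # affordance-based
add_individual(simulation, "pencils", orientation="vertical")  # inference
relate(simulation, "in", "pens", "cup")
relate(simulation, "in", "pencils", "cup")

print(simulation)
```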

As individuals and their properties become established in a simulation, they become fused with the simulators that construe them. Thus, the pens, pencils, and cup become fused with simulators for pen, pencil, and cup, and the blue color of the cup becomes fused with blue. As a result, the simulators produce inferences about the individuals as needed. For example, if the text stated, "Anna checked whether one of the pens worked, and it didn't," the simulator for pen might simulate inferences such as Anna pressed a button on top of a pen and when the pen didn't work, an uncomfortable feeling resulted from the dry pen scraping across the paper. Similarly, if the text stated, "Anna would have preferred a cup in a lighter shade of blue," the simulator for blue might simulate a lighter shade that simulates the cup negatively but that produces a more positive affective response in the simulation of Anna's mental state. As comprehension proceeds, representations of individuals develop, as in the perception of a physical scene. Simultaneously, simulators become fused with these imagined individuals to represent propositions about them, much like the type-token mappings that develop during the categorization of physical entities.29

As these examples illustrate, perceptual simulation offers a natural account of how people construct the meanings of texts, or what other researchers have called situation models and mental models (e.g., Johnson-Laird 1983; Just & Carpenter 1987; van Dijk & Kintsch 1983). A variety of findings can be interpreted as evidence that perceptual simulation underlies these models (e.g., Black et al. 1979; Bransford & Johnson 1973; Gernsbacher et al. 1990; Gibbs 1994; Glenberg et al. 1987; Intraub & Hoffman 1992; Morrow et al. 1987; Potter & Faulconer 1975; Potter et al. 1986; Potter et al. 1977; Von Eckardt & Potter 1985; Rinck et al. 1997; Wilson et al. 1993).

4.1.7. Problem solving, decision making, and skill. From the perspective of perceptual symbol systems, problem solving is the process of constructing a perceptual simulation that leads from an initial state to a goal state. Problem solvers can work forward from the initial state or backward from the goal state, but in either case they attempt to simulate a plan that achieves the goal. In novice problem solving, the difficulty is finding satisfactory components of the simulation and ordering them properly. If novices start from the initial state, they may not know the next event to simulate that will lead to the goal. A component may be added to the simulation that has been successful for achieving this type of goal in the past, or a component might be added because its simulated affordances suggest that it may work. For example, one might want to remove caulk between a wall and the counter in a kitchen. If one has never done this before, various plans can be simulated to see which works best (e.g., using a scraper versus a chemical solvent).

Decision making can be viewed as specializing a simulated plan in different ways to see which specialization produces the best outcome (cf. the simulation heuristic of Kahneman & A. Tversky 1982). In evaluating plans to remove caulk from a joint, a decision must be made about how to specialize the region of the handheld instrument. As possible specializations are retrieved, each is simulated to assess which works best. A wide variety of decisions can be viewed this way, including decisions about purchases, occupations, social events, and so forth. For each, an agent simulates a plan to achieve a goal and then tries out disjunctive specializations of a region to see which yields the most promising outcome. In essence, making a decision involves constructing an ad hoc category for a plan region and selecting one of its members (sect. 3.4.4).
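A minimal sketch of this proposal, with placeholder candidates and scores, treats decision making as constructing an ad hoc category for one region of a simulated plan and selecting the member whose simulated outcome is most promising.

```python
# Simulated outcomes for specializations of the "instrument" region of a
# caulk-removal plan; the scores stand in for how well each simulation runs.
simulated_outcome = {
    "scraper": 0.7,
    "chemical solvent": 0.5,
    "butter knife": 0.3,
}

def decide(plan_region_candidates, simulate):
    """Simulate each specialization of the plan region and pick the one
    whose simulated outcome is most promising (cf. sect. 3.4.4)."""
    return max(plan_region_candidates, key=simulate)

choice = decide(simulated_outcome, simulated_outcome.get)
print(choice)   # 'scraper' under these assumed outcomes
```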

Skill results from compiling simulations for most of the plans in a domain through extensive experience (cf. Anderson 1993; Logan 1988; Newell 1990). Rather than searching for plan components and making decisions about how to specialize plan regions, experts retrieve compiled simulations that achieve goals with minimal transformation. During plan execution, simulated plans run slightly ahead of perception. As in propositional construal, simulation and perception become closely entrained. Expertise is achieved when an agent can almost always simulate what is about to occur, rarely stopping to revise the simulation.

4.1.8. Reasoning and formal symbol manipulation. To see how perceptual symbol systems could underlie logical reasoning, consider modus ponens. In modus ponens, if the premise X → Y is true, and if the premise X is also true, then the conclusion Y follows. Shortly, a formal account of modus ponens in perceptual symbol systems will be presented, but first an implicit psychological account is developed. Imagine that an agent is told, If a computer is a Macintosh, then it has a mouse. On hearing this, the agent constructs a perceptual simulation of a Macintosh that includes a mouse, thereby representing the premise, X → Y, informally (cf. Johnson-Laird 1983). On a later occasion, when a particular Macintosh is encountered (i.e., premise X), it activates the simulation of a Macintosh constructed earlier, which includes a mouse. As the simulation is fused with the individual, a mouse is simulated, even if a physical mouse is not perceived. Through this psychological analogue to modus ponens, the inference that the individual has a mouse is drawn (i.e., Y).

Similar use of simulation could underlie syllogisms such as Every B is C, A is B, therefore A is C. Imagine that, over time, an agent experiences a mouse with every Macintosh. As a result, mice become strongly established in the simulator for Macintosh, so that all simulations of a Macintosh include one (i.e., Every B is C). Later, when a new entity is categorized as a Macintosh (i.e., A is B), the simulator for Macintosh produces a simulation that includes a mouse, thereby drawing the syllogistic inference (i.e., A is C).

To the extent that C does not always covary with B, the certainty of C decreases, giving the inference a statistical character (Oaksford & Chater 1994). If an agent has experienced Macintoshes without mice, the inference that a perceived individual has one is less certain, because simulations can be constructed without them as well as with them. To the extent that simulations with mice are easier to construct than simulations without mice, however, the inference is compelling. As a simulation becomes more fluent, its perceived likelihood increases (Jacoby et al. 1989).
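The statistical character of such inferences can be illustrated as follows; the frequencies are invented, and the random draw merely stands in for running one simulation from the simulator.

```python
import random

# Experienced exemplars: how often a Macintosh came with a mouse.
experienced = [{"mouse": True}] * 18 + [{"mouse": False}] * 2

def simulate_macintosh(memory):
    """Run one simulation; a component appears with the frequency at which
    it was integrated into the simulator from experience."""
    p_mouse = sum(e["mouse"] for e in memory) / len(memory)
    return {"mouse": random.random() < p_mouse}

# Categorizing a new entity as a Macintosh triggers a simulation, and the
# simulated mouse is the (defeasible) categorical inference.
inferences = [simulate_macintosh(experienced)["mouse"] for _ in range(1000)]
print(sum(inferences) / len(inferences))   # roughly 0.9: compelling, not certain
```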

Widespread content effects in reasoning are consistent with perceptual simulation. As many researchers have reported, reasoning improves when the abstract variables in arguments are replaced with familiar situations. For example, people draw the invalid inference of affirming the consequent in arguments stated with abstract variables (i.e., receiving the premises X → Y and Y, and then concluding X). However, in a familiar domain such as computers, if X is a Macintosh and Y is a mouse, people do not affirm the consequent, because they can think of non-Macintosh computers that have mice. From the perspective of perceptual symbol systems, this improvement reflects the ability to construct relevant simulations (cf. Johnson-Laird 1983). To the extent that the critical events have been experienced, information becomes stored that produces the simulations necessary to drawing only valid inferences.

Perceptual simulation offers a similar account of causal reasoning. Cheng and Novick (1992) propose that the strength of a causal inference reflects the difference between the probability of an event leading to an outcome and the probability of the event not leading to the outcome, with increasingly large differences producing increasingly strong inferences. A perceptual symbol system can compute these differences using perceptual simulations. To the extent that it is easier to simulate an event leading to an outcome than the event not leading to the outcome, the event is construed as a likely cause of the outcome. Indeed, people appear to construct such simulations to assess causality (Ahn & Bailenson 1996; Ahn et al. 1995). Covariation is also critical, though, because the more the event and outcome covary, the greater the fluency of the simulation, and the stronger the causal attribution (cf. Jacoby et al. 1989). Much additional research in social cognition illustrates that the ease of simulating a scenario underlies the acceptability of a causal explanation (e.g., K. Markman et al. 1993; Pennington & Hastie 1992; Wells & Gavanski 1989).
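In Cheng and Novick's probabilistic-contrast terms, the quantity estimated here by the relative ease of the two simulations can be written as (the notation is added for clarity and does not appear in the target article)

\[
\Delta P \;=\; P(\text{outcome} \mid \text{event}) \;-\; P(\text{outcome} \mid \text{no event}),
\]

with larger positive contrasts supporting stronger causal inferences; on the present account, the two conditional probabilities are approximated by how fluently each scenario can be simulated.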

Finally, it is possible to explain formal symbol manipulation in logic and mathematics through the simulation of arbitrary symbols. From perceptual experience with external symbols and operations, the ability to construct analogous simulations internally develops. For example, after watching an instructor work through modus ponens, a student may develop the ability to simulate the formal procedure internally. The student learns to simulate the two premises followed by the inferred conclusion, analogous to how they would be manipulated externally (i.e., simulate "X → Y" and "X" as given, then simulate "Y" as true). Furthermore, the student develops the ability to simulate, replacing variables with constants. Thus, if students receive "Macintosh → mouse, Macintosh" in a problem, they know that memory should be searched for a simulated rule that this pattern of constants can specialize. On retrieving the simulation for modus ponens, the students see that the perceived form of the constants matches the simulated form of the inference rule, indicating that the simulation can be applied. Running the simulation requires first specializing the variables in the simulated premises with the constants from the problem. The simulation then produces "Y" as an inference and specializes it with its value from the first premise, thereby simulating the correct conclusion, "mouse."
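The variable-to-constant specialization described here amounts to simple pattern matching, as in the sketch below; representing rules as tuples is an expository convenience, not a claim about the underlying simulations.

```python
# A simulated inference rule: premises and a conclusion over variables X, Y.
MODUS_PONENS = {"premises": [("implies", "X", "Y"), ("given", "X")],
                "conclusion": ("given", "Y")}

def apply_rule(rule, problem):
    """Match the problem's constants against the rule's variables (the
    'specialization' step), then produce the specialized conclusion."""
    bindings = {}
    for pattern, fact in zip(rule["premises"], problem):
        if pattern[0] != fact[0]:
            return None                      # forms do not match
        for var, const in zip(pattern[1:], fact[1:]):
            if bindings.setdefault(var, const) != const:
                return None                  # inconsistent binding
    head, *args = rule["conclusion"]
    return (head, *[bindings[a] for a in args])

problem = [("implies", "Macintosh", "mouse"), ("given", "Macintosh")]
print(apply_rule(MODUS_PONENS, problem))     # ('given', 'mouse')
```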

As this example illustrates, the same basic processes that simulate natural entities and events can also simulate formal entities and events. It is worth adding, though, that people often construct nonformal simulations to solve formal problems. For example, mathematicians, logicians, and scientists often construct visual simulations to discover and understand formalisms (e.g., Barwise & Etchemendy 1990, 1991; Hadamard 1949; Thagard 1992). Nonacademics similarly use nonformal simulations to process formalisms (e.g., Bassok 1997; Huttenlocher et al. 1994). Whereas proofs ensure the properties of a formalism, perceptual simulations often lead to its discovery in the first place.

4.2. Implications for evolution and development

Amodal symbol systems require a major leap in evolution. Assuming that nonhuman animals do not have amodal symbol systems, humans must have acquired a radically new form of neural hardware to support a radically new form of representation. Of course, this is possible. However, if a more conservative evolutionary path also explains the human conceptual system, parsimony favors it, all other factors being equal.

Not only do perceptual symbol systems offer a more parsimonious account of how intelligence evolved, they also establish continuity with nonhuman animals. On this view, many animals have perceptual symbol systems that allow them to simulate entities and events in their environment. Such simulations could produce useful inferences about what is likely to occur at a given place and time, and about what actions will be effective. Because many animals have attention, working memory, and long-term memory, they could readily extract elements of perception analytically, integrate them in long-term memory to form simulators, and construct specific simulations in working memory. If so, then the human conceptual ability is continuous with the nonhuman conceptual ability, not discontinuous.

Where human intelligence may diverge is in the use of language to support shared simulations (e.g., Donald 1991; 1993; Tomasello et al. 1993). Evolution may have built upon perceptual symbol systems in nonhuman primates by adding mechanisms in humans for uttering and decoding rapid speech, and for linking speech with conceptual simulations. Rather than evolving a radically new system of representation, evolution may have developed a linguistic system that extended the power of existing perceptual symbol systems. Through language, humans became able to control simulations in the minds of others, including simulations of mental states. As a result, humans became able to coordinate physical and mental events in the service of common goals. Whereas nonhumans primarily construct simulations individually in response to immediate physical and internal environments, humans construct simulations jointly in response to language about nonpresent situations, thereby overcoming the present moment.

Human development may follow a similar path, with ontogeny loosely recapitulating phylogeny (K. Nelson 1996). Similar to nonhumans, infants may develop simulators and map them into their immediate world. During early development, infants focus attention selectively on aspects of experience, integrate them in memory, and construct simulators to represent entities and events (cf. Cohen 1991; Jones & L. Smith 1993; J. Mandler 1992; L. Smith et al. 1992). Long before infants use language, they develop the ability to simulate many aspects of experience. By the time infants are ready for language, they have a tremendous amount of knowledge in place to support its acquisition. As they encounter new words, they attach them to the relevant simulators. New words may sometimes trigger the construction of a new simulator, or a new aspect of an existing one (cf. E. Markman 1989; Tomasello 1992). Much of the time, however, new words may map onto existing simulators and their parts. As linguistic skill develops, children learn to construct simulations productively from other people's utterances, and to construct utterances that convey their internal simulations to others.

Analogous to perceptual symbol systems being continuous across evolution, perceptual symbol systems are continuous across development, with the addition of linguistic control to achieve social coordination and cultural transmission. Certainly, perceptual symbol systems must change somewhat over both evolution and development. Indeed, the principle of variable embodiment implies such differences (sect. 3.3). To the extent that different animals have different perceptual and bodily systems, they should have different conceptual systems. Similarly, as infants' perceptual and bodily systems develop, their conceptual systems should change accordingly. Nevertheless, the same basic form of conceptual representation remains constant across both evolution and development, and a radically new form is not necessary.

4.3. Implications for neuroscience

Much more is known about how brains implement perception than about how they implement cognition. If perception underlies cognition, then what we know about perception can be used to understand cognition. Neural accounts of color, form, location, and movement in perception should provide insights into the neural mechanisms that represent this information conceptually. Much research has established that mental imagery produces neural activity in sensory-motor systems, suggesting that common neural mechanisms underlie imagery and perception (e.g., Crammond 1997; Deschaumes-Molinaro et al. 1992; Farah 1995; Jeannerod 1994; 1995; Kosslyn 1994; Zatorre et al. 1996). If perceptual processing also underlies cognition, then common sensory-motor mechanisms should be active for all three processes. As described earlier (sect. 2.3), increasing neuroscientific evidence supports this hypothesis, as does increasing behavioral evidence (e.g., Barsalou et al., in press; Solomon & Barsalou 1999a; 1999b; Wu & Barsalou 1999).

Because perception, imagery, and cognition are not identical behaviorally, their neuroanatomical bases should not be identical. Theorists have noted neuroanatomical differences between perception and imagery (e.g., Farah 1988; Kosslyn 1994), and differences also certainly exist between perception and cognition. The argument is not that perception and cognition are identical. It is only that they share representational mechanisms to a considerable extent.

4.4. Implications for artificial intelligence

Modern digital computers are amodal symbol systems. They capture external input using one set of sensing mechanisms (e.g., keyboards, mice) and map it to a different set of representational mechanisms (e.g., binary strings in memory devices). As a result, arbitrary binary strings come to stand for the states of input devices (e.g., 1011 stands for a press of the period key).

Nevertheless, a perceptual symbol system could be implemented in a current computer. To see this, imagine that a computer has a set of peripheral devices that connect it to the world. At a given point in time, the "perceptual state" of the machine is the current state of its peripheral devices. If the machine can focus selectively on a small subset of a perceptual state, associate the subset's components in memory to form a perceptual symbol, integrate this memory with related memories, and later reproduce a superimposition of them in the peripheral device to perform conceptual processing, it is a perceptual symbol system. The machine stores and simulates perceptual symbols in its perceptual systems to perform basic computational operations – it does not process transduced symbols in a separate central processor.

In such a system, a perceptual symbol can later become active during the processing of new input and be fused with the input to form a type-token proposition (in a peripheral device). If different perceptual symbols become fused with the same perceptual input, different construals result. The system could also implement productivity by combining the activation of peripheral states in a top-down manner. Most simply, two symbols could be activated simultaneously and superimposed to form a more complex state never encountered.
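Under the loose assumptions of this passage, the basic loop might be sketched as follows, where the "peripheral device" is just a dictionary of sensor readings and every name is invented for illustration.

```python
# A toy "peripheral device": the machine's perceptual state at one moment.
peripheral_state = {"color": "green", "shape": "round", "sound": "none"}

long_term_memory = []   # stored perceptual symbols (subsets of past states)

def attend(state, focus):
    """Selective attention: extract a small subset of the perceptual state
    and store it as a perceptual symbol."""
    symbol = {k: state[k] for k in focus}
    long_term_memory.append(symbol)
    return symbol

def simulate(buffer):
    """Conceptual processing: superimpose stored symbols back onto the
    peripheral buffer itself, not in a separate central processor."""
    for symbol in long_term_memory:
        buffer.update(symbol)               # later symbols dominate earlier ones
    return buffer

attend(peripheral_state, ["color"])                    # store {'color': 'green'}
attend({"color": "red", "sound": "purr"}, ["sound"])   # store {'sound': 'purr'}

# Productivity: reactivating both symbols and superimposing them in the
# peripheral buffer yields a combined state never actually encountered.
print(simulate({}))    # {'color': 'green', 'sound': 'purr'}
```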

Variable embodiment has implications for such implementations. Because peripheral devices in computers differ considerably from human sensory-motor systems, implementations of perceptual symbol systems in computers should have a distinctly nonhuman character. Contrary to the claims of functionalism, computers should not be capable of implementing a human conceptual system, because they do not have the requisite sensory-motor systems for representing human concepts. Although it is intriguing to consider how a perceptual symbol system might be implemented technologically, it is probably naive to believe that such a system would correspond closely to human intelligence. Such correspondence awaits the development of artifacts that are much more biological in nature.

5. Conclusion

Once forgotten, old ideas seem new. Perhaps the perceptual approach to cognition has not been altogether forgotten, but it has appeared so strange and improbable in modern cognitive science that it has been relegated to the periphery. Rather than resurrecting older perceptual theories and comparing them to amodal theories, reinventing perceptual theories in the contexts of cognitive science and neuroscience may be more productive. Allowing these contexts to inspire a perceptual theory of cognition may lead to a competitive and perhaps superior theory.

Clearly, every aspect of the theory developed here must be refined, including the process of schematic symbol formation, the construct of a simulator, the productive use of language to construct simulations, the fusing of simulations with perceived individuals to produce propositions, the ability to represent abstract concepts, and so forth. Ideally these theoretical analyses should be grounded in neural mechanisms, and ideally they should be formalized computationally. Furthermore, a strong empirical case needs to be established for myriad aspects of the theory. Again, the goal here has only been to demonstrate that it is possible to ground a fully functional conceptual system in sensory-motor mechanisms, thereby giving this classic theory new life in the modern context.

ACKNOWLEDGMENTS
This work was supported by National Science Foundation Grants SBR-9421326 and SBR 9796200. I am indebted to Karen Olseth Solomon, Ling-Ling Wu, Wenchi Yeh, and Barbara Luka for their contributions to the larger project in which this paper originates, and to Jesse Prinz for extensive guidance in reading the philosophical literature. For helpful comments on earlier drafts, I am grateful to Katy Boerner, Eric Dietrich, Shimon Edelman, Arthur Glenberg, Robert Goldstone, Annette Herskovits, Alan Malter, Jean Mandler, Robert McCauley, Douglas Medin, Gregory Murphy, Lance Rips, and Steven Sloman, none of whom necessarily endorse the views expressed. I am also grateful to Howard Nusbaum for discussion on cognitive penetration, and to Gay Snodgrass for permission to use drawings from the Snodgrass and Vanderwart (1980) norms. Finally, I am grateful to Stevan Harnad for supporting this article, for extensive editorial advice of great value, and for coining the term "simulator" as a replacement for "simulation competence" in previous articles.

NOTES
1. This is not a claim about correspondence between perceptual symbols and the physical world. Although the structure of perceptual symbols may correspond to the physical world in some cases, it may not in others. For example, philosophers have often argued that a correspondence exists for primary qualities, such as shape, but not for secondary qualities, such as color (e.g., K. Lehrer 1989). Neuroscientists have similarly noted topographic correspondences between neuroanatomical structure and physical structure (e.g., Tootel et al. 1982; Van Essen 1985).

2. Throughout this paper, double quotes signify words, and italics signify conceptual representations, both modal and amodal. Thus, "chair" signifies the word chair, whereas chair signifies the corresponding concept.

3. Some researchers argue that visual agnosia and optic aphasia support a distinction between perception and conception. Because these disorders are characterized by normal perceptual abilities but damaged conceptual and semantic abilities, they suggest that perceptual and conceptual abilities reside in distinct neural systems. Detailed study of these disorders, however, suggests caution in drawing such conclusions (e.g., Hillis & Caramazza 1995). When careful behavioral assessments are performed, correlated deficits between perception and conception may actually be present. Also, bottom-up control of sensory-motor areas may remain after top-down control is lost (sect. 2.4.6). Rather than providing evidence for two different representational systems, these disorders may provide evidence for two different ways of activating a common representational system.

4. Various causal accounts of symbol grounding have been proposed in the philosophical literature, but these typically apply only to a small fragment of concepts and fail to provide a comprehensive account of how symbols in general are grounded. Furthermore, empirical evidence for these theories is typically lacking.

5. Specifying the features computed in the sensory-motor cortices constitutes an undeveloped aspect of the theory. To a considerable extent, this problem belongs to a theory of perception (but see Schyns et al. 1998). Nevertheless, whatever features turn out to be important for perception should also be at least somewhat important for cognition. Specifying how associative areas store patterns of features in sensory-motor areas constitutes another undeveloped aspect of the theory, although the connectionist literature suggests many possibilities.

6. When conscious states do occur, they do not necessarily exhibit one-to-one mappings with the neural states that produced them (Pessoa et al. 1998).

7. Specifying how the cognitive system knows where to focus attention during the symbol formation process constitutes an undeveloped component of the theory. Barsalou (1993) and Logan (1995) suggest several possibilities, but this is a central issue that remains far from resolved in any theory.

8. Jesse Prinz suggested this account of triangle. Note that the qualitative neurons in this account constitute a modal representation, not an amodal one. As defined earlier (sect. 1.1), modal refers to the fact that the same neurons represent triangles perceptually and conceptually. Thus, the qualitative neurons that represent triangle are not arbitrarily linked to the neural states that arise while perceiving triangles. Instead, the neurons that represent triangle conceptually are a subset of those that are active when triangles are processed perceptually.

9. Recent research suggests that object-oriented reference frames may not be essential to categorization (e.g., S. Edelman 1998; Tarr & Pinker 1989; but see Biederman & Gerhardstein 1993). Regardless, the proposal here is that object-oriented reference frames organize knowledge about a type of entity. Although such reference frames may not always become active during familiar categorizations, they almost certainly exist in categorical knowledge, as suggested by people's robust ability to construct three-dimensional images and perform transformations on them (see Finke, 1989, for a review).

10. Section 2.5 provides preliminary accounts of the frame formation and simulation processes. Nevertheless, many crucial aspects of these processes remain undeveloped, including (1) the integration of memories into frames, (2) the retrieval of information from frames to construct simulations, (3) the integration and transformation of information in simulations, and (4) the development of abstractions.

11. Why cognitive systems divide the world into some categories but not others remains largely unresolved, as in other theories of knowledge. One exception is so-called basic level categories. Because it is easiest to superimpose perceptual representations of entities that have the same configurations of parts (Biederman 1987; Rosch et al. 1976; B. Tversky & Hemenway 1985), perceptual symbol systems predict that basic level categories should be particularly easy to learn and process (Fig. 3). Also, perceptual symbol systems naturally explain how ad hoc categories arise in cognition (sect. 3.4.4).

12. The criteria for a simulation providing a satisfactory fit to a perceived entity remain unresolved in this theory.

13. As in most other theories of knowledge, how the right inferences are drawn at the right time remains an unresolved issue. Situational cues, however, are likely to be central, with particular inferences activated in situations to which they are relevant. Standard associative mechanisms that implement contextual effects in connectionism may be important, as may the use of affordances described next.

14. Several caveats must be made about the theoretical depictions in Figure 3. For simplicity, this figure only depicts perceptual information in two dimensions, although its actual representation is assumed to be three-dimensional. The object-centered reference frame is also not represented, although some sort of scheme for representing spatial information is assumed. Finally, the depictions in Figure 3 should not be viewed as implying that information in a frame is represented pictorially or as conscious mental images. Again, the actual representations in a frame are configurations of neurons in perceptual systems that become active on representing the physical information conveyed in these drawings. It is essential to remember that these drawings are used only for ease of illustration, and that they stand for unconscious neural representations.

15. Framing and contextualized meaning are closely related to the philosophical debate on meaning holism, molecularism, and atomism (e.g., Cummins 1996; Fodor & LePore 1992; Quine 1953). The position taken here is closest to molecularism, which is the idea that concepts often exhibit local dependencies, rather than being globally interrelated (holism), or completely independent (atomism). Specifically, I assume that dependencies reside mostly within simulators and less so beyond. A concept represented recursively within a simulator often depends on other concepts in the same simulator but is relatively less affected by concepts in other simulators.

16. Detailed relations between linguistic and conceptual simulators remain undeveloped in this theory. However, cognitive linguists make many suggestions relevant to developing these relations.

17. These thicker boundaries, too, are theoretical notations that do not exist literally in cognitive representations. Instead, they stand for the cognitive operation of attention on perceptual representations of space. For a similar view of the role that attention plays in conceptualization and semantics, see Langacker (1986).

18. Obviously, much remains to be learned about when perceptual symbols can and cannot combine productively, and about when emergent features arise and why. Although these issues remain unresolved in all theories of knowledge, constraints from embodiment and affordances are likely to play important roles.

19. Although the focus here is on propositions that bind types to individuals in the world, it is important to note that other kinds of propositions exist as well. For example, types can construe simulators, simulations, operations on simulations, and so forth. The mechanisms described later for representing abstract concepts in perceptual symbol systems have much potential for handling these other cases.

20. As noted earlier, specifying how the fit between conceptual and perceptual representations is computed constitutes an undeveloped aspect of the theory.

21. Specifying how perceptual symbol systems use external factors and internal content to establish reference constitutes a relatively undeveloped aspect of the theory, as in other theories of symbolic function.

22. Contrary to standard Fregean analysis, senses here function as psychological representations rather than as ideal descriptions that exist independently of human observers.

23. As discussed earlier for intentionality (sect. 3.2.8), resemblance is neither necessary nor sufficient to establish the reference of a perceptual symbol, with external factors playing critical roles. Under many conditions, however, transformations of perceptual symbols do correspond to analogous transformations in their referents, as described here. The conditions under which analogical reference holds remain to be determined.

24. If simulators were sufficiently schematic, such differences might not occur, given that the differences between referents could be completely filtered out. To the extent that the symbol formation process fails to filter out all idiosyncratic details, however, simulators will differ. Widespread exemplar effects suggest that idiosyncratic details often do survive the schematization process (e.g., Brooks 1978; Heit & Barsalou 1996; Medin & Schaffer 1978; Nosofsky 1984).

25. The purpose of this analysis is simply to provide a sense of how perceptual symbol systems could represent abstract concepts. Obviously, much greater detail is needed for a full account. In particular, the external and internal events that frame the simulation must be specified adequately, as must the comparison process, and the assessment of fit. Similar detail is necessary for all the other abstract concepts to follow.

26. If all simulators in long-term memory compete unconsciously to categorize an individual, then at this level a very large number of false propositions are represented, assuming that most simulators do not become bound to the individual successfully. However, one might want to limit the false propositions in cognition only to those considered consciously. In other words, a false proposition becomes represented when a perceiver explicitly attempts to map a simulator into an individual and fails.

27. The alternating simulation in Figure 8 implicitly represents exclusive or, given that the horse and cow are never simulated simultaneously, in the middle region. If a third specialization containing both the horse and the cow alternated with the individual specializations, this three-way alternation would implicitly represent inclusive or.

28. Simulators and simulations that construe the individuals in Figure 9B as chair, table, iron, and so forth are omitted for simplicity.

29. Kosslyn (1987; 1994) suggests that the right hemisphere represents detailed metric information, whereas the left hemisphere represents qualitative categorical information. To the extent that this formulation is correct, it suggests the following conjecture: the right hemisphere represents individuals in perception and comprehension, and the left hemisphere represents the simulators that construe them. Thus, a type-token proposition involves a mapping from a simulator in the left hemisphere to an individual in the right hemisphere.



Open Peer Commentary

Commentary submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article. Integrative overviews and syntheses are especially encouraged.

Modality and abstract concepts

Fred Adamsa and Kenneth Campbellb
aDepartment of Philosophy and bDepartment of Psychology, University of Delaware, Newark, DE 19716. {fa; kencamp}@udel.edu
www.udel.edu/Philosophy/famain.html
www.udel.edu/psych/fingerle/Kencamp.htm

Abstract: Our concerns fall into three areas: (1) Barsalou fails to make clear what simulators are (vs. what they do); (2) activation of perceptual areas of the brain during thought does not distinguish between the activation's being constitutive of concepts or a mere causal consequence (Barsalou needs the former); and (3) Barsalou's attempt to explain how modal symbols handle abstraction fails.

Accounts of concepts, cognition, and parsimony pique our interests. Were the brain to use but one kind of symbol (perceptual) for all cognitive tasks, cognition would be economical. Barsalou claims that perceptual symbols (modal symbols) can do whatever nonperceptual symbols (amodal symbols) were posited to do (Barsalou, sect. 1.2.2). If true, cognition would be economically and optimally efficient. However, we are not yet convinced that modal symbols can do everything amodal symbols can.

In what follows, we take up why we believe that Barsalou's account is not yet adequate. We believe that his description of simulators is significantly incomplete. Furthermore, data recording activation of perceptual areas during thinking are consistent with these areas being activated because of thinking, not as part of thinking. Finally, we maintain that Barsalou's account of simulators is far from showing how abstraction is possible.

Simulators must perform the function of concepts. But what are simulators? Barsalou says what they do, not what they are. He borrows much of what simulators need to do from considering features of cognition that traditionally led to the notion of a concept. Concepts are for categorization, recognition (of token as of type), productivity and systematicity of thought (Aizawa 1997; Fodor & Pylyshyn 1988), and for abstraction and compositional features of thought, generally (Barsalou, sect. 1.5). (Call these cognitive roles R1 . . . Rn). But what is a simulator and how are simulators able to do R1 . . . Rn? This is not clear. Barsalou often helps himself to the very activities traditional concepts are thought to perform without telling us what simulators are or how they are able to do this. Now, to be sure, this target article is not supposed to contain all of the answers. But we would like to see some potential for a future reductive style explanation. It would be music to hear that simulators have properties P1 . . . Pn, and that having P1 . . . Pn explains cognitive roles R1 . . . Rn (the traditional roles for concepts). Instead, we see Barsalou helping himself to R1 . . . Rn without telling us what simulators are or how they produce these properties.

A second significant feature of Barsalou's account is his appeal to neural evidence showing that when people are thinking (not seeing), perceptual or motor regions of the brain become active due to massive cerebral interconnectivity (sects. 2.3, 2.4.7, 4.3). This suggests to Barsalou that perceptual or motor symbols in the brain are constitutive of the concepts employed in thinking. So Fred might think "Ken is in his office" and perceptual regions might be activated that would have previously been used to process the sight of Ken in his office. We do not dispute the evidence that thinking activates perceptual or motor regions. But Barsalou needs more (sect. 2.1). He needs to demonstrate that these neural connections are constitutive of the relevant concepts (of Ken, and of being in one's office) in order to show that concepts do not shed their perceptual roots or ties. Otherwise it is open to the claim that amodal concepts may, when activated, cause perceptual or motor symbols to be activated without those symbols being constitutive of the related concepts. Fred's concept of Ken is of Ken. Thinking of Ken may activate Fred's perceptual symbol for mustache, because Ken has one. But having a mustache is not part of what it is to be Ken (he could "lose the mustache"). When we think of pencils, motor areas of the brain may be activated for manipulating the hand. But although people can learn to write with their feet (due to loss of hand function), these motor activations cannot be constitutive of our concept of pencil. They may just be empirical associations (Adams & Aizawa 1994).

Finally, we turn to Barsalou's account of abstraction. To begin, consider Descartes' classical examples of chiliagons or myriagons. How are our concepts of these figures perceptual (modal)? They seem to be totally dependent on nonmodal definitional understandings (thousand-sided closed figure, million-sided closed figure) respectively. We would be interested in hearing Barsalou's account of the modality of these concepts (or of an infinity of parallel lines in a Lobachevskian space). Our personal simulators lack perceptual ties (symbols) for these concepts.

When Barsalou applies his machinery to abstract concepts, we find no more success. For truth (sect. 3.4.3), we are supposed to run a simulator of a balloon above a cloud. This will then be compared with a perception of an actual balloon above a cloud. A match or mapping will elicit from an agent "It's true that a balloon is above a cloud," with truth grounded in the mapping (Barsalou, sect. 2.4.6). Supposedly, the agent would acquire the concept of truth over disparate sequences of such mappings and with different modalities. We see two rather large problems with this account. First, Barsalou's evidence supports only that the agent has the concept of matching not of truth. If Al runs a simulator of what a fork is and then looks at a fork and sees a mapping, this would be as good an account of Al's concept of fork (or match) as of his concept of truth. So, we take it that Barsalou's simulator account of truth has important lacunae. Second, Barsalou's appeal to the agent's utterance is disturbing. The meaning of the English word "truth" is carrying some of the load in Barsalou's explanation. But he is not entitled to help himself to the meaning of the English word to explain the agent's acquisition of the concept of truth. Meanings of concepts come first. The word comes later (Dretske 1981). Also this threatens to blur derived versus underived meaning for concepts (Dretske 1988; Searle 1983). The meaning of "truth" in a natural language derives from the meaning of truth in the concepts of speakers of the language. Barsalou in several places exploits the use of English sentences to express the contents of perceptual concepts. The problem, of course, is that English (or any natural language) may smuggle meaning from amodal concepts into Barsalou's account.

What makes perceptual symbols perceptual?

Murat Aydede
The University of Chicago, Department of Philosophy, Chicago, IL 60637. [email protected] humanities.uchicago.edu/faculty/aydede/

Abstract: Three major attempts by Barsalou to specify what makes a perceptual symbol perceptual fail. One way to give such an account is to use symbols' causal/nomic relation to what they represent, roughly the way contemporary informational psychosemanticists develop their theories, but this fails to draw the distinction Barsalou seems to have in mind.

Barsalou seems to suggest the following answer(s) to the question of what makes perceptual symbols perceptual: perceptual symbols (PSs) are (1) modal, (2) analogical (i.e., in many cases they resemble their semantic value; e.g., referent), and/or (3) reside (are implemented) mainly in the sensory-motor areas of the brain.

The first answer is not helpful because the modal/amodal dichotomy is defined in terms of the perceptual/nonperceptual dichotomy. The most one can get from the text is that "[PSs] are represented in the same systems as the perceptual states that produced them" (sect. 1.1, para. 2). This does not help much either, because all we are told is that a PS is any symbol in whose causal production a perceptual state essentially figures, and which resides in the same system as the symbols underlying the perceptual state. For this to work, we must have a workable notion of perceptual/nonperceptual systems, and I am not sure we do, independent of a corresponding notion of perceptual/nonperceptual symbols. Hence it looks like defining modal/amodal symbols in terms of the systems in which they reside will not break the circularity. Furthermore, if causal production by a perceptual state were sufficient to make a symbol perceptual, almost all symbols posited by the recent tradition Barsalou opposes would be perceptual: in a sense, all symbols could be causally produced by perceptual states – if "causal production" is understood as "involvement in causal etiology."

The second answer is not really elaborated in the text but is confined to some short remarks, such as that the "structure of a perceptual symbol corresponds, at least somewhat, to the perceptual state that produced it" (sect. 1.1). Later in the article, Barsalou tells us a bit more but not much; not enough, anyway, to calm the obvious worries such terminology provokes. He says, for instance, that perceptual symbols resemble their referents (sect. 3.2.8), and that certain kinds of changes in the form of the symbols systematically result in changes in their semantic value, that is, in what they represent (sect. 3.3). He illustrates this sense in his Figure 6. But he never tells us how to understand such language. This is worrisome, because such discourse borders on the absurd when taken literally. Let me elaborate.

Mental representations (symbols) live dual lives. They are realized by physical properties in the brain. As such, they are physical particulars, that is, they can in principle be individuated in terms of brute-physical properties. Indeed, Barsalou repeatedly reminds us that the pictures in his figures are nothing but a convenient notation to refer to certain activation patterns of neurons in the sensory-motor areas of the brain. These are the perceptual symbols. We can, then, talk about symbols as physical entities/events/processes realized in the brain. As such they have a form. But symbols also represent: they have a semantic content. Their content or semantic value is what they represent. The duality in question, then, is one of form/content, or as it is sometimes called, syntax/semantics. PSs must live similar lives. Barsalou's second answer to my initial question seems to be an attempt to mark the modal/amodal distinction in terms of the form of the symbols, or the syntactic vehicles of semantic content.

Well, then, how is the attempt supposed to go? How are we to translate his language of "resemblance" so that his answer becomes plausible? What is it about the formal properties of vehicles (recall: brute-physical properties of neuronal activation patterns) such that they are not semantically "arbitrary"? We are not given any clue in the text. Let me suggest an answer on Barsalou's behalf.

Perhaps the formal property F of a certain type of symbol S realized in my brain is semantically nonarbitrary in that its tokening is under the causal/nomic control of instantiations of a certain property of a type of physical object that S refers to. But if the answer to our question is to be elaborated along anything like this line, we have abandoned the project of specifying the modal/amodal distinction in purely formal terms, and have instead causally reached out to the world, to the represented. It is not accidental that such a "reaching out" has been at the core of recent philosophical literature on how to naturalize psychosemantics, in attempts to give a naturalistic account of how certain brain states or neuronal events can represent things/events in the world and have semantic content. This is a matter of the relational/semantic lives of symbols, not their intrinsic/formal lives, from which I started on behalf of Barsalou.

For Dretske (1981) and Fodor (1987; 1990), two leading informational psychosemanticists, a concept, understood as a symbolic representation realized in the brain, is (partly) the concept it is because it stands in a certain systematic nomological/causal relation to its semantic value. If such relations were crucial to making symbols perceptual, then on this account, many informational theorists would automatically turn out to be defenders of PSs – not a desirable consequence.

I do not think that any general attempt to characterize what makes a mental symbol perceptual/modal (as opposed to nonperceptual/amodal) can be successfully given in terms of the formal properties of vehicles. This is plausible if we firmly keep in mind what we are talking about when we talk about mental symbols: activation patterns of neurons in sensory-motor areas of the brain with a lot of physiological properties.

This leaves us with Barsalou's third answer to my commentary's question. Perhaps this is indeed his main claim: (almost) all human cognition can be and is done by the kinds of symbols (whatever their forms are) residing in the sensory-motor areas of the brain. But if this is Barsalou's answer, it does not establish any distinction. For one thing, it is open to the defender of the necessity of amodal symbols in cognition to claim that amodal symbols, for all we know, may reside in such areas. For another, if this is all that the perceptual/nonperceptual distinction comes to, the target article loses much of its theoretical interest and bite.

Barsalou’s project is to establish a very strong and controversialclaim, namely, that all concepts can be exhaustively implementedby mental symbols which are in some important sense exclusivelyperceptual. Without an independent and noncircular account ofwhat this sense is, of what it is that makes symbols exclusively per-ceptual, I am not sure how to understand this claim, let alone eval-uate its truth. This is noted in the spirit of putting a challengerather than an insurmountable difficulty, because I know thatBarsalou has a lot to say on this (and on other issues I would havetouched on had I more space). Barsalou’s target article containsextremely rich and important material, not all of it clearly formu-lated and well defended. I very much hope that the target articlewill (at least) create the opportunity to remedy this weakness inthe future.

ACKNOWLEDGMENT
Many thanks to Jesse Prinz and Philip Robbins for their helpful comments.

Perceptual symbols: The power and limitations of a theory of dynamic imagery and structured frames

William F. Brewer
Department of Psychology, University of Illinois, Champaign, IL 61820. [email protected] www.psych.uiuc.edu/~brewer/

Abstract: The perceptual symbol approach to knowledge representation combines structured frames and dynamic imagery. The perceptual symbol approach provides a good account of the representation of scientific models, of some types of naive theories held by children and adults, and of certain reconstructive memory phenomena. The ontological status of perceptual symbols is unclear and this form of representation does not succeed in accounting for all forms of human knowledge.

Importance of perceptual symbols. Much current cognitive psychology is based on amodal propositional representations. Barsalou's target article is a powerful attempt to emphasize the importance of perceptually based forms of representation. In a recent paper Pani (1996) reviews the history of psychological approaches to imagery. He argues that American functionalists (e.g., Angell, Calkins, Hollingworth, James, Kuhlmann) rejected static "picture in the head" approaches to imagery and proposed that images were symbolic, abstract, active, and schematic. Pani states that during the behaviorist era this tradition was forgotten, so the initial work in cognitive psychology (e.g., Bugelski, early Paivio) tended to take a more static approach, but that current work in imagery is evolving into a view much like that of the early American functionalists. I think that Barsalou's paper can be seen as setting a new high water mark in this tradition!

There are some domains where I think Barsalou's perceptual symbol approach can have a major impact. A number of philosophers (Black 1962b; Harré 1970; Hesse 1967) have argued that it is models that give a scientific theory the ability to explain phenomena and to extend the theory to cover new phenomena. These philosophers of science often discussed scientific models in terms of images and were attacked by other philosophers who argued that images did not have the needed representational power (cf. Brewer, in press, for a review). I think Barsalou's perceptual symbols provide almost exactly the right form of psychological representation for scientific models. Another related area where this theory should play an important role is the study of naive theories in children and adults. For example, in our studies of children's knowledge of observational astronomy (Vosniadou & Brewer 1992; 1994), we found that children held a wide variety of theories about the day/night cycle (e.g., the sun and moon move up and down parallel to each other on opposite sides of the earth). Perceptual symbols seem to be just the right form of representation to account for our data.

Some unpublished data of mine also seems to bear on the perceptual symbol hypothesis. The Brewer and Treyens (1981) study of memory for places is generally considered a classic demonstration of schema operating in memory. However, some years ago, while thinking along lines parallel to some of those in Barsalou's target article, I did an analysis of the order in which information was recalled in the written protocols from a room memory experiment. Physical proximity of objects to each other in the observed room proved a better predictor of the order of written recall than did conceptual relatedness. This suggests that the recalls were being generated from a perceptual symbol form of representation of the room and not from an abstract, amodal, schema representation.
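
A hedged sketch in Python of the kind of analysis described here; the object list, orderings, and numbers below are invented placeholders, not Brewer's protocols. The idea is to rank-correlate the order of written recall with an ordering by physical proximity in the room and with an ordering by conceptual relatedness, and compare the two coefficients.

    # Illustrative only: invented toy data, not the actual room-memory protocols.
    def rank(values):
        """Assign 1-based ranks (assumes no ties, for simplicity)."""
        order = sorted(range(len(values)), key=lambda i: values[i])
        ranks = [0] * len(values)
        for r, i in enumerate(order, start=1):
            ranks[i] = r
        return ranks

    def spearman(x, y):
        """Spearman rank correlation, computed as the Pearson correlation of ranks."""
        rx, ry = rank(x), rank(y)
        n = len(rx)
        mx, my = sum(rx) / n, sum(ry) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
        sx = sum((a - mx) ** 2 for a in rx) ** 0.5
        sy = sum((b - my) ** 2 for b in ry) ** 0.5
        return cov / (sx * sy)

    # Position of each recalled object in the written protocol (1 = recalled first).
    recall_order   = [1, 2, 3, 4, 5, 6]
    # Hypothetical predictor orderings for the same six objects.
    proximity_rank = [1, 2, 4, 3, 5, 6]   # ordering by a spatial walk through the room
    related_rank   = [3, 1, 6, 2, 5, 4]   # ordering by conceptual relatedness

    print("recall ~ proximity:  ", round(spearman(recall_order, proximity_rank), 2))
    print("recall ~ relatedness:", round(spearman(recall_order, related_rank), 2))

On data patterned like the finding reported above, the proximity coefficient would come out higher than the relatedness coefficient.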

Ontological status of perceptual symbols. In his earlier papers Barsalou (1993; Barsalou et al. 1993) seems to consider perceptual symbols to be consciously experienced entities (for example, in Barsalou et al. 1993 he refers to his view as "the experiential approach"). In the target article he has made a dramatic shift and states that perceptual symbols are "neural states." These two positions have rather different implications. In current cognitive psychology the conscious symbol position is perhaps the more radical since that approach would be completely at odds with modern information processing psychology, which is based on unconscious mental processes. The experiential position makes the strong prediction that there will be conscious imagery (or other phenomenal properties) in all tasks involving perceptual symbols. The data go against the hypothesis; for example, Brewer and Pani (1996) found that many participants show little imagery in certain types of memory retrieval tasks (e.g., "What is the opposite of falsehood?").

The neural position adopted by Barsalou is consistent with the standard unconscious mental processes approach. (Given that we do not yet have scientific knowledge about the specific "configurations of neurons" that underlie perceptual symbols, I do not think the reductive claim, per se, has much practical import.) The neural approach, unlike the experiential one, can easily deal with the lack of images in tasks involving perceptual symbols. However, the perceptual symbol approach is potentially much more powerful than most current theories in cognitive psychology, which are based on representations in terms of unconstrained, arbitrary, amodal propositions. As Barsalou points out, perceptual symbols are not arbitrary and have constraints imposed on them by the assumption that they share structure with perceptual objects. Thus, the perceptual symbol approach can be confirmed by showing that the unconscious representations show the same constraints as conscious perceptual representations. There are some tasks where this logic can be applied, but it is hard to see how to do this with very abstract concepts since one no longer knows how to define the perceptual constraints.

Problems with representational imperialism. Barsalou argues that all cognitive representation is carried out with perceptual symbols. A list of things the theory has no convincing way to represent: (1) abstract constructs such as entropy, democracy, and abstract; (2) (non-model) scientific theories such as evolution and quantum mechanics; (3) gist recall in abstract sentences (Brewer 1975); (4) logical words such as "but," "therefore," and "because"; (5) language form; (6) the underlying argument structure of his own article.

Conclusions. The perceptual symbol approach combines the structured representations found in frames with a dynamic view of imagery to produce a very powerful new form of representation that gives a good account for some (but not all) domains of human knowledge.

Perceptual symbol systems and emotion

Louis C. Charland
Department of Philosophy and Faculty of Health Sciences, University of Western Ontario, London, Ontario, Canada N6A. [email protected] publish.uwo.ca/~charland

Abstract: In his target article, Barsalou cites current work on emotion theory but does not explore its relevance for this project. The connection is worth pursuing, since there is a plausible case to be made that emotions form a distinct symbolic information processing system of their own. On some views, that system is argued to be perceptual: a direct connection with Barsalou's perceptual symbol systems theory. Also relevant is the hypothesis that there may be different modular subsystems within emotion and the perennial tension between cognitive and perceptual theories of emotion.

In his target article, Barsalou says he is concerned with perception in a sense that goes beyond the traditional five sensory modalities. He claims to be interested in "any aspect of perceived experience, including proprioception and introspection" (sect. 2.3, para. 3). Introspective processing is said to involve representational states, cognitive operations, and emotional states (sect. 2.3.1). Aside from this remark, Barsalou says nothing else about emotion. He cites the work of four well-known emotion theorists, notably Damasio (1994), Johnson-Laird (1983), LeDoux (1996), and Mandler (1975). But he fails to say anything substantial about the relationship between what they say about emotion and his case for perceptual symbol systems. Yet there are worthwhile connections to pursue.

Perhaps the most obvious link between Barsalou's views and emotion theory lies in the fact that many emotion theorists believe that emotions have an important representational dimension. This is a common and plausible sense of what counts as "symbolic" in cognitive science. There are variations on that theme in emotion theory. For example, in his neurobiological theory of emotion, Antonio Damasio refers to "perceptual images" and "dispositional representations" (Damasio 1994, pp. 96–113). On his side, Joseph LeDoux says emotional feelings have a "representational" dimension (LeDoux 1996, pp. 296–300). A good philosophical example of this symbolic orientation in emotion theory is Robert Gordon. He proposes an account of the "logic" of emotion according to which propositional content is central to emotion: when you "emote," you emote that p, where p is a proposition (Gordon 1987). In fact there is a long history in analytic philosophical work on emotion that can be interpreted along those lines. Sometimes the symbolic element is belief (Lyons 1980). Sometimes it is normative judgment (Solomon 1976). The notion of judgment also plays an important part in a variety of psychological theories of emotion. The best example here is Richard Lazarus, who identifies the appraisal component of emotion with "evaluative judgement" (Lazarus 1991). Most appraisal theories of emotion in psychology involve some such symbolic element.

There are therefore a variety of emotion theories according to which emotions have an important symbolic dimension. In many cases, that thesis is combined with a second one, namely, that in their symbolic capacity, emotions constitute a distinct information processing system, governed by its own special affective principles and regularities, with its own special affective representational posits (Charland 1995). Emotion theories modelled along these lines come in two varieties. A good account of a "cognitive" version is Lazarus (1991). A good account of a "perceptual" version is Zajonc (1980). Damasio (1994) is an excellent example of a theory of emotion that manages to reconcile the main insights of both cognitive and perceptual approaches (Charland 1997).

Much like vision and language, then, emotion can be argued to form a distinct symbol processing system of its own. Depending on the theory in question, that system can be cognitive or perceptual, or a combination of both. This is one aspect of emotion theory that is relevant to Barsalou's project. Unfortunately, in his discussion of "representational processes" he fails to mention emotion (sect. 2.4.7).

Emotion is also relevant to Barsalou's project in another way. It is possible to argue that there are specialized perceptual symbol systems within the overall affective perceptual representational dimensions of emotion. That hypothesis can be interpreted to mean there are various "modular" symbolic subsystems in emotion (Charland 1996). It can also be interpreted to mean that individual emotions have modular features of their own (de Sousa 1987). LeDoux argues this in the case of fear (LeDoux 1996). A promising example of a perceptual emotion module is the facial expressive dimension of emotion (Ekman 1992). Another is Jaak Panksepp's hypothalamic four command-circuit hypothesis (Panksepp 1982; 1998). These last two hypotheses raise interesting questions about the nature and limits of cognitive penetrability, another interest of Barsalou's (sect. 2.4.6).

One final aspect of emotion theory Barsalou may want to consider has to do with the tension between cognitive and perceptual approaches in cognitive science. There is in fact a partial, but not exact, mirror image of that debate in emotion theory. Emotion theories are often classified in those terms and in a manner that pits the two approaches against one another. I have argued elsewhere that this alleged incompatibility between cognitive and perceptual theories of emotion is more pernicious than it is justified, and that a comprehensive theory of emotion can reconcile the main insights of both (Charland 1997). There may be lessons here that can inform the manner in which Barsalou frames the debate between cognitive and perceptual approaches. Of course, all of this requires a more detailed look at emotion theory than is possible here. Nevertheless, I hope I have shown that there are important connections to pursue in this regard.

Sort-of symbols?

Daniel C. Dennett and Christopher D. Viger
Center for Cognitive Studies, Tufts University, Medford, MA 02155. {ddennett; cviger01}@tufts.edu

Abstract: Barsalou’s elision of the personal and sub-personal levels tendsto conceal the fact that he is, at best, providing the “specs” but not yet amodel for his hypothesized perceptual symbols.

John Maynard Keynes was once asked if he thought in words or pictures. "I think in thoughts," the great man is reported to have replied. Fair enough, but now what? What kind of things are thoughts, and how do you make 'em out of brainstuff? Keynes's answer nicely alerts us to the dangers of oversimplification and false dichotomy, but is otherwise not much help. Similarly, Barsalou's alternative answer: "we think in perceptual symbols," is less informative than it might at first appear. There is something compelling about Barsalou's proposal that cognitive processes be described in terms of simulators (and simulations) involving modal as opposed to amodal formats or systems of representation. Restoring to serious attention the idea that you don't need a separate (amodal) symbol system to support cognitive functions is a worthwhile project. Moreover, Barsalou has interesting suggestions about features that such a perceptuo-motor system ought to have if the brain, one way or another, is to do the work that needs to be done ("the ability to represent types and tokens, to produce categorical inferences, to combine symbols productively, to represent propositions, and to represent abstract concepts"; sect. 1.2.1), but just stipulating that this is possibly what happens in the brain does not begin to address the hard questions.

What does Barsalou mean by "symbol"? He uses the familiar word "symbol" but then subtracts some of its familiar connotations. This is, in itself, a good and familiar strategy (cf. Kosslyn's (1980) carefully hedged use of "image," or for that matter, Fodor's (1975) carefully hedged use of "sentence"). Once Barsalou's subtraction is done, however, what remains? It's hard to say. If ever a theory cried out for a computational model, it is here. He says: "perceptual symbol systems attempt to characterize the mechanisms that underlie the human conceptual system" (our emphasis; last para., sect. 2.4.6), but here he simply conflates personal and subpersonal cognitive psychology in equating mechanisms with representations; "an important family of basic cognitive processes appears to utilize a single mechanism, namely, sensory-motor representations" (last para. of sect. 2.4.7). These representations, by being presupposed to have the very content of our intuitive mental types, must be implicated in a most impressively competent larger structure about which Barsalou is largely silent. As "specs" for a cognitive system there is much to heed here, but the fact that this is only specs is easily overlooked. Moreover, if Barsalou's perceptual symbols are not (personal level) thoughts but sub-personal items of machinery, then the content they might be said to have must be a sort of sub-personal content, on pain of reinstating vicious homuncular fallacies.

Are sort-of symbols an advance on sort-of sentences and sort-of pictures? How? By not being amodal, one gathers, but also by being only somewhat, or selectively, modal:

First, diagrams such as the balloon in Figure 4A should not be viewed as literally representing pictures or conscious images. Instead, these theoretical illustrations stand for configurations of neurons that become active in representing the physical information conveyed in these drawings. (para. 3 of sect. 3.1)

Unless we are missing something, this is an assertion of specs without offering a clue about realization. As such, Barsalou's proposals do not substantially improve on "pure" phenomenology – leaving all the hard work of implementation to somebody else. Consider:

A simulator becomes increasingly active if (1) its frame contains an existing simulation of the individual, or if (2) it can produce a novel simulation that provides a good fit. (para. 2 of sect. 3.2.1)

Fine; now let’s see a model that exhibits these interesting proper-ties. We want to stress, finally, that we think Barsalou offers somevery promising sketchy ideas about how the new embodied cog-nition approach might begin to address the “classical” problems ofpropositions and concepts. In particular, he found some novelways of exposing the tension between a neural structure’s carryingspecific information about the environment and its playing thesorts of functional roles that symbols play in a representational sys-tem. Resolving that tension in a working model, however, remainsa job for another day.

On the virtues of going all the way

Shimon Edelman (a) and Elise M. Breen (b)

(a) School of Cognitive and Computing Sciences, University of Sussex at Brighton, Falmer BN1 9QH, England; (b) Laboratory for Cognitive Brain Mapping, Brain Research Institute, Institute of Physical and Chemical Research (RIKEN), Wako, Saitama 351-01, Japan. [email protected] www.cogs.susx.ac.uk/users/shimone

Abstract: Representational systems need to use symbols as internal stand-ins for distal quantities and events. Barsalou's ideas go a long way towards making the symbol system theory of representation more appealing, by delegating one critical part of the representational burden – dealing with the constituents of compound structures – to image-like entities. The target article, however, leaves the other critical component of any symbol system theory – the compositional ability to bind the constituents together – underspecified. We point out that the binding problem can be alleviated if a perceptual symbol system is made to rely on image-like entities not only for grounding the constituent symbols, but also for composing these into structures.

Supposing the symbol system postulated by Barsalou is perceptual through and through – what then? The target article outlines an intriguing and exciting theory of cognition in which (1) well-specified, event- or object-linked percepts assume the role traditionally allotted to abstract and arbitrary symbols, and (2) perceptual simulation is substituted for processes traditionally believed to require symbol manipulation, such as deductive reasoning. We take a more extreme stance on the role of perception (in particular, vision) in shaping cognition, and propose, in addition to Barsalou's postulates, that (3) spatial frames, endowed with a perceptual structure not unlike that of the retinotopic space, pervade all sensory modalities and are used to support compositionality.

In the target article too, the concept of a frame is invoked as a main explanatory tool in the discussion of compositionality. The reader is even encouraged to think of a frame as a structure with slots where pointers to things and events can be inserted. This, however, turns out to be merely a convenient way to visualize an entity borrowed from artificial intelligence: a formal expression in several variables, each of which needs to be bound to things or events. An analogy between this use of frames and the second labor of Heracles suggests itself: opting for perceptual symbols without offering a perceptual solution to the binding problem is like chopping off the Hydra's heads without staunching the stumps.

The good news is that there is a perceptually grounded alternative to abstract frames: spatial (e.g., retinotopic) frames. The origins of this idea can be traced to a number of sources. In vision, it is reminiscent of O'Regan's (1992) call to consider the visual world (which necessarily possesses an apparently two-dimensional spatial structure) as a kind of external memory. In language, a model of sentence processing based on spatial data structures (two-dimensional activation maps) was proposed a few years ago (Miikkulainen 1993). In a review of the latter work, one of us pointed out that the recourse to a spatial substrate in the processing of temporal structure may lead to a welcome unification of theories of visual and linguistic representation (Edelman 1994).

From the computational standpoint, such unification could be based on two related principles. The first of these is grounding the symbols (Harnad 1990) in the external reality; this can be done by imparting to the symbols some structure that would both help to disambiguate their referents and help manipulate the symbols to simulate the manipulation of the referent objects. This principle is already incorporated into Barsalou's theory (cf. his Fig. 6). The second principle, which is seen to be a generalization of the first one, is grounding the structures built of symbols.

In the case of vision, structures (that is, scene descriptors) can be naturally grounded in their distal counterparts (scenes) simply by representing the scene internally in a spatial data structure (as envisaged by O'Regan). This can be done by "spreading" the perceptual symbols throughout the visual field, so that in the representation (as in the world it reflects) each thing is placed literally where it belongs. To keep down the hardware costs, the system may use channel coding (Snippe & Koenderink 1992; i.e., represent the event "object A at location L" by a superposition of a few events of the form "object A at location L_i").
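
A brief sketch in Python of the channel-coding idea mentioned here, written under our own simplifying assumptions rather than as the Snippe and Koenderink model: a location L is represented by a superposition of a few broadly tuned channels centred on nearby locations L_i, and can be read back out as the activation-weighted average of the channel centres.

    import math

    # Hypothetical channel centres L_i along one spatial dimension.
    centres = [0.0, 1.0, 2.0, 3.0, 4.0]
    WIDTH = 1.0   # tuning width of each channel

    def encode(location):
        """Superpose a few broadly tuned channel responses for 'object A at location L'."""
        return [math.exp(-((location - c) ** 2) / (2 * WIDTH ** 2)) for c in centres]

    def decode(activations):
        """Read the represented location back out as an activation-weighted mean."""
        total = sum(activations)
        return sum(a * c for a, c in zip(activations, centres)) / total

    code = encode(2.3)
    print([round(a, 2) for a in code])   # graded activity spread over neighbouring channels
    print(round(decode(code), 2))        # approximately recovers the encoded location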

In the case of language, structures do not seem to have anything like a natural grounding in any kind of spatial structure (not counting the parse trees that linguists of a certain persuasion like to draw on two-dimensional surfaces). We conjecture, however, that such a grounding is conceivable, and can be used both for representing and manipulating semantic spaces, and for holding syntactic structures in memory (which needs then to be merely a replica, or perhaps a shared part, of the visual scene memory). To support this conjecture, one may look for a "grammar" of spatial relations that would mirror all the requisite theoretical constructs invented by linguists for their purposes (the "syntactic" approach to vision, popular for a brief time in the 1980s, may have remained barren because it aimed to explain vision in terms of language, and not vice versa). Alternatively, it may be preferable to aim at demonstrating performance based on our idea (e.g., by implementing a version of Barsalou's system in which spatial buffers would play the role of the frames), rather than to argue futilely about theories of competence.

In neurobiology, perhaps the best piece of evidence for a perceptual symbol system of the sort proposed by Barsalou is provided by phantom limb phenomena (Ramachandran & Hirstein 1998). These can set in very rapidly (Borsook et al. 1998), are known to occur in the congenitally limb deficient or in early childhood amputees (Melzack et al. 1997), and may even be induced in normal subjects (Ramachandran & Hirstein 1998). In a beautiful experiment, Ramachandran and Rogers-Ramachandran (1996) superimposed a mirror image of the intact arms of amputees onto the space occupied by a phantom arm, and found that movements of the mirrored intact hand produced corresponding kinesthetic sensations in the phantom hand, even in a subject who had not experienced feelings of movement in his phantom hand for some ten years prior to testing. Likewise, touching of the intact mirrored hand produced corresponding, well-localized touch sensations in the phantom hand. These findings support the idea of the world – somatosensory, as well as visual – serving as an external memory (O'Regan 1992), and suggest a stronger relationship between visuospatial and tactile/proprioceptive representations of "body space" for normal subjects. It is interesting that these representations may be linked, in turn, to the mental lexicon: electromyogram (EMG) responses to words with semantic content relating to pain were found to be significantly different in the stumps of amputees with chronic phantom limb pain, compared to the EMG in the intact contralateral limb in the same subjects (Larbig et al. 1996).

Finally, the phantom phenomenon is not limited to the percept of "body space" but may also be demonstrated in other modalities, notably, in the auditory system (Muhlnickel et al. 1998). All this suggests that perceptual symbols, along with a spatial frame provided by the experience of the external world, may (1) solve the symbol grounding problem and (2) circumvent the binding problem – two apparently not immortal heads of the Hydra that besets symbolic theories of knowledge representation. [See also Edelman: "Representation is Representation of Similarities." BBS 21(4) 1998.]

Creativity, simulation, and conceptualization

Gilles Fauconnier
Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093. [email protected]

Abstract: Understanding the role of simulation in conceptualization has become a priority for cognitive science. Barsalou makes a valuable contribution in that direction. The present commentary points to theoretical issues that need to be refined and elaborated in order to account for key aspects of meaning construction, such as negation, counterfactuals, quantification or analogy. Backstage cognition, with its elaborate bindings, blendings, and mappings, is more complex than Barsalou's discussion might suggest. Language does not directly carry meaning, but rather serves, along with countless other situational elements, as a powerful instrument for prompting its construction.

When a child plays with sugar cubes and matchboxes, pretending that they are cars and buses, driving them all over the kitchen table, and emitting loud "brrr" noises, there is simulation going on in the everyday sense of the word. And there is also simulation going on in the neurobiological and conceptual sense evoked by Barsalou in his target article. If the child's activity makes sense and provides enjoyment, it is because the brain is activating and running some of the dynamic conceptual schemas linked to cars and buses. The projection of such a dynamic frame overlooks and overrides most of the "essential" features commonly associated with vehicles – size, appearance, roads, internal mechanics, and so on. And language is applied effortlessly to the situation: "Look, Mummy, the car hit the bus. The engine's broken." The activity is in no way exotic. Very young children master it, and use it to develop their conceptual and linguistic systems. The children are in no way deluded. They can maintain simultaneous representations of objects as sugar cubes and as cars. They can mix frames and say things like "Hey, don't put the car in your coffee!" to the adult picking up the cube.

The human capacity for extracting, projecting, and combining dynamic schemas is anything but trivial. It lies at the heart of conceptual creativity. Yet, scientific accounts of meaning seldom do it justice. Focusing only on necessary and sufficient conditions, or on prototypes, or on statistical inference from reality, will quickly rule out our sugar cube adventure. And by doing so, it will rule out most of the power of language and thought. Words like car go far beyond picking out categories of objects. They prompt us to construct mappings and run dynamic schemas that can be wildly different depending on circumstances. Reflect if you will on the contribution of the conceptual world of automobiles to the understanding of expressions like put a tiger in your tank, Maytag is the Cadillac of washing machines, If cars were men, you'd like your daughter to marry this one [a Volvo ad], If cars had wings, we wouldn't need bridges.

In stressing the crucial role of simulation in conceptualization, and in proposing the general scheme of perceptual symbols to approach the issue in a neurobiologically plausible way, Barsalou does the cognitive science community a great service, offering to liberate it from the chains of amodal computation. In attempting to cover an extremely wide range of issues, there are many complications that Barsalou does not address. Here are a few of them.

Note first that even the simple example above of children at play remains a problem for a literal application of Barsalou's perceptual symbols theory. The mapping of his Figure 3 will fail for the sugar cube and so will many other aspects of perceptual/conceptual mappings commonly applicable to driving. The key here is selective projection. Creative conceptual projection operates on the basis of partial mapping selecting relevant aspects of a complex conceptual domain (Fauconnier & Turner 1998; Tomasello 1998). This suggests that the requirement of a common simulator (sect. 2.4.5) is too strong. More generally, the presupposition in Barsalou's footnote 2, that for a word there is a corresponding concept, is dubious. Wittgensteinian musings and findings in cognitive semantics (Lakoff 1987) suggest a connection of words to multiple schemas/simulators with many kinds of connections between them, not a direct word to concept correspondence.

A second challenge is the theoretical understanding of emergent properties. Barsalou is quite right to point out that we can conceive of running chairs and croquet flamingos through simulation rather than by feature composition, and that this reflects affordances captured through the formation process. But what is this formation process? We do not get it for free from the perceptual symbol hypothesis. Efforts are made in Fauconnier and Turner 1998, Coulson 1997, and Langacker 1991 to characterize some high level operations and optimality principles that could yield such emergent structure. The problem applied to conceptual blends generally, including counterfactuals (Tetlock et al. 1996; Turner & Fauconnier 1998), fictive motion (Talmy 1996), and grammatical constructions (Mandelblit 1997), is a difficult one. Interesting theories of neural computation that achieve some of this are explored by the Neural Theory of Language Project (Narayan 1997; Shastri & Granner 1996).

A third issue is how to unleash the power of quantifiers, connectives, negation, and variables in a perceptual symbol system. I think Barsalou is on the right track when he thinks of elementary negation in terms of failed mappings. But the proposal needs to be fleshed out. One virtue of a scheme like Barsalou's is that it naturally makes some of the conception part of perception because perceptual symbols and dynamic frames are activated along with more basic perception. We don't see an object with features x, y, z in 3D position p and then compute that it's a balloon above a cloud. Rather we see it directly, conceptually and perceptually all at once, as a balloon above a cloud. Negation, conceived of as failed mapping, is different, because there is an infinite number of possible failed mappings for any situation, but we don't want the cognitive system to try to register all these possible failures. Instead, elementary negation is typically used to contrast one situation with another that is very close to it. "The balloon is under the cloud" is a more likely negation of "The balloon is above the cloud" than "A typewriter is on a desk," even though the latter would lead to massive mapping failure in that situation. So the notion of failed mapping needs to be elaborated and constrained. Furthermore, the simulations must be able to run internally in the absence of actual events, and so the failed mappings should also be between alternative simulations. Quantifiers, generics, and counterfactuals pose obvious similar problems for the extension of Barsalou's scheme. One rewarding way to deal with such problems is to study the more complex mappings of multiple mental spaces that organize backstage cognition (Fauconnier 1997; Fauconnier & Sweetser 1996) and the cognitive semantics of quantifiers (Langacker 1991).
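
One way to picture the constraint being asked for, sketched in Python under assumptions of our own (the relation strings and the similarity measure are invented for this illustration): rather than registering every conceivable mapping failure, the system keeps only the failing alternative that is closest to the asserted simulation.

    # Illustrative sketch only: situations are hypothetical sets of relation strings.
    def similarity(a, b):
        """Overlap between two situations (Jaccard index)."""
        return len(a & b) / len(a | b)

    def nearest_contrast(asserted, candidates):
        """Among situations that fail to map onto the asserted one,
        return the one most similar to it (the 'natural' negation)."""
        failing = [c for c in candidates if c != asserted]
        return max(failing, key=lambda c: similarity(asserted, c))

    asserted = {"balloon", "cloud", "above(balloon, cloud)"}
    candidates = [
        {"balloon", "cloud", "under(balloon, cloud)"},     # minimal contrast
        {"typewriter", "desk", "on(typewriter, desk)"},    # massive mapping failure
        asserted,
    ]

    print(nearest_contrast(asserted, candidates))
    # Picks the 'balloon under the cloud' situation, not the typewriter scene.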

Many other points should be addressed here, but cannot be for lack of space. Let me only mention that metaphorical and analogical mappings (Gibbs 1994; Hofstadter 1995; Indurkhya 1992; Lakoff & Johnson 1999) play a greater and more complex role than Barsalou suggests, and also that in counterfactual and other conceptual blends we routinely find simulations that have no possible counterparts in the real world (Fauconnier & Turner 1998; Turner & Fauconnier 1998).

An important general point for cognitive scientists is that language does not directly carry meaning. Rather, it serves as a powerful means of prompting dynamic on-line constructions of meaning that go far beyond anything explicitly provided by the lexical and grammatical forms (Fauconnier 1997). Fortunately for humans and unfortunately for scientists, the lightning speed operation of our phenomenal backstage cognition in a very rich mental, physical, and social world is largely unconscious and not accessible to direct observation. Still, that is what needs to be studied in order to really account for the famed productivity and creativity of language and thought (Tomasello 1998).

Spatial symbol systems and spatial cognition: A computer science perspective on perception-based symbol processing

Christian Freksa, Thomas Barkowsky, and Alexander Klippel
Department for Informatics and Cognitive Science Program, University of Hamburg, D-22527 Hamburg, Germany. {freksa; barkowsky; klippel}@informatik.uni-hamburg.de

Abstract: People often solve spatially presented cognitive problems more easily than their nonspatial counterparts. We explain this phenomenon by characterizing space as an inter-modality that provides common structure to different specific perceptual modalities. The usefulness of spatial structure for knowledge processing on different levels of granularity and for interaction between internal and external processes is described. Map representations are discussed as examples in which the usefulness of spatially organized symbols is particularly evident. External representations and processes can enhance internal representations and processes effectively when the same structures and principles can be implicitly assumed.

The role of spatial relations for cognition. Neural representations resulting from perception are often organized in sensoritopic representations, that is, according to spatial structure manifested in the perceived configuration. As entities are perceived in spatial relation to one another, a representation that preserves spatial relations is obtained without costly transformations. In linking the external and the internal worlds, perception processes make use of spatial organization principles.

A special feature of structure-preserving representations is that the same type of process can operate on both the represented and the representing structure (Palmer 1978). [See also Palmer: "Color, Consciousness, and the Isomorphism Constraint" BBS 22(6) 1999.] Thus, events in the external world can be reproduced internally.

According to Barsalou (sect. 2.4), simulations with perceptual symbols should be subject to the same restrictions as their corresponding perception processes, as they share the properties for arrangement and order of occurrence with "real" perceptions. From this perspective, space – similar to time – plays a twofold role for cognitive processes: spatial location may be represented as a perceptual symbol (as a result of perception) and space provides organizing structure for perceptual symbol systems.

As spatial structure is relevant to perception across modality boundaries and as it behaves like a modality with respect to structural constraints, space can be viewed as an inter-modality combining the advantages of generality and specificity of amodal and modal representations, respectively. This inter-modality property of space may be essential for multi-modal sensory integration and for cross-modal interaction.

Specificity versus generality of representations. "General purpose computers" have been praised for their capability in implementing arbitrary concepts and processes. This capability has been achieved by breaking up the structures of specific domains and reducing everything to very general primitive principles that apply to all domains. The generality of today's computer systems is achieved at the expense of an alienation of these systems from a real environment.

The computer is a “brain” that is only superficially connected tothe external world (cf. Barsalou sect. 4.4). In reasoning about theworld and in simulating processes in the world, computers carryout operations that do not structurally correspond to operations inthe world. To make computers function properly according to therules of a given domain, the specific domain structure must bemimicked and described in an abstract way in the general-purposestructure. As the domain-specific structure is not given intrinsi-cally but is simulated through expensive computational processes,certain operations are achieved at much higher computationalcost than in a more specialized structure.

Conversely, highly specific modal structures as manifested in sensors can do little but react to the specific stimuli they are exposed to. To be effective, they have to be strongly adapted to their environment. In particular, they are not able to perform cross-modal tasks, unless they have a specific interface to establish this connection.

Thus, spatial structure appears to provide a good combination of the advantages of abstract general and concrete specific levels of representation. In spatial structures, a certain degree of abstraction is possible. At the same time, important relations relevant to our perception, conceptualization, reasoning, and action are maintained.

Spatial structures provide principal advantages for processing information about spatial environments (Glasgow et al. 1995). This can be attributed to the fact that – unlike other information structures – spatial relationships are meaningful on many different granularity levels.

Distinctions that are made on high-resolution levels disappear on lower resolution levels. When spatial structure is preserved, different meaningful conceptualization levels can be achieved simply by "looking" more closely or less closely (Freksa 1997).

This property of spatial structure can also be utilized on more abstract levels: conceptual spaces in which small local changes can be ignored in favor of the similarities on a higher level form conceptual neighborhoods (Freksa 1991) with interesting computational properties (Freksa & Barkowsky 1996).
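
A small Python sketch of the granularity point, using a toy example of our own (the grids and the coarsening rule are invented for illustration): lowering the resolution of a spatial occupancy grid makes fine-grained distinctions between nearby configurations disappear, while the coarse spatial relations remain meaningful, which is one way to picture small local changes falling into the same conceptual neighborhood.

    # Two 4x4 occupancy grids that differ only in one cell's exact position.
    scene_a = [[0, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1]]
    scene_b = [[1, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 1]]

    def coarsen(grid, factor=2):
        """Lower the resolution by OR-ing together factor x factor blocks of cells."""
        size = len(grid) // factor
        return [[int(any(grid[r * factor + i][c * factor + j]
                         for i in range(factor) for j in range(factor)))
                 for c in range(size)]
                for r in range(size)]

    print(coarsen(scene_a) == coarsen(scene_b))   # True: the fine distinction disappears
    print(coarsen(scene_a))                       # [[1, 0], [0, 1]]: coarse relations remain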

Maps as external representations for cognitive processes. Maps and map-like representations convey knowledge by depicting spatial entities in a planar external medium. Relevant geographic entities are identified and depicted by more or less abstract symbols that form spatially neighboring pictorial entities on the map's surface (Barkowsky & Freksa 1997).

The usefulness of maps depends strongly on the spatial structure of the representational medium and on human spatio-perceptual recognition and interpretation abilities. The symbols in the map are perceived in relation to one another. The pictorial map space is organized in spatial analogy to the world it represents. Hence, the map becomes an interface between spatio-perceptual and symbolic concepts that are integrated in a single representational structure.

Maps as external knowledge representations preserve spatial relations adapted from the perception of the represented spatial environment. Hence they support the conception and use of spatial information in a natural way. The spatial structure of maps extends the cognitive facilities of the map user to the external representation medium (Scaife & Rogers 1996).

The cognitive information processing principles that are applicable to both internal and external spatial representations can be investigated by studying processes on external spatial representation media. This seems especially valid regarding spatially organized knowledge for which the types of relations relevant to both internal and external conceptions of environments can be studied (Berendt et al. 1998). Moreover, as maps combine both spatio-analogical and abstract symbolic information, the relevance of spatial organization principles for non-spatial information can be investigated.

Extending brain power: External memory and process model. The idea of using the same principles for internal and external knowledge organization is computationally appealing. Common structures support the use of internal processes for using externally represented knowledge. Consequently, external knowledge can serve effectively as an extension of internal memory structures.

Furthermore, internal and external processes that strongly correspond to one another can greatly enhance the communication between the internal and the external worlds. A main reason for this is that only a little information needs to be exchanged to refer to a known corresponding process, as the common structures can act implicitly. In anthropomorphic terms we could say that it is possible to empathize with familiar processes but not with unfamiliar ones.

ACKNOWLEDGMENT
We are grateful to Deutsche Forschungsgemeinschaft (Spatial Cognition Priority Program) for support and to Stephanie Kelter for valuable comments.

Grounded in perceptions yet transformed into amodal symbols

Liane Gabora
Center Leo Apostel, Brussels Free University, 1160 Brussels, Belgium. [email protected] www.vub.ac.be/CLEA/liane

Abstract: Amodality is not incompatible with being originally derived from sensory experience. The transformation of perceptual symbols into amodal abstractions could take place spontaneously through self-organizing processes such as autocatalysis. The organizational role played by "simulators" happens implicitly in a neural network, and quite possibly, in the brain as well.

The symbol grounding problem is important, and Barsalou's target article does an excellent job of directing our attention to it. However, to be grounded in perceptual experience does not mean to spend eternity underground. Just as a seed transforms into a sprout which emerges from the ground as a plant, an abstract concept can originate through, or be grounded in, perceptual experience, yet turn into something quite different from anything ever directly perceived. To insist that abstractions are just arrangements of perceptual symbols is like insisting that a plant is just seed + water + sunlight.

As Barsalou considers increasingly abstract concepts, he increases the complexity of the perceptual symbol arrangements. To me it seems more parsimonious to say that what was once a constellation of memories of similar experiences has organized itself into an entity whose structure and pattern reside primarily at a level that was not present in the constituents from which it was derived. This does not mean that abstractions cannot retain something of their "perceptual character."

Abstraction can transform sensory experiences into amodal symbols. Barsalou assumes that viewing symbols as amodal is incompatible with their being derived from sensory experience, because of the "failure to provide a satisfactory account of the transduction process that maps perceptual states into amodal symbols" (sect. 1.2.2, para. 4). He admits that perceptual symbols are analogical, however, and that they are integrated "combinatorially and recursively" (sect. 1.5); and he speaks of "filtering out the specific circumstances" (sect. 2.3.1, para. 1). Aren't these the kind of processes that could transduce perceptual symbols? Incidentally, it is strange that Barsalou cites Gentner, who works on structure mapping in analogy (e.g., Gentner 1983; Gentner & Markman 1997) to support the notion that "mental models are roughly equivalent to only the surface level" and "tend not to address underlying generative mechanisms that produce a family of related simulations" (sect. 2.4.2).

Barsalou pays surprisingly little attention to the highly relevant work of connectionists. The only rationale he provides is to say that because "the starting weights on the connections . . . are set to small random values . . . the relation between a conceptual representation and its perceptual input is arbitrary" (sect. 1.2). This is misleading. Just because there are many paths to an attractor does not mean the attractor is not attracting. Surely there are analogous small random differences in real brains. This quick dismissal of connectionism is unfortunate because it has addressed many of the issues Barsalou addresses, but its more rigorous approach leads to greater clarity.

Consider, for example, what happens when a neural network abstracts a prototype such as the concept "depth," which is used in senses ranging from "deep blue sea" to "deep-frozen vegetables" to "deeply moving book." The various context-specific interpretations of the word cancel one another out; thus the concept is, for all intents and purposes, amodal, though acquired through specific instances. There is no reason to believe that this does not happen in brains as well. The organizational role Barsalou ascribes to "simulators" emerges implicitly in the dynamics of the neural network. Moreover, since Barsalou claims that "a concept is equivalent to a simulator" (sect. 2.4.3), it is questionable whether the new jargon earns its keep. Why not just stick with the word "concept"?
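
A toy Python version of the abstraction process described here, with feature vectors invented for the illustration (it is not a model of any actual network): averaging the context-specific uses of "deep" lets the modality-specific components cancel, while the shared component survives as an effectively amodal prototype.

    # Hypothetical feature vectors: [shared 'extent' component, visual, tactile, affective]
    uses_of_deep = {
        "deep blue sea":          [1.0,  0.9,  0.0,  0.0],
        "deep-frozen vegetables": [1.0,  0.0,  0.8,  0.0],
        "deeply moving book":     [1.0,  0.0,  0.0,  0.9],
        "deep sigh":              [1.0, -0.9, -0.8, -0.9],  # contrived so the specifics cancel
    }

    def prototype(vectors):
        """Component-wise mean over all context-specific uses."""
        n = len(vectors)
        dims = len(next(iter(vectors)))
        return [round(sum(v[i] for v in vectors) / n, 2) for i in range(dims)]

    print(prototype(list(uses_of_deep.values())))
    # -> [1.0, 0.0, 0.0, 0.0]: only the shared, context-independent component remains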

How would amodal cognition get started? Harder to counter is Barsalou's critique that an amodal system necessitates "evolving a radically new system of representation" (sect. 4.2). Barsalou repeatedly hints at but does not explicitly claim that the origin of abstract thought presents a sort of chicken-and-egg problem. That is, it is difficult to see how an abstraction could come into existence before discrete perceptual memories have been woven into an interconnected worldview that can guide representational redescription down potentially fruitful paths. Yet it is just as difficult to see how the interconnected worldview could exist prior to the existence of abstractions; one would expect them to be the glue that holds the structure together. In Gabora 1998 and Gabora 1999, I outline a speculative model of how this might happen, drawing on Kauffman's (1993) theory of how an information-evolving system emerges through the self-organization of an autocatalytic network. Self-organizing processes are rampant in natural systems (Kauffman 1993) and could quite conceivably produce a phase transition that catapults the kind of change in representational strategy that Barsalou rightly claims is necessary.

Did the divide between humans and other animals begin with language? It also seems misleading to cite Donald (1991) [See also: multiple book review of Donald's Origins of the Modern Mind, BBS 16(4) 1993.] as supporting the statement "where human intelligence may diverge [from animals] is in the use of language" (sect. 4.2). Donald writes:

Speech provided humans with a rapid, efficient means of constructing and transmitting verbal symbols; but what good would such an ability have done if there was not even the most rudimentary form of representation already in place? There had to be some sort of semantic foundation for speech to have proven useful, and mimetic culture would have provided it.

Thus Donald argues that human intelligence diverged from animals before the appearance of language, through the advent of mimetic skill.

Despite these reservations, I found the target article interesting and insightful.

Embodied metaphor in perceptual symbols

Raymond W. Gibbs, Jr. and Eric A. Berg
Department of Psychology, University of California, Santa Cruz, Santa Cruz, CA 95064. [email protected]

Abstract: We agree with Barsalou's claim about the importance of perceptual symbols in a theory of abstract concepts. Yet we maintain that the richness of many abstract concepts arises from the metaphorical mapping of recurring patterns of perceptual, embodied experience to provide essential structure to these abstract ideas.

We strongly endorse Barsalou's argument that a perceptual theory of knowledge can implement a fully functional conceptual system. One of Barsalou's most important claims is that some abstract concepts are partly represented in terms of perceptual symbols, but metaphor is only used to elaborate on and extend these abstract concepts. For instance, our understanding of the concept "anger" is not entirely explained by viewing it in metaphorical terms as "liquid exploding from a container." Barsalou claims that this metaphor fails to capture the richness of our concept of "anger." Instead, direct perceptual symbols underlie the representation of the abstract concept for "anger," which metaphor may elaborate upon as a secondary process. Direct perceptual representations for abstract concepts are presumably needed to guide the mapping of concrete domains (e.g., "liquid exploding in a container") into abstract target domains (e.g., "anger"). Thus, abstract knowledge domains are shaped by perceptual representations, but not fundamentally by metaphor.

We take issue with this conclusion. Metaphorical processes may be essential in the formation and perceptual representation of many abstract concepts. For instance, Barsalou notes that people have direct experience of "anger" in three ways. "Anger" first involves the appraisal of the initiating event, such as that a person's goals have been thwarted. Second, "anger" gives rise to intense affective states. Finally, "anger" gives rise to significant behavioral responses, such as seeking revenge or working toward removal of the obstacle. These three components of our experience provide the direct perceptual knowledge underlying the formation of perceptual symbols for the concept of "anger."

Yet it is almost impossible to see these fundamental aspects of our anger experiences apart from the metaphors we tacitly employ to make sense of these experiences. Thus, the idea of "anger is heated fluid in a container" is one of several prominent metaphorical ideas that we have to structure our anger experiences. Central to our understanding of the conceptual metaphor "anger is heated fluid in a container" is the embodied experience of containment. We have strong kinesthetic experiences of bodily containment, ranging from situations in which our bodies are in and out of containers (e.g., bathtubs, beds, rooms, houses) to experiences of our bodies as containers in which substances enter and exit. A big part of bodily containment is the experience of our bodies being filled with liquids, including stomach fluids and blood, and sweat that is excreted through the skin. Under stress, we experience the feeling of our bodily fluid becoming heated.

Our immediate experience of anger is recognized and understood in terms of, among other things, our direct embodied experience of being containers with fluids that sometimes get heated when put under pressure from some external stimulus. Note the correspondences between different embodied experiences while feeling angry and the language we use to talk about different aspects of the concept for "anger." Thus, when the intensity of anger increases, the fluid in the bodily container rises (e.g., "His pent-up anger welled up inside of him"). We also know that intense heat produces steam and creates pressure on the container (e.g., "Bill is getting hot under the collar" and "Jim's just blowing off steam"). Intense anger produces pressure on the container (e.g., "He was bursting with anger"). Finally, when the pressure of the container becomes too high, the bodily container explodes (e.g., "He blew up at me"). Each of these aspects of our folk understanding of the concept of "anger" is a direct result of the metaphorical mapping of heated fluid in a container onto the abstract domain of "anger."
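
The mapping just described can be pictured as a simple correspondence table from the source domain (heated fluid in a container) to the target domain (anger). The sketch below is only an illustrative data structure with entries paraphrasing the expressions quoted above; it is not a model drawn from the commentary itself.

```python
# A toy representation of the "anger is heated fluid in a container" mapping.
# Keys are source-domain elements; values pair the target-domain counterpart
# with an example expression paraphrased from the commentary.
ANGER_AS_HEATED_FLUID = {
    "fluid level rises":         ("anger intensity increases",   "his pent-up anger welled up inside him"),
    "heat produces steam":       ("anger shows outwardly",       "he is just blowing off steam"),
    "pressure on the container": ("strain of containing anger",  "he was bursting with anger"),
    "container explodes":        ("loss of control",             "he blew up at me"),
}

def explain(source_element: str) -> str:
    """Spell out one source-to-target correspondence in the mapping."""
    target, example = ANGER_AS_HEATED_FLUID[source_element]
    return f"{source_element} -> {target} (e.g., '{example}')"

for element in ANGER_AS_HEATED_FLUID:
    print(explain(element))
```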

Several kinds of evidence from psycholinguistics support the idea that embodied metaphors underlie people's understanding of abstract concepts (Gibbs 1992; 1994). These studies showed that people were remarkably consistent in their answers to questions about their understanding of events corresponding to particular bodily experiences that were viewed as motivating specific source domains in conceptual metaphors (e.g., the experience of one's body as a container filled with fluid). For instance, people responded that the cause of a sealed bodily container exploding its contents out is the internal pressure caused by the increase in the heat of the fluid inside the bodily container, that this explosion is unintentional because containers and fluid have no intentional agency, and that the explosion occurs in a violent manner. These responses were provided without the experimenter mentioning the idea of "anger" to participants, and as such they simply reflect people's understanding of their perceptual, embodied experiences.

More interesting, though, is that people's intuitions about various source domains map onto their conceptualizations of different target domains in very predictable ways. For instance, several experiments showed that people find idioms about anger to be more appropriate and easier to understand when they are seen in discourse contexts that are consistent with the embodied experiences underlying their concept of "anger." Thus people find it easy to process the idiomatic phrase "blow your stack" when it is read in a context that accurately describes the cause of the person's anger as being due to internal pressure, where the expression of anger was unintentional and violent (all entailments that are consistent with the entailments of the source-to-target domain mappings of heated fluid in a container onto anger). But readers took significantly longer to read "blow your stack" when any of these entailments were contradicted in the preceding story context. This evidence suggests that people's understanding of various linguistic phrases is related to their embodied, metaphorical conceptualizations of the different abstract concepts to which idioms refer.

Another important part of the metaphorical nature of perceptual symbols arises from the correlations people note in their perceptual, embodied experience. For instance, consider the abstract concept of "theory," which is motivated by several conceptual metaphors, including "theories are buildings" (e.g., "Your theory needs more support" or "Your theory can't hold up to the weight of the existing data"). At first glance, the abstract concept of a theory does not seem to be perceptually related to the buildings in which we generate, discuss, and dismantle them. But several more primitive metaphors underlie our metaphorical understanding of "theories are buildings," such as the fact that "persisting is remaining erect" and "structure is physical structure" (Grady 1997). These primitive metaphors arise from our perceptual, embodied experience, where we note strong correlations between things, including animate objects, being erect and their persisting, and between entities we view as structures and physical structures. Our understanding of the abstract concept for "theory" – "theories are buildings" (now called "compound metaphors") – emerges, in part, from these recurring patterns of correlated perceptual experiences. However, our rich understanding of abstract concepts must include the metaphorical mapping of these perceptual correlations as source domains onto the target domain of the abstract concept. Under this view, abstract concepts are, as Barsalou argues, represented in terms of perceptual symbols, yet these symbols are, in many cases, inherently structured by metaphor.

Perceptual symbols in language comprehension

Arthur M. Glenberg
Department of Psychology, University of Wisconsin – Madison, Madison, WI 53706. [email protected]

Abstract: Barsalou proposes (sect. 4.1.6) that perceptual symbols play a role in language processing. Data from our laboratory document this role and suggest the sorts of constraints used by simulators during language comprehension.

A conundrum: Barsalou develops a strong case that cognition is not based on rule-like (e.g., syntactic) manipulation of abstract amodal symbols. But words in a sentence are quintessentially amodal symbols operated on by syntactic rules. So, how can we understand language? In section 2.6, Barsalou sketches an answer that is almost identical to one that we have developed (Glenberg & Robertson, in press; Glenberg et al., under review) and called the Indexical Hypothesis.

The Indexical Hypothesis specifies three steps in understanding language in context. First, words and phrases are indexed to objects in the environment or to perceptual symbols from memory. Second, affordances are derived from the objects or perceptual symbols. In most ordinary language situations, these affordances are action-based. That is, the affordances are actions that can be accomplished with the object by a person with the same type of body as the comprehender. The particular affordances derived depend on the goals that need to be accomplished, affective states, and grammatical form. As an example, a wooden chair affords actions such as sitting-on, standing-on (e.g., to change a light bulb), and, if both the chair and the person are of the right form, a chair can be lifted to afford defense against snarling lions. Thus, the goals of resting, standing, or defense select some affordances over others. The third step in language understanding is to combine, or mesh (Glenberg 1997), the affordances. Meshing is a process of combining concepts (perceptual simulators) that respects intrinsic constraints on combination. These constraints arise because not all actions can be combined, given the way our bodies work. Thus, whereas the affordances of chairs can be meshed with the actions underlying the goals of resting, light-bulb changing, or defense, an ordinary chair cannot be meshed with the goal of propelling oneself across town or serving as a vase. Consequently, we can understand a sentence such as "Art used the chair to defend himself against the snarling lion," but we find it difficult to understand "Art used the chair to propel himself across town." In general, if the affordances can be meshed to simulate coherent action, then the sentence is understood; if the affordances cannot be meshed, then the sentence is not understood.
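
To see the three steps in one place, the toy sketch below (not Glenberg and Robertson's implementation; the affordance table, goal list, and function name are invented) walks through indexing an object, deriving its affordances, and testing whether they mesh with a stated goal.

```python
# Toy illustration of the three steps of the Indexical Hypothesis described
# above: (1) a word is indexed to an object, (2) affordances are derived for
# that object, (3) the affordances are tested for mesh with the goal.
AFFORDANCES = {
    "chair":     {"sit-on", "stand-on", "lift-and-block"},
    "newspaper": {"read", "cover-face", "swat"},
    "matchbook": {"light-fire"},
}

GOAL_REQUIREMENTS = {
    "defend against lion":    {"lift-and-block"},
    "protect face from wind": {"cover-face"},
    "travel across town":     {"ride"},
}

def understandable(obj: str, goal: str) -> bool:
    """A sentence pairing obj with goal is 'understood' here if some derived
    affordance of the object satisfies what the goal requires (i.e., meshes)."""
    derived = AFFORDANCES.get(obj, set())          # step 2: derive affordances
    required = GOAL_REQUIREMENTS.get(goal, set())  # what the goal calls for
    return bool(derived & required)                # step 3: test for mesh

print(understandable("chair", "defend against lion"))        # True
print(understandable("chair", "travel across town"))         # False
print(understandable("newspaper", "protect face from wind")) # True
```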

To test these ideas, participants were asked to judge the sensibleness of afforded and non-afforded sentences in the context of a setting sentence (see Table 1). In the afforded condition, the affordances of named objects can be meshed to accomplish stated goals. Thus, sweaters and leaves can be combined to make a pillow. In the non-afforded condition, the affordances cannot be meshed to accomplish the goal. Hence, we predicted that experimental participants would have difficulty understanding the non-afforded sentences. Note that objects such as sweaters and newspapers are used in novel ways in both sentences. Thus, it is highly unlikely that understanders can base their judgment on specific experiences with the objects. In fact, we demonstrated using latent semantic analysis (LSA; Landauer & Dumais 1997) that, in tens of thousands of texts, a central concept of each sentence (e.g., "pillow") appears in contexts that are orthogonal to the contexts in which the distinguishing concepts (e.g., "water" and "leaves") appear.
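
The orthogonality claim boils down to comparing context vectors: if "pillow" and "water" occur in essentially unrelated contexts, their LSA vectors have a cosine near zero. The sketch below assumes word vectors are already available (the ones shown are invented four-dimensional stand-ins); it is not the analysis Glenberg and colleagues actually ran.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical LSA vectors; in a real analysis these come from the trained space.
pillow = np.array([0.9, 0.1, 0.0, 0.2])
water  = np.array([0.0, 0.2, 0.9, 0.1])

# A cosine near 0 means the words appear in (nearly) orthogonal contexts.
print(round(cosine(pillow, water), 2))
```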

Participants in the experiment rated the afforded sentences as sensible (average of 4.58 on a 7-point scale from 1: virtual nonsense to 7: completely sensible), whereas they rated the non-afforded sentences as pretty much nonsense (average rating of 1.25). One might argue that the ratings reflect syllogistic problem solving rather than reading and comprehension. To answer this argument, we wrote related sentences (see Table 1) in which (a) the distinguishing concepts (e.g., "ski mask") and the central concepts (e.g., "face") were related according to the LSA text analysis, and (b) the affordances would mesh. Thus, the related sentences described situations we thought would be familiar to our participants, whereas the afforded sentences described novel situations. If people were understanding the afforded sentences through syllogistic problem solving, then the afforded sentences should be read more slowly than the related sentences, which do not demand problem solving because of their familiarity. In fact, the times taken to read and understand the afforded and related sentences were statistically indistinguishable, whereas both were read faster than the non-afforded sentences. Because comprehension depended on the ability to derive affordances for objects (e.g., that newspapers can afford protection from the wind), we demonstrated that conceptual representations must include something like perceptual symbols that allow such novel derivations. Because not all concepts can be combined, we demonstrated embodied constraints on conceptual combination and sentence understanding.

Action-based constraints on mesh serve three purposes in a perceptual symbol system. First, they are required to ensure the proper operation of perceptual simulators. Otherwise, people might be led to believe that wet sweaters can be pillows and that matchbooks provide protection from the wind. Second, by focusing on embodied constraints on action, we can better understand Barsalou's claim (sect. 2.4.3) that "the primary goal of human learning is to establish simulators." Being able to simulate is unlikely to have been selected by evolution if simulation did not contribute to survival and reproductive success, both of which require action, not just thought. On the other hand, if simulations result in the derivation of new affordances, then the simulations relate directly to effective action. Third, action-based constraints suggest a revision of some implications for perceptual symbol systems. For example, if the goal of cognition is to support action in the world, and it is action that grounds symbols (e.g., Newton 1996), then the action-less perceptual symbols Barsalou describes in section 4.4 are insufficient to generate intelligence. That is, although the machine would have perceptual symbols, those symbols would not convey to the machine how it should interact with the world it is experiencing. Such a machine could not act intelligently.

Perception as purposeful inquiry: We elect where to direct each glance, and determine what is encoded within and between glances

Julian Hochberg
Department of Psychology, Columbia University, New York, NY. [email protected]

Abstract: In agreement with Barsalou's point that perceptions are not the records or the products of a recording system, and with a nod to an older system in which perception is an activity of testing what future glances bring, I argue that the behavior of perceptual inquiry necessarily makes choices in what is sampled; in what and how the sample is encoded, and what structure across samples is pursued and tested; and when to conclude the inquiry. Much of this is now being hectically rediscovered, but a comprehensive approach like the one Barsalou proposes should help preserve what progress is made.

Barsalou provides what looks like a viable big picture, a perceptual view of cognition, in which everything can take its place, with no crippling simplifications. I want to expand slightly on his assertion that perception is not a recording system.


Table 1 (Glenberg). Two example scenarios

Setting: Marissa forgot to bring her pillow on her camping trip.
Afforded: As a substitute for her pillow, she filled up an old sweater with leaves.
Non-afforded: As a substitute for her pillow, she filled up an old sweater with water.
Related: As a substitute for her pillow, she filled up an old sweater with clothes.

Setting: Mike was freezing while walking up State Street into a brisk wind. He knew that he had to get his face covered pretty soon or he would get frostbite. Unfortunately, he didn't have enough money to buy a scarf.
Afforded: Being clever, he walked into a store and bought a newspaper to cover his face.
Non-afforded: Being clever, he walked into a store and bought a matchbook to cover his face.
Related: Being clever, he walked into a store and bought a ski mask to cover his face.

Note: Central concepts are in italic and distinguishing concepts are in boldface. From Glenberg et al., Symbol grounding and meaning.

Indeed, perception is a purposive behavior, depending on the viewer's intentions and expectations.

This view is not new. To Helmholtz (1866/1962), the idea of an object was what one would see after changing viewpoint. To J. S. Mill (1865/1965), the idea that an object is real and not imagined reflects the permanent possibility of obtaining sensations from it. Hebb's (1949) phase sequence embodied some of the better aspects of these metatheories in something closer to an actual theory. In these and in other approaches (including Barsalou's most comprehensive system), a perception is not a transcribed recording or a given image: it refers to an active inquiry, its findings, and its consequences.

In none of these approaches is the perception composed of consciously accessible components, or sensations; in all of these, the purpose or intention of the perceptual inquiry determines what stimulation is attended, and the inquiry itself is a course of behavior over time. Such behavior itself needs guidance, with constraints on what will be acceptable timing and helpful peripheral salience, constraints that have in effect determined much of our visual culture and technology (i.e., of text and display layout, pictorial composition, movie cutting, etc.). (Human factors should of course pay heed to experimental psychology, but psychologists would do even better to study these aspects of the visual media, a point I have pursued elsewhere at length.)

We can note at least three levels of perceptual inquiry, and none is a record: each depends on the viewer's purpose and resources, although they do so in different ways.

1. Looking behavior is in large part about bringing some specific part of the retinal image, chosen in advance, close to the foveal region (approximately the size of a thumbnail at half-arm's length): Attentional shifts to specific parts of the field of view do in fact precede the eye movements which make them visible (Hoffman & Subramaniam 1995; Kowler et al. 1995). To the low resolution of peripheral vision, the junctions of a drawn cube, for example, afford notice of where occlusion information can be found; similarly, the gaps in text predict where words begin and end. Unless you choose to fixate these, you have no sensory information about which face of the cube is nearer, or what the word's initial letters in fact are, in cube and text respectively. With a directed perceptual task, or with familiar and normally redundant text, one glance may satisfy the perceptual inquiry. Such perceptual "inference" may furnish the wrong answer, as shown by proofreading errors and by the acceptance of locally inconsistent (Penrose/Escher) figures.

2. Can we then treat perception as a record, complete or otherwise, if not of the optic array then of what is obtained from the momentary glance with its inhomogeneous retinal resolution? Not so: even within a fixation, both the viewer's purpose (or instructions) and the memory labels or structures available for encoding affect what is encoded into working memory within that brief glance (normally about 0.25–0.5 sec). Whatever has not been so encoded is effectively lost, although Sperling's landmark partial report experiments (Sperling 1960) show that that material was briefly available to the viewer for alternative encoding. Some of what has been attributed to inattentional blindness most plausibly reflects this forgetting of what was not intentionally encoded (Hochberg 1970; 1988; Wolfe 1998); that point applies as well to multiglance inquiries, as noted next.

3. Does the perception of extended objects and events, which require more than one glance, amount to a record of the encodings that were made in the component glances? Again, not so (except in forced, slow list-checking). When perceptual inquiry spans multiple glances, normally at 2–4 glances/sec, the separate encodings must themselves be encoded within or tested against some superordinate schematic structure from which they can be retrieved, or they too are lost. Both intention and memory are needed to bridge successive glances at even relatively compact objects (Hochberg 1968), layouts (Irwin 1996), or serial aperture viewing (Hochberg 1968); in listening to superimposed speech (Treisman & Geffen 1967, as reinterpreted by Hochberg 1970); and in viewing superimposed filmed events (Neisser & Becklen 1975) and the elisions that are common in normal film cutting (Hochberg & Brooks 1996).

Not only are perceptions not recordings, but the levels of perceptual inquiry (where one fixates, what one encodes from that glance, whether one goes on to take more glances, and how one encodes the several glances) offer different routes to selection and organization that are inherent in the nature of perceptual inquiry (Hochberg 1998). With all of this cognitive involvement in the perceptual process, it has always seemed strange to stop at the firewall that has separated theories about perception from those about thinking and reasoning for most of this century. Barsalou's creative and encompassing proposal should help provide a comprehensive context within which to examine perception as a cognitive behavior, and to place and hold what such examination reveals.

Individuals are abstractions

James R. Hurford
Department of Linguistics, University of Edinburgh, Edinburgh, EH8 9LL, Scotland, United Kingdom. [email protected] www.ling.ed.ac.uk/~jim/

Abstract: Barsalou's move to a perceptual basis for cognition is welcome. His scheme contrasts with classical logical schemes in many ways, including its implications for the status of individuals. Barsalou deals mainly with perceived individuals, omitting discussion of cognized individuals. It is argued that the individuality of cognized individuals is an abstraction, which conforms in its manner of formation to other cognitive abstractions which Barsalou discusses, such as truth and disjunction.

In his impressive and welcome target article, Barsalou is at pains not to throw out some cherished symbolic babies with the amodal bathwater. Many of these babies, in their best-known incarnations, are the progeny of classical symbolic logic; they include the notions of proposition, truth, and abstraction. Barsalou sketches how the work that such notions do in classical theories can also be done in his theory, and this involves casting these theoretical terms in a new light. Barsalou does not focus specifically on the notion of an individual, but the idea figures persistently in his discussion, and it is clear that he does not want to throw it out. His theory implies that we should take a radically different view from that of classical modern logics as to what individuals (alias particulars) are.

The classical modern logical view (e.g., Carnap 1958) assumes an objectively given domain consisting of individual objects, typically exemplified as concrete (e.g., the sun, the moon, the person Charles). It usually goes without saying that each object in the domain is distinctive and not identical to any other object. Furthermore, the objects in the domain are taken as the most basic elements of meaning. This is evidenced in the postulation by formal semanticists such as Montague (1973) of e as a basic ontological type, the type of individual entities. How such objects are known or perceived by people is not the concern of logicians.

The logical form of a specific elementary proposition uses individual constants as arguments of predicates, and emphasizes the different functions in the logical system of predicates and individual constants. When Barsalou begins to discuss propositions (sect. 3.2), his examples are such as ABOVE(ceiling, floor) and BELOW(floor, ceiling). Clearly, these are not the propositions envisaged by logic, because, without being dazzled by the change from upper to lower case type and interpreting floor and ceiling in a natural way, these latter terms are predicates and not individual constants. I can point to something, and say of it (i.e., predicate of it) that it is a floor. When students in an introductory logic class confuse predicates with individual constants, we penalize them. But in the long run, I believe, such students and Barsalou are right, and classical modern logic has got it wrong.

Barsalou writes that "perceptual symbols need not represent specific individuals . . . we should be surprised if the cognitive system ever contains a complete representation of an individual. . . . we should again be surprised if the cognitive system ever remembers an individual with perfect accuracy" (sect. 2.2.3). This again shows the contrast with the logical approach, in which there can be no question of the completeness or accuracy of an individual constant term in denoting an individual; relative completeness is out of the question because an individual constant term is atomic, and accuracy is given by the logician's fiat.

At his most careful, Barsalou speaks of "perceived individuals" or "individuals in the perceived world," and this is where his version of proposition is based. "Binding a simulator with a perceived individual . . . constitutes a type-token mapping. . . . This type-token mapping implicitly constitutes a proposition" (sect. 3.2.1). Thus, in the very act of perceiving something as belonging to some pre-established mental category, I can entertain a proposition, because my attention at the moment of perception is fixed on this one particular thing. The argument of the predicate is given by my attention at the moment. The numerical singularity of the argument comes from the bottom up, because I am attending to just one thing; the predication comes from a match between some perceived property of the thing and top-down information about a pre-existing mental category. Thus, a Barsalovian proposition is formed.

The numerical singularity of a perceived object is a product of the observer's attention at the time of perception. An observer may focus on a pile of rice or on a single grain, or on a pair of boots attended to as a pair, or on two separate individual boots. If I attend to two objects, simultaneously but individually (say, a man and his shaving mirror), my perception of them as two individual objects at the time provides slots for two arguments, and I can find a pre-established 2-place mental relation (say, use) to classify the perceived event. (The limit on how many individuals one can attend to at once may be similar to the limits of subitization in young children – up to about four items.)

A classic formal semantic model (e.g., Cann 1993) might contain a set of individual entities, each satisfying the predicate ant; the denotation of the predicate ant is the set. Each individual ant is the denotation of some individual constant (e.g., a1, a2, a3, . . .) in the logical language. Thus, a1 and a2 are distinguished by the fiat according to which the logician constructs his model. The proposition ant(a1) is a different proposition from ant(a2); and no further distinction – for example by a predicate applying to one ant but not to another – is necessary in the logical system. The representation of the proposition in the logical language tells us which ant the proposition is about.
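
For contrast with the perceptually grounded propositions discussed below, the classical set-up can be written down in a few lines: a domain of stipulated individual constants and a predicate denoting a set of them, where the truth of ant(a1) versus ant(a2) is fixed purely by fiat. This is only a minimal sketch in the classical style, with invented constants, not a model taken from Cann (1993).

```python
# A toy extensional model in the classical style: individuals are atomic
# constants stipulated by the model builder, and a predicate denotes a set.
DOMAIN = {"a1", "a2", "a3", "s"}           # individual constants (three ants, the sun)
DENOTATION = {"ant": {"a1", "a2", "a3"},   # the predicate 'ant' denotes this set
              "shines": {"s"}}

def holds(predicate: str, individual: str) -> bool:
    """Truth of an elementary proposition such as ant(a1) is set membership."""
    return individual in DENOTATION[predicate]

print(holds("ant", "a1"))   # True: ant(a1)
print(holds("ant", "s"))    # False: ant(s)
# ant(a1) and ant(a2) are distinct propositions purely by stipulation:
# nothing perceptual distinguishes a1 from a2 within the model.
```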

By contrast, in terms of perception, I can attend to certain properties of an object and judge from these properties that it is an ant, but the perceived properties cannot tell me which ant it is. I know that the world contains more than one ant, because I have sometimes seen many together, but all I know is that I have seen some ant. Barsalou's account of the storing of a basic proposition in long-term memory describes it as involving a "type-token fusion" (sect. 4.1.5). This fusion is not further described, but it must in fact result in the loss of identity of the token, the "whichness" of the originally perceived object.

Barsalou is usually careful to prefix "perceived" onto "individual." We can see how an individual can be perceived, but can an individual be cognized, and if so, how? If individuals lose their "whichness" during the process of storing a type-token fusion in long-term memory, how can the perceptual symbols (the simulators) in my mind for an ant and for my mother differ in a way that echoes the classical difference between a set and an individual (or between a property and an individual concept)?

A cognized individual (as opposed to a perceived individual) can be constructed by the process involving the three mechanisms which Barsalou claims are central to the formation of abstract concepts, namely, framing, selectivity, and introspective symbols. I have many experiences of my mother and form a simulator allowing me to recognize her and anyone exactly like her. But I am never presented with evidence that there is anyone exactly like her, despite the thousands of opportunities for such a person to appear. Whenever I perceive my mother, there is never anyone else present with exactly her properties. The same can be said of the sun or the moon, which I also represent as cognized individuals.

This account accords well with Barsalou's account of abstraction: "First, an abstract concept is framed against the background of a simulated event sequence" (sect. 3.4.2). In the case of my mother, the event sequence is drawn from all my experiences of her. "Second, selective attention highlights the core content of an abstract concept against its event background" (sect. 3.4.2). The selected core content of the abstract notion of a cognized individual is that this particular simulator is sui generis, that one has never encountered two perceived individuals together fitting this simulator. "Third, perceptual symbols for introspective states are central to the representation of abstract concepts" (sect. 3.4.2). An introspective state involved in the abstract notion of an individual is comparison of percepts.

Consider, briefly, classic cases of the identity relation as expressed by Clark Kent is Superman or The Morning Star is the Evening Star. Each such sentence seems to express a proposition equating two different cognized individuals. Before Lois Lane realized that Clark Kent is Superman, she had two distinct cognized individual simulators. She always and only saw Clark Kent wearing glasses and a baggy suit in the newspaper office; and she always and only saw Superman flying through the air in his red and blue cape and catsuit. On the day when she saw Clark Kent become Superman, these two individual concepts merged into a single, more complex cognized individual. If, at some later date, she actually explained to someone, in English, "You see, Clark Kent IS Superman," she was not thereby reflecting two separate individuals in her own cognition, but collaboratively assuming that her hearer would not yet have merged the two concepts.

There is no space to explore the role that language plays in establishing abstract cognized individuals. The grammaticalization of deictic terms into definite determiners, such as English the, and the fossilization of definite descriptions into proper names, like Baker and The Rockies, have given us devices for expressing the abstract notion "individual." The existence of stable cognized individuals in our minds, as opposed to merely transient perceived individuals, in terms of which Barsalou mainly writes, must be central to the ability of humans to "construct simulations jointly in response to language about nonpresent situations, thereby overcoming the present moment" (sect. 4.2, para. 3).

Identity, individuals, and the reidentification of particulars are classic problems in metaphysics and the philosophy of language. Barsalou has articulated a view of cognitive symbols which, if developed, can shed valuable light on these classic problems.

Creativity of metaphor in perceptual symbol systems

Bipin Indurkhya
Department of Computer, Information, and Communication Sciences, Tokyo University of Agriculture and Technology, Tokyo 184-8588, Japan. [email protected]

Abstract: A metaphor can often create novel features in an object or a situation. This phenomenon has been particularly hard to account for using amodal symbol systems: although highlighting and downplaying can explain the shift of focus, it cannot explain how entirely new features can come about. We suggest here that the dynamism of perceptual symbol systems, particularly the notion of simulator, provides an elegant account of the creativity of metaphor. The elegance lies in the idea that the creation of new features by a metaphor proceeds in the same way as the creation of regular features in a perceptual symbol.

Barsalou has outlined an ambitious program to reaffirm the role of perception in cognition. The proposal is indeed quite sketchy, as the author himself explicitly acknowledges, and many details will have to be worked out before a skeptic can be convinced of its viability. However, I find it intriguing, and applaud the author's attempt to rescue perception from the back seat and put it back in the driver's seat where it belongs. Indeed, the perceptual grounding of all cognition and the close bi-directional coupling between perception and cognition have largely been ignored by cognitive scientists and philosophers alike for most of this century. Chalmers et al. (1992) and Goodman (1976; 1978) are among the few notable exceptions to this general trend. For instance, Goodman echoed an essentially Kantian theme when he wrote: "Although conception without perception is merely empty, perception without conception is blind (totally inoperative)" (Goodman 1978, p. 6, emphasis Goodman's). Though this slogan is certainly witty, a comprehensive framework uniting conception and perception had not been attempted, as far as I am aware, until this target article by Barsalou.

I would like to focus my comments on a specific phenomenon having to do with the creativity of metaphor, where a metaphor creates a novel feature of an object or a situation. Recognizing the role of perception has a particularly illuminating effect in explaining this phenomenon, and we show how it can be addressed in the framework of perceptual symbol systems outlined by Barsalou.

Philosophers have long noted that metaphors can create new perspectives or new features on a certain object, situation, or experience; and in this creation lies the true cognitive force of metaphor (Black 1962a; 1979; Hausman 1984; 1989). This kind of creativity in problem solving has been studied, among others, by Gordon (1961; 1965), Koestler (1964), and Schön (1963; 1979). Recent psychological research has also demonstrated this aspect of creativity in understanding metaphorical juxtaposition in poetry (Gineste et al. 1997; Nueckles & Janetzko 1997; Tourangeau & Rips 1991).

Cognitive psychology and cognitive science research have largely addressed metaphor within amodal symbol systems, where metaphors are seen to arise from mappings between the symbols of two domains. The best these approaches can do is to use the notion of salience, together with highlighting and downplaying, to argue that creativity of metaphor consists in highlighting low-salience features of an object or situation. However, highlighting and downplaying does not explain how completely new features can be created.

An example can perhaps illustrate this point. In Stephen Spender's well-known poem "Seascape," the poet compares the ocean to a harp. In reading the poem, one's attention is invariably drawn to the way the rays of the sun reflect on the ripples of a calm ocean. This is not so much a matter of highlighting certain aspects of the amodal representation of the ocean, but creating a new representation for it. If one insists on explaining this as highlighting, then the amodal representation of the ocean quickly grows to an enormous proportion, as it must include every possible feature that can ever be associated with the ocean by any metaphor or any other cognitive mechanism. (See also Indurkhya 1992.)

The notion of a simulator, however, allows for a rather elegant explanation of how new features can be created. As the simulators of both the harp and the ocean are activated, they try to build a simulation together. In this process, certain features of each cooperate (we might say that they resonate), while others cancel each other out. In this example, one might build an association between the images of the light reflecting on the strings of a harp as they lie waiting to be strummed and the sunlight playing on the ripples that seem to stand still in a calm ocean. The fact that the simulators contain the perceptual information about their referents is crucial here, as the resonance occurs between the perceptual components, which cannot be reduced to a mapping between the symbolic levels. (After the metaphor has been assimilated, one can try to explain the metaphor verbally and symbolically. However, in many such metaphors, the verbal explanations are usually long and tortuous, and often hopelessly inadequate.)

The resonance of features might be unique in the sense that this particular confluence of features might never have existed before, and could not have been activated by either of the concepts alone. Thus, a new feature emerges, which can then be explicitly incorporated in some perceptual symbol schema (see also Indurkhya 1998). What is elegant about this account is that the emergence of new features by a metaphor is akin to the way features are first created, when a perceptual symbol schema is formed directly by selective attention (cf. sect. 2.2).

This account is akin to Fauconnier and Turner's conceptual blending approach (Turner & Fauconnier 1995; see also Fauconnier's accompanying commentary in this issue), where they show how concepts from many spaces blend together to produce metaphorical meanings. Barsalou's simulators extend Fauconnier and Turner's theory in a significant way by not only incorporating the perceptual dimension, but also giving it a paramount role to play; so that the account of creativity of metaphor outlined above can better be dubbed "perceptual blending."

Indeed, imagery and perceptual episodic memory have been known to play a key role in understanding certain metaphors (Marschark et al. 1983; Paivio 1979). This claim has been strengthened by recent neurolinguistic research (Bottini 1994; Burgess & Chiarello 1996), and some researchers argue that metaphors are essentially grounded in perception (Dent-Read & Szokolszky 1993). What Barsalou has tried to show us here is that metaphors are not special in this way, because all concepts are essentially grounded in perception.

The uncanny power of words

Paul J. M. Jorion
Théorie et Praxis, Maison des Sciences de l'Homme, 75270 Cedex 06, Paris. paul [email protected] aris.ss.uci.edu/~jorion

Abstract: In their quality as acoustic or visual percepts, words are linked to the emotional values of the states of affairs they evoke. This allows them to engender meanings capable of operating nearly entirely detached from percepts. Such a laying flat of meanings permits deliberation to take place within the window of consciousness. In such a theatre of the imagination, linguistically triggered, resides the originality of the human psyche.

I'm going for a walk in a wilderness park in Southern California. At the entrance of the park I am handed a leaflet about possible sightings of mountain lions. There are some instructions about how to act when faced with a puma. It sounds pretty exciting. I am on my own. I start walking, wondering how likely it is that I will be faced with the animal. Soon enough my attention drifts away, I'm day-dreaming while I walk, more or less absorbed in the "inner monologue" of the stream of consciousness. At some point I hear "within my head" – as part of inner speech: "I'm seeing one . . . I'm seeing one." This makes me focus on the scene. Indeed (the "indeed" with its element of pause is essential in the process), I suddenly realize that about fifty yards away from me on the trail, there's a puma, heading away from me into the brush.

What does this little piece of introspection reveal? There has been a visual perception of the animal no doubt, but it accedes to consciousness in a very indirect manner: the actual awareness is triggered by a sentence of inner speech, that is, the visual percept generates a piece of linguistic behaviour that then focuses my visual attention. I must have seen the animal unconsciously, which I then only properly see at the conscious level once the percept has been translated into words within inner speech.

A number of supposed coincidences work undoubtedly in the same manner. I am at the station, and I think to myself, "This good Jonathan, what a pity I haven't seen him for so many years." And lo and behold! Would you believe this? Here's Jonathan! What an extraordinary coincidence! Well no: I must have perceived him unconsciously (the mystery of peripheral vision?) half a second or so before. Enough time to have the inner speech elaborate on how true a friend he is.


This is what makes us human beings: we don't perceive hunger, we hear instead the inner voice saying "I kind of feel hungry." With us, the Word is indeed truly at the beginning. Although our perception is just like that of any other superior mammal, we, the speaking mammal, are faced with the concept before the percept (acute pain might be an exception).

What words and the sentences in which they combine allow is deliberation: "Shall I pursue the walk toward the puma, or shall I retreat – just in case?" With any other mammal, there is probably very little deliberation, very little scrutinizing of alternative lines of conduct: the percepts elicit a combination of emotions that trigger the composition of a hormonal cocktail leading to a motor response, without any necessity for a display in the "window" of consciousness (Jorion 1999). What is amusing about Buridan's thought experiment about an ass paralyzed between two equidistant sacks of oats is precisely that donkeys do not act that way: you need reasoning with sentences to hesitate in this manner. Buridan's ass would go for the first sack that entered his visual field. The level where consciousness operates is essentially and crucially the level of the concept. And this is why – contrary to current popular views – there is probably no consciousness without the concept, the word; that is, no consciousness in animals.

Even so, with the example of the puma I have come a long way toward giving Barsalou's thesis some plausibility. In many other instances we are dealing with semantic issues pretty distant from pure perception. Identifying "perceptual symbols" with concepts, as Barsalou does, breaks down somewhere along the road: of course there is some connection between words and percepts. Of course, saying "The apple, here on the table," has some relationship with perception: it is an instance of putting a percept into words. But take another example, which Barsalou will be at pains to dismiss as it is a proposition borrowed from his target article: "an amodal symbol system transduces a subset of a perceptual state into a completely new representation language that is inherently nonperceptual" (sect. 1.2). What proportion of the meaning of Barsalou's sentence actually boils down to percepts or even to "perceptual symbols"? Very little indeed: is ten percent a generous figure? Unless . . .

Unless one is prepared to regard concepts as what they likewise are: percepts, auditory in speech, visual in writing (and hallucinatory in the inner monologue?). But then the issue addressed by Barsalou dissolves into a quagmire of the universality of the percept. But here lies the solution to the question he aptly raises: a common feature underlies both percepts and concepts: their intrinsic association with an affective value, part of the affect dynamics which motivates and moves us along.

It is such a network of affective values linked to the satisfaction of our basic needs, and deposited as layered memories of appropriate and inappropriate responses, that allows "imagination" to unfold: to stage simulations of attempts at solution. What makes "The Lion, the Witch, and the Wardrobe" an appealing title? That, in the context of a child's world, the three bring up similar emotional responses and are therefore conceptually linked (Jorion 1990, pp. 75–76).

It is part of the originality of human experience through the medium of language that the affective dynamics have the capacity to operate in a world of words only, dissociating themselves from any immediate physical resonance, that is, separated in an essential manner from the experience of sensual percepts (other than words). Barsalou's sentence quoted above is a convincing instance of such a process. "Can you open the window?" say I, and the window gets opened. The miracle of words. But also, "Tell me if proposition 209 is unconstitutional," and you may say Yes or No. Such is the power of language.

Reinventing a broken wheel

Barbara Landau
Department of Psychology, University of Delaware, Newark, DE. [email protected]

Abstract: Barsalou is right in arguing that perception has been unduly neglected in theories of concept formation. However, the theory he proposes is a weaker version of the classical empiricist hypothesis about the relationship between sensation, perception, and concepts. It is weaker because it provides no principled basis for choosing the elementary components of perception. Furthermore, the proposed mechanism of concept formation, growth, and development – simulation – is essentially equivalent to the notion of a concept, frame, or theory, and therefore inherits all the well-known problems inherent in these constructs. The theory of simulation does not provide a clearly better alternative to existing notions.

Barsalou is right in arguing that perception has indeed been unduly relegated to the back burner in theories of concept formation specifically, and cognition more generally. He is also right that the rich nature of perception has been ignored, and that it has often been mischaracterized as analogous to a record – a blurred version of what impinges on the sensory systems. Finally, he is right that a key problem in understanding the nature of concepts (especially lexical concepts) concerns how many of these are grounded in our perceptual experience.

However, the theory Barsalou proposes does not provide a strong and clear alternative to existing theories. Moreover, although he intends it to be a reinvention of the classical empiricist hypothesis, it is actually much weaker than the theories of Locke and Hume, and founders in its lack of specification of what simulation could be. It is weaker than the classical empiricist hypothesis because it provides no principled basis for choosing the elementary components of perception which will form the building blocks for concepts. The classical theory assumed that the atoms were those sensory impressions defined by the structure of the sense organs as they interact with the physics of the world. Operating on these were the processes of association and reflection. The pitfalls of constructing everyday concepts – such as Table, Ball, Above – from such elementary sensations are well known; but the advantage of such a system was its principled nature and its clear circumscription of what could be the elementary components of a compositional system (see Fodor 1981b).

Barsalou is right that much of recent history in cognition has been occupied with understanding the structure of those higher-level concepts that must play a role in interpreting perceptual features. Many have noted that the importance of any property or feature must be considered in the context of a particular concept in order to gain an appropriate weighting: this is why we all understand that feathers are somehow very important to the concept of a bird, but much less important (if important at all) to the concept of a hat. Such observations correctly suggest that such variable weighting demands a theory of the underlying concept – just what we set out to explain by listing features. The burden of explaining lexical concepts thus lies in understanding the concept itself – under what explanations the properties are related – and this is not likely to be forthcoming from a list of properties or features – even if these are expanded to include arguably nonprimitive features such as feathers.

Barsalou is also right that the emphasis on such higher-level concepts has often been made at the expense of understanding the relationship between what we now understand about perception and these higher-level concepts. Nowhere is this imbalance more clearly shown than in the literature on lexical concept acquisition. A preponderance of recent research has been designed to show that young children can go beyond the limits of perception, making inferences about category membership even in cases where perceptual information is degraded or absent. These demonstrations clearly show top-down effects in young children; but they do not support the notion that perceptual organization is less than crucial in the early formation of these representations. Indeed, recent work in the infancy literature provides compelling demonstrations of the extent to which perceptual organization during the first year of life can support the formation and use of categories (Quinn & Eimas 1996). Perceptual systems provide richly structured and dynamic information from the earliest developmental moments; and understanding the relationship between these systems and later modes of categorization is clearly a critical component of understanding human cognition.

However, the mechanism that Barsalou proposes is unlikely to move us forward in this understanding. Simulation is essentially equivalent to the notion of a concept, frame, or theory. As such, it has all of the same virtues (being able to explain the dynamic, structured, and interpretive nature of concepts), and inherits all of the same flaws. Consider Barsalou's claim to have made progress in understanding how we represent Truth (sect. 3.4.3): a simulated event sequence frames the concept. But what is the concept? Simulations devoid of content do no better at characterizing our knowledge than concepts devoid of content. Without understanding the underlying notions of even such modest concepts as Bird, Hat, or Above, we are in no better position to understand how it is that we manage to gain, generalize, or ground our knowledge.

Latent Semantic Analysis (LSA), a disembodied learning machine, acquires human word meaning vicariously from language alone

Thomas K. Landauer
Institute of Cognitive Science, University of Colorado at Boulder, Boulder, CO 80309. [email protected]/faculty/landauer.html

Abstract: The hypothesis that perceptual mechanisms could have more representational and logical power than usually assumed is interesting and provocative, especially with regard to brain evolution. However, the importance of embodiment and grounding is exaggerated, and the implication that there is no highly abstract representation at all, and that human-like knowledge cannot be learned or represented without human bodies, is very doubtful. A machine-learning model, Latent Semantic Analysis (LSA), that closely mimics human word and passage meaning relations is offered as a counterexample.

Barsalou's hypothesis implies a path for evolution of human cognition that is consistent with its time course. Animals lacking specifically human brain structures make inferences, so the ability should exist in processes we share, primarily sensorimotor systems. But humans surpassed other animals by growing a huge volume of less specific cortex and developing far superior communication and abstract thinking. Language composed of largely perception-independent symbols is the chief vehicle. These more abstract processes communicate in both directions with perception, but part of our representational capacity must have attained a higher level of abstraction, autonomy, and conceptual scope.

Thus, I think Barsalou overstretches what might be concluded. He apparently wants to persuade us that the components of cognition are all and only perceptual events or their simulations. Hence cognition is "embodied," always incorporates perceived idiosyncratic bodily states, is exclusively "grounded" in perception, represents nothing as abstract as discrete "amodal" symbols, and thus, he concludes, nonbiological machines are in principle incapable of human-like concepts.

If Barsalou were right, no machine lacking human sensory input could autonomously acquire human-like knowledge from experience. In particular, it could not learn to represent words and sentences, prime candidates for symbolic status, in such a way that their relations to each other would closely resemble any human's. However, there is an existence proof to the contrary. A machine-learning implementation of Latent Semantic Analysis (LSA) (Landauer & Dumais 1997; Landauer et al. 1998a; 1998b) induces human-like meaning relations from a "senseless" stream of undefined arbitrary symbols. In these simulations, LSA is given the same naturally occurring text as that read by a typical human. The input is solely strings of letters separated into words and paragraphs. Without human assistance, LSA maps the initially meaningless words into a continuous high-dimensional "semantic space."
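
At its core, this mapping amounts to a truncated singular value decomposition of a word-by-passage co-occurrence matrix, with words represented as points in the retained low-dimensional space. The sketch below is a minimal, toy-scale illustration of that idea, not the training regime or corpus used in the cited work: the four "paragraphs", the dimensionality, and the weighting are all invented stand-ins.

```python
import numpy as np

# Toy corpus: each string stands in for a paragraph of ordinary text.
paragraphs = [
    "doctor treated patient in hospital",
    "nurse and doctor work at hospital",
    "pianist played concerto at concert hall",
    "orchestra and pianist rehearsed concerto",
]

# Build a word-by-paragraph count matrix (real LSA also applies a log/entropy
# weighting before the decomposition; omitted here for brevity).
vocab = sorted({w for p in paragraphs for w in p.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(paragraphs)))
for j, p in enumerate(paragraphs):
    for w in p.split():
        counts[index[w], j] += 1

# Truncated SVD: keep k dimensions; scaled rows of U give the word vectors
# in the induced "semantic space".
k = 2
U, S, Vt = np.linalg.svd(counts, full_matrices=False)
word_vectors = U[:, :k] * S[:k]

def similarity(w1, w2):
    """Cosine similarity of two words in the reduced semantic space."""
    u, v = word_vectors[index[w1]], word_vectors[index[w2]]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(round(similarity("doctor", "nurse"), 2))     # words from related contexts
print(round(similarity("doctor", "concerto"), 2))  # words from unrelated contexts
```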

LSA passes multiple-choice tests at college level, scores in the adult human range on vocabulary tests, mimics categorical word sorting and semantic relationship judgments, accurately mirrors word-to-word and sentence-to-word priming, correctly reflects word synonymy, antonymy, and polysemy, induces the similarity of phrases such as "concerned with social justice" and "feminist," and has been used – with no other source of data – to rank knowledge content of essays indistinguishably from expert humans. If word meaning for humans depended strongly on information derivable only from direct, embodied perception, this would be impossible.

The essential theory behind LSA is that the brain can use the mutual constraints inherent in the way events of any kind – including perceived objects and motoric plans – occur in each other's context over many millions of occasions to induce a complete mapping of the similarity of each to each. Of course, LSA, as currently implemented for word-learning, has limitations. Trivially, it has not lexicalized idioms and aphorisms and misses meanings that changed after the training corpus was collected. A more serious lack is a model for dynamically generated temporary meanings such as metaphorical expressions; LSA represents only statistically modal long-term meanings. (Note that Glenberg's unpublished studies cited by Barsalou as "strong evidence" against LSA are based on these sorts of incompleteness.)

LSA representations are not devoid of information about perception, emotion, and intent. They derive from the way that living, feeling humans use words to describe experience. The way writers use "love" and "mauve" reflects much, if not everything, that experience has taught them. The point is that all this can be learned and carried by a program running in silicon. Were LSA also to have other sensory input than words, it would "ground" meanings in them too, but, given how well it does already, the difference would necessarily be small. The brittleness and unrealisticness of traditional AI "physical symbol systems" lie in their representational discreteness and their lack of meaning without suspect human assignment of features, Barsalou's "transduction problem." LSA has no such problem. Language provides discrete symbols in words; LSA learns continuous-valued meaning representations for them from linguistic experience.

LSA does not do everything that humans do with concepts, but enough to refute Barsalou's (and Searle's, etc.) strong assertions. It does not have perceptions and feelings, but it can do significant cognition as if it did. Boyle's law is not a gas, but it simulates some properties of gases well. LSA is not human, but it can simulate important aspects of human cognition. And, unlike simulating a gas, the practical consequence of simulating cognition is not just like cognition, it is cognition!

Barsalou's claim that a machine cannot have human concepts because each person has experience based on a unique body is a red herring. We need simulate only one typical human, similar, but not identical, to others. A critical aspect of human concepts is their mutual comprehensibility; the sharable properties are what we want to understand first and foremost. Likewise, the fuss over symbol grounding is overblown. The most primitive sensitivity to touch is useless without inference. There is nothing more physiologically or logically real about perception than about abstract cognition. Both are neural processes designed to promote adaptive function, only some of which needs to depend on perceptual input or feedback. It makes as much sense to speak of perception as grounded in inference as the other way around. Like LSA, humans can learn much that is needed for their inference and deduction engines from the shared use of symbols independent of perception, and sometimes do interesting abstract thinking, such as math, for which a strong tie to perception is either irrelevant or a nuisance.

A view from cognitive linguistics

Ronald W. Langacker
Department of Linguistics, University of California, San Diego, La Jolla, CA 92093-0108. [email protected]

Abstract: Barsalou's contribution converges with basic ideas and empirical findings of cognitive linguistics. They posit the same general architecture. The perceptual grounding of conceptual structure is a central tenet of cognitive linguistics. Our capacity to construe the same situation in alternate ways is fundamental to cognitive semantics, and numerous parallels are discernible between conceptual construal and visual perception. Grammar is meaningful, consisting of schematized patterns for the pairing of semantic and phonological structures. The meanings of grammatical elements reside primarily in the construal they impose on conceptual content. This view of linguistic structure appears to be compatible with Barsalou's proposals.

Barsalou's stimulating contribution converges in myriad respects with basic ideas and empirical findings of cognitive linguistics. Perceptual symbol systems are a promising vehicle for the implementation of cognitive linguistic descriptions, which in turn reveal a rich supply of phenomena that both support the general conception and offer significant challenges for its future development.

Perceptual symbol systems and cognitive linguistic approaches posit the same basic architecture, involving abstraction from experience, the flexible activation of abstracted symbols in top-down processing, and their recursive combination producing an open-ended array of complex conceptualizations. A central claim of "cognitive grammar" (Langacker 1987; 1990; 1991) is that all linguistic units arise from actual events of language use through the general processes of schematization and categorization. Grounded in bodily experience, elaborate conceptual systems are constructed through imaginative devices such as metaphorical projection (Lakoff & Johnson 1980; Lakoff & Turner 1989; Turner 1987), the creation of mental spaces (Fauconnier 1985; 1997), conceptual blending (Fauconnier & Sweetser 1996; Fauconnier & Turner 1998), and productive simulations. The same devices, as well as the conceptual structures already assembled, are dynamically employed in the construction of linguistic meanings. Rather than being mechanically derived from the meanings of its parts, an expression's meaning is actively constructed, largely in top-down fashion, from the fragmentary clues provided by its constitutive elements. Its coherence derives from their reconciliation in a complex simulation that draws on all available resources (conceptual, linguistic, and contextual).

The perceptual grounding of conceptual structure is a basic tenet of cognitive linguistics. "Image schemas" abstracted from sensory and kinesthetic experience have been claimed by Johnson (1987) and by Lakoff (1987) to be the building-blocks of conceptual structure, the abstract commonality preserved or imposed in metaphorical mappings (Lakoff 1990), and the structures most critically employed in reasoning. I myself have speculated that all conceptualization is ultimately derivable from certain irreducible realms of experience – including time, space, as well as sensory, emotive, and kinesthetic domains – however many levels of organization may intervene. I have further suggested that sensory and motor images are essential components of linguistic meanings (Langacker 1987). Talmy (1996) has coined the word "ception" to indicate the extensive parallels he has noted between conception and perception. The same parallels have led me (Langacker 1995) to use the term "viewing" for both (without however taking any position on the extent to which cognition might be visually grounded).

Fundamental to a conceptualist semantics is the recognition of our multifaceted ability to conceive and portray the same situation in alternate ways, resulting in subtly different linguistic meanings. This capacity for "construal" has numerous components that are posited through linguistic analysis and strongly supported by their descriptive utility. With striking consistency, these components of construal bear evident similarities to basic aspects of visual perception (Langacker 1993). For instance, linguistic expressions can characterize a situation at any desired level of precision and detail ("granularity"), as seen in lexical hierarchies such as thing > creature > reptile > snake > rattlesnake > sidewinder. This progression from highly schematic to increasingly more specific expressions seems quite analogous to the visual experience of seeing something with progressively greater acuity while walking up to it from a distance.

In visual perception, we can distinguish the maximal field of view, the general region of viewing attention (e.g., the stage, in watching a play), and the specific focus of attention within that region (e.g., a particular actor). An analogous set of constructs is required for the semantic description of linguistic expressions. The term knee, for example, evokes the conception of the body for its "maximal scope," selects the leg as its "immediate scope" (general locus of attention), within which it "profiles" the major joint (the conceptual referent, or focus of attention). In expressions that profile relationships, one participant – called the "trajector" – is accorded a kind of prominence analogous to that of the "figure" within a visual scene. For instance, above and below profile referentially identical relationships along the vertical axis; their semantic contrast resides in whether trajector status is conferred on the higher or the lower participant. Thus X is above Y is used to specify the location of X, while Y is below X specifies the location of Y. I take this contrast as being akin to the perceptual phenomenon of figure/ground reversal.

This is merely a sample of the parallels one can draw between visual perception and the constructs needed for conceptual semantic description. These constructs are not required just for expressions pertaining to visual, spatial, or even physical situations – they are fully general, figuring in the characterization of expressions relating to any realm of thought and experience, both concrete and abstract. Moreover, they also prove crucial for grammar. An expression's grammatical class, for example, is determined by the nature of its profile in particular (not its overall conceptual content), and trajector status is the basis for subjecthood (Langacker 1998). Once the importance of construal is fully recognized, the conceptual import of grammatical structure is discernible. In fact, the pivotal claim of cognitive grammar is that all valid grammatical constructs are meaningful, their semantic contributions residing primarily in the construal they impose on conceptual content. Lexicon and grammar are seen as forming a continuum fully describable as assemblies of "symbolic structures" (i.e., pairings between phonological and semantic structures). I believe this conception of linguistic structure to be broadly compatible with Barsalou's notion of perceptual symbol systems.

Can handicapped subjects use perceptual symbol systems?

F. Lowenthal
Cognitive Sciences, University of Mons, B-7000 Mons, [email protected]
www.umh.ac.be/~scoglab/index.html

Abstract: It is very tempting to try to reconcile perception and cognition, and perceptual symbol systems may be a good way to achieve this; but is there actually a perception-cognition continuum? We offer several arguments for and against the existence of such a continuum and in favor of the choice of perceptual symbol systems. One of these arguments is purely theoretical, some are based on PET-scan observations, and others are based on research with handicapped subjects who have communication problems associated with cerebral lesions. These arguments suggest that modal perceptual symbols do indeed exist and that perception and cognition might have a common neuronal basis; but perceptual and cognitive activities require the activation of different neuronal structures.

1. Do the main differences between perceptual and nonperceptual symbol systems favor the perceptual ones? There seem to be two major differences between modal perceptual symbol systems and amodal nonperceptual symbol systems:

a. The modal "perceptual" symbols are global symbols, whereas the amodal "nonperceptual" symbol systems are combinations of small preexisting elements, such as words;

b. In the case of modal symbols, the coding procedure used to transform a perception into a symbol and store it in long term memory could be direct, from one neuronal zone to another; but in the case of amodal symbol systems, this coding procedure clearly requires the mediation of a preexistent, fully developed, symbol system such as a language.

This mediation of a preexistent, fully developed symbol system requires a framework, such as that described by Chomsky (1975), with its inborn LAD (language acquisition device), but it is totally unacceptable in a constructivist framework, such as those described by Piaget (Droz & Rahmy 1974), Bruner (1966), and Vygotsky (1962). The second difference between modal and amodal symbol systems thus favors perceptual modal symbols.

2. Do perceptually based activities favor conceptualization? We have observed and treated subjects with major brain lesions who have lost all form of language (Lowenthal 1992; Lowenthal & Saerens 1982; 1986) using a nonverbal technique (Lowenthal 1978) involving unfamiliar sets of objects that must be manipulated under technical constraints that make certain actions possible and others impossible.

Lego bricks provide a simple example: When a subject is asked to make a path with Lego bricks on the usual baseboard, he discovers that the bricks used must be parallel to one side of the board. In the Lego world, there are only flat and right angles, no diagonals. Papert (1980) described how he discovered the first elements of algebra without instruction by playing with the gear wheels his grandfather had given him. Dienes' attribute blocks, pegboards, and other devices such as geoboards can be used in the same way for more sophisticated exercises (Lowenthal 1986; 1992). All share a common property: they have constraints that make certain actions possible and others impossible. This provides the subject with a "logical structure" adapted to the device used. The technical constraints define the "axioms" of this logic. It has been suggested (Lowenthal 1986) that manipulating such devices helps the subject, first, to discover the relevant elements and, second, to combine small relevant elements to form bigger relevant blocks. This process continues until the subject reaches a satisfactory solution. It is hypothesized that the technical constraints on the devices enable subjects to formulate hypotheses in a nonverbal way, and to test and adapt them.

When our subjects were confronted with new problem situations, they began to respond by trial and error but quickly made significant progress. The concept formation was clearly based on manipulating and seeing. Hence the concepts were probably built at Bruner's (1966) first two representation levels.

Mauro (1990) has shown that subjects with clearly localized brain lesions can reconstruct a communication function even if the brain lesions are large and bilaterally symmetrical; but this is not the case in subjects with diffuse brain lesions. These differences can be interpreted as follows: the nonverbal processing mentioned above enables subjects with large but localized brain lesions to use nondamaged neurons to reconstruct, at least partially, the equivalent of the damaged neuronal structures. This cannot be the case for patients with diffuse lesions because too many neuronal structures have been qualitatively damaged.

These observations imply an important role for the sensorimotor cortex as the substrate for the elaboration of concepts, provided there are still large "untouched" zones available. They may also imply that, in the reorganization process, neurons could be involved in conceptualization even though they do not belong to the sensorimotor cortex. In the cases observed, we have enough data to conclude that the reconstructed communicative function is based entirely on the reconstructed symbols, and that these were created by the patient while he was using a technique based on manipulation and vision, hence perception. This does not allow us to conclude that the reorganized cognitive activity functions as an extension of perceptual activities.

3. Are there data showing that perceptual and cognitive activities have a common neuronal basis and share common neural structures? PET-scan data show that a majority of cortical neurons are involved in vision, which is a perceptual task; but they also suggest that the same neuron could be involved in different structures corresponding to different tasks. Mellet et al. have shown that the areas active in mental imagery tasks (images imagined but not actually perceived) are the same as the ones that are active during actual visual perception, except for the primary visual area V1 (Tzourio & Mellet 1998). They also showed that when concrete words are evoked (banana) some visual areas are active, but not V1, while no visual area is active in the case of abstract words (justice). Burbaud et al. observed that many prefrontal neurons are activated when subjects are engaged in computing tasks, such as addition. The active prefrontal neurons are not the same when the subject is engaged in a task such as "add 624 and 257 mentally" or in a task such as "imagine you are writing on a blackboard and add 624 and 257" (Burbaud 1998; Burbaud et al. 1995). According to these authors, neural activity differs depending on the subject's preferred strategy and on his lateralization.

All these observations point to the same conclusion: the neurons of the sensorimotor cortex can play a role in cognitive activities, but other neurons might also be involved. Cognitive mechanisms are not the same as perceptual ones: both mechanisms share some neural structures, but certain structures seem to be associated only with one. It might be the case that the activation of sensorimotor neurons is only associated with the evocation of symbols and concepts, but not with the cognitive treatment of these symbols.

Conclusions. There accordingly seem to be theoretical arguments in favor of perceptual symbol systems. Perceptually based activities also seem to favor the emergence of conceptualization in subjects with localized brain lesions, and this conceptualization seems to be the basis of a renewed cognitive activity; this does not seem to be the case in subjects with diffuse lesions. Nevertheless, PET-scan data seem to show that although perceptual and cognitive activities share a common neural substrate, the structures involved can be very different. These findings argue against the existence of a perception-cognition continuum.

Whither structured representation?

Arthur B. Markman (a) and Eric Dietrich (b)

(a) Department of Psychology, University of Texas, Austin, TX 78712; (b) Program in Philosophy, Computers, and Cognitive Science, Binghamton University, Binghamton, NY 13902-6000.

[email protected] [email protected]
/psy/faculty/markman/index.html
www.binghamton.edu/philosophy/home/faculty/index.html

Abstract: The perceptual symbol system view assumes that perceptual representations have a role-argument structure. A role-argument structure is often incorporated into amodal symbol systems in order to explain conceptual functions like abstraction and rule use. The power of perceptual symbol systems to support conceptual functions is likewise rooted in its use of structure. On Barsalou's account, this capacity to use structure (in the form of frames) must be innate.

Barsalou's perceptual symbol systems project is exciting and important. Among its virtues, it lays the groundwork for a more embodied approach to cognition and it subtly changes the way cognitive scientists approach mental content, thereby changing what tools theories of mental content or intentionality can use.

A core distinction in cognitive science is the one between perceptual and conceptual representation. Barsalou points out that this distinction is problematic, because there is no successful demonstration of how these types of representations are connected. He suggests that this difficulty should be taken as a sign that perceptual representations are all there is. There are no amodal representations.

In order to make this account plausible, it is critical to demonstrate how key aspects of conceptual representation like abstraction can be cashed out in a perceptual symbol system (PSS). These functions of conceptual representation are accomplished in a PSS by assuming that perceptual representations have role-argument structure in the form of frames.

It is not surprising that Barsalou suggests that frames form the core of PSS. He has long regarded frame representations as being well suited to be the medium underlying conceptual representation (Barsalou 1992). Further, many cognitive scientists adopted representational systems with some kind of role-argument structure in order to facilitate the abstraction process (see e.g., Fodor 1981a; Markman 1999; Schank 1982). And finally, structured representations have been suggested as the basis of models of central perceptual processes like object recognition (Biederman 1987; Marr 1982). Thus, there is some reason to believe that perceptual representations are structured, and structured as Barsalou claims they are.

Naturally, the assumption that frames are critical to perceptual symbol systems leads to the question of where the capacity to represent information with frames comes from. There are two possibilities. One is that the frames develop over the life of a cognitive system from simple unstructured representations (e.g., vectors or independent features). A second possibility is that the capacity to build frames is an inherent capability of a PSS. These possibilities are just a version of the standard "learned versus innate" debate in cognitive science.

To our knowledge, all attempts to construct complex structured representational schemes starting with only unstructured representations have foundered. For example, attempts to account for the development of complex representational capacities using associative connectionist models were not successful (Fodor & Pylyshyn 1988; Marcus 1998). It is possible to create complex representational structures in a connectionist model, but such structure has to be built in ahead of time using other techniques that give the connectionist system a classical structuring capability (e.g., Shastri & Ajjanagadde 1993). A very good example of this is Smolensky's interesting work on connectionism. His tensor product approach does have classical constituent structure already built in (see the series of papers: Smolensky 1990; Fodor & McLaughlin 1990; Smolensky 1995; and McLaughlin 1997).

In light of this difficulty, Barsalou assumes that the capacity to form structured representations is an inherent component of a PSS. That is, he assumes stimuli that contact a cognitive agent are converted into frame representations early on, and that the capacity to do this is innate in the system or organism. Although structured representations have been incorporated into models of perceptual processes, they are not a necessary component of models of perception (e.g., Ullman 1996).

The assumption that representations are structured is extraordinarily important for the PSS account. Once these frames are constructed, many of the techniques of concept formation and abstraction used by proponents of structured amodal representations can be incorporated into PSS, including the ability to make similarity comparisons and to reason by analogy (Gentner 1983; Gentner & Markman 1997). Thus, once Barsalou assumes that representations in a PSS are frames, the ability of this system to account for higher level cognitive abilities is virtually assured.

Where the perceptual symbol system approach differs from previous approaches to structured representations is in assuming that the components of representational frames are tied to perception rather than being derived from a central multimodal representation language (or perhaps from language ability itself). However, it is here that Barsalou's approach has its greatest promissory note. It is a bold step to posit that the structured representations that form the basis of conceptual abilities are closely tied to perception. It is now critical to demonstrate how a true perceptual system could give rise to representations of this type.

Development, consciousness, and the perception/mental representation distinction

Lorraine McCune
Department of Educational Psychology, Rutgers University, New Brunswick, NJ 08903. [email protected]

Abstract: Perceptual symbol systems provide a welcome alternative to amodal encapsulated means of cognitive processing. However, the relations between perceived reality and internal mentation require a more differentiated approach, reflecting both developmental differences between infant and adult experience and qualitative differences between consciously perceived and mentally represented contents. Neurological evidence suggests a developmental trajectory from initial perceptual states in infancy to a more differentiated consciousness from two years of age on. Children's processing of and verbal expressions regarding motion events provides an example of the changing capacity for mental experience.

Barsalou defines a "symbol" as a "record," a neurologically embodied result of the neural state underlying a given perceptual experience. Perceptual symbols so defined are described as the medium for all cognitive processing, both conscious and unconscious. The author is persuasive in his argument that an embodied modal system, directly relating biology to experience, is a parsimonious and effective basis for mental processes. However, theoretical considerations regarding the term "symbolic" and developmental issues require further elucidation.

In the realm of consciousness, perception has traditionally been considered distinct from mental, "imaginal," or "symbolic" representation (Sartre 1984; Werner & Kaplan 1963). Perception is the ongoing experience of present reality, while representation, accomplished through the mediation of symbols, allows consideration of past, future, and counterfactual phenomena. Both types of conscious experience could have their basis in the perceptual symbol systems as defined, but the distinction between them needs to be addressed within the theory.

It has become common to roughly equate infant mentation with that of adults, as Barsalou does: "From birth, and perhaps before, the infant simulates its expected experience and assesses whether these simulations map successfully into what actually occurs" (sect. 3.4.3, para. 6). The author acknowledges the importance of developmental transitions, but only up to a point, noting that "as infants' perceptual and bodily systems develop, their conceptual systems should change accordingly. Nevertheless the same basic form of conceptual representation remains constant across both evolution and development, and a radically new form is not necessary" (sect. 4.2, para. 5). I would like to suggest that the infant's initial capacity from birth and before is limited to direct perceptual experiences of which neural records may be kept, based on the infant's attentional focus, as Barsalou proposes. But during the first two years the child develops the initial capacity for mentally representing reality, a transition which can be documented behaviorally as well as on the basis of changing neurological structures. Without such mental representation capacity the child is clearly incapable of entertaining propositions as described in section 3.2. The capacity for sensory and motor understanding of such relationships as "above" and "below" is not equivalent to propositional analysis.

Observable behavioral changes are demonstrated in activities such as object search and play which are apparently under the child's conscious control. Evidence from postnatal neurological development favors an initial organization emphasizing sensory and motor areas, which show rapid early synaptogenesis (e.g., visual cortex at 3–4 months; Huttenlocher 1994), with later growth in prefrontal areas leading to maximum density at about 12 months and increasing functional capacity as brain growth continues rapidly throughout the second year (Bates et al. 1994; Huttenlocher 1994; Vaughn & Kurtzburg 1992). This is the period of initial development and consolidation of mental representation as expressed in object search, representational play, and language. Changes in prefrontal functioning at 9–10 months differentiate 12-month-olds who tolerate long versus short delays on the Piagetian A not B search task, originally considered a measure of early mental representation (Fox & Bell 1993; Piaget 1954). The proposed inhibitory and integrational role of prefrontal cortex for this task and for the indirect object search task (Diamond et al. 1994) could also account for shifts in representational play associated with language changes in the second year of life (McCune 1995). High versus low language producers in the 14–20-month age range have exhibited different patterns of event-related potential (ERP) responses to comprehended versus familiar words, suggesting that speech production itself may contribute to neurological organization for language processing (Mills 1994).

Sartre (1948) distinguished perceptual from imaginal (mentally represented) experience. A perceptual consciousness of a portrait attends to the pictorial properties of color, the appearance of the subject, and other information directly available to the senses. In a representational consciousness the portrait is merely a vehicle for evoking a mental experience of the individual portrayed. As adults we shift rapidly between these modalities in everyday life. There is reason to suppose the child achieves this capacity only gradually, the capacity to shift from perception to representation keeping pace with the developing capacity for representational experience. Both Piaget (1962) and Werner and Kaplan (1963) proposed bodily imitation of external events as the initial vehicle for internalizing perceptual and motor experiences which might later be represented in consciousness without such bodily accompaniment.

Motion events which infants witness and in which they participate provide a domain of experience including perceptual, motor, representational, and linguistic developments during the first two years of life which demonstrates the link between the earliest perceptual experience and the onset of propositional judgment. Objects and people in the world maintain their identity and appearance while undergoing a variety of motions in space. By 16 weeks of age, infants use kinematic information to discriminate the form of a pictured object from the background and to maintain the identity of a three-dimensional object, distinctions that cannot be made for stationary objects even at 32 weeks (Kellman 1993). Talmy (1985) proposed that "the motion situation" provides a structure in the real world that is consistently expressed syntactically and semantically across languages, but by varying linguistic means. Children beginning to speak produce words referring to common relational aspects of motion events which differ by part of speech across languages, while maintaining equivalent conceptual content. These words, first described by McCune-Nicolich (1981) as unified by a common underlying structure, give evidence of a universal set of cognitive/linguistic primitives consistent with their derivation from the child's experience of motion events.

Motion events display temporal event sequences, where the child's production of a relational word indicates a mental comparison of the present state of affairs to a prior, expected, or desired reverse alternative, for example: iteration or conjunction (more, again) and potential reversibility or negation (no, ohoh, back). These early single word "propositions" provide the first linguistic evidence that the child is using a system of mental representation – most likely a perceptual symbol system – for cognitive processing.

Simulations, simulators, amodality, and abstract terms

Robert W. Mitchell and Catherine A. Clement
Department of Psychology, Eastern Kentucky University, Richmond, KY 40475. [email protected] [email protected]

Abstract: Barsalou's interesting model might benefit from defining simulation and clarifying the implications of prior critiques for simulations (and not just for perceptual symbols). Contrary to claims, simulators (or frames) appear, in the limit, to be amodal. In addition, the account of abstract terms seems extremely limited.

Barsalou's proposal for grounding cognition in perceptual simulation is intriguing, and seems almost an elaboration of some ideas about the development of cognition from sensory-motor processes presented by Jean Piaget (e.g., 1945/1962; 1947/1972), who remains uncited. Barsalou attempts to avoid many of the problems of viewing cognition as based on perceptual "images" by letting in image-like structures ("simulations") by the back door of neural processing. Although "perceptual symbols" are unconscious, componential, indeterminate, dynamic, and nonindividualistic records of neural states or activations related to perception, simulations can be conscious, determinate, static, and (somewhat) individualistic image-like entities, or the reverse (unconscious, indeterminate, etc.). Records of neural states/activation are themselves organized into a simulator (frame or concept, seemingly comparable to Piagetian schemata), which produces simulations based on perceptual states, thereby making clear that the simulation is not a copy of perception.

Barsalou's analysis is a useful way to avoid the presumed pitfalls of perception-based cognition but, oddly, "simulation" is never defined. Simulations are described as similar to "specific images of entities and events that go beyond particular entities and events experienced in the past" (sect. 2.4.2), with the important consequence that much of the enormous literature on perception-based cognition is still relevant to his model. It would seem, though, that many of the criticisms of the past (e.g., those of Berkeley and Hume) had not been directed, as Barsalou claims, to perceptual symbols, but rather to simulations. Thus, although the perceptual symbols avoid the problems about cognition derived from perception, it is unclear that simulations avoid these problems. In addition, in attempting to conceptually separate potentially conscious simulations from nonconscious perceptual symbols (sect. 2.1.1), Barsalou implies that consciousness is not at all necessary for cognition; but bringing up blindsight as an instance of nonconscious cognition ignores the fact that blindsight occurs only in people who previously had conscious visual experiences.

Simulations are repeatedly described as multimodal in the sense that they can incorporate multiple and diverse perceptual modalities, but the particular aspects described (frames, specializations) are always within one modality (predominantly vision) at a time. One potentially problematic aspect for the proposal is whether frames capable of being specialized multimodally are, in the limit, amodal when the same element in the frame can be represented in multiple modes. For example, problems arise when describing cross-modal knowledge such as the kinesthetic-visual matching present in mirror-self-recognition and bodily imitation (Mitchell 1994). If "body image" is represented both visually and kinesthetically, and both aspects can represent (or simulate) the same thing (the body), this representation/simulation must be either amodal (and accessible to both modes) or simultaneously multimodal. If it is simultaneously multimodal, the knowledge structure must be a connection (neural or other) between visually and kinesthetically stored simulations coordinated into a frame – a sensorimotor matching or mapping between frames, not just a possible specialization as either visual or kinesthetic. If this frame develops for the body image with all its possible specializations completely unspecified as to being kinesthetic or visual, isn't the frame then amodal? Similarly, rhythm can be detected in one's own movements, in the visual experience (or memory) of another's movements, or auditorily in music. When one recognizes the identity in these different rhythms, is this a mapping among simulations, a mapping from simulations to perceptions, or an amodal detection of temporal consistency? More complex: How is a nonperceptual analogy recognized if a frame needs to be perceptually specialized?

Barsalou's analysis of truth, though impressive, seems more a description of how to decide if something is true while already having a notion of truth to decide about. A matching between a visual simulation and a visual experience could represent not only "truth," but also "similar," "comparable," and "looks like." Given that "truth" does not seem to be a perceptual construct, more seems necessary to justify the claim that people's core intuitive sense of "truth" is represented by matching simulation to perception. Similar problems seem likely with other perceptually unspecifiable terms such as "can," "might," "electricity," "ignorant," and even "thing." It will, then, be interesting to see how far perception-based cognition can go in elucidating how we understand the world and ourselves.

Introspection and the secret agent

Natika Newton1

Nassau County Community College, New York, NY 11530.

[email protected]

Abstract: The notion of introspection is unparsimonious and unnecessary to explain the experiential grounding of our mentalistic concepts. Instead, we can look at subtle proprioceptive experiences, such as the experience of agency in planning motor acts, which may be explained in part by the phenomenon of collateral discharge or efference copy. Proprioceptive sensations experienced during perceptual and motor activity may account for everything that has traditionally been attributed to a special mental activity called "introspection."

Barsalou's target article presents a convincing case for perceptual rather than amodal symbol systems. Considered in light of a proposal like this one, the traditional amodal view appears, in hindsight, strikingly misguided and implausible. My only serious objection concerns Barsalou's remarks on introspection (sects. 2.3 and 2.4), which he sees as "an aspect of perceptual experience" like proprioception. He points out that introspective processing is relatively poorly understood.

It is questionable whether there is an aspect of perceptual experience that corresponds to "introspection." Barsalou does not define the term, and talks about it somewhat in the style of Locke, who believed that we can attend to either outer or inner experience, and when doing the latter, will perceive mental operations – for example, comparing, representing, and so on. Why believe that there are specific brain mechanisms for processing perceptual experiences of other brain mechanisms? There would seem to be more parsimonious ways to explain what was traditionally called "introspection" without taking that risky step toward infinite regress, especially without evidence of any corresponding brain structures.

Barsalou does talk about selective attention and our ability to abstract, and suggests that in introspection, selective attention can focus on our "ability to represent something in its absence." This procedure seems to me to be a case of theorizing within a particular conceptual framework involving "the mind," "representation," and so on, but not requiring any ability to experience the "ability to represent." The concept of representing, if Barsalou is right, must be perceptually grounded like any other concept; but it can be grounded in the perceptual experience of external relations between representer and represented, not necessarily in the activity of representing all by itself. The ability to represent things may well be logically entailed by our representational experience together with the theories we hold about the brain, but that entailment is not the same thing as the representing ability being an experiential aspect of the process of representation, in the way that, for example, visual features such as colors or outlines can be.

Instead of talking about introspection, I suggest that we look for subtle proprioceptive experiences that accompany activities like representing, attending to, and so forth, and that may be unnoticed but taken for granted as essential aspects of the experiences of those activities together with their sensory objects. For example, when one sees a visible object, one is aware not only of the visible features of the object, but of the proprioceptive sensations of focusing the eyes and orienting the body toward the object. These ordinary perceptual experiences may be all that is necessary to ground the concept of "seeing."

One aspect of experience that has traditionally seemed to require a special form of access like introspection is the experience of agency that accompanies all voluntary intentional activity, even subtle perceptual activity such as ocular control. But even this experience is proprioceptive. It appears to be produced by corollary discharge, or efference copy: information about motor commands to the muscles that is sent to the parietal lobe and to the cerebellum from the motor and premotor cortex, for purposes of comparison with the expected movement. This information about motor commands is attended to by the subject normally only when the expected feedback does not match the commands, or is lacking altogether, as when the relevant limb is paralyzed. In ocular motor control, the experience is almost continuous, since the eyes must be constantly adjusted in response to shifts in the visual field (Houk et al. 1996). The subjective feeling of agency is strikingly absent when motor activity is initiated artificially in experimental conditions; in such cases the movement is "automatically referred to a source (an intention) distinct from the self" (Jeannerod 1997, p. 189). There is evidence that information from efference copy is held in working memory, and hence would be accessible to consciousness like any other perceptual experience (Houk et al. 1996).

Barsalou has supplied us with all the basic mechanisms needed to ground our entire conceptual system in sensorimotor experience. All that is needed is to tease out a few more of the tricks used by our bodies to inform us of what we are doing and what is happening to us so that we can be, or at least feel that we are, full intentional agents. Efference copy is one trick that bears more study. Unless systems like this prove to be inadequate to explain self-awareness, concepts like introspection should probably be kept in the archives.

NOTE
1. Author's mailing address is 15 Cedar Lane, Setauket, NY 11733.

Can metacognition be explained in terms of perceptual symbol systems?

Ruediger Oehlmann
University of Essex, The Data Archive, Psychology Unit, Wivenhoe Park, Colchester, CO4 3SQ, England. [email protected]/~roehl

Abstract: Barsalou's theory of perceptual symbol systems is considered from a metacognitive perspective. Two examples are discussed in terms of the proposed perceptual symbol theory. First, recent results in research on feeling-of-knowing judgement are used to argue for a representation of familiarity with input cues. This representation should support implicit memory. Second, the ability of maintaining a theory of other people's beliefs (theory of mind) is considered and it is suggested that a purely simulation-based view is insufficient to explain the available evidence. Both examples characterize areas where Barsalou's theory would benefit from additional detail.

Barsalou presents a series of very careful and challenging arguments for a perceptual theory of knowledge. This commentary will consider Barsalou's view of introspection from the perspective of metacognition. A common view in various characterisations of metacognition is that the term describes mechanisms of monitoring and controlling cognition (Flavell 1976; Nelson & Narens 1990). This commentary will be centered around two examples that require thinking about an agent's knowledge and beliefs. The examples will be reviewed from the perspective of Barsalou's perceptual symbol system theory, and difficulties in explaining some metacognitive results in terms of this theory are highlighted. The first example involves feeling-of-knowing judgement.

Feeling of knowing (FoK) judgements have been considered as part of the monitoring process. The classical view is that these are judgements about whether a given currently nonrecallable item can be retrieved from memory (James 1890). A recent extension of this definition views FoK as a rapid, preretrieval stage (Miner & Reder 1994; Reder & Ritter 1992), and sees the function of FoK to be judging the expected retrievability of queried information. For example, Reder and Ritter (1992) presented their subjects with calculation tasks such as 23*34 = ?, and asked them to make a decision whether they wanted to obtain the solution by retrieval from memory or calculation. The results showed that subjects were able to make a very rapid initial evaluation of a question before an answer was attempted. Retrieving the actual answer required more time than the strategy choice decision, which was based on familiarity with problem parts rather than with the answer. Reder and Ritter therefore concluded that FoK judgement is based on familiarity with cues derived from the problem. Schunn et al.'s (1997) source activation confusion (SAC) model addressed the issue of cue familiarity by using input nodes to represent problem components such as 23, *, and 34. These nodes were linked to a node that represented the entire problem 23*34, which in turn was connected to a node that represented the solution. Activation passing from input nodes to the entire-problem node allowed FoK judgement, whereas activation passing from this node to the solution node allowed answer retrieval.
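Read computationally, the two-stage distinction credited to the SAC model can be sketched as follows. The node names, weights, and thresholds are illustrative placeholders chosen for this sketch, not parameters from Schunn et al. (1997).

```python
# Schematic sketch of the SAC-style distinction between a fast
# familiarity signal (input cues -> problem node) and slower answer
# retrieval (problem node -> solution node). All numbers are illustrative.

# Long-term structure for the previously practiced problem "23*34".
cue_to_problem = {"23": 1.0, "*": 0.3, "34": 1.0}   # cue -> problem-node weights
problem_to_answer = 0.8                              # problem -> solution-node weight
answer = 782                                         # stored solution of 23*34

FOK_THRESHOLD = 1.2        # familiarity needed to choose "retrieve"
RETRIEVAL_THRESHOLD = 0.9  # activation needed at the answer node

def feeling_of_knowing(cues):
    """Stage 1: activation passed from input-cue nodes to the problem node."""
    return sum(cue_to_problem.get(c, 0.0) for c in cues)

def retrieve(problem_activation):
    """Stage 2: activation passed on from the problem node to the answer node."""
    if problem_activation * problem_to_answer >= RETRIEVAL_THRESHOLD:
        return answer
    return None  # fall back to calculation

cues = ["23", "*", "34"]
fok = feeling_of_knowing(cues)
if fok >= FOK_THRESHOLD:           # rapid strategy choice based on cue familiarity
    print("retrieve:", retrieve(fok))
else:
    print("calculate:", 23 * 34)
```

The point of the sketch is only the functional separation: the strategy decision is made from cue familiarity alone, before any activation reaches the stored answer.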

Simulators could obviously be used to represent a problem and its solution. More specifically, the frames that form part of a simulator (target article, sect. 2.5, para. 1) could store the different components of the problem and solution, as was indicated in Figure 3 of the target article (for a car). The solution could then be retrieved if the same problem was presented again. This approach would not support an intermediate representation of an input cue, however, as this is achieved in the SAC model. In contrast to FoK judgement, which is essentially part of monitoring an agent's own knowledge, the second example is concerned with thinking about the beliefs of others.

Theory of mind (ToM) has been characterized as the ability to assign mental states to self and to others and to use knowledge about these states to predict and explain behavior (Leslie 1987; Premack & Woodruff 1978). A classical example of ToM is the false belief task described by Wimmer and Perner (1983). It involves the child Maxi, who puts chocolate in a blue cupboard and goes playing. In the meantime, the mother uses part of the chocolate and puts the remaining part into the green cupboard. Wimmer and Perner reported that when asked where Maxi would look for the chocolate after returning from playing, most children under four years of age attributed to Maxi what they believed themselves, that the chocolate is now in the green cupboard.

Barsalou relates introspection to representational and emotional states (target article, sect. 2.3.1, para. 1). A clarification is needed of how his theory relates mental states such as beliefs and desires to introspection. If these were meta-level experiences represented by simulators, it would mean that both beliefs and desires could be simulated. However, Gopnik and Slaughter (1991) demonstrated that three-year-olds can report mental states such as desires and perceptions that are just past but now different, whereas they confirmed that three-year-olds failed false belief tasks. In addition, three-year-olds pass discrepant belief tasks, which require that another person maintain a belief that is not misrepresented but different from the agent's belief (Wellman & Bartsch 1988). Gopnik and Wellman explain the different results for false belief tasks and discrepant belief tasks by pointing out that the child fails to realize that the belief in the false belief task is misrepresented. If beliefs were simply simulated, these different results would not be understandable. Another explanation has been put forward that is based on the assumption that children's early conceptions of the mind are also implicit theories and that changes of these conceptions are theory changes (Gopnik & Wellman 1994; Leslie 1994; Leslie & German 1995). This assumption has been referred to as theory-theory; it argues that young children do not have an explanatory psychological construct that is based on a theoretical conception of belief. Therefore their explanations do not consider beliefs and especially false beliefs or misrepresentations. This suggests that rather than relying solely on simulation, a perceptual theory of knowledge should account for constructs derived from implicit theories as well.

In conclusion, two examples have been presented to indicate why it is difficult to explain some metacognitive results in terms of perceptual symbol systems. The first focused on frames and their organization in a simulator. In the SAC model (Schunn et al. 1997), there is clearly a functional distinction between units that represent input components and intermediate units that are involved in FoK judgement. Nodes that store solutions can only be retrieved after activation has been passed to relevant intermediate nodes. It appears that in Barsalou's perceptual symbol system, such a distinction would have to be clarified for links between frames before simulators would be able to support FoK judgement. The second example questioned simulations as the only approach to understanding beliefs and suggested that some results could be better explained by assuming implicit theories. Hence it might be useful if a theory of perceptual symbol systems allowed for explanations based on implicit theories as well as simulations.

Selecting is not abstracting

Stellan Ohlsson
Department of Psychology, University of Illinois at Chicago, Chicago, IL 60607. [email protected]

Abstract: Barsalou's hypothesis that mental representations are constructed by selecting parts of percepts encounters the same difficulties as other empiricist theories: They cannot explain concepts for which instances do not share perceptible features (e.g., furniture) or for which there are no relevant percepts (e.g., the end of time). Barsalou's attempt to reduce falsity to failed pattern matching is an elementary error, and the generativity of his simulators cannot be attained without nonterminal symbols. There is not now, and there never was, any reason to be interested in empiricist theories of knowledge. Abstraction is a fundamental aspect of human cognition.

Abstraction. Empiricist theories of knowledge claim that people derive knowledge from experience, either by extracting shared features from sets of exemplars (Aristotle, Locke) or by combining sense data (Carnap, Russell). Barsalou proposes a different process: to select and recombine parts of percepts.

Classical empiricist theories could not explain abstract concepts. Barsalou's theory is no more successful, because selection is the wrong tool for the job. A part of a thing is not more abstract than the thing itself. For example, a wheel is as concrete an entity as a car. Hence, the percept of a wheel is not at a higher level of abstraction than a percept of a car. Selecting is not abstracting.

A closely related difficulty for Barsalou's theory is that the instances of some concepts do not share any perceptible features. Consider furniture, tools, and energy sources. No perceptible feature recurs across all instances of any of these categories. Hence, those concepts cannot be represented by combining parts of past percepts.

Scientific concepts illustrate the same point (Ohlsson & Lehtinen 1997). The physicist's concept of a mechanical system includes moons and pendulums. However, my brain cannot take parts of my percept of the moon and build a perceptual representation of a pendulum out of them. The path of the moon across the sky does not look at all like the swings of a pendulum, so the corresponding percepts do not share any part that can be used to encode the concept.

A slightly different challenge is posed by entities like the health care system, the middle class, the recent election, and the Internet. They lack spatio-temporal boundaries and have indeterminate physical embodiments, so they are not perceived in any normal sense. Hence, they cannot be represented by parts of percepts.

An even stronger challenge is posed by concepts that do not correspond to any percepts at all. Consider the future, the end of time, and life after death. I have never perceived these things, so how could my mental representations of them consist of parts of percepts? Mathematical concepts make the same point. What parts of which percepts are supposed to constitute my concepts of the square root of minus one, infinite sets, and the derivative of a function? Finally, consider fictional concepts. What parts of which percepts are supposed to constitute my concepts of time travel, instantaneous matter transmitters, or telepathy?

In section 3.4.2, Barsalou postulates two capabilities other than selection for constructing abstract concepts: the ability to represent event sequences and the ability to store percepts of introspective states. As the above examples show, abstract concepts are not typically associated with any particular event sequences or introspections. Barsalou's treatment of abstraction is entirely inadequate.

Truth and falsity. Barsalou proposes to reduce truth and falsity to pattern matching. This approach seems to work as long as we only consider positive matches. One might argue that to establish the truth of an assertion like the cat is on the mat, it is sufficient to look in the relevant direction and match the pattern ON(CAT, MAT) to the incoming perceptual information. If the pattern fits, then it is true that the cat is on the mat.

However, as implementers of the early production system languages quickly discovered, falsity cannot be reduced to a failed pattern match. Suppose somebody quietly pulls the rug out of the room while I am looking out the window and that the cat remains seated, enjoying the ride. Next time I look around, the pattern ON(CAT, MAT) no longer matches my perception, but I cannot conclude from this that the cat is on the mat is false. This is a computational formulation of the old insight that the absence of evidence is not evidence for an absence. For example, the fact that astronomers have so far failed to find any patterns in the sky that indicate communication attempts by extra-terrestrials does not prove that there are aliens out there is false. To propose to reduce falsity to failed pattern matching is an elementary technical error.
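The point can be made concrete with a toy working memory: under an open-world reading, a failed match licenses only "not currently matched," never "false." The data structure and function names below are illustrative, not drawn from any particular production system.

```python
# Sketch of why a failed pattern match does not establish falsity.
# Working memory holds only what has been perceived so far; an absent
# pattern may simply be unobserved (open world) rather than false.

working_memory = {("ON", "CAT", "MAT")}

def matches(pattern):
    """True if the pattern is currently supported by perception."""
    return pattern in working_memory

def truth_value(pattern):
    """Three-valued reading: True if matched, otherwise unknown."""
    if matches(pattern):
        return True
    return None  # absence of evidence, not evidence of absence

print(truth_value(("ON", "CAT", "MAT")))   # True: pattern matches perception
working_memory.clear()                      # the rug, cat still aboard, leaves the field of view
print(truth_value(("ON", "CAT", "MAT")))   # None: unmatched, but not thereby false
```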

Generativity. Barsalou assigns wonderful powers to his simulators, but he does not describe how they are supposed to operate. However, it appears that simulators can generate infinitely varied representations ("the simulator for chair can simulate many different chairs under many different circumstances," sect. 2.4.3, para. 2).

This claim is incompatible with the perceptual symbol idea, because a process that can generate a range of representations cannot itself be one of those representations. For example, consider the sequence of measurements 15, 17, 19, . . . and the generative mechanism N(i+1) = f[N(i)] = N(i) + 2. The function f is not itself a member of the sequence 15, 17, 19, . . . This observation is general: a generative mechanism requires nonterminal symbols (f, N, +, etc.). Hence, Barsalou's simulators, if they are to carry out the generative function he assigns them, cannot themselves consist of nothing but perceptual symbols.
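Ohlsson's numeric example can be rendered directly; the rendering simply makes explicit that the generating rule is stated over nonterminals and is not itself one of the generated terms. The helper names below are illustrative only.

```python
# Ohlsson's numeric example: the generator f is not itself a member of
# the sequence it generates; it is stated over nonterminals (f, N, +).

def f(n):
    """Generative rule N(i+1) = f(N(i)) = N(i) + 2."""
    return n + 2

def sequence(start, length):
    """Generate the first `length` terms starting from `start`."""
    terms, n = [], start
    for _ in range(length):
        terms.append(n)
        n = f(n)
    return terms

terms = sequence(15, 5)
print(terms)        # [15, 17, 19, 21, 23] -- the generated measurements
print(f in terms)   # False: the generating function is not one of its terms
```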

Conclusion. There is not now, and there never was, any positive reason to be interested in empiricist theories of cognition. Empiricists are forever arguing, as does Barsalou, to survive the obvious objections. Better be done with such theories once and for all and focus on the fundamental and remarkable fact that humans are capable of forming abstractions. The most natural explanation for this fact is that abstract concepts are exactly what they appear to be: internally generated patterns that are applied to perceptual experience to interpret the latter (Ohlsson 1993). A successful theory of human cognition should start from the principle that although knowledge is of experience it does not derive from experience.

A little mechanism can go a long way

David A. Schwartz (a), Mark Weaver (b), and Stephen Kaplan (c)
(a) Department of Psychology, University of Michigan, Ann Arbor, MI 48109-1109; (b) Corvus Development, Inc., Ann Arbor, MI 48104; (c) Department of Electrical Engineering and Computer Science and Department of Psychology, University of Michigan, Ann Arbor, MI [email protected] [email protected] [email protected]

Abstract: We propose a way in which Barsalou could strengthen his position and at the same time make a considerable dent in the category/abstraction problem (that he suggests remains unsolved). There exists a class of connectionist models that solves this problem parsimoniously and provides a mechanistic underpinning for the promising high-level architecture he proposes.

No cognitive theory should be required to answer all the questions we might ask about the origins and structure of knowledge (at least not all at once, in a single article), and Barsalou, to his credit, tells us explicitly which questions his theory does not attempt to answer. However, by restricting his attention to the high-level architecture (leaving out any discussion of detailed mechanisms) and by claiming that a laundry list of issues related to categories and concepts remains "unresolved in all theories of knowledge," Barsalou may be creating a misleading impression about the current state of perceptually-grounded theories of cognition – as well as failing to take advantage of some valuable sources of support.

Barsalou takes some pains to show us that perceptual theories of thought have a long and distinguished pedigree. We find it curious, therefore, that he neglected the history of perceptually-grounded connectionist theory. Barsalou cites Pulvermüller (1999) several times as an example of connectionist work that is compatible with a perceptual symbol-system perspective. But Pulvermüller's approach is directly inspired by Hebb's cell-assembly theory, which was, of course, proposed in 1949 in "The Organization of Behavior." We agree with Barsalou that not all connectionist models can be considered to support "perceptual symbols"; similarly, not all connectionist models can properly be considered as descendants of Hebb's cell-assembly proposal. However, there is a substantial group which can be so considered, a group that we have previously referred to as "active symbol" connectionists (Kaplan et al. 1990), which includes not only Pulvermüller (1999), but also, for example, Braitenberg (1978; [see also Braitenberg et al. "The Detection and Generation of Sequences as a Key to Cerebellar Function" BBS 20(2) 1997]), Edelman (1987), Grossberg (1987; [see also Grossberg: "Quantized Geometry of Visual Space" BBS 6(4) 1983]), Kaplan et al. (1991), and Palm (1982). Although Hebb's main concern was a cell-assembly's capacity to support persistence, not perceptual groundedness, cell-assemblies, as Hebb described them, do have the kind of perceptual groundedness that is critical for Barsalou. [See also Amit: "The Hebbian Paradigm Reintegrated" BBS 18(4) 1995.]

Barsalou cites the Hebbian strengthening of connections between active neurons as one way in which perceptual symbols might form, but he seems not to have considered that this same associative mechanism by itself ensures that the resulting conceptual structures will be abstract, schematic, and categorical. By the Hebbian learning process, cells that initially exhibit patterns of activity that are weakly positively correlated grow increasingly interconnected and come to function as a single unit; and cells that initially display uncorrelated or weakly negatively correlated firing patterns will grow increasingly distinct functionally and end up as members of different neural circuits.

What kind of neural representation will this sort of process generate? Consider a layer of cortex whose neurons respond to different feature analyzers at the sensory interface (e.g., shape, color, texture). A first encounter with a given object would set up a fairly diffuse pattern of cortical activity and would lead to a modest strengthening of the connections among the active elements. Because every object is encountered in some context, some of the active elements would represent features of the object, whereas others would represent contextual features. Given repeated encounters with the same object in varying contexts, the neurons representing the object’s features will grow into a highly interconnected network while those representing contextual features will tend to fall away. Hebb coined the terms “recruitment” and “fractionation” to describe these two processes whereby neurons become incorporated into or excluded from a network.
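To make the recruitment/fractionation dynamic concrete, consider a minimal sketch in Python with NumPy (illustrative only, not taken from the commentary; the unit labels, learning rate, and decay constant are assumed). A plain Hebbian rule with passive decay is applied to a handful of units: connections among features that recur together across encounters grow strong, while connections involving features that co-occur only occasionally drift toward zero.

    import numpy as np

    rng = np.random.default_rng(0)
    n_units = 6                       # units 0-2: recurring object features; units 3-5: contextual features
    W = np.zeros((n_units, n_units))  # connection strengths
    eta, decay = 0.1, 0.02            # illustrative learning rate and passive decay

    for encounter in range(200):
        x = np.zeros(n_units)
        x[:3] = 1.0                    # the object's own features are present on every encounter
        x[3 + rng.integers(3)] = 1.0   # one of several contextual features, varying across encounters
        W += eta * np.outer(x, x)      # Hebbian strengthening among co-active units ("recruitment")
        W *= (1.0 - decay)             # decay weakens rarely reinforced links ("fractionation")
        np.fill_diagonal(W, 0.0)

    print(W.round(2))  # object-object links end up strong; object-context links stay much weaker

Run in this form, the units coding the object’s recurring features end up densely interlinked while object-context links remain much weaker, which is the sense in which the associative mechanism isolates invariant features without outside guidance.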

The same processes of recruitment and fractionation occurring at a stage further removed from the sensory interface can grow neural representations that stand for classes of objects. Given repeated encounters with a set of objects that share certain features, birds for example, the neural units responding to the most invariant features (e.g., feathers and beaks) will grow into a highly interconnected functional unit, whereas the more variable features (e.g., color, size) will be excluded from the set of core elements. There is in principle no limit to the number of different levels of abstraction at which this process might operate. Given sufficient cortical “depth” and sufficiently varied experience, we might expect an individual to develop neural representations that stand not only for classes of objects, but for classes of classes, and so on ad infinitum. An assembly of neurons that forms in this fashion will exhibit many of the properties Barsalou attributes to perceptual symbols. It will be schematic, in that it represents only a subset of the features that any actual object manifests at any given time. It subserves categorization, in that the same assembly responds to varying instances of some class of objects that have features in common. It is inherently perceptual, dynamic, and can participate in reflective thought. In addition, it allows for recognition on the basis of partial input, typicality effects, prototype formation, and imagery. And note that we get all of this, in a sense, for free. The mechanisms of recruitment and fractionation that build the required perceptual symbols at the same time isolate the invariant features of some class of objects without the need for any outside intelligence to guide the process along.

By contrast, Barsalou seems to place much of the burden for perceptual symbol formation on an unexplained process of selective attention. According to his theory, attention selects some partial aspect of a perceptual experience and stores this aspect in long-term memory. Moreover, he offers no account of why in any given perceptual experience attention might focus on some features and not on others. This we find troubling, for in the absence of any principled constraint on which subsets of experience come to serve as dynamic internal representations, Barsalou’s perceptual symbols seem only marginally less arbitrary than the symbols postulated by the amodal and connectionist theories he rightly criticizes.

By distancing himself from connectionist approaches in general, Barsalou not only undermines his capacity to deal with the particular problem of categories and abstractions, but also his capacity to specify any sort of mechanism for the perceptual approach. He perhaps did not realize that some forms of connectionism, in particular the Hebbian or “active symbol” version, are well suited to support and give structure to his position.

Truth and intra-personal concept stability

Mark Siebel
Research Group in Communication and Understanding, University of Leipzig, 04109 Leipzig, Germany. [email protected]

Abstract: I criticize three claims concerning simulators: (1) That a simulator provides the best-fitting simulation of the perceptual impression one has of an object does not guarantee, pace Barsalou, that the object belongs to the simulator’s category. (2) The people described by Barsalou do not acquire a concept of truth because they are not sensitive to the potential inadequacy of their sense impressions. (3) Simulator update prevents Barsalou’s way of individuating concepts (i.e., identifying them with simulators) from solving the problem of intra-personal concept stability because to update a simulator is to change its content, and concepts with different contents are distinct.

Barsalou’s theory involves three claims about simulators that I wish to criticize. They concern (1) true propositions, (2) the development of a concept of truth, and (3) intra-personal concept stability.

1. According to Barsalou (sects. 2.4.4 and 3.2.3), one entertains a true proposition to the effect that a perceived individual belongs to a certain category if one’s simulator for that category provides the best-fitting simulation of the perceptual impression one has of the individual. However, that account rests on a confusion between what seems to be true (to a person) and what is true.

Consider someone who sees a horse that looks like a donkey to him; the horse causes a neural representation that resembles the impressions he normally has when he sees a donkey. Consequently, the simulator that provides the best-fitting simulation of the person’s impression is his simulator for donkeys, not for horses. Thus, according to Barsalou, the person not only classifies the horse as a donkey; his conceptual system also provides a true proposition. Because the simulator for donkeys produces a satisfactory simulation of the horse, the horse should belong to the simulator’s category. But, obviously, a horse does not become a donkey because it seems to be a donkey. Otherwise it would be nearly impossible to make a mistake in classifying an object. The resemblance between a perceptual representation of an object and representations created by a simulator for a certain category does not guarantee the object’s category membership. Whether the object belongs to the category depends on the veridicality of those representations.

2. Barsalou’s account of the way we acquire a concept of truth is infected by a similar problem. In Barsalou’s view, people acquire a concept of truth by “learn[ing] to simulate the experience of successfully mapping an internal simulation into a perceived scene” (sect. 3.4.3, para. 2). They develop such a concept by, for example, comparing perceptual simulations they form by hearing the utterance of a sentence and perceptual impressions they have of a corresponding scene. If the simulation resembles the impression to a certain extent, they learn to reply to the utterance with “That’s true.”

Roughly, truth is correspondence to the facts: a representation is true if and only if it represents a realized state of affairs. What Barsalou’s people learn, however, is not construing truth as correspondence to the facts, but as correspondence to certain representations, namely, sense impressions of perceived scenes. Their concept of “truth” is instantiated whenever the simulation caused by hearing an utterance corresponds to the impression caused by looking at a certain scene – regardless of whether the impression itself is veridical. Barsalou’s people are not sensitive to the potential inadequacy of their sense impressions. In a word, they do not develop a concept of truth because it is too easy to elicit assent from them: they say “That’s true” whenever the way in which they conceptualize the meaning of the utterance resembles the way in which they perceive the crucial scene. They do not know what it is for an utterance to be true because they do not take into consideration that their perceptual impressions can lead them astray.

3. As Barsalou (sect. 2.4.5) himself recognizes, an adequate theory of concepts must allow for a certain extent of intrapersonal concept stability. It must allow, for example, that the concept one uses in classifying an object on Thursday is the same as the concept one used on Monday to classify another object. Barsalou (sects. 2.4.3–2.4.5) tries to achieve stability by identifying concepts with simulators: even if different conceptualizations (i.e., simulations) are responsible for classifying different objects as being members of the same category, the same concept is used in these processes as long as the same simulator provides these best-fitting simulations.

That way of individuating concepts, however, remains too fine-grained to save intra-personal concept stability, for Barsalou (sects. 2.4.4 and 3.2.2) says that every successful simulation updates the corresponding simulator by integrating perceptual symbols of the categorized object into it. In other words, the categorization of an object supplies the simulator with new perceptual information about the members of its category so that it is able to create further simulations afterwards. Thus, updating a simulator (i.e., a concept) means to change its content; and since concepts with different contents are distinct, the update leads to another concept.

Concepts are to be individuated by their extension (their referents) and their content (the information they contain about their referents). Bertrand’s concept of donkeys is already different from Susan’s concept of horses because it represents another category. And if Bertrand’s concept of horses does not contain the same information as Susan’s concept of horses, they are likewise distinct, although they have the same extension. To use a Fregean term, even concepts for the same category differ if their “modes of presentation” are distinct, that is, they represent that category in different ways.

Hence, an update of a simulator entails that it is no longer the same concept. Although the category of objects it represents remains the same, we have a different concept afterwards, because there is something added to its content, namely, perceptual information about the object categorized before. Hence it is hard to see how Barsalou’s account could save concept stability within individuals. If the categorization of an object is followed by an update of the corresponding simulator, then the next assignment of an object to the same category does not arise from the same simulator because its content changed. Because different contents imply different concepts, updating prevents a person from using the same concept twice.

In order to save concept stability within individuals, one must not identify concepts with simulators as a whole, but only with a certain core of them. Much as shared concepts between individuals only require similar simulators, the identity of a concept within an individual merely requires that the underlying simulator not have changed too much.

A perceptual theory of knowledge: Specifying some details

Aaro Toomela
Department of Psychology, University of Tartu, Tartu, Estonia EE. [email protected]

Abstract: We attempt to resolve some details of Barsalou’s theory. (1) The mechanism that guides selection of perceptual information may be the efferent control of activity. (2) Information about a world that is not accessible to the senses can be constructed in the process of semiotic mediation. (3) Introspection may not be a kind of perception; rather, semiotically mediated information processing might be necessary for the emergence of introspection.

Barsalou describes a perceptual theory of knowledge for which the need emerges from the historical divergence of the fields of cognition and perception in psychology. Russian psychology, however, especially Vygotsky-Luria’s school of psychology, never separated thinking from perception (Luria 1969; 1973; 1979; Vygotsky 1934/1996). Barsalou also suggests that there is no theory of knowledge to explain when emergent features arise and why (note 18). Yet the primary objective of the field of semiotics is to understand semiosis and the knowledge-making activity (Danesi 1998). The question of how novel information is generated is thoroughly analyzed in the Tartu-Moscow school of semiotics (e.g., Lotman 1983; 1984; 1989). Thus, there are some ideas in Vygotsky-Luria’s psychology and in semiotics that may be useful for filling gaps in Barsalou’s theory.

Selection of information may be guided by the efferent control of activity. According to Barsalou, selective attention has a central role in the construction of perceptual symbols (e.g., sect. 2.2), but he does not explain how the cognitive system knows where to focus attention during the symbol formation process (note 7). He assumes that a conceptual system may be based on perceptual symbols. However, if we do not know how attention selects perceptual information then there is a possibility that the cognitive system that underlies selective attention is amodal. (Of course that system may also be modal, but the point is that Barsalou cannot exclude amodal systems from his theory without demonstrating that such a selection mechanism – which “knows” where to focus attention – is modal.)

Luria’s theory offers a modal solution to the problem of attention in the form of a special system in cognitive functioning that is responsible for programming, regulating, and monitoring activity. In addition to afferent modalities, there is an efferent (basically motor) modality. The same system supports attention (Luria 1969; 1973). It is possible that isolation of novel information in perception for subsequent storage in long-term memory is realized by that efferent system. There is some supporting evidence, for example, the disappearance of stabilized retinal images (Alpern 1972; Cornsweet 1970), showing that activity (eye-movement) is necessary for visual sensation. There is also evidence that visual agnosias are accompanied by disorders in eye-movements (Luria 1969; 1973). This finding indicates that activity is also necessary for the functioning of central perceptual mechanisms.

The relationship between consciousness and introspection is unclear. Barsalou suggests that conscious experience may be necessary for the symbol formation process to occur initially (sect. 2.2). By “consciousness,” Barsalou means awareness of experience or qualia (personal communication, December 1, 1998). At the same time, he assumes that the term “perceptual” refers both to the sensory modalities and to introspection (sect. 2.3.1). According to him, introspection represents experiences (representational states, cognitive operations, and emotional states; sect. 2.3.1). It is not clear how it is possible to be aware of experiences for building perceptual-introspective knowledge about the same experiences. We return to this problem below.

Information about a world that is not accessible to the senses cannot be based on perceptual symbols alone. Barsalou assumes that every kind of knowledge can be represented with perceptual symbols. The perceptual theory of knowledge is not very convincing regarding abstract concepts even though the solution proposed there is not entirely implausible. However, there exists a kind of knowledge that must be amodal in essence: the knowledge about a world which is qualitatively out of reach of our senses. Humans do not possess perceptual mechanisms for perceiving electromagnetic fields, electrons, and other submolecular particles. None of the modalities of human perception can react to a neutrino. How such knowledge is constructed is not explained in Barsalou’s theory (note 18 partly recognizes this problem).

One solution to the question of how novel information is generated is proposed by Lotman (e.g., 1983; 1984; 1989). According to him, novel information emerges only if at least two different mechanisms of information processing interact, the novelty emerging when the results of one type of information processing are “translated” into the other. Humans possess such mechanisms, sensory-perceptual and amodal-linguistic. Novel information is created in the process of semiotic mediation, in the “dialogue” between the perceptual and linguistic mechanisms. In the process of semiotic mediation, perceptual events acquire entirely new meanings (cf. Lotman 1983; Toomela 1996). The same basic mechanism may be responsible for the emergence of novel knowledge about a world that is accessible to the senses. In this case novelty emerges when information about the same object or phenomenon is processed in a dialogue between different sensory modalities.

Introspection may be a kind of semiotically mediated knowledge. It is not possible to go into detail here, but it is worth considering the possibility that introspection and consciousness represent a qualitatively novel knowledge that is not accessible to our senses. It might not be accidental that “introspective processing is poorly understood” (sect. 2.3.1). To the best of my knowledge, there is no direct evidence that humans possess an introspective system analogous to sensory mechanisms. Rather, introspection might be a result of semiotic mediation (cf. Luria 1973; 1979; Vygotsky 1925/1982; 1934/1996).

The idea that humans possess different mechanisms for processing the same information fits with some of Barsalou’s conclusions. First, the right hemisphere represents individuals in perception and comprehension and the left hemisphere represents simulators that construe them (note 29). It is also possible, however, that the right hemisphere processes information more in terms of perceptual characteristics and the left hemisphere more with “linguistic simulators” (cf. Lotman 1983). Second, in sect. 3.4.2, Barsalou refers to the finding that abstract concepts tend to activate frontal brain regions. Prefrontal brain regions are also known to be related to verbal regulation of behavior and verbal thinking (Luria 1969; 1973). The construction of abstract concepts may require semiotically mediated thinking supported by frontal structures. Finally, Barsalou suggests that the dissociation of visual agnosia and optic aphasia may provide evidence for two different ways of activating a common representational system (note 3). That interpretation also accords with the idea that the same information might be processed in different ways in the same cognitive system.

Most of my remarks have concerned information in footnotes where undeveloped aspects of the theory are recognized. There are many theoretically important ideas in the body of the target article that not only fit with other perceptual theories but also extend our understanding of the role of perception in cognition. Some interesting ways to improve the perceptual theory of knowledge might be found in Vygotsky-Luria’s school of psychology and in Lotman’s school of semiotics.

ACKNOWLEDGMENT
This work was supported by The Estonian Science Foundation.

External symbols are a better bet than perceptual symbols

A. J. Wells
Department of Social Psychology, The London School of Economics and Political Science, London WC2A 2AE, England. [email protected]

Abstract: Barsalou’s theory rightly emphasizes the perceptual basis of cognition. However, the perceptual symbols that he proposes seem ill suited to carry the representational burden entailed by the architecture in which they function, given that Barsalou accepts the requirement for productivity. A more radical proposal is needed in which symbols are largely external to the cognizer and linked to internal states via perception.

I strongly endorse Barsalou’s emphasis on the perceptual nature of cognition and I share his view that theories based on amodal symbol systems suffer from serious difficulties. Transduction and symbol grounding (cf. sect. 1.2.2), in particular, are fundamental problems. Barsalou proposes that at least some of the problems can be solved by hypothesizing a class of internal representations, that is, perceptual symbols, that stand in a different relation to the proximal stimuli that produced them than do amodal symbols. The proposal looks radical at first sight but on further examination seems rather modest. It is precisely the modesty of the proposal that gets it into difficulties and I wish to recommend a more radical proposal.

Much traditional theorizing about the linkages between the external world and internal thought processes acknowledges the following broad classes of entities. There are distal events and objects, proximal stimuli, perceptual states and cognitive states. Barsalou seems to acknowledge these classes. Roughly, it is assumed that distal entities cause proximal stimuli, which cause perceptual states, which cause cognitive states. The latter are often assumed to consist (at least in part) of structured symbolic expressions. Cognitive computation over input symbol structures leads to output symbol structures that guide behavior. Behavior can have causal consequences for distal entities thus closing the causal loop.

Barsalou’s proposal is modest because it is concerned, essentially, with just one link in the chain, the link between perceptual states and symbol structures. Significantly, his proposal does not require a revised account of either the perceptual states that produce symbol structures or the cognitive computations that transform them. His core claim is that changing the nature of the symbol structures is sufficient to repair the deficiencies in amodal symbol systems theorizing. Thus he proposes simply that amodal symbol structures should be replaced with perceptual symbol structures which have two primary characteristics not shared with amodal structures. Perceptual symbol structures are hypothesized to be modality specific (or multi-modal) and analogical. The force of the latter point is that “the structure of a perceptual symbol corresponds, at least somewhat, to the perceptual state that produced it” (sect. 1.1). Thus the suggestion is that the distinction between perceptual symbols and amodal symbols is akin to the distinction between analogue and digital modes of representation. The primary requirements have a number of further consequences discussed in section 2.2. Perceptual symbols are dynamic not discrete, contextually variable, nonspecific and potentially indeterminate. They have, in other words, a degree of plasticity that is not characteristic of amodal symbols. They are, nevertheless, required to support the full range of computations needed for a fully fledged conceptual system. Most important in this regard, Barsalou argues that perceptual symbol structures must be capable of unbounded generativity and recursive elaboration (sect. 3.1).

Sympathetic though I am to the idea that perception and cognition should be better integrated, I think that Barsalou’s proposal is almost guaranteed not to work. It is essential for unbounded generativity and recursion that primitive symbol tokens be amodal and reliably identifiable independent of context. Barsalou proposes to give up these essential characteristics without a clear picture of how the much sloppier perceptual symbols are supposed to perform the same functions. He is right to emphasize the constraints on symbols arising from the neural foundations of the cognitive system, and right to note that the evidence for amodal symbols is not strong, but changing the nature of the symbols does not seem to be the solution to the problem. If a theory of cognitive processing requires unbounded generativity and recursion, then introducing sloppy symbols is a weakness. A better bet, if you believe that neural systems cannot support amodal symbols and the requisite compositional and recursive processes, is to question the nature of the requirement for productivity. It is also unclear how perceptual symbols solve the transduction problem. It is not clear, for example, that abstraction from perceptual states via selective attention is superior to any other current proposal in this regard.

Can one then characterize the requirement for productivity in a way that relieves the burden on internal representation and allows perceptual processes to play a larger part in cognitive processing generally? I believe that it can be done but it requires a more radical picture of the nature of cognitive architecture than the one Barsalou paints. The traditional view of productivity relies primarily on internal resources for both storage and processing. Hence the requirement for internal structures that can store arbitrarily complex, iterated, nested symbolic expressions. It is quite plausible, however, given the existence of external symbol storage media such as books, that productivity might be understood as an interaction between external symbol structures and internal processes. In the most extreme case there need be no internal symbol storage. An argument in favor of models of cognitive architecture with external symbolic resources has been advanced by Wells (1998) based on a re-analysis of Turing’s theory of computation. If one argues that cognitive architecture extends into the environment of the cognizer, cognition does indeed become integrally linked to perception. Moreover, the representational burden on internal states can be greatly reduced. Donald (1991; see also BBS multiple book review of Donald’s Origins of the Modern Mind BBS 16(4) 1993) also emphasizes the significance of external symbol storage for theories of cognitive architecture.

Barsalou’s proposal demonstrates the need for a theory of this kind. He, among others, has pointed out the substantial difficulties faced by theories based on amodal symbol systems. His achievement in the present target article is to have set out a comprehensive outline of what a theory based on perceptual symbols would be like. He has, so to speak, covered all the angles and indicated the central problems that any theory using perceptual symbols would have to solve. In doing this, he has, I think, shown that such a theory is unlikely to succeed.

ACKNOWLEDGMENT
I am grateful to Bradley Franks for discussion and advice.

Perceiving abstract concepts

Katja Wiemer-Hastings and Arthur C. Graesser
Department of Psychology, The University of Memphis, Memphis, TN 38152-6400. [email protected] [email protected]/faculty/graesser/graesser.htm

Abstract: The meanings of abstract concepts depend on context. Perceptual symbol systems (PSS) provide a powerful framework for representing such context. Whereas a few expected difficulties for simulations are consistent with empirical findings, the theory does not clearly predict simulations of specific abstract concepts in a testable way and does not appear to distinguish abstract noun concepts (like truth) from their stem concepts (such as true).

Do perceptual symbol systems (PSS) solve the “challenge of any perceptual representation system,” that is, can they account for the representation of abstract concepts? Abstract concepts have no perceivable referents. It is therefore challenging to specify their representation. It is well documented that comparatively few attributes are generated for abstract concepts in free generation tasks (Graesser & Clark 1985; Markman & Gentner 1993; McNamara & Sternberg 1983). Hampton (1981) has shown that many abstract concepts are not structured as prototypes.

With such difficulties in mind, it may seem somewhat surprising that a perceptually based theory should provide a compelling framework to represent abstract concepts. However, we argue that the nature of abstract concepts requires such an approach. Previous theories have emphasized that such concepts depend on context (e.g., Quine 1960). Grounding abstract concepts in context, however, only works if context provides useful information for their meaning representation. That is, context itself needs to have a representation that is grounded. And of course, the ultimate grounding mechanism of humans is perception.

Concept representations characterize a concept and distinguish it from other concepts. Barsalou demonstrates that PSS can in principle represent abstract concepts, but can it also handle their differentiation? This issue is important. Abstract concepts are often very similar to each other, therefore requiring very fine distinction. Examples for similar abstract concepts are argument, disagreement, and conflict. The similarity of such word sets should result in very similar simulations because the concepts occur in similar contexts. At the same time, however, we know that there are subtle differences in meaning, and such differences must be captured by representations. Do PSS capture such differences?

Suppose separate groups were asked to list attributes for either argument or for disagreement. Conceivably, both groups would come up with similar lists, suggesting identical representations. However, if asked to generate attributes for both concepts, they might attempt to list distinctive features via a careful comparison of simulations. This illustrates a problem: PSS have the power to represent abstract concepts, but the theory does not provide principled predictions for simulations of particular abstract concepts. Instead, ad hoc simulations account for a concept depending on a context or on contrast concepts.

Contexts of abstract concepts may vary considerably. Hampton (1981) pointed out that abstract concepts are “almost unlimited in the range of possible new instances that could be discovered or invented” (p. 153). This is especially true for a subset of abstract nouns that have a content, such as belief, truth, and idea. Content concepts reflect various degrees of abstraction, as illustrated for the concept of truth:

1. I knew he was not telling me the truth. (Truth is a concrete referent in a situation; it is a “fact.”)

2. I know that he always tells the truth. (Truth is a feature that is shared by a set of statements.)

3. Scientists want to know the truth. (Truth is the general, absolute knowledge of how things are.)

Barsalou’s suggested simulation for truth is appropriate for (1) and (2), in that it involves the comparison of facts and statements. For (3), however, it seems that the simulation needs to express a higher level of abstractness, because this truth (the set of facts) is as yet unknown. It is unclear how PSS would handle cases like (3).

A second interesting aspect of content concepts is that their content can vary infinitely. This may be associated with the observation that abstract concepts are difficult to define and overall take longer to process (for an overview see Schwanenflugel 1991). Consider the following expressions of ideas: He might be in the other room, perhaps you need to dial a 1 first, let’s have pizza, and so on. How can an abstracted representation of idea be constructed from such examples? What do these ideas have in common?

They probably share contextual aspects, such as an initial problem: I cannot find him, I cannot get through, or what should we eat? Then, somebody expresses an idea. Consequently, different things may happen: An opposing reaction (We had pizza yesterday), a new behavior to solve the problem (he was indeed in the other room, or dialing 1 does not help either), or an alternative idea (I’ve just looked there. Perhaps he is outside?). Due to the variation, it may be difficult to identify which aspects are critically related to the word meaning and should therefore be used to frame the simulation. However, if concepts are presented in a specific context, the context can frame the simulation and provide clear referents. Compare the following contexts:

1. “It is too bad that he quit his job. He always had such good ideas!”

2. “I have lost my key somewhere.” “Did you search your jacket pockets?” “No – great idea!”

Example (1) requires some abstraction of idea in the context of a job, whereas in (2), the idea is a concrete suggestion to do something. A simulation should be easier to construct in the second case because the situation sets up an event sequence. This does not present a problem, however, since it is consistent with the observation that abstract language is processed faster if presented in a meaningful context (Schwanenflugel & Shoben 1983).


Finally, it is unclear how a perceptual symbol system distinguishes between a stem concept (“true”) and a complex abstract noun (“truth”). It is true that the moon orbits the earth. Also, occasionally it is true that we are tired. We know that these statements are true by comparison with the facts. It seems that the simulations for true and truth would be the same. However, it could be argued that truth and something being true are very different concepts.

Perceptual symbols in language comprehension: Can an empirical case be made?

Rolf A. Zwaan, Robert A. Stanfield, and Carol J. Madden
Department of Psychology, Florida State University, Tallahassee, FL 32306-1270. freud.psy.fsu.edu:80/~zwaan/

Abstract: Perceptual symbol systems form a theoretically plausible alternative to amodal symbol systems. At this point it is unclear whether there is any truly diagnostic empirical evidence to decide between these systems. We outline some possible avenues of research in the domain of language comprehension that might yield such evidence. Language comprehension will be an important arena for tests of the two types of symbol systems.

Barsalou provides convincing theoretical arguments for why a perceptual symbol system should be preferred over amodal systems: greater parsimony, the ability to predict rather than postdict effects, resolution of the symbol-grounding problem, consistency with the notion of embodiment, and avoidance of an evolutionary quantum leap presupposition. He also discusses examples of empirical evidence. It is not clear to us at this point how diagnostic these are with respect to a comparison between perceptual and amodal symbol systems. For example, Barsalou (sect. 2.3) interprets neuroimaging evidence showing that visual areas of the brain are active during conceptual tasks involving animals and motor areas during conceptual tasks involving tools as support for perceptual symbol systems. Proponents of amodal systems might grant that representations are stored in the corresponding perceptual areas of the brain, but still maintain that these representations are amodal. The problem, as Barsalou (sect. 1.2.2) notes, is that amodal systems might be able to explain (away) many findings on a post-hoc basis.

Because we find Barsalou’s view congenial and inspiring, we attempt to identify some existing and possible lines of research that might have diagnostic value with respect to the comparison between perceptual and amodal symbol systems and as such could bolster the case for perceptual symbol systems empirically. Amodal propositional systems are arguably more dominant in the area of language comprehension than in any other area of cognition. Therefore, it is a crucial arena for evaluating perceptual and amodal symbol systems.

We will first clarify our understanding of language comprehension in a perceptual-symbol framework, extrapolating from Barsalou’s comments. Language comprehension is tantamount to the construction of a mental model of the described state of affairs: a situation model (e.g., Zwaan & Radvansky 1998). Barsalou (sect. 2.4.2) argues that situation models are, in terms of a perceptual-symbol framework, equivalent to surface level simulations only.1 We agree with this characterization. Situation models represent specific situations bound in time and space. As such, they are tokens, whereas related knowledge structures, such as scripts and frames, are types. We understand simulators to be types and simulations to be tokens. Thus, frames and scripts are assemblies of simulators (which differ in that the simulators become activated in a specified temporal order in scripts but not in frames), whereas situation models are simulations generated in part by frames and scripts. These simulations run more smoothly – and perhaps seem more perception-like – when their simulators can be assembled and sequenced quickly. This happens when comprehenders have a great deal of background knowledge, for example in the form of frames and scripts. Thus, skilled text comprehension is analogous (but not identical!) to the perception of unfolding events. Such a perspective might be better equipped to explain how readers become immersed in narrative worlds (Zwaan, in press) than would amodal systems.

How could research on situation models yield evidence that is diagnostic with respect to a comparison between perceptual and amodal symbol systems? We sketch five existing and potential lines of research.

1. Perceptual effects during comprehension. Morrow and Clark (1989) demonstrated that readers’ interpretation of motion words, such as “approach” depends on perceptual characteristics of the situation. Furthermore, comprehenders provide faster responses to words denoting events and objects present in the situation compared to discontinued events and nonpresent objects (e.g., Macdonald & Just 1989; Zwaan 1996). There is no straightforward way to account for these findings using propositional analyses.

2. Rapid integration of perceptual and language-based information. Klatzky et al. (1989) found that the comprehension of verbally described actions is facilitated when subjects are allowed to first form their hand into a shape appropriate for the action (for example, fingers pinched together when reading about throwing a dart).

3. Interference between secondary perceptual/imagery task and situation-model construction. Fincher-Kiefer (1998) recently showed that comprehenders who had to keep a high-imagery sentence unrelated to the text in working memory had greater difficulty constructing a situation model than comprehenders who had to keep a low-imagery sentence in working memory.

4. Effects of embodiment on comprehension. We know of no research in this domain. Yet, this type of evidence might yield strong evidence against amodal systems. For example, it might be that a basketball player represents the situation described by “He picked up the basketball” differently (one-handed) from a smaller person (two-handed). One would also assume that women who have given birth construct different mental representations of a story about childbirth than other women or than men. Amodal systems would have no a priori way to account for this.

5. Physiological effects of comprehension. Clinical psychologists have amassed a great deal of relevant evidence, which is not discussed by Barsalou. For example, Lang and his colleagues have conceptualized and demonstrated how emotional imagery can have powerful physiological effects (Lang 1979).

Presumably, the evidence yielded by these lines of research would not be equivalent in diagnostic value. For example, the rapid integration of language-based and perceptual information and the physiological effects of comprehension are not necessarily predicted by amodal symbol systems, but might be accommodated by them post hoc, maybe with some additional assumptions. The occurrence of perceptual effects during comprehension would be more difficult to accommodate by amodal systems. Even more diagnostic would be demonstrations of interference of unrelated perceptual representations on the construction of situation models. Finally, the most diagnostic evidence, in our view, would be demonstrations of variable embodiment on comprehension processes. Perhaps none of these lines of research will yield convincing evidence for perceptual symbols independently, but taken together they might. At the very least, they would force proponents of amodal symbol systems to postulate an increasingly unwieldy and implausible range of special assumptions to postdict the findings.

Barsalou has provided a viable alternative to reigning amodal symbol systems. He has successfully integrated ideas and concepts from linguistics, philosophy, and neuroscience into a cognitive-psychological framework. The perceptual-symbol framework shines a searchlight on research topics like the five listed above, which have received scattered attention in the past. As such, it could provide a unified account of the extant and future findings and thus influence the agenda for cognitive science in years to come.

NOTE
1. This might not be entirely accurate for mental models in general, because some are more generic and therefore more like simulators; see Johnson-Laird’s (1983, pp. 422–29) typology.

Author’s Response

Perceptions of perceptual symbols

Lawrence W. Barsalou
Department of Psychology, Emory University, Atlanta, GA 30322. [email protected] userwww.service.emory.edu/~barsalou/

Abstract: Various defenses of amodal symbol systems are addressed, including amodal symbols in sensory-motor areas, the causal theory of concepts, supramodal concepts, latent semantic analysis, and abstracted amodal symbols. Various aspects of perceptual symbol systems are clarified and developed, including perception, features, simulators, category structure, frames, analogy, introspection, situated action, and development. Particular attention is given to abstract concepts, language, and computational mechanisms.

I am grateful for the time and energy that the commentators put into reading and responding to the target article. They raised many significant issues and made many excellent suggestions. Even the strongest criticisms were useful in clarifying misunderstandings and in addressing matters of importance. I have organized the commentaries into two general groups: (1) those that defended amodal symbol systems and (2) those that addressed the properties of perceptual symbol systems. Because the second group was so large, I have divided it into four smaller sections. In the first, I address a wide variety of issues surrounding perceptual symbol systems that include perception, features, simulators, frames, introspection, and so forth. The final three sections address the topics raised most often: abstract concepts, language, and computational mechanisms.

R1. Defense of amodal symbol systems

No commentator responded systematically to the criticisms of amodal symbol systems listed in sections 1.2.2 and 1.2.3 of the target article. Little direct evidence exists for amodal symbols; it is difficult to reconcile amodal symbols with evidence from neuroscience; the transduction and grounding of amodal symbols remain unspecified; representing space and time computationally with amodal systems has not been successful; amodal symbol systems are neither parsimonious nor falsifiable. Expecting a single commentator to address all of these concerns in the space provided would certainly be unfair. I was struck, however, by how little attempt overall there was to address them. Having once been deeply invested in amodal symbol systems myself (e.g., Barsalou 1992), I understand how deeply these convictions run. Nevertheless, defending amodal symbol systems against such criticisms seems increasingly important to maintaining their viability.

A handful of commentators defended amodal systems in various other ways. Although most of these defenses were concrete and direct, others were vague and indirect. In addressing these defenses, I proceed from most to least concrete.

R1.1. Amodal symbols reside in sensory-motor areas of the brain. The defense that struck me as most compelling is the suggestion that amodal symbols reside in sensory-motor regions of the brain (Aydede; Zwaan et al.). A related proposal is that perceptual representations are not constitutive of concepts but only become epiphenomenally active during the processing of amodal symbols (Adams & Campbell). In the target article, I suggested that neuroscience evidence implicates sensory-motor regions of the brain in knowledge representation (sects. 2.1, 2.2, and 2.3). When a lesion exists in a particular sensory-motor region, categories that depend on it for the perceptual processing of their exemplars can exhibit knowledge deficits. Because the perception of birds, for example, depends heavily on visual object processing, damage to the visual system can produce a deficit in category knowledge. Neuroimaging studies of humans with intact brains similarly show that sensory-motor areas become active during the processing of relevant categories. Thus, visual regions become active when accessing knowledge of birds.

Several commentators noted correctly that these findings are consistent with the assumptions that (a) amodal symbols represent concepts and that (b) these symbols reside in sensory-motor regions (Adams & Campbell; Aydede; Zwaan et al.). If these assumptions are correct, then damage to a sensory-motor region could produce a deficit in category knowledge. Similarly, sensory-motor regions should become active when people with intact brains process categories.

This is an important hypothesis that requires careful empirical assessment. Damasio’s (1989) convergence zones provide one way to frame the issue. According to his view, local associative areas in sensory-motor regions capture patterns of perceptual representation. Later, associative areas reactivate these sensory-motor representations to simulate experience and thereby support cognitive processing. As the quotation from Damasio (1989) in section 2.1 illustrates, he believes that reactivating sensory-motor representations is necessary for representing knowledge – activation in a nearby associative area never stands in for them. In principle, however, activation in local associative areas could stand in for sensory-motor representations during symbolic activity, thereby implementing something along the lines of amodal symbols, with perceptual representations ultimately being epiphenomenal.
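To see what is at stake in this contrast, a minimal pattern-completion sketch in Python with NumPy may help (an illustration under simplifying assumptions, not Damasio’s or Barsalou’s model; the patterns and network size are arbitrary). An association area is modeled as a Hebbian weight matrix that captures sensory-motor patterns; a partial bottom-up cue later lets those weights reactivate the full sensory-motor pattern top-down. The question is whether the reactivated pattern itself does the representational work, or whether the weights in the association area could stand in for it, with the sensory-motor reactivation being epiphenomenal.

    import numpy as np

    # Two stored "sensory-motor" patterns (+1/-1 coded), e.g., for two categories.
    patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                         [ 1, -1,  1, -1,  1, -1,  1, -1]], dtype=float)

    # The association area captures each pattern Hebbian-style (Hopfield-like storage).
    W = sum(np.outer(p, p) for p in patterns)
    np.fill_diagonal(W, 0)

    # A degraded cue: only part of the first pattern is currently driven bottom-up.
    cue = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)

    state = cue.copy()
    for _ in range(5):                 # top-down reactivation settles on the stored pattern
        state = np.sign(W @ state)

    print(state)                       # recovers the full first sensory-motor pattern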

Behavioral findings argue against this proposal. Studies cited throughout the target article show that perceptual variables predict subjects’ performance on conceptual tasks. For example, Barsalou et al. (1999) report that occlusion affects feature listing, that size affects property verification, and that detailed perceptual form predicts property priming. The view that activation in associative areas represents concepts does not predict these effects or explain them readily. Instead, these effects are more consistent with the view that subjects are reactivating sensory-motor representations. Studies on language comprehension similarly exhibit such effects (sects. 4.1.6, R4.5, and Zwaan et al.).


Further empirical evidence is clearly required to resolve this issue with confidence. Neuroimaging studies that assess both sensory-motor and local associative areas during perception and conception should be highly diagnostic. Similarly, lesion studies that assess simultaneous deficits in perception and conception should be useful, as well. If sensory-motor representations are epiphenomenal, then patients should frequently be observed who exhibit deficits in sensory-motor representation but not in corresponding knowledge.

R1.2. The causal theory of concepts. Ignoring my explicit proposals to the contrary, Aydede claims that I addressed only the form of perceptual symbols and ignored their reference. From this flawed premise, he proceeds to the conclusion that a causal theory of amodal symbols must be correct. On the contrary, the target article clearly addressed the reference of perceptual symbols and agreed with Aydede that causal relations – not resemblances – are critical to establishing reference. Section 2.4.3 states that simulators become causally linked to physical categories, citing Millikan (1989). Similarly, section 3.2.8 is entirely about the importance of causal mechanisms in establishing the reference of perceptual symbols, citing Dretske (1995). Indeed, this section makes Aydede’s argument that resemblance is inadequate for establishing a symbol’s reference. Although resemblance is often associated with perceptual views of knowledge, I took great pains to dissociate the two.

Although I would have valued Aydede’s analysis of my actual proposal, reminding him of it provides an opportunity for elaboration. I agree with the many philosophers who hold that much conceptual knowledge is under the causal or nomic control of physical categories in the environment. Indeed, this assumption underlies all learning theories in cognitive science, from exemplar theories to prototype theories to connectionism. In this spirit, section 2.4.3 proposed that simulators come under the control of the categories in the world they represent. As a person perceives the members of a category, sensory-motor systems are causally driven into particular states, which are captured by various associative areas. The target article does not specify how these causal processes occur, disclaiming explicitly any attempt to provide a theory of perception (sect. 2, para. 1–3). Nor does it make any claim that perceptual states resemble the environment (although they do in those brain regions that are topographically mapped). It is nevertheless safe to assume that causal processes link the environment to perceptual states. The critical claim of perceptual symbol systems is that these causally produced perceptual states, whatever they happen to be, constitute the representational elements of knowledge. Most critically, if the environment is causally related to perceptual states, it is also causally related to symbolic states. Thus, perceptual symbol systems constitute a causal theory of concepts. Simulators arise in memory through the causal process of perceiving physical categories.

In contrast, consider the causal relations in the amodal theories that Aydede apparently favors (as do Adams & Campbell). According to these views, physical entities and events become causally related to amodal symbols in the brain. As many critics have noted, however, this approach suffers the problems of symbol grounding. Exactly what is the process by which a physical referent becomes linked to an amodal symbol? If philosophers want to take causal mechanisms seriously, they should provide a detailed process model of this causal history. In attempting to do so, they will discover that perception of the environment is essential (Harnad 1987; 1990; Höffding 1891; Neisser 1967), and that some sort of perceptual representation is necessary to mediate between the environment and amodal symbols. Aydede notes that perceptual representations could indeed be a part of this sequence, yet he fails to consider the implication that potentially follows: Once perceptual representations are included in the causal sequence, are they sufficient to support a fully functional conceptual system? As I argue in the target article, they are. If so, then why include an additional layer of amodal symbols in the causal sequence, especially given all the problems they face? Nothing in Aydede’s commentary makes a case for their existence or necessity.

R1.3. Supramodal systems for space and time. Two commentaries suggest that spatial processing and temporal processing reside in amodal systems (Freksa et al.; Mitchell & Clement). According to this view, a single spatial system subserves multiple modalities, allowing coordination and matching across them. Thus, the localization of a sound can be linked to an eye movement that attempts to see the sound’s source. Similarly, a single temporal system subserves multiple modalities, coordinating the timing of perception and action. Because the spatial and temporal systems are not bound to any one modality, they are amodal, thereby constituting evidence for amodal symbol systems.

To my knowledge, the nature of spatial and temporal systems remains an open question. To date, there is no definitive answer as to whether each modality has its own spatial and temporal processing system, or whether general systems serve all modalities. Indeed, both modality-specific and modality-general systems may well exist. Ultimately, this is an empirical question.

Imagine, however, that domain-general systems are found in the brain. In the framework of the target article, they would not constitute amodal systems. As defined in section 1.2, an amodal symbol system contains representations that are transduced from perceptual states – they do not contain representations from perception. If domain-general systems for perceiving space and time exist, their representations do not satisfy this definition for amodal symbols. To see this, consider the representations that would arise in domain-general systems during perception. Although these representations are not modality-specific, they are nevertheless perceptual. They constitute fundamental parts of perception that integrate the specific modalities – what I will call supramodal representations. To appreciate why these representations are not amodal, consider the representation of space and time during conceptual processing. If supramodal representations in perception are also used to represent space and time during conceptual processing, they constitute perceptual symbols. Amodal symbols for space and time would only exist if supramodal representations during perception were transduced into a new representation language that supports conceptual processing. Should no such transduction process occur, supramodal representations of time and space form central components of perceptual symbol systems.

R1.4. Latent semantic analysis (LSA). Based on my arguments about artificial intelligence (sect. 4.4), Landauer takes me to mean that no silicon-based system could acquire human knowledge. He then shows how computers that implement LSA mimic humans on a wide variety of knowledge-based tasks. I never claimed, however, that amodal systems cannot mimic humans behaviorally! Indeed, section 1.2 noted that amodal symbols have so much power that they can probably mimic all human behavior. My claim in section 4.4 was different, namely, if knowledge is grounded in sensory-motor mechanisms, then humans and computers will represent knowledge differently, because their sensory-motor systems differ so radically. I acknowledged the possibility that future technological developments might produce computers with sensory-motor systems closer to ours, in which case their knowledge could be represented similarly. No one is currently under the illusion, however, that the input-output systems of today’s computers are anything like human sensory-motor systems. Thus, if knowledge is implemented in perceptual systems, it would have to take very different forms.

In making his case for LSA, Landauer fails to address the problems I raise for amodal symbol systems in section 1.2: (1) What is the direct evidence that co-occurrence frequency between words controls performance? A system based on co-occurrence frequencies can mimic human behavior, thereby providing indirect evidence that this construct is useful, but is there evidence implicating this basic unit of analysis directly? (2) What is the neural story? Do lesion and neuroimaging data provide support? (3) How are perception and conception linked? Everyone would agree that a conceptual system helps us interpret perceived events in the world, but how do word co-occurrences map onto perceived individuals in the world? When we perceive an object, how is it categorized using LSA? When we conceptualize an object, how do we find its referent in the world? (4) How does LSA represent knowledge about space and time? It is difficult to see how a system that only tracks word co-occurrence can represent these basic aspects of perception. Other concerns about LSA arise, as well. How does LSA accomplish the conceptual functions addressed in the target article, such as distinguishing types and tokens, implementing productivity, and representing propositions? Using only word correlations, it does not seem likely that it can implement these basic human abilities. Finally, are co-occurrences tracked only between words, or between corresponding amodal symbols, as well? If the latter is the case, what is the nature of this amodal system, and how is it related to language?

It is important to note that correlation – not causation – underlies LSA's accounts of human data. Because word co-occurrence exists in the world and is not manipulated experimentally, infinitely many potential variables are confounded with it. To appreciate this problem, recall how LSA works. On a given task, the likelihood of a particular response (typically a word) is predicted by how well it correlates with the overall configuration of stimulus elements (typically other words). Thus, responses that are highly correlated with stimulus elements are more likely to be produced than less correlated responses. The problem is that many, many other potential variables might well be correlated with word co-occurrence, most notably, the perceptual similarity of the things to which the words refer. It is an interesting finding that computer analyses of word correlations produce a system that mimics human behaviors. It does not follow, though, that these correlations are causally important. Of course the human brain could work like LSA, but to my knowledge, no evidence exists to demonstrate this, or to rule out that variables correlated with word co-occurrence are the critical causal factors.
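To make the purely correlational character of this computation concrete, here is a minimal Python sketch of an LSA-style predictor. The toy corpus, the choice of two retained dimensions, and the cosine measure are all illustrative assumptions on my part, not Landauer's implementation; the point is only that responses are ranked by how well their reduced co-occurrence vectors correlate with the stimulus configuration, with nothing in the computation distinguishing causal from merely correlated structure.

```python
# Minimal LSA-style sketch (toy corpus and parameters are illustrative assumptions).
import numpy as np

docs = [
    "the cat purred on the warm mat",
    "the dog barked at the cat",
    "the kettle whistled on the warm stove",
    "the dog slept on the mat",
]
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}

# Word-by-document co-occurrence counts.
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# Reduce the count matrix with SVD; k is an arbitrary choice here.
U, S, _ = np.linalg.svd(counts, full_matrices=False)
k = 2
word_vecs = U[:, :k] * S[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict(stimulus_words, candidates):
    """Rank candidate responses by similarity to the stimulus configuration."""
    stim = word_vecs[[index[w] for w in stimulus_words]].mean(axis=0)
    return sorted(candidates, key=lambda w: cosine(word_vecs[index[w]], stim), reverse=True)

print(predict(["cat", "mat"], ["purred", "whistled"]))
```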

A number of empirical findings suggest that knowledge is not grounded in word co-occurrence. First, aphasics typically lose only language; they do not lose knowledge (Lowenthal). If knowledge were simply represented in word co-occurrence, they should lose knowledge as well. Second, Glenberg reports that perceptual affordances enter into language comprehension, and that word co-occurrence cannot explain these results (also see sects. 4.1.6 and R4.5). Landauer believes that this is because words for certain idioms, metaphors, and modern phrases have not been included in the language corpus that underlies LSA, but again the problem of correlation versus causation arises. Landauer wants to believe that these missing entries have causal significance, when actually their status is merely correlational. Because Glenberg's affordances are also correlational, more powerful laboratory techniques are needed to tease these factors apart. To the extent that sensory-motor affordances are implicated in conceptual processing, however, LSA has no way of handling them. In this spirit, Solomon and Barsalou (1999a; 1999b) controlled the strength of lexical associations in the property verification task and found effects of perceptual variables. In a task involving only linguistic stimuli, nonlinguistic factors significantly affected performance when linguistic factors were held constant.

One way to extend LSA would be to apply its basic associative mechanism to perceptual elements. Just as co-occurrences of words are tracked, so are co-occurrences in perception. Depending on the specific details of the formulation, such an approach could be virtually identical to the target article's proposal about frames. As described in section 2.5, a frame accumulates components of perception isolated by selective attention, with the associative strength between components reflecting how often they are processed together. Thus, the co-occurrence of perceptual components lies at the heart of these perceptually grounded frames. Notably, however, it is proposed that these structures are organized by space and time, not simply by associative strength. To later simulate perceptual experience, the mere storage of associations will not be sufficient – extensive perceptual structure will be necessary as well. If so, a simple extension of LSA from words to perception probably will not work.
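A minimal sketch of such a co-occurrence-based frame follows. The class, its fields, and the example components are hypothetical illustrations, not the target article's mechanism; notice that the structure records only associative strength, exactly the limitation noted above – it carries none of the spatio-temporal organization that simulating perceptual experience would require.

```python
# Sketch of a frame that accumulates co-occurrence strength among attended
# perceptual components (data structure and component names are illustrative).
from collections import defaultdict
from itertools import combinations

class Frame:
    def __init__(self, category):
        self.category = category
        self.strength = defaultdict(float)   # association between component pairs
        self.frequency = defaultdict(int)    # how often each component is attended

    def store_experience(self, attended_components):
        """Superimpose one perceived instance: components processed together
        strengthen their pairwise associations."""
        for c in attended_components:
            self.frequency[c] += 1
        for a, b in combinations(sorted(set(attended_components)), 2):
            self.strength[(a, b)] += 1.0

car = Frame("car")
car.store_experience(["wheel", "door", "engine-noise"])
car.store_experience(["wheel", "door", "seat"])
print(car.strength[("door", "wheel")])   # 2.0: co-attended in both experiences
```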

Furthermore, we have known since Chomsky (1957) that the human brain is not simply a generalization mechanism. Besides generalizing around known exemplars, the brain produces infinite structures from finite elements productively. Not only is this true of language, it is also true of conception (Fodor & Pylyshyn 1988). People can conceive of concepts that they have never encountered and that are not generalizations of past concepts. Neither a word-based nor a perception-based version of LSA can implement productivity. There is no doubt that the human brain is an associative device to a considerable extent, yet it is also a productive device. Although LSA may explain the associative aspects of cognition, it appears to have no potential for explaining the productive aspects.

Finally, in a superb review of picture-word processing, Glaser (1992) concluded that a mixture of two systems accounts for the general findings in these paradigms. Under certain conditions, a system of word associations explains performance; under other conditions, a conceptual system with a perceptual character explains performance; under still other conditions, mixtures of these two systems are responsible. Based on my own research, and also on reviews of other relevant literature, I have become increasingly convinced of Glaser's general thesis, although perhaps in a somewhat more radical form: People frequently use word associations and perceptual simulations to perform a wide variety of tasks, with the mixture depending on task circumstances (e.g., Barsalou et al. 1999; Solomon & Barsalou 1999a; 1999b; Wu & Barsalou 1999). Furthermore, I have become convinced that LSA provides an excellent account of the word association system (as does Hyperspace Analogue to Language, HAL; Burgess & Lund 1997). When people actually use word associations to perform a task, LSA and HAL provide excellent accounts of the underlying processing. However, when a task demands true conceptual processing, or when the word association strategy is blocked, the conceptual representations required are implemented through perceptual simulation. Although such simulations may rely on structures of associated perceptual components, they also rely on nonassociative mechanisms of perception and productivity.

R1.5. Amodal symbols arise through abstraction. Adopting the framework of connectionism, Gabora argues that amodal symbols arise as similar experiences are superimposed onto a common set of units. As idiosyncratic features of individual experiences cancel each other out, shared features across experiences coalesce into an attractor that represents the set abstractly. I strongly agree that such learning mechanisms are likely to exist in the brain, and that they are of considerable relevance for perceptual symbol systems. Most important, however, this process does not satisfy the definition of amodal symbols in section 1.2. To the extent that Gabora's learning process occurs in the original units that represent perception, it implements a perceptual symbol system, even if generic attractors evolve over learning. As described in section 2.5 and shown in Figure 3, repetitions of the same perceptual symbol are superimposed to produce increasingly generic components of frames. Conversely, Gabora's learning process would only become an amodal symbol system if a population of nonperceptual units were to recode perceptual activity, as in a feedforward net with hidden units (sect. 1.2). As long as no such recoding takes place, however, I view Gabora's proposal as highly compatible with the proposal that abstractions are grounded perceptually. Schwartz et al. make virtually the same proposal as Gabora but view it as an implementation of a perceptual symbol system, not as an implementation of an amodal symbol system.
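The following toy sketch, under assumptions of my own (random vectors standing in for experiences over a fixed pool of units), illustrates the superimposition idea: shared structure accumulates while idiosyncratic noise tends to cancel, yielding a generic pattern. Because the averaged pattern lives in the same units that carried the original experiences, no recoding into a separate population occurs, which is exactly why such a process still counts as perceptual rather than amodal.

```python
# Sketch of superimposing experiences onto one shared pool of (perceptual) units.
# Shared structure accumulates; idiosyncratic noise tends to cancel out.
import numpy as np

rng = np.random.default_rng(0)
n_units = 50
shared = rng.normal(size=n_units)        # structure common to all category instances

pool = np.zeros(n_units)                 # the shared units onto which experiences superimpose
n_experiences = 200
for _ in range(n_experiences):
    idiosyncratic = rng.normal(scale=2.0, size=n_units)
    pool += shared + idiosyncratic       # simple superposition (running sum)

generic = pool / n_experiences           # average pattern acts like an attractor/prototype
print(f"correlation with shared structure: {np.corrcoef(generic, shared)[0, 1]:.2f}")
```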

R1.6. General skepticism. The commentaries by Landau and Ohlsson raise specific criticisms of perceptual symbol systems that I will address later. Here I address their general belief that amodal symbol systems must be correct. Landau and Ohlsson both view perceptual symbol systems as essentially a classic empiricist theory. Because they believe that prior empiricist theories have been completely discredited, they believe that perceptual symbol systems could not possibly work either.

First, these commentaries prove the point I made in the target article that perceptual approaches to knowledge are often oversimplified for the sake of criticism (sect. 1.3). I do not have the sense that Landau and Ohlsson have appreciated this point, or have made much of an attempt to entertain anything but a simplistic perceptual view. I continue to believe that when more substantial and ambitious perceptual theories are entertained, they are not only plausible and competitive, they are superior.

Second, the theory in the target article departs significantly from classic empiricist theories. Whereas prior theories focused on consciously experienced images, perceptual symbol systems focus on neural states in sensory-motor systems (sect. 2.1). To my mind, this is the most exciting feature of this proposal, primarily because it leads one to think about the perceptual view in provocative new ways.

Third, I explicitly stated that my theory is not a simple empiricist view of knowledge, appealing to genetically regulated mechanisms that structure brain development (sect. 2.4.1). In particular, I proposed that genetically based mechanisms for space, objects, movement, and emotion organize perceptual symbols. Landau and Ohlsson seem to believe that nativism is the answer to the ills of empiricism. However, it is one thing to view nativism naively as a panacea for every difficult problem, and quite another to specify the complex epigenetic processes that characterize the genetic regulation of development (Elman et al. 1996).

Fourth, perceptual symbol systems go considerably further than previous theories in laying out a fully functional theory of concepts, largely because the criteria for such a theory were not known until modern times. To my knowledge, no prior perceptual theory of knowledge has accomplished this. Although prior theories have included mechanisms that had potential for implementing these functions (sect. 1.3), they were typically not developed in these ways.

Finally, in embracing amodal symbol systems as the only reasonable alternative, Landau and Ohlsson fail to address the problems raised for amodal symbol systems in section 1.2. Without resolving these major problems, it is difficult to see how anyone could have such complete confidence in a theory.

R2. Specifying perceptual symbol systems

The majority of the commentaries addressed specific aspects of perceptual symbol systems. Some commentaries attempted to show that empirical findings or theoretical arguments support some aspect of the theory. Others developed a particular aspect in a positive manner. Still other commentaries attempted to undermine the theory by raising problems for some aspect. I have organized these remarks around each aspect of the theory addressed. For each, I have combined the relevant commentaries with the hope of clarifying and developing that part of the theory.

R2.1. Perception, perceptual symbols, and simulation. Quite remarkably, Aydede claims that there is no way to distinguish perceptual and nonperceptual systems independently of modal and amodal symbols. Given the huge behavioral and neural literatures on perception, this is a rather surprising claim. A tremendous amount is known about the perceptual systems of the brain independent of anything anyone could possibly say about conceptual systems. Clearly, we know a lot about perceptual abilities and the neural mechanisms that implement them. Identifying these abilities and mechanisms independently of perceptual symbols is not only possible but has long since been accomplished in the scientific disciplines that study perception. In defining perceptual symbols, I simply used these well-recognized findings. Thus, section 2.1 proposed that well-established sensory-motor mechanisms not only represent edges, vertices, colors, and movements in perception but also in conception. Perceptual symbols follow the well-trodden paths of perception researchers in claiming that the representational mechanisms of perception are the representational mechanisms of knowledge. What is perceptual about perceptual symbol systems is perception in the classic and well-established sense. The exceptions are introspective symbols (sect. 2.3.1), whose neural underpinnings remain largely unexplored (except for emotion and motivation).

Brewer similarly worries that perceptual symbols, neurally defined, are of little use because we currently have no knowledge of the neural circuits that underlie them. Again, however, we do have considerable knowledge of the neural circuits that underlie sensory-motor processing. Because perceptual symbols are supposed to use these same circuits to some extent, we do know something about the neural configurations that purportedly underlie them.

Although perceptual symbols use perceptual mechanisms heavily, this does not mean that conceptual and perceptual processing are identical. In the spirit of Lowenthal's commentary, important differences must certainly exist. As noted in sections 2.1 and 4.3 of the target article, perceptual symbols probably rely more heavily on memory mechanisms than does standard perception. Furthermore, the thesis of section 2.4.7 is that a family of representational processes capitalizes on a common set of representational mechanisms in sensory-motor regions, while varying in the other systems that use them. This assumption is also implicit throughout the subsections of section 4.1, which show how basic cognitive abilities could draw on the same set of representational mechanisms. The point is not that these abilities are identical, only that they share a common representational basis.

Mitchell & Clement claim that I never define "simulation" adequately. It is clear throughout the target article, however, that a simulation is the top-down activation of sensory-motor areas to reenact perceptual experience. As section 2.4 describes in some detail, associative areas reactivate sensory-motor representations to implement simulations.

Wells claims that the extraction of perceptual symbols (sect. 2.2) is not an improvement over the unspecified transduction process associated with amodal symbols (sect. 1.2). The improvement, as I see it, is that the extraction of perceptual symbols is a well-specified process. It is easy to see how selective attention could isolate and store a component of perception, which later functions as a perceptual symbol. In contrast, we have no well-specified account of how an amodal symbol develops.

Finally, I would like to reiterate one further point about my use of "perception." The use of this term in "perceptual symbol systems" is not meant to imply that movements and action are irrelevant to knowledge. On the contrary, many examples of perceptual symbols based on movements and actions were provided throughout the target article. Rather, "perception" refers to the perception of experience from which perceptual symbols are extracted, including the perception of movement and action. Thus, "perception" refers to the capture of symbolic content, not to the content itself. For additional discussion of perceptual representations in knowledge, see Brewer and Pani (1983) and Pani (1996).

R2.2. Features and attention. One could take Wells's concern about perceptual symbol extraction being unconstrained to mean that it provides no leverage on the problem of what can be a feature. Landau makes this criticism explicitly. Notably, the amodal account that Landau apparently prefers places no constraints on features at all. As the introduction to section 2 in the target article noted, amodal theories have not resolved the feature issue themselves. It is puzzling that what counts as evidence against the perceptual view is not also counted as evidence against the amodal view.

Furthermore, as Landau notes, perceptual views contain the seeds of a solution to this problem. Specifically, she notes that classic empiricist theories define conceptual features as features that arise from the interaction of the sense organs with the physics of the environment. In other words, the features of conception are the features of perception. Clearly, not all sensory distinctions make their way to the conceptual level, for example, opponent color processing and center-surround inhibition. Yet many conceptual features do appear to have their roots in sensory systems, including the features for shape, color, size, and texture that occur frequently in reports of conceptual content. Although Landau does not credit perceptual symbol systems with drawing on perception for features, they certainly do, given the explicit mention of perceptually derived features in section 2.1.

Because of its complexity, I did not attempt to resolve the feature problem in the target article. The features of conception clearly go beyond perceptual features, as I noted in Note 5. Most important, high-level goals and conceptual structures capture aspects of perception that become features through selective attention. In most cases, the information selected is not a feature computed automatically in sensory processing but a higher-order configuration of basic sensory-motor units. It was for this reason that I cited Schyns et al. (1998) as showing that conceptual features clearly go beyond perceptual features.

The architecture of perceptual symbol systems is ideally suited to creating new features that exceed sensory-motor features. Because high-level frames and intuitive theories are represented as perceptual simulations (sects. 2.5 and 4.1.2), they are well suited for interacting with perception to isolate critical features. Furthermore, the symbol formation process in section 2.2 is well suited for creating new features through selective attention. A great deal clearly remains to be specified about how this all works. It is much less clear how amodal symbol systems accomplish the construction of novel features, again because they lack an interface between perception and cognition.

Schwartz et al. argue that I fail to specify how attention is controlled during the formation of perceptual symbols, such that perceptual symbols are as arbitrary as amodal symbols. Again, however, perceptual systems provide natural constraints on the formation of perceptual symbols. One prediction is that when empirical studies examine this process, they will find that perceptual symbols exhibit a weak bias for following the structure of perception. By no means, however, must perceptual symbols follow only this structure. Because selective attention is so flexible, it can focus on complex configurations of perceptual information to establish symbols that serve higher goals of the system. The open-endedness of the symbols that people use should not be viewed as a weakness of perceptual symbol systems. Instead, it is something that any theory needs to explain, and something that perceptual symbol systems explain through the flexible use of selective attention (sect. 2.2). It would be much more problematic if the theory were simply limited to symbols that reflect the computations of low-level sensory-motor processing.

Ultimately, Schwartz et al. have another agenda. They argue that schematic category representations result from recruitment and fractionation, not from selective attention. It is implicit in their view that holistic perceptual states are recorded during learning, with differences across them canceling out to produce schematic representations of perceptual symbols. Similarly, Gabora argues for the importance of these mechanisms. Although I endorse these mechanisms, I strongly disagree with the view that selective attention plays no role in learning. As discussed in sections 2.2 and 4.1.3, selective attention has been strongly implicated in learning across decades of research in many areas. Furthermore, it is naive to believe that holistic recordings of perception occur, as Hochberg argues eloquently (also see Hochberg 1998). Being highly flexible does not make selective attention irrelevant to conceptual processing. Instead, it is absolutely essential for creating a conceptual system (sects. 1.4 and 4.1.3).

Hochberg and Toomela each propose that active information-seeking and goal pursuit guide selective attention and hence the acquisition of perceptual symbols. Glenberg's arguments about the importance of situated action are also consistent with this view. I agree strongly that top-down goal-achievement often guides selective attention, such that perceptual information relevant to these pursuits becomes acquired as perceptual symbols. However, bottom-up mechanisms may produce perceptual symbols as well. For example, attention-grabbing onsets and changes in perception may attract processing and produce a perceptual symbol of the relevant content. Similarly, the perceptual layout of a physical stimulus may guide attention and the subsequent extraction of perceptual symbols. In this spirit, Brewer notes that physical proximity better predicts recall order for the objects in a room than does their conceptual relatedness. Thus, bottom-up as well as top-down factors are probably important to selecting the content of perceptual symbols. Furthermore, appealing to the importance of top-down factors pushes the difficult issues back a level. Attention is clearly under the control of goals and action to a considerable extent, yet how do we characterize the mechanisms underlying the goals and actions that guide attentional selection?

R2.3. Abstraction, concepts, and simulators. I am the first to admit that specifying the mechanisms underlying simulators constitutes perhaps the most central challenge of the theory, and I noted this throughout the target article. Later I will have more to say about implementing simulators in specific mechanisms (sect. R5.3). Here I focus on some misunderstandings about them.

Without citing any particular statement or section of the target article, Ohlsson claims that I equated selection with abstraction. According to him, I believe that selectively storing a wheel while perceiving a car amounts to creating an abstraction of cars in general. Nowhere does the target article state that selection is abstraction. On the contrary, this is a rather substantial and surprising distortion of my view. What I did equate roughly with selection was schematization, namely, the information surrounding focal content is filtered out, leaving a schematic representation of the component (sect. 2.2). However, I never claimed that a schematization is an abstraction, or that it constitutes a concept. Rather, I argued that the integration of many schematic memories into a simulator is what constitutes a concept (sect. 2.4). Actually, I never said much about abstraction per se but focused instead on the development of types (sects. 2.4.3 and 3.2.1). If I were to characterize abstraction in terms of perceptual symbol systems, I would define it as the development of a simulator that reenacts the wide variety of forms a kind of thing takes in experience. In other words, a simulator contains broad knowledge about a kind of thing that goes beyond any particular instance.

Adams & Campbell claim that I characterize simulators in terms of the functions they perform, such as categorization and productivity. Most basically, however, I defined simulators as bodies of knowledge that generate top-down reenactments or simulations of what a category is like (sect. 2.4). Clearly, this is a functional specification, but it is not the one that Adams & Campbell attribute to me. Adams & Campbell further complain that I fail to say anything about the mechanisms that underlie simulators (so do Dennett & Viger and Schwartz et al.). In section 2.4, however, I defined a simulator as a frame plus the mechanisms that operate on it to produce specific simulations. In section 2.5, I provided quite a bit of detail on the frames that underlie simulators. I acknowledged that this account is far from complete, and I agree that developing a full-blown computational theory is extremely important. For the record, though, I certainly did say something about the mechanisms that underlie simulators, and I continue to believe that these ideas provide useful guidance in developing specific accounts (sect. R5.1).
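A minimal computational reading of that functional definition – a frame plus mechanisms that produce specific simulations – might look like the sketch below. The data structure, the weighted-sampling rule, and the bird example are my own illustrative assumptions, not a claim about how the brain or the target article implements simulators; the sketch only makes concrete the sense in which one body of knowledge can generate indefinitely many specific simulations.

```python
# Sketch of a simulator as "a frame plus mechanisms that operate on it to
# produce specific simulations" (structure and sampling rule are assumptions).
import random

class Simulator:
    def __init__(self, category, frame):
        # frame: attribute -> list of (perceptual component, strength) pairs
        self.category = category
        self.frame = frame

    def simulate(self):
        """Produce one specific simulation by specializing each attribute,
        weighted by how well established each component is in the frame."""
        sim = {}
        for attribute, options in self.frame.items():
            components, weights = zip(*options)
            sim[attribute] = random.choices(components, weights=weights, k=1)[0]
        return sim

bird = Simulator("bird", {
    "shape":  [("sparrow-like", 5.0), ("heron-like", 1.0)],
    "color":  [("brown", 4.0), ("white", 2.0)],
    "action": [("perching", 3.0), ("wading", 1.0)],
})
print(bird.simulate())   # one of limitless specific simulations of the category
```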

Landau believes that simulations are not capable of capturing the underlying content or abstractions of concepts. If I understand correctly, she is making the same argument Adams & Campbell made earlier that perceptual knowledge is epiphenomenal and not constitutive of concepts. In a sense, this is also similar to Ohlsson's argument that storing part of a category instance does not represent the respective category. Again, however, the claim is that an entire simulator – not a specific perceptual representation – comes to stand for a category. As just discussed, a simulator captures a wide variety of knowledge about the category, making it general, not specific. Furthermore, the abstraction process described by Gabora and by Schwartz et al. enters into the frames that underlie simulators, thereby making them generic (sects. R1.5 and R2.4). Finally, a simulator stands in a causal relation to its physical category in the world (sect. R1.2). On perceiving an instance of a category, a simulator is causally activated, bringing broad category knowledge to bear on the instance (sects. 2.4.3 and 3.2.1). For all these reasons, simulators function as concepts.

Finally, Siebel takes issue with my claim that bringing the same simulator to bear across different simulations of a category provides stability among them (sect. 2.4.5). Thus, I claimed that the different simulations a person constructs of birds are unified because the same simulator for bird produced all of them. As Siebel correctly notes, each time a simulator becomes active, the perceptual symbols processed become stored or strengthened, thereby changing the simulator's content. Thus, the same simulator – in terms of its content – cannot be brought to bear on different simulations to provide stability. "Same," however, does not refer to content but to the body of knowledge that stands in causal relation historically to a physical category. Stability results from the fact that one particular body of knowledge typically becomes active when a particular category is conceptualized, even though that body of knowledge changes in content over time. Because all conceptualizations can be linked to this particular body of knowledge historically, they gain stability.

R2.4. Conceptual essences and family resemblances. Lurking behind the concern that simulators cannot represent concepts may well be the belief that concepts have essences. When Ohlsson, Landau, and Adams & Campbell question whether perceptual representations capture underlying constitutive content, I take them to be worrying about this. Since Wittgenstein (1953), however, there have been deep reservations about whether concepts have necessary and sufficient features. For this reason, modern theories often portray essences as people's naive belief that necessary and sufficient features define concepts, even when they actually do not (e.g., Gelman & Diesendruck, in press). Should a concept turn out to have defining features, though, a simulator could readily capture them. If a feature occurs across all the instances of a category, and if selective attention always extracts it, the feature should become well established in the frame that underlies the simulator (sect. 2.5, Fig. 3; sect. R1.5; Gabora; Schwartz et al.). As a result, the feature is almost certain to become active later in simulations of the category constructed. Simulators have the ability to extract features that are common across the instances of a category, should they exist.

Research on the content of categories indicates that few features if any are typically common to all members of a category (e.g., Malt 1994; Malt et al., in press; Rosch & Mervis 1975). However, a small subset of features is likely to be true of many category members, namely, features that are characteristic of the category but not defining. Should a family resemblance structure of this sort exist in a category, perceptual symbol systems are quite capable of extracting it. Again, following the discussion of frames (sect. 2.5, Fig. 3) and abstraction (sects. R1.5 and R2.3), the more often a feature is extracted during the perceptual processing of a category, the better established it becomes in the category's frame. To the extent that certain features constitute a statistical regularity, the frame for the category will capture this structure and manifest it across simulations constructed later. The more a feature is processed perceptually, the more it should occur in category simulations.
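A toy sketch of this statistical claim appears below. The instances, features, and inclusion rule are invented for illustration; the point is only that a frame tracking feature frequencies can capture a family-resemblance structure (no feature common to every instance) and reproduce that distribution in later simulations.

```python
# Sketch: feature frequencies across perceived instances determine how often
# each feature appears in later simulations (all data here are illustrative).
import random
from collections import Counter

instances = [
    {"has_handle", "holds_liquid"},
    {"has_handle", "ceramic"},
    {"holds_liquid", "ceramic"},
    {"has_handle", "holds_liquid", "ceramic"},
]

# No feature is common to all instances, yet each is characteristic of the category.
feature_counts = Counter(f for inst in instances for f in inst)
n = len(instances)

def simulate():
    """Include each feature with probability proportional to how often it was extracted."""
    return {f for f, c in feature_counts.items() if random.random() < c / n}

simulated = [simulate() for _ in range(1000)]
observed = Counter(f for s in simulated for f in s)
print(feature_counts, observed)   # simulated frequencies track perceptual frequencies
```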

Finally, Ohlsson seems to believe that a perceptual view of concepts could only work if common perceptual features underlie all instances of a concept. Nowhere did I claim this, and there is no reason it must be true. As just described, the frame that underlies a simulator captures the statistical distribution of features for a category, regardless of whether it possesses common features. For a category lacking common features, its simulator will produce simulations that do not share features with all other simulations but are related instead by a family resemblance structure.

Fauconnier raises the related point that a word may not be associated with a single simulator, taking me to mean that a simulator only produces simulations that share common features. Again, however, there is no a priori reason that a simulator cannot produce disjunctive simulations. Because different perceptions may be associated with the same word, different perceptual symbols may be stored disjunctively in the frame of the associated simulator and may later produce disjunctive simulations. As Fauconnier suggests, the content of a simulator is accessed selectively to project only the information relevant in a particular context, with considerably different information capable of being projected selectively from occasion to occasion (sect. 2.4.3).

R2.5. Frames and productivity. Barsalou and Hale (1993) propose that all modern representation schemes evolved from one of two origins: propositional logic or predicate calculus. Consider some of the differences between these two formalisms. Most basically, propositional logic contains binary variables that can be combined with simple connectives, such as and, or, and implies, and with simple operators, such as not. One problem with propositional logic is its inability to represent fundamentally important aspects of human thought and knowledge, such as conceptual relations, recursion, and bindings between arguments and values. Predicate calculus remedied these problems through additional expressive power.

Barsalou and Hale show systematically how many modern representation schemes evolved from propositional logic. By allowing binary variables to take continuous forms, fuzzy logic developed. By replacing truth preservation with simple statistical relations between variables, the construct of a feature list developed, leading to prototype and exemplar models. By adding complicated activation and learning algorithms to prototype models, connectionism followed. Notably, neural nets embody the spirit of propositional logic because they implement simple continuous units under a binary interpretation, linked by simple connectives that represent co-occurrence. In contrast, consider representation schemes that evolved from predicate calculus. Classic frame and schema theories maintain conceptual relations, binding, and recursion, while relaxing the requirement of truth preservation. Perceptual symbol systems similarly implement these functions (sect. 2.5.1), while relaxing the requirement that representations be amodal or language-like.

It is almost universally accepted now that representation schemes lacking conceptual relations, binding, and recursion are inadequate. Ever since Fodor and Pylyshyn's (1988) classic statement, connectionists and dynamic systems theorists have been trying to find ways to implement these functions in descendants of propositional logic. Indeed, the commentaries by Edelman & Breen, Markman & Dietrich, and Wells all essentially acknowledge this point. The disagreement lies in how to implement these functions.

Early connectionist formulations implemented classic predicate calculus functions by superimposing vectors for symbolic elements in predicate calculus expressions (e.g., Pollack 1990; Smolensky 1990; van Gelder 1990). The psychological validity of these particular approaches, however, has never been compelling, striking many as arbitrary technical attempts to introduce predicate calculus functions into connectionist nets. More plausible cognitive accounts that rely heavily on temporal asynchrony have been suggested by Shastri and Ajjanagadde (1993) and Hummel and Holyoak (1997). My primary concern with this approach is that it is amodal and therefore suffers from the problems noted in section 1.2.

Edelman & Breen suggest that spatial relations provide considerable potential for implementing conceptual relations, binding, and recursion. Furthermore, they note that these functions achieve productivity through the combinatorial and recursive construction of simulations. As sections 2.5 and 3.1 indicate, I agree. Perceptual symbol systems provide more than a means of representing individual concepts; they provide powerful means of representing relations between them. My one addition to Edelman & Breen's proposal would be to stress the importance of temporal relations – not just spatial relations – as the basis of frames and productivity. The discussion of ad hoc categories in section 3.4.4 illustrates this point. One can think of a simulation as having both spatial and temporal dimensions. For example, a simulation of standing on a chair to change a light bulb contains entities and events distributed over both space and time. As suggested in section 3.4.4, ad hoc categories result from disjunctively specializing space-time regions of such simulations. Thus, the entity that is stood on to change a light bulb constitutes a space-time region, which can be specialized with simulations of different objects. Similarly, the agent standing on the chair can be specialized differently, as can the burned-out light bulb, the new light bulb, the fixture, and so forth. Each region that can be specialized constitutes a potential attribute in the frame for this type of event. Note that such regions are not just defined spatially, they are also defined temporally. Thus, the regions for the burned-out bulb, the new bulb, and the chair all occupy different regions of time in the simulation, not just different regions of space. Because there are essentially an infinite number of space-time regions in a simulation, there are potentially an infinite number of frame attributes (Barsalou 1992; 1993).
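The following sketch makes the space-time-region idea concrete under assumptions of my own: a simulation is treated as a set of labeled regions, each with a coarse spatial and temporal tag, and the ad hoc category "things to stand on to change a light bulb" is simply the set of alternative specializations of one region. The data structures are illustrative stand-ins, not a proposal about neural implementation.

```python
# Sketch: a simulation as entities occupying space-time regions; disjunctively
# specializing one region yields an ad hoc category (structures are illustrative).
from dataclasses import dataclass
from typing import Dict, Tuple

Region = Tuple[str, str]   # (where, when) -- a coarse stand-in for a space-time region

@dataclass
class EventSimulation:
    name: str
    regions: Dict[str, Region]   # attribute -> space-time region it occupies
    fillers: Dict[str, str]      # attribute -> current specialization

change_bulb = EventSimulation(
    name="changing a light bulb",
    regions={
        "support":  ("below agent", "throughout"),
        "old_bulb": ("fixture", "early"),
        "new_bulb": ("fixture", "late"),
    },
    fillers={"support": "chair", "old_bulb": "burned-out bulb", "new_bulb": "new bulb"},
)

# The ad hoc category "things to stand on to change a light bulb" is the set of
# alternative specializations of the same space-time region.
for filler in ["chair", "stool", "ladder", "sturdy box"]:
    change_bulb.fillers["support"] = filler
    print(f"{change_bulb.name}, standing on: {change_bulb.fillers['support']}")
```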

Markman & Dietrich observe that frame structure arises in perceptual symbol systems through nativist mechanisms. With appropriate acknowledgment of epigenesis (Elman et al. 1996) and eschewing genetic determinism, I agree. Frame structure arises from the basic mechanisms of perception. Indeed, I have often speculated informally that predicate calculus also originated in perception. Markman & Dietrich further speculate that language is not the origin of frame structure. Conversely, one might speculate that the frame-like structure of language originated in perception as well. Because perception has relations, binding, and recursion, language evolved to express this structure through verbs, sentential roles, and embedded clause structure, respectively. Finally, Markman & Dietrich suggest that frame structure is not learned, contrary to how certain learning theorists might construe its origins in neural nets. I agree that the basic potential for frames lies deep in the perceptual architecture of the brain. No doubt genetic regulation plays an important role in producing the attentional, spatial, and temporal mechanisms that ultimately make frames possible. Yet, I hasten to add that specific frames are most certainly learned, and that they may well obey connectionist learning assumptions during their formation. Again, though, these connectionist structures are not amodal but are instead complex spatio-temporal organizations of sensory-motor representations.

Finally, Wells agrees that frame structure is important. Without providing any justification, though, he claims that only amodal symbol systems can implement productivity, not perceptual symbol systems. Perhaps perceptual symbol systems do not exhibit exactly the same form of productivity as amodal symbol systems, but to say that they do not implement any at all begs the question. As section 3.1 illustrated in considerable detail, the hierarchical composition of simulations implements the combinatoric and recursive properties of productivity quite clearly. Wells then proceeds to argue that productivity does not reside in the brain but resides instead in interactions with the environment. This reminds me of the classic behaviorist move to put memory in the environment as learning history, and I predict that Wells's move will be as successful.

I sympathize with Wells’s view that classic representa-tional schemes have serious flaws. As Hurford notes, how-ever, it is perhaps unwise to throw out the symbolic babywith the amodal bath water. There are many good reasonsfor believing that the brain is essentially a representationaldevice (Dietrich & Markman, in press; Prinz & Barsalou, inpress b). Some theorists have gone so far as arguing thatevolution selected representational abilities in humans toincrease their fitness (Donald 1991; 1993). Perceptual sym-bol systems attempt to maintain what is important aboutrepresentation while similarly attempting to maintain whatis important about connectionism and embodied cognition(sect. R6). The brain is a statistical, embodied, and repre-sentational device. It is our ability to represent situationsoffline, and to represent situations contrary to perception,that make us such amazing creatures.

R2.6. Analogy, metaphor, and complex simulations. As noted by Markman & Dietrich, the ability of perceptual symbol systems to represent frames makes it possible to explain phenomena that require structured representations, including analogy, similarity, and metaphor. Indurkhya further notes that, by virtue of being perceptual, frames provide powerful means of explaining how truly novel features emerge during these phenomena. In a wonderful example, he illustrates how blending a perceptual simulation of sunlight with a perceptual simulation of the ocean's surface yields the emergent feature of harp strings vibrating back and forth. Indurkhya argues compellingly that perceptual symbol systems explain the emergence of such features much more naturally than do amodal symbol systems. In this spirit, he suggests that conceptual blending be called "perceptual blending" to highlight the importance of perceptual representations, and he cites empirical evidence showing that perception enters significantly into metaphor comprehension.

Fauconnier notes that I fail to acknowledge the complexity of simulations in language and thought. He provides a compelling example of a child who uses sugar cubes and matchbooks to simulate cars and buses driving down the street. Clearly, such examples involve complicated simulations that reside on multiple levels, and that map into one another in complex ways. Indeed, this particular example pales in complexity next to Fauconnier and Turner's (1998) examples in which people juxtapose perception of the current situation with past, future, and counterfactual situations. In these examples, people must represent several situations simultaneously for something they say to make sense. To my mind, the fact that the human conceptual system can represent such complicated states-of-affairs illustrates how truly remarkable it is. Brewer's observation that simulations provide a good account of how scientists represent models further illustrates this ability.

The evolution of the human frontal lobes provides one way to think about people's ability to construct multiple simulations simultaneously and to map between them. Increasingly large frontal lobes may have provided greater inhibitory power to suppress perception and to represent nonpresent situations (cf. Carlson et al. 1998; Donald 1991; 1993; Glenberg et al. 1998). Taking this idea a little further provides leverage in explaining humans' ability to represent multiple nonpresent situations simultaneously. Not only can humans simulate absent situations, we can simulate several absent situations simultaneously and map between them. The possession of a powerful inhibitory and control system in the frontal lobes may well make this possible.

R2.7. Introspection, emotion, and metacognition. Two commentators found my inclusion of introspective processes problematic and unparsimonious (Newton, Toomela). Ultimately, they worry about the mechanisms that perceive introspective events. In place of introspection, Newton suggests that we seek subtle proprioceptive events that ground intuitive understandings of introspection. For example, experiences of collateral discharge while executing movements might ground the intuitive concept of agency, whereas experiences of eye movements might ground the intuitive concept of seeing. Similarly, Toomela suggests that introspective phenomena arise through interactions of more basic systems, such as perception and language.

Although I am sympathetic to these proposals and could imagine them being correct to some extent, I am not at all convinced that it is either necessary or wise to explain all introspection in these ways. Most significantly, people clearly experience all sorts of internal events, including hunger, fatigue, and emotion. Finding ways to ground them in more externally-oriented events seems difficult, and denying their internal experience seems highly counterintuitive.

Another problem for Newton and Toomela is explaining the central role that the representation of mental states plays in modern developmental and comparative psychology. Research on theory of mind shows that people represent what they and others are representing (or are not representing) (e.g., Hala & Carpendale 1997). A primary argument to emerge from this literature is that representing mental states is a central ability that distinguishes humans from other species (Tomasello & Call 1997). If we do away with introspection, how do we represent minds in a theory of mind? Also, how do we explain the wide variety of metacognitive phenomena that people exhibit (Charland)?

Regarding the problem of what mechanisms perceive representing, I disagree with Newton and Toomela that this leads to an infinite regress whereby additional mechanisms to perceive representations must be added to the basic mechanisms that perceive the world. Rather, I suspect that the brain uses one set of mechanisms to perceive both the world and representations. Recall the basic premise of perceptual symbol systems: Top-down simulations of sensory-motor systems represent knowledge, optionally producing conscious states in the process (sect. 2.1). Perhaps the mechanisms that produce conscious experience of sensory-motor systems when driven by bottom-up sensory processing also produce consciousness of the same systems when driven by top-down activation from associative areas (sect. 2.4.7). No new mechanisms are necessary. The one critical requirement is knowing the source of the information that is driving conscious experience. In psychosis, this awareness is lacking, but most of the time, a variety of cues is typically sufficient, such as whether our eyes are closed (Glenberg et al. 1998), and the vividness of the experience (Johnson & Raye 1981).

I agree strongly with Charland that emotion is central to perceptual symbol systems, and that emotion's role in this framework must be developed further. I also agree that there may well be specific brain circuits for processing particular emotions, which later support conceptualizations of these emotions (e.g., circuits for fear and anxiety; Davis 1998). Where I disagree with Charland is in his apparent claim that emotion is a self-contained symbol system. I further disagree with his readings of Damasio (1994) and Lazarus (1991) that perceptual images and appraisal mechanisms belong to an autonomous symbol system for emotion. This is certainly not my reading of their work. On the contrary, I strongly suspect that these cognitive aspects of emotion are provided directly by cognitive – not emotion – mechanisms. Rather than containing its own cognitive system, I suspect that emotion mechanisms complement a separate cognitive system. I hasten to add, however, that these two systems are so tightly coupled that dissociating them creates a distortion of how each one functions alone (Damasio 1994). Also, reenacting states in emotion systems – not amodal systems in a general knowledge store – constitutes the basis of representing emotions symbolically in conceptual processing.

Finally, Oehlmann presents two metacognitive phenomena and asks how perceptual symbol systems explain them. First, how do people know that they know the solution to a problem without having to simulate the entire solution? Selective attention and schematization provide one account (sect. 2.2). On simulating a solution, selective attention stores the initial and final states in working memory, dropping the intermediate states. By then switching back and forth between the initial and final states, an abbreviated version of the simulation becomes associated with the entire simulation. On later perceiving the initial conditions for these simulations, both become active, with the abbreviated one finishing much sooner. If the final state in the abbreviated simulation is sufficient to produce a response, waiting for the complete simulation to finish is unnecessary. As this example illustrates, selective attention and schematicity provide powerful editing functions on simulations, just as in productivity, making it possible to rise above complete simulations of experience (sect. 3.1). Again, a perceptual symbol system is not a recording system.
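A toy sketch of this two-simulation account follows; the states and step costs are invented purely for illustration. The abbreviated simulation, which selective attention has reduced to the initial and final states, reaches its final state sooner than the full simulation, so its output can drive a "feeling of knowing" response before the complete simulation finishes.

```python
# Sketch of the abbreviated-simulation account of "knowing that you know"
# (states and step costs are invented for illustration).
def run(states, cost_per_step=1):
    """Return (final_state, elapsed_time) for a simulation over the given states."""
    return states[-1], cost_per_step * (len(states) - 1)

full_solution = ["problem posed", "set up equation", "isolate x", "check", "x = 4"]
abbreviated   = [full_solution[0], full_solution[-1]]   # selective attention drops the middle

final_abbrev, t_abbrev = run(abbreviated)
final_full,   t_full   = run(full_solution)

# The abbreviated run reaches the answer sooner; if its final state suffices,
# there is no need to wait for the complete simulation to finish.
print(final_abbrev, t_abbrev, "<", t_full)
```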

Oehlmann’s second problem is how perceptual symbolsystems explain performance on the false belief task. Al-though children can be shown to recognize that anotherperson has a different belief than they do, they neverthe-less forget this at times and attribute their own belief to theother person. Oehlmann claims that if beliefs were simplysimulated, this pattern of results should not occur. If chil-dren can simulate another person’s belief correctly on oneoccasion, why do they not simulate it correctly on all occa-sions? An obvious explanation is that children at this age arelimited in their ability to simulate other people’s mentalstates. Under optimal conditions, they have enough cogni-tive resources to run different simulations for themselves

Response/Barsalou: Perceptual symbol systems

BEHAVIORAL AND BRAIN SCIENCES (1999) 22:4 645

and others. Under less than optimal conditions, however,they run out of resources and simply run the same simula-tion for others as they do for themselves. As children growolder and their ability to simulate other minds automatizes,they almost always have enough resources to simulate dif-fering states of mind, and they rarely fall back on the pro-jective strategy. Indeed, this is similar to Carlson et al.’s(1998) proposal that frontal lobe development is the key re-source in performing these tasks. When task demands aretoo high, children do not have enough resources to inhibitsimulations of their own mental state when attempting tosimulate the mental states of others.

R2.8. Situated action. I agree with Glenberg that if a perceptual symbol system fails to support situated action, it is a limited and misguided enterprise. Whatever form the cognitive system takes, it most certainly evolved to support situated action (e.g., Clark 1997; Glenberg 1997; MacWhinney 1998; Newton 1996). In the target article, I focused primarily on how perceptual symbol systems achieve classic symbolic functions, given that this ability has not been appreciated. In the process, I failed to explore how perceptual symbol systems support situated action. I have attempted to remedy this oversight in a more recent paper, arguing that perceptual simulation is an ideal mechanism for monitoring and guiding situated action (Barsalou, in press). Freksa et al. similarly argue that perceptual simulation is well suited for interfacing cognition with action in the world. Because cognition and perception use a common representational system, cognition can be readily linked with perception and action. Internal models have sufficient perceptual fidelity to the physical world to provide accurate and efficient means of guiding interactions with it. Again, though, causal relations, not just resemblance, are important in establishing these relations (sects. 2.4.3, 3.2.8, and R1.2).

R2.9. Development. Two commentaries note that classic developmental theories anticipate the importance of perceptual symbol systems in cognition. Mitchell & Clement remind us that Piaget had a lot to say about the development of cognition from sensory-motor processing. Toomela reminds us that Vygotsky and Luria had a lot to say, as well. I agree completely and did indeed neglect these theorists in section 4.2, primarily because Nelson's (1996) book, which I did cite, does a good job of describing these previous developmental theories as a springboard for her recent theory.

McCune is concerned about my statement that "the same basic form of conceptual representation remains constant across . . . development, and a radically new form is not necessary." In particular, she notes that children's ability to use representations symbolically develops considerably over the first two years of life. As children mature, frontal lobe development supports increasingly powerful abilities to formulate and manipulate representations offline from perception. I agree completely (sects. R2.6 and R2.7). In the quote above, however, I was not referring to representation in this sense of symbolic activity. Instead, I was arguing that a transition from a perceptual symbol system to an amodal symbol system does not occur over the course of development. Instead, a single perceptual symbol system develops, achieving critical symbolic milestones along the way. I suspect, however, that even the rudiments of more advanced representational abilities are present very early. As I speculated in section 3.4.3, prenatal infants may compare expectations to sensory experiences as a precursor to the operations that underlie the adult concepts of truth and falsity. As McCune suggests, however, these early precursors are certainly much more primitive than their later counterparts.

R3. Abstract concepts

The topic raised most frequently by the commentators was abstract concepts. As I noted in section 3.4, accounting for these concepts in perceptual symbol systems is a controversial and challenging task.

R3.1. Abstract concepts have nothing to do with perception. Arguing that abstract concepts rise above perception, Landauer and Ohlsson view perception as a nuisance and hindrance to representing these concepts. Neither, however, tells us what abstract concepts are. If they are not about events we perceive, then what are they about? Lacking a compelling account of their content, how can we claim to understand them? By "content" in this discussion, I will mean the cognitive representations of abstract concepts, because this is the type of content at issue here. By no means, however, do I intend to imply that physical referents are unimportant. As described earlier for the causal theory of concepts, I strongly agree that they can be central (sect. R1.2).

In section 3.4.2, I proposed a strategy for representing abstract concepts in perceptual symbol systems: (1) identify the event sequences that frame an abstract concept; (2) specify the relevant information across modalities in these sequences, including introspection; and (3) specify the focal content in these sequences most relevant to defining the concept. Most basically, this strategy requires identifying the entities and events to which an abstract concept refers. That is all. Indeed, it would seem to be the same strategy that one should follow in developing amodal accounts of abstract concepts, as well. Regardless of the specific approach taken, it is essential to identify the content of an abstract concept.

Once one identifies this content, the issue then arises of how to represent it. One possibility is to transduce it into amodal symbols. Another possibility is to simulate it in the spirit of perceptual symbol systems. Regardless, it is essential to identify the content of an abstract concept. Just postulating the concept's existence, or using a language-based predicate to represent it, fails to accomplish anything. There is nothing useful in claiming that TRUE (X) and FALSE (X) represent the abstract concepts of truth and falsity. Obviously, specifying the content that distinguishes them is essential. Once this has been accomplished, my claim is simply that simulations of this content are sufficient to represent it – there is no need to transduce it into amodal symbols.

Another key issue concerns what types of content are potentially relevant to representing abstract concepts. Throughout the target article, but especially in section 3.4.2, I argued that it is impossible to represent abstract concepts using content acquired solely through sensation of the physical world. On the contrary, it is essential to include content from introspection. Whereas concrete concepts are typically concerned only with things in the world, abstract concepts are about internal events, as well. Thus, I agree that abstract concepts cannot be explained solely by perception of the external world, and my frequent argument has been that this additional content arises in introspection. Unlike Landauer and Ohlsson, I have specified where this content is supposed to originate. I further predict that should they attempt to characterize this content themselves, they will ultimately find themselves appealing to introspective events. Most important, I predict that simulations of these events will be sufficient to represent them and that amodal symbols will not be necessary.

R3.2. Abstract concepts of nonexistent entities. Several commentators wondered how perceptual symbol systems could represent something that a person has never experienced. If, as I just suggested, the representation of an abstract concept is the reenactment of its content, how can we ever represent an abstract concept whose referents have never been encountered? How do we represent the end of time, life after death, infinite set, and time travel (Ohlsson)? How do we represent electromagnetic field and electron (Toomela)?

It is instructive to consider concrete concepts that we have never experienced, such as purple waterfall, cotton cat, and buttered chair. Although I have never experienced referents of these concepts, I have no trouble understanding them. Using the productive mechanisms of perceptual symbol systems, I can combine components of past experience to construct novel simulations of these entities (sect. 3.1.2). I can simulate a normal waterfall and transform its color to purple; I can simulate a normal cat and transform its substance to cotton; I can simulate a normal chair and simulate buttering it. If I can construct such simulations for concrete concepts, why can I not construct them for abstract concepts?
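A minimal sketch of this kind of productive combination is given below. Representing simulations as plain dictionaries and attributes as strings is an illustrative assumption, not a claim about the underlying perceptual machinery; the sketch only shows how components of familiar simulations can be recombined to yield simulations of entities never experienced.

```python
# Sketch of productive combination: start from a simulation of a familiar entity
# and transform one component to simulate something never experienced
# (representations here are plain dictionaries; purely illustrative).
def simulate_known(entity):
    base = {
        "waterfall": {"substance": "water", "color": "white", "motion": "falling"},
        "cat":       {"substance": "fur",   "color": "grey",  "motion": "walking"},
        "chair":     {"substance": "wood",  "color": "brown", "motion": "static"},
    }
    return dict(base[entity])

def transform(simulation, **changes):
    """Productively specialize a simulation with components from other experiences."""
    new = dict(simulation)
    new.update(changes)
    return new

purple_waterfall = transform(simulate_known("waterfall"), color="purple")
cotton_cat       = transform(simulate_known("cat"), substance="cotton")
buttered_chair   = transform(simulate_known("chair"), surface="buttered")
print(purple_waterfall, cotton_cat, buttered_chair)
```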

For the end of time, I can begin by simulating the ends of known processes, such as the end of growth and the end of walking. In all such cases, a process occurring over a stretch of time stops. Applying the same schematic simulation to time yields an interpretation of the end of time, even though we have not experienced it. Simple-mindedly, we might imagine that all clocks stop running. With a little more sophistication, we might imagine all change in the world ending, with everything freezing in place. Or, we might simply imagine all matter disappearing into nothing. By combining past experience productively, we can simulate events and entities that we have never experienced, regardless of whether they are concrete or abstract (sect. 3.1.2).

If space permitted, I would be happy to provide similar accounts of the other examples raised. However, making even a half-hearted attempt to follow my proposed strategy for representing abstract concepts may well be sufficient for readers to convince themselves that it is quite possible to represent nonexperienced concepts with productive simulations (sect. 3.4.2).

What is perhaps most troubling about raising nonexperienced concepts as a problem for perceptual symbol systems is the lack of awareness that these concepts constitute a challenge for amodal symbol systems, too. How do amodal approaches represent the content of these concepts? Once amodal theorists make the effort to identify this content, I again predict that perceptual symbols will provide a direct and parsimonious account.

R3.3. Metaphor and abstract concepts. In reaction to my argument that metaphor does not form the primary basis of abstract concepts (sect. 3.4.1), Gibbs & Berg respond that metaphor does not just elaborate and extend abstract concepts – it is central to their core. In making their case, they cite empirical studies showing that metaphors structure abstract concepts and produce entailments in their representations. My intention was never to underestimate the importance of metaphor in abstract concepts, especially those for which people have no direct experience. I agree that metaphor is important. Instead, my intention was to redress cognitive linguists’ failure to consider the importance of direct experience in abstract concepts, and I remain convinced that direct experience is critically important. It remains an open empirical question just how much metaphor structures abstract concepts, and I suspect that the nature of this influence varies widely from concept to concept. Of the greatest importance, perhaps, is establishing a detailed process model of how direct experience and metaphor interact to represent abstract concepts over the course of their acquisition.

R3.4. Situations and abstract concepts. Central to my proposed strategy for representing abstract concepts is identifying the background event sequences that frame them (sect. 3.4). Perhaps the greatest omission in the target article was my failure to cite empirical research on situations that support this conjecture. Much research by Schwanenflugel and her colleagues shows that abstract concepts are harder to process than concrete concepts when background situations are absent, but that both types of concepts are equally easy to process when background situations are present (e.g., Schwanenflugel 1991; Schwanenflugel et al. 1988; Schwanenflugel & Shoben 1983; Wattenmaker & Shoben 1987). Wiemer-Hastings (1998) and Wiemer-Hastings and Graesser (1998) similarly report that sentential contexts are much more predictive of abstract words than of concrete words, further indicating the importance of background situations for abstract concepts. In their commentary here, Wiemer-Hastings & Graesser offer excellent examples of how abstract concepts depend on situations for meaning. Understanding why abstract concepts depend so heavily on background situations is likely to be central for developing adequate accounts of them.

R3.5. Representing truth, falsity, and negation. As section 3.4.3 noted, it was never my aim to provide complete accounts of these three concepts. Instead, I presented partial accounts to illustrate a strategy for representing abstract concepts in general (sect. 3.4.2), noting that more complete analyses would be required later.

Landau complains that I did not really specify anything about the content of truth. This is indeed quite puzzling, given that the commentators I am about to discuss all believed that I had in fact specified such content. The problem, they thought, was that I had not specified it adequately. Two commentaries suggested that my account of truth cannot be distinguished from match (Adams & Campbell), or from similar, comparable, and looks like (Mitchell & Clement). Following the proposed strategy for representing abstract concepts in section 3.4.2, distinguishing these concepts begins by examining the situations in which they occur. In match, similar, comparable, and looks like, the relevant situation typically involves comparing two entities or events in the same domain of experience. Thus, one might compare two birds in the world or two imagined plans for a vacation. In none of these cases are two ontologically different things typically compared, such as an imagined perception and an actual one. Furthermore, subtle aspects of comparison distinguish these four concepts from one another. Thus, match implies that identical, or nearly identical, features exist between the compared objects, whereas similar, comparable, and looks like imply partially overlapping features. Within the latter three concepts, similar allows just about any partial match, comparable implies partial matches on aligned dimensions (Gentner & Markman 1997), and looks like implies partial matches in vision.

In contrast, the sense of truth in section 3.4.3 and Figure 7a specifies that an agent attempts to map a mental simulation about a perceived situation into the perceived situation, with the simulation being true if it provides an adequate account of the perceived situation. Clearly, the components of this account differ from those just described for match, similar, comparable, and looks like. In truth, one entity (a mental simulation) is purported to be about a second entity (a perceived situation). In none of the other four concepts do the two entities being compared reside in different modalities, nor is one purported to be about the other. As we shall see, more content must be added to this account of truth to make it adequate. Nevertheless, enough already exists to distinguish it from these other concepts.

Once one establishes such content, the key issue, of course, is deciding how to represent it. Following the amodal view, we could attempt to formulate complicated predicate calculus or programming expressions to describe it. Alternatively, we could attempt to identify the direct experience of this content in the relevant situations and argue that simulations of these experiences represent the content. Thus, over many occasions of assessing truth, a simulator develops that can construct specific simulations of what it is like to carry out this procedure.
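Read computationally, the claim is roughly this: a simulation is judged true of a perceived situation when mapping it into that situation yields an adequate fit. The following sketch is purely illustrative; the feature sets, the overlap score, and the criterion are hypothetical stand-ins for whatever fit-assessment mechanism a simulator actually implements:

```python
# Hypothetical sketch: assessing whether a simulation is "true of" a perceived
# situation by mapping it into the situation and checking the adequacy of fit.
# Feature sets and the overlap score are illustrative stand-ins, not a claim
# about the actual fit-assessment mechanism.

def fit(simulation, perceived):
    """Proportion of simulated components that find a counterpart in perception."""
    if not simulation:
        return 0.0
    return len(simulation & perceived) / len(simulation)

def assess_truth(simulation, perceived, criterion=0.8):
    """The simulation is judged true if it adequately accounts for the situation."""
    return fit(simulation, perceived) >= criterion

balloon_above_cloud = {"balloon", "cloud", "balloon-above-cloud"}
perceived_scene = {"balloon", "cloud", "balloon-above-cloud", "blue-sky"}

print(assess_truth(balloon_above_cloud, perceived_scene))  # True
```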

Siebel notes an omission in my account of truth in Figure 7a. How does it explain errors in mapping simulations to perceived situations? If I simulate a donkey and mistakenly bind it to a perceived horse, do I believe it true that the perceived entity is a donkey? If I have no disconfirming evidence, yes! Of course, this does not mean that my simulation is actually true, it simply means that I have concluded incorrectly that it is. Indeed, people often believe that an expectation is true, even before they compare it to the perceived world (Gilbert 1991).

Where the account of truth in Figure 7a must be extended is to account for the discovery of disconfirming evidence. Following the proposed strategy for representing abstract concepts (sect. 3.4.2), simulations of disconfirmation must be added. Because many kinds of disconfirmation are possible, many event sequences are necessary. For example, further perception of the horse might activate additional visual features, which in turn activate the simulator for horse. After comparing simulations for horse and donkey to the perceived entity, I might decide that the original donkey simulation does not fit as well as the subsequent horse simulation, therefore changing what I believe to be true. Alternatively, another agent with more expertise than me might claim that the perceived entity is a horse, so that I attribute falsity to my original simulation and attribute truth to a new simulation from my horse simulator. By incorporating schematic simulations of disconfirmation into the simulator for truth, the account in Figure 7a can be extended to handle disconfirmation. It is simply a matter of implementing the strategy in section 3.4.2 to discover the relevant content.
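Continuing the earlier sketch in the same hypothetical terms (the feature sets and scoring are illustrative assumptions, not the theory's mechanism), disconfirmation can be pictured as re-running the comparison when further perception adds features and a rival simulation comes to fit better:

```python
# Hypothetical continuation of the fit-assessment sketch: further perception adds
# features, a rival simulator (horse) comes to fit better than the original one
# (donkey), and the truth attribution is revised accordingly.

def fit(simulation, perceived):
    """Proportion of simulated components with counterparts in perception."""
    return len(simulation & perceived) / len(simulation)

def best_fitting(simulations, perceived):
    """Label of the candidate simulation that best accounts for the perceived entity."""
    return max(simulations, key=lambda label: fit(simulations[label], perceived))

simulations = {
    "donkey": {"four-legged", "long-ears", "gray", "short-mane"},
    "horse":  {"four-legged", "short-ears", "brown", "long-mane", "large-body"},
}

glimpse = {"four-legged", "gray", "long-ears"}             # sparse initial percept
print(best_fitting(simulations, glimpse))                  # 'donkey' is judged true

closer_look = glimpse | {"long-mane", "large-body", "short-ears"}
print(best_fitting(simulations, closer_look))              # revised: 'horse'
```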

Ohlsson notes another omission in my account of truth. Rather than applying my proposed strategy to see if it yields the necessary content, however, he simply concludes that it cannot. The omission he notes is that lack of a fit does not necessarily mean a simulation is false. In his example, I see a cat on a mat in my office. While I am turned away, someone pulls the mat outside my office with the cat on it. When I turn around, the cat and the mat are gone. Ohlsson claims that my account of falsity in Figure 7b implies that it is false that the cat is on the mat. Actually, though, this is not what my account would say. As I stated repeatedly in the target article, a simulation is purported to be about a perceived situation (sect. 3.4.3). Thus, in Ohlsson’s example, my simulation of a cat on a mat is purported to be about my office. When this simulation no longer matches the perceived situation, one reasonable conclusion is that it is false that there is a cat on a mat in my office, not Ohlsson’s conclusion that it is false that the cat is on the mat. If the critical question were whether the cat is on the mat anywhere, then I would have to compare a simulation of a cat on a mat to many perceived situations until I found one containing the mat.

This adds another important layer of content to my accounts of truth and falsity in Figure 7: On some occasions in which these concepts are used, it may be necessary to compare a simulation to more than one perceived situation. Only after failing to find a satisfactory fit in any relevant situation does a conclusion about falsity follow, assuming that a match in a single situation suffices for truth (i.e., in a universally quantified claim, a match would be necessary in every situation). The same solution handles Ohlsson’s other example about aliens. We do not conclude that there are no aliens in the universe after examining only one situation. Instead, we have to examine many possible situations before reaching this conclusion. Again, however, applying the strategy in section 3.4.2 yields the necessary content. By examining the appropriate situations, the relevant content can be discovered and added to the respective simulators.
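Again only as an illustrative sketch (the situations and the fit criterion are hypothetical assumptions), the difference between falsity relative to one purported situation and falsity across all relevant situations amounts to how the comparison is quantified over situations:

```python
# Hypothetical sketch: assessing a claim against several perceived situations.
# For an existential claim ("there is a cat on a mat somewhere relevant"), one
# adequate fit suffices for truth; falsity follows only after every relevant
# situation has been examined without a fit. For a universal claim, every
# relevant situation must fit.

def fits(simulation, situation, criterion=0.8):
    overlap = len(simulation & situation) / len(simulation)
    return overlap >= criterion

def assess(simulation, situations, universal=False):
    matches = [fits(simulation, s) for s in situations]
    return all(matches) if universal else any(matches)

cat_on_mat = {"cat", "mat", "cat-on-mat"}
office  = {"desk", "chair", "rug"}                  # the cat and the mat are gone
hallway = {"cat", "mat", "cat-on-mat", "door"}      # here the simulation fits

print(assess(cat_on_mat, [office]))            # False: false *of my office*
print(assess(cat_on_mat, [office, hallway]))   # True: a fit exists somewhere relevant
```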

Fauconnier notes that a near miss is a better example of falsity than a far miss. On perceiving a balloon above a cloud, for example, a simulation of a balloon under a cloud is a better example of a false simulation than is a simulation of a typewriter on a desk. Such a prediction is borne out by an increasingly large literature which shows that high similarity counterintuitively produces high dissimilarity (Gentner & Markman 1997). The more alignable two representations are, the more similar they are, and the more different they are. Thus, I suspect that Fauconnier’s prediction has more to do with how people compute fits between representations than with my accounts of truth and falsity per se. His prediction applies not only to these concepts but to all of the related concepts discussed earlier, such as match, similar, comparable, and looks like.

Finally, Wiemer-Hastings & Graesser explore my point in section 3.4.3 that truth is a polysemous concept. In their commentary, they explore a variety of senses that I did not attempt to cover in Figure 7a. Besides highlighting the fact that most abstract concepts are highly polysemous, they raise the further issue that any theory of knowledge must differentiate the senses of an abstract concept. Using the concept of idea, Wiemer-Hastings & Graesser illustrate how situations may well provide leverage on this task. I suspect that the same strategy is also likely to provide leverage on the different senses of truth that they raise, such as the truth of a single utterance, the truth of all a person’s utterances, and scientists trying to discover the truth. As just described for falsity, differences in how simulations are assessed against perceived situations may distinguish the first two senses. In the first sense, a single simulation for one particular utterance must be compared to a single perceived situation. In the second sense, a simulation for each claim a person makes must be compared to each respective situation. To account for the scientific sense of truth, a completely new set of situations from the scientific enterprise is required, including the formulation of hypotheses, the conducting of research, the assessment of the hypotheses against data, and so forth. Only within this particular set of experiences does the scientific sense of truth make sense. Again, specifying the meaning of an abstract concept requires searching for the critical content in the relevant background situations.

R3.6. How do you represent X? Various commentators wondered how perceptual symbols could represent all sorts of other abstract concepts. These commentaries fall into two groups. First, there was the commentator who followed my strategy in section 3.4.2 for representing abstract concepts and discovered that it does indeed provide leverage on representing them. Second, there were the commentators who complained that perceptual symbol systems fail to represent particular abstract concepts, yet who apparently did not try this strategy. At least these commentators do not report that this strategy failed on being tried.

First, consider the commentator who tried the strategy. Hurford notes that the target article failed to provide an account of the concept individual. On examining relevant situations that contain familiar individuals, Hurford induced an account of this concept: A unique individual exists whenever we fail to perceive anything exactly like it simultaneously in a given situation. Thus, we conclude that our mother is a unique individual because we never perceive anyone exactly like her simultaneously. For the same reason, we conclude that the sun and moon are unique individuals. As Hurford further notes, mistakes about the multiple identities of the same individual follow naturally from this account. Because we perceive Clark Kent and Superman as each being unique, we never realize that they are the same individual (the same is true of the Morning Star and the Evening Star). Only after viewing the transition of one into the other do we come to believe that they are the same individual. Like my account of truth, Hurford’s account of individual may need further development. Regardless, it illustrates the potential of the strategy in section 3.4.2 for discovering and representing the content of abstract concepts.
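For concreteness, Hurford's proposal can be sketched as follows (a hypothetical rendering added here, with entities reduced to feature sets purely for illustration):

```python
# Hypothetical rendering of Hurford's proposal: an entity counts as a unique
# individual when nothing exactly like it is perceived simultaneously in the
# same situation. Entities are illustrated as frozensets of perceived features.

def unique_individual(entity, situation):
    """True if no other co-perceived entity presents exactly the same features."""
    return sum(1 for other in situation if other == entity) == 1

clark = frozenset({"human", "glasses", "suit"})
superman = frozenset({"human", "cape", "flying"})
white_mug = frozenset({"mug", "white", "on-desk"})

situation = [clark, superman, white_mug, white_mug]
print(unique_individual(clark, situation))      # True: perceived as a unique individual
print(unique_individual(superman, situation))   # True: hence the Clark/Superman confusion
print(unique_individual(white_mug, situation))  # False: an identical mug is co-perceived
```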

Finally, consider the commentators who doubted that perceptual symbol systems could represent particular abstract concepts, apparently without trying the strategy in section 3.4.2. Adams & Campbell expressed skepticism about chiliagons and myriagons; Brewer expressed skepticism about entropy, democracy, evolution, and because; Mitchell & Clement expressed skepticism about can, might, electricity, ignorant, and thing; Ohlsson expressed skepticism about health care system, middle class, recent election, and internet. Again, if there were no shortage of space, I would be happy to illustrate how the proposed strategy in section 3.4.2 might represent these concepts. Again, the content of an abstract concept must be specified, regardless of whether it is to be represented perceptually or amodally, and the proposed strategy is a useful means of identifying it. Furthermore, I remain convinced that simulations of this content are sufficient to represent it, and that amodal redescriptions are unnecessary.

R4. Language

Another frequently raised topic was the relation between language and perceptual symbol systems.

R4.1. Linguistic forms per se do not carry meaning. I continue to be amazed by how often sophisticated researchers believe that language per se carries meaning. Most commonly, this belief has to do with internal speech. As someone listens to imagined speech in introspection, the speech is presumed to carry meaning as the person “thinks in language.” As Fauconnier notes, however, language does not carry meaning directly. Instead, language is a powerful means of controlling the online construction of conceptualizations (sect. 2.7). One can hear a foreign language thousands of times, but unless the words become linked to their referents in experience, they remain meaningless. The words do not inherently carry meaning.

In arguing that we think in language when processing abstract concepts, Jorion strikes me as being under the illusion that language carries meaning (a similar criticism of Landauer was presented earlier in sect. R1.4). In a series of examples, Jorion argues that concepts are represented in words, not in percepts. The fact that some people are highly conscious of internal speech, though, does not entail that words are the causal forces in their thinking. As most cognitive scientists agree, the bulk of the action in cognition is unconscious, and a lot more is happening than conscious inner speech. If language is so central to thought, how can aphasics think, and how can Lowenthal’s subjects learn? How can people be thinking of something and not know the word for it? Where do all the inferences in comprehension come from that are not expressed directly in linguistic forms?

R4.2. Language in knowledge acquisition. I hasten to add that language most certainly plays critical roles in the acquisition and use of knowledge. For example, much research on conceptual development has shown that words provide important signals for developing new concepts (e.g., Gelman et al. 1998; Gelman & Tardif 1998; Markman 1989). On hearing a new word, children often infer that it refers to a new set of entities that share important commonalities. Hurford similarly notes how linguistic elements may signal the presence of individuals in domains of discourse, perhaps assisting in the acquisition of individual. Toomela similarly suggests that language is central to forming abstract concepts. Finally, Langacker’s examples of construal illustrate how the word associated with a perceived entity determines its conceptualization, with different words producing different conceptualizations (sects. 2.4.7 and 3.2.6).

Lowenthal provides an example to the contrary, showing that people can acquire new knowledge without language. Even when lesions in language areas of the brain disrupt linguistic processing, people still learn new concepts and skills. As I have argued throughout this reply, such demonstrations strongly question the grounding of knowledge in language. Again, it by no means follows that language is normally irrelevant.

R4.3. Language in human evolution. In section 4.2, I suggested that human intelligence may have diverged from nonhuman primates with the evolution of language, citing Donald (1991; 1993). Gabora, however, cites a passage from Donald in which he states that some sort of conceptual foundation, which may have been mimetic culture, must have preceded speech. In other words, the divergence must have occurred after the evolution of some more basic representational ability, not after the evolution of language. I stand corrected. Gabora has indeed described Donald’s correct position. Ultimately, though, my point was that humans evolved the means to control joint simulations about nonpresent situations. By developing the ability to represent nonpresent situations jointly in the past and future, humans increased their fitness substantially. Using the body to act out nonpresent situations (mimesis) may well have been the first stage of this evolution, with language building on it later. Again, my point was that developing the ability to simulate nonpresent situations constituted a major development in human evolution.

R4.4. Cognitive linguistics. As noted by Fauconnier and Langacker, cognitive linguistics and perceptual symbol systems share many common assumptions. Most importantly, they assume that conceptualizations are grounded in the experiential systems of the brain and body. What I tried to show in the target article is that both approaches belong to a much larger tradition that stretches back over two millennia (sects. 1.1 and 1.3). Embodied approaches to cognition are hardly novel and only appear so in the context of the twentieth century. What is new and exciting is the reinvention of this approach in the contexts of modern cognitive science and neuroscience. Contrary to Landau and Ohlsson’s claim that perceptual symbol systems offer nothing new, this theory and other current formulations depart from earlier ones through the incorporation of modern findings, insights, and tools.

Quite a few people have suggested to me informally that cognitive linguistics and perceptual symbol systems are complementary, lying at different levels of analysis. Although cognitive linguists develop accounts of semantics and grammar in terms of experiential constructs, these accounts often fail to map clearly into well-established cognitive and neural mechanisms. One goal of perceptual symbol systems was to provide an account of how a conceptual system could be grounded in these mechanisms. Showing that concepts can be grounded in perception and movement not only supports the assumptions of cognitive linguists, it provides a layer of theoretical machinery that may be useful to their accounts. Conversely, cognitive linguists’ analyses of myriad linguistic structures strongly suggest various conceptual structures that we are well advised to look for in cognition and neuroscience. As usual, the interplay between levels is not only stimulating but probably necessary for success in achieving the ultimate goals of these fields.

R4.5. Perceptual simulation in comprehension. Because amodal theories of knowledge have dominated accounts of language comprehension, evidence that perceptual simulation underlies comprehension instead is significant. If the meanings of texts are grounded in simulations, one of the strongest arguments for the amodal view is in danger. Zwaan et al. review a variety of findings which suggest that simulations do indeed underlie comprehension. Zwaan et al. also suggest a variety of future projects that could explore this issue further. In section 4.1.6 of the target article, a variety of additional studies are cited that support embodied comprehension, and Barsalou (in press) provides an evolutionary account of why comprehension might have become grounded in perceptual simulation. Clearly, much additional research is needed to address this hypothesis thoroughly, but existing evidence suggests that it may well be true.

R5. Computational accounts

As I mentioned throughout the target article, developing computational accounts of perceptual symbol systems is important (e.g., sects. 2 and 5). A number of commentators agreed, and had a variety of things to say about this issue.

R5.1. Functional specifications versus mechanisms. Several commentators suggested that the target article provided only functional specifications of perceptual symbol systems and failed to say anything at all about the underlying mechanisms (Adams & Campbell; Dennett & Viger; Schwartz et al.). As I acknowledged in the target article, I did not provide a full-blown computational account of perceptual symbol systems. Nevertheless, I certainly did have something to say about mechanisms. Throughout the target article, I discussed the viability of Damasio’s (1989) convergence zones, arguing that they provide an ideal mechanism for implementing perceptual symbols (sects. 2.1, 2.2, 2.3, 2.4.6, 2.4.7, and 4.3). In a convergence zone, one layer of units represents information in a sensory-motor domain, and a second layer in an association area captures patterns of activation in the first layer. By activating a pattern of units in the associative area, a sensory-motor representation is reenacted. The mechanics of such systems are basic lore in connectionism, and in citing convergence zones frequently throughout the target article, I was pointing to an obvious and well-known mechanism for implementing perceptual symbols.
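For readers who prefer code to prose, the following is a minimal sketch of this two-layer arrangement (an illustration of the general idea only, not Damasio's model or a proposed implementation; the Hebbian-style update and the single conjunctive unit are simplifying assumptions):

```python
# Hypothetical sketch of a convergence zone in the spirit of Damasio (1989):
# an association-area unit captures the pattern currently active in a
# sensory-motor layer (bottom-up) and can later partially reactivate that
# pattern (top-down), implementing a perceptual symbol.

class ConvergenceZone:
    def __init__(self, n_sensory_units):
        # connection weights from one association unit to the sensory-motor layer
        self.weights = [0.0] * n_sensory_units

    def capture(self, sensory_pattern, rate=1.0):
        """Bottom-up: strengthen links to the currently active sensory-motor units."""
        self.weights = [w + rate * a for w, a in zip(self.weights, sensory_pattern)]

    def reenact(self):
        """Top-down: reactivate the stored sensory-motor pattern (normalized)."""
        peak = max(self.weights)
        return [w / peak if peak > 0 else 0.0 for w in self.weights]

# A schematic pattern selected by attention during perception (e.g., a shape edge).
perceived = [0, 1, 1, 0, 1, 0]

zone = ConvergenceZone(n_sensory_units=6)
zone.capture(perceived)   # during perception: the association area captures the pattern
print(zone.reenact())     # later: the pattern is reenacted as part of a simulation
```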

During discussions of symbol formation (sects. 2.2 and 2.3), I suggested a second line of mechanisms that are central to perceptual symbol systems. Specifically, I suggested that the well-known mechanisms of attention and memory extract and store components of experience for later use as symbols. Although I did not provide detailed accounts of how these mechanisms work, I did indeed incorporate them centrally. Furthermore, it would be possible to draw on detailed accounts in the literature to develop my proposed account of how perceptual symbols are extracted from experience.

The discussions of frames similarly proposed mechanisms that underlie the formation of simulators. In section 2.5 and Figure 3, I sketched an account of the associative mechanisms that underlie frames. In section 2.5.1, I defined the components of frames and described how they could be implemented in perceptual symbol systems.


In my opinion, I developed the mechanisms of convergence zones, symbol extraction, and frames enough to show that they can potentially achieve the criteria of a fully functional conceptual system. By combining attention, memory, and convergence zones, I illustrated how elemental perceptual symbols could become established in memory. By showing how these elemental symbols combine to form frames and simulators, I illustrated how concepts and types could be represented. By showing how simulators combine with each other and with perceived situations, I illustrated how they could explain productivity and propositions. The basic strategy of my argument was to show how preliminary sketches of convergence zones, symbol extraction, and frames hold promise for constructing a fully functional conceptual system.

I was the first to note that these mechanisms must be specified further before this argument is fully compelling (sect. 5). Nevertheless, I defend my decision to work at a relatively high level initially. Building a detailed implementation that ultimately constitutes a significant contribution requires this sort of preliminary analysis. If we can first agree on what an implementation should do, and on what basic mechanisms it should have, we will save a lot of time and energy. For this reason, I urge patience before firing up the compiler, as well as appreciation for discussions of what implementations should accomplish.

R5.2. Connectionism. Several commentators came away with the mistaken impression that I am not a fan of connectionism (Gabora; Schwartz et al.). These commentators appeared to take my criticism of amodal neural nets in section 1.2 as a criticism of connectionism in general. Again, however, my criticism was of those particular systems in which a hidden layer of conceptual units acts as an amodal symbol system, being separate from and arbitrarily connected to a layer of perceptual units. As I stated in that section, my criticism does not apply to all connectionist theories. Furthermore, in section 2.2.1, I described explicitly how perceptual symbols are dynamical attractors in neural nets. Throughout section 2.5, I similarly described how the mechanisms of frames operate according to associative and dynamical principles. Thus, I find it difficult to understand how I could be construed as not recognizing the virtues of connectionism for implementing perceptual symbol systems. I do! Again, I strongly endorse connectionist schemes in which a common population of units represents information in perception and knowledge, and I view Damasio’s (1989) convergence zones as an appropriate architecture for implementing this. Trehub (1977; 1991) suggests another architecture that may be useful for implementing perceptual symbol systems, as well.

R5.3. Implementing perceptual symbol systems. In this final section, I suggest four aspects of perceptual symbol systems that strike me as particularly important and challenging to implement. First, and perhaps foremost, implementing mechanisms that represent space and time is absolutely essential. Intuitively, these mechanisms implement a virtual stage or platform on which simulations run. It is the “virtual space” or Kantian manifold in which simulated events unfold over time. Perhaps it is essentially the same system that represents the perception of actual events in the world as they unfold. Regardless, perceptual symbol systems cannot be implemented adequately until it is possible to run simulations either in isolation or simultaneously with perception. Note that such a system need not be topographically organized with respect to space and time, although it could be. Functional representations of space and time may well be sufficient.

As several commentators suggested, representing space is essential to implementing structured representations and productivity (Edelman & Breen; Markman & Dietrich). As I noted earlier (sect. R2.5), representing time is no less important. Figuring out how to implement space and time will not only be critical for running simulations, it will also be critical for implementing classic symbolic functions. To create structured representations, it will be necessary to define space-time regions that can be specialized.

Second, it is essential to develop detailed mechanisms for implementing frames (Adams & Campbell; Dennett & Viger; Markman & Dietrich). Again, the formulation in section 2.5 and Figure 3 sketches an initial approach. As I noted in Note 14, however, no account of space was provided, much less an account of time. Until mechanisms for representing space and time are implemented, it will be difficult, if not impossible, to implement a viable account of frames. In particular, it will not be possible to implement the basic frame properties of conceptual relations, bindings, and recursion, which rely on the specialization of space-time regions (sects. 2.5.1, 3.1, 3.2, 3.4.4, and R2.5). Once a system for representing space and time is implemented, it can be used to integrate the perceptual symbols extracted from different instances of a category into a common frame, as Figure 3 illustrated.
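As an illustrative sketch of this last point (hypothetical data structures added here for concreteness, not a proposed implementation of the Figure 3 account), integrating symbols from different instances into a common frame might look roughly like this once regions are available to index them:

```python
# Hypothetical sketch: integrating components extracted from different category
# instances into a common frame, indexed by space-time region. Each instance is
# a mapping from regions to the component attended on that occasion.

from collections import defaultdict

def integrate(instances):
    """Accumulate, for each space-time region, the fillers seen across instances."""
    frame = defaultdict(set)
    for instance in instances:
        for region, component in instance.items():
            frame[region].add(component)
    return dict(frame)

car_instances = [
    {"above-wheels": "sedan-body", "roof-region": "hard-top"},
    {"above-wheels": "convertible-body", "roof-region": "open-top"},
    {"above-wheels": "sedan-body", "under-hood": "engine"},
]

print(integrate(car_instances))
# e.g., {'above-wheels': {'sedan-body', 'convertible-body'}, 'roof-region': {...}, 'under-hood': {'engine'}}
```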

Third, once a system for representing frames is in place, it will be critical to specify how a particular simulation is assembled from the information in a frame. Again, mechanisms for representing space and time are likely to be important. Because a space-time region typically contains only a single entity or event, this constraint blocks most information in a frame from becoming active. Figuring out how to implement this constraint will be important, as will figuring out how to control the sketchiness of simulations. In highly schematic simulations, many space-time regions may be blank or filled minimally, whereas in highly detailed simulations, most space-time regions may be filled richly. Finally, it will be necessary to figure out how to maintain coherence within a simulation, so that incompatible content does not typically become active simultaneously across different space-time regions. The standard mechanisms of constraint satisfaction are likely to be useful.
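A toy sketch of this assembly step, under the stated assumptions (one filler per space-time region, pairwise compatibility constraints, and exhaustive enumeration standing in for whatever parallel constraint-satisfaction mechanism would actually be used; the frame and constraints are hypothetical):

```python
# Hypothetical sketch: assembling coherent simulations from a frame. Each
# space-time region admits at most one filler, and pairwise constraints keep
# incompatible content from becoming active together.

from itertools import product

frame = {  # candidate fillers per space-time region of a "car" frame
    "above-wheels": ["sedan-body", "convertible-body"],
    "under-hood":   ["engine", None],          # None: region left blank (schematic)
    "roof-region":  ["hard-top", "open-top"],
}

# Incompatible pairs of fillers (coherence constraints).
incompatible = {("convertible-body", "hard-top"), ("sedan-body", "open-top")}

def coherent(assignment):
    values = [v for v in assignment.values() if v is not None]
    return not any((a, b) in incompatible or (b, a) in incompatible
                   for i, a in enumerate(values) for b in values[i + 1:])

regions = list(frame)
simulations = [dict(zip(regions, choice))
               for choice in product(*(frame[r] for r in regions))
               if coherent(dict(zip(regions, choice)))]

for s in simulations:
    print(s)   # each printed dict is one coherent (possibly schematic) simulation
```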

Finally, the control of attention will be absolutely critical (Hochberg; Toomela). To extract components of experience during the symbol formation process, it will be necessary to specify the focus of attention (sect. 2.2). To implement productivity during the construction of hierarchical simulations, specifying the control of attention will again be necessary so that space-time regions can be specialized selectively (sects. 3.1 and 3.4.4). To implement the formation of propositions, attention will have to focus on regions of perception so that the relevant simulators can become bound to them (sect. 3.2). As described in sections 1.4 and 4.1.3, the ability to attend selectively to space-time regions frees a theory of representation from the constraints of a recording system and allows it to become a conceptual system.

I am highly sympathetic to Hochberg’s and Toomela’s proposal that situated action is likely to be central in guiding attention. However, mechanisms for representing space and time are perhaps more basic. In the absence of spatio-temporal reference systems, it is impossible to compute an attentional trajectory. Guiding attention to achieve a goal presupposes the ability to guide attention through space and time. Thus, for still another reason, developing space-time mechanisms is essential to implementing perceptual symbol systems. Once these mechanisms are in place, it will be possible to formulate mechanisms that guide attention selectively over space-time regions to process them.

Perhaps I am naive about computational modeling, but the four aspects of perceptual symbol systems whose implementation I just outlined strike me as particularly challenging. To my knowledge, nothing “off the shelf” is available for fully implementing these functions, nor are they likely to be implemented in a few weeks’ work. So please forgive me for not yet having implemented the theory in the target article! Perhaps some readers will agree that sketching the theory’s basic mechanisms and illustrating their potential for achieving a fully functional conceptual system constitutes a useful contribution.

R6. Conclusion

The way I see it, three basic approaches to knowledge exist in modern cognitive science and neuroscience: (1) classic representational approaches based on amodal symbols, (2) statistical and dynamical approaches such as connectionism and neural nets, and (3) embodied approaches such as classic empiricism, cognitive linguistics, and situated action. If I were to say that there is value in all three of these approaches, I might be viewed as simply being diplomatic. On the contrary, I believe that each of these approaches has discovered something fundamentally important about human knowledge, which the other two have not. Classic amodal approaches have discovered the importance of structured representations, productivity, and propositions. Statistical approaches have discovered the importance of generalization, partial matches, adaptation, frequency effects, and pattern completion. Embodied approaches have discovered the importance of grounding knowledge in sensory-motor mechanisms and the significance of environments and action for cognition. What I have tried to do in formulating perceptual symbol systems is to integrate the positive contributions of all three approaches. Regardless of whether my particular formulation succeeds, I predict that whatever approach ultimately does succeed will similarly attempt to integrate representation, statistical processing, and embodiment.

ACKNOWLEDGMENT
This work was supported by National Science Foundation Grants SBR-9421326 and SBR-9796200.

References

Letters “a” and “r” appearing before authors’ initials refer to target article and response, respectively.

Adams, F. & Aizawa, K. (1994) Fodorian semantics. In: Mental representation, ed. S. Stich & T. Warfield. Blackwell. [FA]
Ahn, W. & Bailenson, J. (1996) Causal attribution as a search for underlying mechanisms: An explanation of the conjunction fallacy and the discounting principle. Cognitive Psychology 31:82–123. [aLWB]
Ahn, W., Kalish, C. W., Medin, D. L. & Gelman, S. A. (1995) The role of covariation versus mechanism information in causal attribution. Cognition 54:299–352. [aLWB]
Aizawa, K. (1997) Explaining systematicity. Mind and Language 12:115–36. [FA]
Alpern, M. (1972) Eye movements. In: Visual psychophysics, ed. D. Jameson & L. M. Hurvich. Springer-Verlag. [AT]
Anderson, J. R. (1978) Arguments concerning representations for mental imagery. Psychological Review 85:249–77. [aLWB]
(1993) Rules of the mind. Erlbaum. [aLWB]
Anderson, J. R. & Bower, G. H. (1973) Human associative memory. Winston. [aLWB]
Aristotle (1961) De anima, Books II and III, trans. D. W. Hamlyn. Oxford University Press. (Original work published in 4th century B.C.). [aLWB]
Baddeley, A. D. (1986) Working memory. Clarendon. [aLWB]
Bailey, D., Feldman, J., Narayanan, S. & Lakoff, G. (1997) Embodied lexical development. Proceedings of the Nineteenth Annual Meeting of the Cognitive Science Society. Erlbaum. [aLWB]
Baillargeon, R. (1995) A model of physical reasoning in infancy. In: Advances in infancy research, vol. 9. Ablex. [aLWB]
Barkowsky, T. & Freksa, C. (1997) Cognitive requirements on making and interpreting maps. In: Spatial information theory: A theoretical basis for GIS, ed. S. Hirtle & A. Frank. Springer. [CF]
Barsalou, L. W. (1983) Ad hoc categories. Memory and Cognition 11:211–27. [aLWB]
(1985) Ideals, central tendency, and frequency of instantiation as determinants of graded structure in categories. Journal of Experimental Psychology: Learning, Memory, and Cognition 11:629–54. [aLWB]
(1987) The instability of graded structure in concepts. In: Concepts and conceptual development: Ecological and intellectual factors in categorization, ed. U. Neisser. Cambridge University Press. [aLWB]
(1989) Intra-concept similarity and its implications for inter-concept similarity. In: Similarity and analogical reasoning, ed. S. Vosniadou & A. Ortony. Cambridge University Press. [aLWB]
(1991) Deriving categories to achieve goals. In: The psychology of learning and motivation: Advances in research and theory, vol. 27. Academic Press. [aLWB]
(1992) Frames, concepts, and conceptual fields. In: Frames, fields, and contrasts: New essays in lexical and semantic organization, ed. A. Lehrer & E. F. Kittay. Erlbaum. [arLWB, ABM]
(1993) Flexibility, structure, and linguistic vagary in concepts: Manifestations of a compositional system of perceptual symbols. In: Theories of memories, ed. A. C. Collins, S. E. Gathercole & M. A. Conway. Erlbaum. [arLWB, WFB]
(1995) Storage side effects: Studying processing to understand learning. In: Goal-driven learning, ed. A. Ram & D. Leake. MIT Press. [aLWB]
(in press) Language comprehension: Archival memory or preparation for situated action? Discourse Processes. [rLWB]
Barsalou, L. W. & Hale, C. R. (1993) Components of conceptual representation: From feature lists to recursive frames. In: Categories and concepts: Theoretical views and inductive data analysis, ed. I. Van Mechelen, J. Hampton, R. Michalski & P. Theuns. Academic Press. [arLWB]
Barsalou, L. W., Huttenlocher, J. & Lamberts, K. (1998) Basing categorization on individuals and events. Cognitive Psychology 36:203–72. [aLWB]
Barsalou, L. W. & Prinz, J. J. (1997) Mundane creativity in perceptual symbol systems. In: Conceptual structures and processes: Emergence, discovery, and change, ed. T. B. Ward, S. M. Smith & J. Vaid. American Psychological Association. [aLWB]
Barsalou, L. W., Solomon, K. O. & Wu, L. L. (in press) Perceptual simulation in conceptual tasks. In: Cultural, typological, and psychological perspectives in cognitive linguistics: The proceedings of the 4th Conference of the International Cognitive Linguistics Association, vol. 3, ed. M. K. Hiraga, C. Sinha & S. Wilcox. John Benjamins. [arLWB]
Barsalou, L. W., Yeh, W., Luka, B. J., Olseth, K. L., Mix, K. S. & Wu, L.-L. (1993) Concepts and meaning. In: Chicago Linguistics Society 29: Papers from the parasessions on conceptual representations, vol. 2, ed. K. Beals, G. Cooke, D. Kathman, K. E. McCullough, S. Kita & D. Testen. Chicago Linguistics Society. [aLWB]
Barwise, J. & Etchemendy, J. (1990) Information, infons, and inference. In: Situation theory and its application, ed. R. Cooper, K. Mukai & J. Perry. University of Chicago Press. [aLWB]
(1991) Visual information and valid reasoning. In: Visualization in mathematics, ed. W. Zimmerman & S. Cunningham. Mathematical Association of America. [aLWB]
Barwise, J. & Perry, J. (1983) Situations and attitudes. MIT Press. [aLWB]
Bassok, M. (1997) Object-based reasoning. In: The psychology of learning and motivation, vol. 37, ed. D. L. Medin. Academic Press. [aLWB]
Bates, E., Thal, D., Trauner, D., Fenson, J., Aram, D., Eisele, J. & Nass, R. (1997) From first words to grammar in children with focal brain injury. Developmental Neuropsychology 13(3):275–343. [LM]

Bear, M. F., Connors, B. W. & Paradiso, M. A. (1996) Neuroscience: Exploring the brain. Williams & Wilkins. [aLWB]
Berendt, B., Barkowsky, T., Freksa, C. & Kelter, S. (1998) Spatial representation with aspect maps. In: Spatial cognition - an interdisciplinary approach to representing and processing spatial knowledge, ed. C. Freksa, C. Habel & K. F. Wender. Springer. [CF]
Berkeley, G. (1710/1982) A treatise concerning the principles of human knowledge. Hackett. (Original work published in 1710). [aLWB]
Biederman, I. (1987) Recognition-by-components: A theory of human image understanding. Psychological Review 94:115–47. [aLWB, ABM]
Biederman, I. & Gerhardstein, P. C. (1993) Recognizing depth-rotated objects: Evidence and conditions for three-dimensional viewpoint invariance. Journal of Experimental Psychology: Human Perception and Performance 19:1162–82. [aLWB]
Black, J. B., Turner, T. J. & Bower, G. H. (1979) Point of view in narrative comprehension, memory, and production. Journal of Verbal Learning and Verbal Behavior 18:187–98. [aLWB]
Black, M. (1962a) Metaphor. In: Models and metaphors, ed. M. Black. Cornell University Press. [BI]
(1962b) Models and archetypes. In: Models and metaphors, ed. M. Black. Cornell University Press. [WFB]
(1979) More about metaphor. In: Metaphor and thought, ed. A. Ortony. Cambridge University Press. [BI]
Block, N. (1983) Mental pictures and cognitive science. Philosophical Review 93:499–542. [aLWB]
Bloom, P. & Markson, L. (1998) Intention and analogy in children’s naming of pictorial representations. Psychological Science 9:200–204. [aLWB]
Borsook, D., Becerra, L., Fishman, S., Edwards, A., Jennings, C. L., Stojanovic, M., Papinicolas, L., Ramachandran, V. S., Gonzalez, R. G. & Breiter, H. (1998) Acute plasticity in the human somatosensory cortex following amputation. Neuroreport 9:1013–17. [SE]
Bottini, G., Corcoran, R., Sterzi, R., Paulesu, E., Schenone, P., Scarpa, P., Frackowiak, R. S. J. & Frith, C. D. (1994) The role of the right hemisphere in the interpretation of figurative aspects of language: A positron emission tomography activation study. Brain 117:1241–53. [BI]
Braitenberg, V. (1978) Cortical architectonics: General and areal. In: Architectonics of the cerebral cortex, ed. M. A. Brazier & H. Petsche. Raven Press. [DAS]
Brandimonte, M. A., Schooler, J. W. & Gabbino, P. (1997) Attenuating verbal overshadowing through color retrieval cues. Journal of Experimental Psychology: Learning, Memory, and Cognition 23:915–31. [aLWB]
Bransford, J. D. & Johnson, M. K. (1973) Considerations of some problems of comprehension. In: Visual information processing, ed. W. G. Chase. Academic Press. [aLWB]
Brewer, W. F. (1975) Memory for ideas: Synonym substitution. Memory and Cognition 3:458–64. [WFB]
(in press) Scientific theories and naive theories as forms of mental representation: Psychologism revived. Science and Education. [WFB]
Brewer, W. F. & Pani, J. R. (1983) The structure of human memory. In: The psychology of learning and motivation: Advances in research and theory, vol. 17, ed. G. H. Bower. Academic Press. [rLWB]
(1996) Reports of mental imagery in retrieval from long-term memory. Consciousness and Cognition 5:265–87. [WFB]
Brewer, W. F. & Treyens, J. C. (1981) Role of schemata in memory for places. Cognitive Psychology 13:207–30. [WFB]
Brooks, L. R. (1978) Nonanalytic concept formation and memory for instances. In: Cognition and categorization, ed. E. Rosch & B. B. Lloyd. Erlbaum. [aLWB]
Bruner, J. S. (1966) On cognitive growth. In: Studies in cognitive growth, ed. J. S. Bruner, R. R. Olver & P. M. Greenfield. Wiley. [FL]
Buckner, R. L., Petersen, S. E., Ojemann, J. G., Miezin, F. M., Squire, L. R. & Raichle, M. E. (1995) Functional anatomical studies of explicit and implicit memory retrieval tasks. Journal of Neuroscience 15:12–29. [aLWB]
Burbaud, P. (1988) Traitement numérique et latéralisation corticale (IRMf). First meeting of the Société d’anatomie fonctionnelle cérébrale, Paris, 1988 (personal communication). [FL]
Burbaud, P., Degreze, P., Lafon, P., Franconi, J.-M., Bouligand, B., Bioulac, B., Caille, J.-M. & Allard, M. (1995) Lateralization of prefrontal activation during internal mental calculation: A functional magnetic resonance imaging study. APStracts 2:0263N. Journal of Neurophysiology 74(5):2194–2200. [FL]
Burgess, C. & Chiarello, C. (1996) Neurocognitive mechanisms underlying metaphor comprehension and other figurative language. Metaphor and Symbolic Activity 11(1):67–84. [BI]
Burgess, C. & Lund, K. (1997) Modelling parsing constraints with high-dimensional context space. Language and Cognitive Processes 12:177–210. [arLWB]
Cacciari, C. & Levorato, M. C. (1994) “See,” “look,” and “glimpse”: What makes the difference? Paper presented at the Meeting of the Psychonomic Society, St. Louis, MO. [aLWB]
Cann, R. (1993) Formal semantics. Cambridge University Press. [JRH]
Carey, S. (1985) Conceptual change in childhood. MIT Press. [aLWB]
Carlson, S. M., Moses, L. J. & Hix, H. R. (1998) The role of inhibitory processes in young children’s difficulties with deception and false belief. Child Development 69:672–91. [rLWB]
Carnap, R. (1958) Introduction to symbolic logic and its applications. Dover. [JRH]
Carroll, L. (1960) The annotated Alice: Alice’s adventures in wonderland and Through the looking glass. Bramhall House. [aLWB]
Chalmers, D. J., French, R. M. & Hofstader, D. R. (1992) High-level perception, representation, and analogy: A critique of artificial intelligence methodology. Journal of Experimental and Theoretical Artificial Intelligence 4:189–211. [BI]
Chambers, D. & Reisberg, D. (1992) What an image depicts depends on what an image means. Cognitive Psychology 24:145–74. [aLWB]
Charland, L. C. (1995) Emotion as a natural kind: Towards a computational foundation for emotion theory. Philosophical Psychology 8(1):59–84. [LCC]
(1996) Feeling and representing: Computational theory and the modularity of affect. Synthese 105:273–301. [LCC]
(1997) Reconciling cognitive and perceptual theories of emotion. Philosophy of Science 64:555–79. [LCC]
Cheng, P. W. & Novick, L. R. (1992) Covariation in natural causal induction. Psychological Review 99:365–82. [aLWB]
Chomsky, N. (1957) Syntactic structures. Mouton. [arLWB]
(1975) Reflections on language. Pantheon Books. [FL]
Churchland, P. S. (1986) Neurophilosophy: Toward a unified science of the mind/brain. MIT Press. [aLWB]
Clark, A. (1997) Being there: Putting brain, body, and world together again. MIT Press. [arLWB]
Cohen, L. B. (1991) Infant attention: An information processing approach. In: Newborn attention: Biological constraints and the influence of experience, ed. M. J. Salomon Weiss & P. R. Zelazo. Ablex. [aLWB]
Cohen, P. R., Atkin, M. S., Oates, T. & Beal, C. R. (1997) Neo: Learning conceptual knowledge by interacting with an environment. Proceedings of the First International Conference on Autonomous Agents. ACM Press. [aLWB]
Compton, B. J. (1995) Retrieval strategies in memory-based automaticity. Doctoral dissertation, University of Illinois, Champaign-Urbana. [aLWB]
Conway, M. A. (1990) Autobiographical memory: An introduction. Open University Press. [aLWB]
Cornsweet, T. N. (1970) Visual perception. Academic Press. [AT]
Coulson, S. (1997) Semantic leaps: Frame-shifting and conceptual blending. Doctoral dissertation, University of California, San Diego. [GF]
Cowey, A. & Stoerig, P. (1991) The neurobiology of blindsight. Trends in Neuroscience 14:140–45. [aLWB]
Craik, F. I. M. & Lockhart, R. S. (1972) Levels of processing: A framework for memory research. Journal of Verbal Learning and Verbal Behavior 11:671–84. [aLWB]
Craik, K. (1943) The nature of explanation. Cambridge University Press. [aLWB]
Crammond, D. J. (1997) Motor imagery: Never in your wildest dreams. Trends in Neuroscience 20:54–57. [aLWB]
Craver-Lemley, C. & Reeves, A. (1997) Effects of imagery on vernier acuity under conditions of induced depth. Journal of Experimental Psychology: Human Perception and Performance 23:3–13. [aLWB]
Cummins, R. (1996) Representations, targets, and attitudes. MIT Press. [aLWB]
Damasio, A. R. (1989) Time-locked multiregional retroactivation: A systems-level proposal for the neural substrates of recall and recognition. Cognition 33:25–62. [arLWB]
(1994) Descartes’ error. Grosset/Putnam. [arLWB, LCC]
Damasio, A. R. & Damasio, H. (1994) Cortical systems for retrieval of concrete knowledge: The convergence zone framework. In: Large-scale neuronal theories of the brain, ed. C. Koch & J. L. Davis. MIT Press. [aLWB]
D’Andrade, R. (1987) A folk model of the mind. In: Cultural models in language and thought, ed. D. Holland & N. Quinn. Cambridge University Press. [aLWB]
Danesi, M. (1998) The “dimensionality principle” and semiotic analysis. Sign Systems Studies 26:42–60. [AT]
Davies, M. & Stone, T., eds. (1995) Mental simulation. Basil Blackwell. [aLWB]
Davis, M. (1998) Are different parts of the extended amygdala involved in fear versus anxiety? Biological Psychiatry 44:1239–47. [rLWB]
DeJong, G. & Mooney, R. (1986) Explanation-based learning: An alternative view. Machine Learning 1:145–76. [aLWB]
Dennett, D. C. (1969) The nature of images and the introspective trap. In: Dennett, D. C. Content and consciousness. Routledge. [aLWB]
Dennett, D. C. & Kinsbourne, M. (1992) Time and the observer - the where and when of consciousness in the brain. Behavioral and Brain Sciences 15:183–247. [aLWB]


Dent-Read, C. & Szokolszky, A. (1993) Where do metaphors come from? Metaphor and Symbolic Activity 8(3):227–42. [BI]
DeRenzi, E. & Spinnler, H. (1967) Impaired performance on color tasks in patients with hemispheric lesions. Cortex 3:194–217. [aLWB]
Deschaumes-Molinaro, C., Dittmer, A. & Vernet-Maury, E. (1992) Autonomic nervous system response patterns correlate with mental imagery. Physiology and Behavior 51:1021–27. [aLWB]
De Sousa, R. (1987) The rationality of emotion. Bradford Books, MIT Press. [LCC]
Diamond, A., Werker, J. F. & Lalonde, M. A. (1994) Toward understanding commonalities in the development of object search, detour navigation, categorization, and speech perception. In: Human behavior and the developing brain, ed. G. Dawson & K. W. Fischer. Guilford. [LM]
Dietrich, E. & Markman, A. B., eds. (in press) Cognitive dynamics: Conceptual change in humans and machines. Erlbaum. [rLWB]
Donald, M. (1991) Origins of the modern mind: Three stages in the evolution of culture and cognition. Harvard University Press. [arLWB, LG, AJW]
(1993) Précis of Origins of the modern mind: Three stages in the evolution of culture and cognition. Behavioral and Brain Sciences 16:739–91. [arLWB]
Dretske, F. (1981) Knowledge and the flow of information. MIT Press/Bradford Books. [FA, MA]
(1988) Explaining behavior. MIT Press/Bradford Books. [FA]
(1995) Naturalizing the mind. MIT Press. [arLWB]
Droz, R. & Rahmy, M. (1974) Lire Piaget. Charles Dessart. [FL]
Edelman, G. M. (1987) Neural Darwinism: The theory of neuronal group selection. Basic Books. [DAS]
(1992) Bright air, brilliant fire: On the matter of the mind. Basic Books. [aLWB]
Edelman, S. (1994) Biological constraints and the representation of structure in vision and language. Psycoloquy 5(57). ftp://ftp.princeton.edu/pub/harnad/Psycoloquy/1994.volume.5/psyc.94.5.57.language-network.3.edelman. [SE]
(1998) Representation is representation of similarities. Behavioral and Brain Sciences 21(4):449–98. [aLWB]
Ekman, P. (1992) An argument for basic emotions. Cognition and Emotion 6:169–200. [LCC]
Elman, J. L., Bates, E. A., Johnson, M. H., Karmiloff-Smith, A., Parisi, D. & Plunkett, K. (1996) Rethinking innateness: A connectionist perspective on development. MIT Press. [arLWB]
Epicurus (1994) The Epicurus reader: Selected writings and testimonia, trans. B. Inwood & L. P. Gerson. (Original work published in the 4th century B.C.) Hackett. [aLWB]
Farah, M. J. (1988) Is visual imagery really visual? Overlooked evidence from neuropsychology. Psychological Review 95:307–17. [aLWB]
(1995) The neural bases of mental imagery. In: The cognitive neurosciences, ed. M. S. Gazzaniga. MIT Press. [aLWB]
Farah, M. J. & Feinberg, T. E. (1997) Perception and awareness. In: Behavioral neurology and neuropsychology, ed. T. E. Feinberg & M. J. Farah. McGraw-Hill. [aLWB]
Fauconnier, G. (1985) Mental spaces: Aspects of meaning construction in natural language. MIT Press. [aLWB, RWL]
(1997) Mappings in thought and language. Cambridge University Press. [aLWB, GF, RWL]
Fauconnier, G. & Sweetser, E., eds. (1996) Spaces, worlds, and grammar. University of Chicago Press. [GF, RWL]
Fauconnier, G. & Turner, M. (1998) Conceptual integration networks. Cognitive Science 22(2):133–87. [arLWB, GF, RWL]
Fillmore, C. J. (1985) Frames and the semantics of understanding. Quaderni di Semantica 6:222–55. [aLWB]
Fincher-Kiefer, R. (1998) Visuospatial working memory in mental model construction. Abstracts of the Psychonomic Society 3:79. [RAZ]
Finke, R. A. (1989) Principles of mental imagery. MIT Press. [aLWB]
Flavell, J. (1976) Metacognitive aspects of problem solving. In: The nature of intelligence, ed. L. Resnick. Erlbaum. [RO]
Fodor, J. A. (1975) The language of thought. Harvard University Press. [aLWB, DCD]
(1981a) Representations: Philosophical essays on the foundations of cognitive science. MIT Press. [ABM]
(1981b) The present status of the innateness controversy. In: Representations, J. A. Fodor. MIT Press. [BL]
(1983) The modularity of mind: An essay on faculty psychology. Bradford Books, MIT Press. [aLWB]
(1987) Psychosemantics: The problem of meaning in the philosophy of mind. MIT Press. [MA]
(1990) A theory of content (I and II). In: A theory of content and other essays, J. A. Fodor. MIT Press. [MA]
Fodor, J. A. & LePore, E. (1992) Holism: A shopper’s guide. Blackwell. [aLWB]
Fodor, J. A. & McLaughlin, B. (1990) Connectionism and the problem of systematicity: Why Smolensky’s solution doesn’t work. Cognition 35(2):183–204. [ABM]
Fodor, J. A. & Pylyshyn, Z. W. (1988) Connectionism and cognitive architecture: A critical analysis. Cognition 28:3–71. [FA, arLWB, ABM]
Fox, N. A. & Bell, M. A. (1993) Frontal functioning in cognitive and emotional behaviors during infancy: Effects of maturation and experience. In: NATO ASI Series: D. Developmental neurocognition: Speech and face processing in the first year of life, ed. S. de Schonen, P. Jusczyk, P. McNeilage & J. Morton. Kluwer. [LM]
Frege, G. (1892/1952) On sense and reference. In: Philosophical writings of Gottlieb Frege, ed. P. T. Geach & M. Black. (Original work published in 1892). Blackwell. [aLWB]
Freksa, C. (1991) Qualitative spatial reasoning. In: Cognitive and linguistic aspects of geographic space, ed. D. M. Mark & A. U. Frank. Kluwer. [CF]
(1997) Spatial and temporal structures in cognitive processes. In: Foundations of computer science: Potential - theory - cognition, ed. C. Freksa, M. Jantzen & R. Valk. Springer. [CF]
Freksa, C. & Barkowsky, T. (1996) On the relation between spatial concepts and geographic objects. In: Geographic objects with indeterminate boundaries, ed. P. Burrough & A. Frank. Taylor & Francis. [CF]
Freyd, J. J. (1987) Dynamic mental representations. Psychological Review 94:427–38. [aLWB]
Frith, C. & Dolan, R. J. (1997) Brain mechanisms associated with top-down processes in perception. Philosophical Transactions of the Royal Society of London, Series B: Biological Sciences 352:1221–30. [aLWB]
Gabora, L. (1998) Autocatalytic closure in a cognitive system: A tentative scenario for the evolution of culture. Psycoloquy 9(67). www.cogsci.soton.ac.uk.psyc/newpsy?9.67 ftp://ftp.princeton.edu/pub/harnard/Psycoloquy/1998.volume.9/page.98.9.XX.-.l.gabora. [LG]
(1999) Weaving, bending, patching, mending the fabric of reality: A cognitive science perspective on worldview inconsistency. Foundations of Science 3(2). www.vub.ac.be/clea/liane/fabric of reality/html. [LG]
Gainotti, G., Silveri, M. C., Daniele, A. & Giustolisi, L. (1995) Neuroanatomical correlates of category-specific semantic disorders: A critical survey. Memory 3:247–64. [aLWB]
Garner, W. R. (1974) The processing of information and structure. Wiley. [aLWB]
(1978) Aspects of a stimulus: Features, dimensions, and configurations. In: Cognition and categorization, ed. E. Rosch & B. B. Lloyd. Erlbaum. [aLWB]
Gazzaniga, M. S. (1988) Brain modularity: Towards a philosophy of conscious experience. In: Consciousness in contemporary science, ed. A. J. Marcel & E. Bisiach. Clarendon Press. [aLWB]
Gazzaniga, M. S., Ivry, R. B. & Mangun, G. R. (1998) Cognitive neuroscience: The biology of the mind. Norton. [aLWB]
Geach, P. T. (1957) Mental acts. Routledge. [aLWB]
Gelman, S. A., Coley, J. D., Rosengran, K. S., Hartman, E. & Pappas, A. (1998) Beyond labeling: The role of maternal input in the acquisition of richly structured categories. Monographs of the Society for Research in Child Development 63, Serial No. 253. [rLWB]
Gelman, S. A. & Diesendruck, G. (in press) What’s in a concept? Context, variability, and psychological essentialism. In: Theoretical perspectives in the concept of representation, ed. I. E. Siegel. Erlbaum. [rLWB]
Gelman, S. A. & Tardiff, T. (1998) A cross-linguistic comparison of generic noun phrases in English and Mandarin. Cognition 66:215–48. [rLWB]
Gentner, D. (1983) Structure-mapping: A theoretical framework for analogy. Cognitive Science 7:155–70. [LG, ABM]
Gentner, D. & Markman, A. B. (1997) Structural alignment in analogy and similarity. American Psychologist 52:45–56. [rLWB, LG, ABM]
Gentner, D. & Stevens, A. L., eds. (1983) Mental models. Erlbaum. [aLWB]
Gernsbacher, M. A., Varner, K. R. & Faust, M. E. (1990) Investigating differences in general comprehension skill. Journal of Experimental Psychology: Learning, Memory, and Cognition 16:430–45. [aLWB]
Gibbs, R. W., Jr. (1983) Do people always process the literal meanings of indirect requests? Journal of Experimental Psychology: Learning, Memory, and Cognition 9:524–33. [aLWB]
(1992) What do idioms really mean? Journal of Memory and Language 31:485–506. [RWG]
(1994) The poetics of mind: Figurative thought, language, and understanding. Cambridge University Press. [aLWB, GF, RWG]
Gibson, J. J. (1979) The ecological approach to visual perception. Houghton Mifflin. [aLWB]
Gilbert, D. T. (1991) How mental systems believe. American Psychologist 46:107–19. [rLWB]
Gilden, D., Blake, R. & Hurst, G. (1995) Neural adaptation of imaginary visual motion. Cognitive Psychology 28:1–16. [aLWB]
Gineste, M.-D., Indurkhya, B. & Scart-Lhomme, V. (1997) Mental representations in understanding metaphors. Notes et Documents Limsi No. 97–02, LIMSI-CNRS, BP 133, F-91403, Orsay, Cedex, France. [BI]
Givón, T. (1978) Negation in language: Pragmatics, function, ontology. In: Syntax and semantics 9: Pragmatics, ed. P. Cole. Academic Press. [aLWB]

Glaser, W. R. (1992) Picture naming. Cognition 42:61–105. [arLWB]
Glasgow, J. I. (1993) The imagery debate revisited: A computational perspective. Computational Intelligence 9:309–33. [aLWB]
Glasgow, J. I., Narayanan, H. & Chandrasekaran, B., eds. (1995) Diagrammatic reasoning: Computational and cognitive perspectives. MIT Press. [CF]
Glenberg, A. M. (1997) What memory is for. Behavioral and Brain Sciences 20:1–55. [arLWB, AMG]
Glenberg, A. M., Meyer, M. & Lindem, K. (1987) Mental models contribute to foregrounding during text comprehension. Journal of Memory and Language 26:69–83. [aLWB]
Glenberg, A. M. & Robertson, D. A. (in press) Indexical understanding of instructions. Discourse Processes. [AMG]
Glenberg, A. M., Robertson, D. A., Jansen, J. L. & Johnson, M. C. (1998a) Not propositions. Journal of Cognitive Systems (under review). [aLWB]
Glenberg, A. M., Robertson, D. A. & Members of the Honors Seminar in Cognitive Psychology, 1997–98 (1998b) Symbol grounding and meaning: A comparison of high-dimensional and embodied theories of meaning. Journal of Memory and Language (under review). [aLWB, AMG]
Glenberg, A. M., Schroeder, J. L. & Robertson, D. A. (1998) Averting the gaze disengages the environment and facilitates remembering. Memory and Cognition 26:651–58. [rLWB]
Goldstone, R. (1994) Influences of categorization on perceptual discrimination. Journal of Experimental Psychology: General 123:178–200. [aLWB]
Goldstone, R. & Barsalou, L. W. (1998) Reuniting cognition and perception: The perceptual bases of rules and similarity. Cognition 65:231–62. [aLWB]
Goodman, N. (1976) Languages of art. Hackett. [aLWB, BI]
(1978) Ways of worldmaking. Hackett. [BI]
Gopnik, A. & Slaughter, V. (1991) Young children’s understanding of changes in their mental states. Child Development 62:98–110. [RO]
Gopnik, A. & Wellman, H. M. (1994) The theory theory. In: Mapping the mind: Domain specificity in cognition and culture, ed. L. A. Hirschfeld & S. A. Gelman. Cambridge University Press. [RO]
Gordon, R. M. (1987) The structure of emotion: Investigations in cognitive philosophy. Cambridge University Press. [LCC]
Gordon, W. J. J. (1961) Synectics: The development of creative capacity. Harper & Row. [BI]
(1965) The metaphorical way of knowing. In: Education of vision, ed. G. Kepes. George Braziller. [BI]
Gould, S. J. (1991) Exaptation: A crucial tool for an evolutionary psychology. Journal of Social Issues 47:43–65. [aLWB]
Grady, J. (1997) Theories are buildings revisited (metaphor). Cognitive Linguistics 8:267–90. [RWG]
Graesser, A. C. & Clark, L. F. (1985) Structures and procedures of implicit knowledge. Ablex. [KW-H]
Grossberg, S. (1987) Competitive learning: From interactive activation to adaptive resonance. Cognitive Science 11:23–63. [DAS]
Hadamard, J. (1949) The psychology of invention in the mathematical field. Dover Books. [aLWB]
Haith, M. M. (1994) Visual expectations as the first step toward the development of future-oriented processes. In: The John D. and Catherine T. MacArthur Foundation series on mental health and development, ed. M. M. Haith, J. B. Benson & R. J. Roberts, Jr. University of Chicago Press. [aLWB]
Hala, S. & Carpendale, J. (1997) All in the mind: Children’s understanding of mental life. In: The development of social cognition, ed. S. Hala. Psychology Press. [rLWB]
Halff, H. M., Ortony, A. & Anderson, R. C. (1976) A context-sensitive representation of word meanings. Memory and Cognition 4:378–83. [aLWB]
Hampton, J. A. (1981) An investigation of the nature of abstract concepts. Memory and Cognition 9:149–56. [KW-H]
Harnad, S. (1987) Category induction and representation. In: Categorical perception: The groundwork of cognition, ed. S. Harnad. Cambridge University Press. [arLWB]
(1990) The symbol grounding problem. Physica D 42:335–46. [arLWB, SE]
Harré, R. (1970) Principles of scientific thinking. University of Chicago Press. [WFB]
Haugeland, J. (1985) Artificial intelligence: The very idea. MIT Press. [aLWB]
(1991) Representational genera. In: Philosophy and connectionist theory, ed. W. Ramsey, S. P. Stich & D. E. Rumelhart. Erlbaum. [aLWB]
Hausman, C. R. (1984) A discourse on novelty and creation. SUNY Press. [BI]
(1989) Metaphor and art: Interactionism and reference in verbal and nonverbal art. Cambridge University Press. [BI]
Heal, J. (1996) Simulation and cognitive penetrability. Mind and Language 11:44–67. [aLWB]
Hebb, D. O. (1949) The organization of behavior: A neuropsychological theory. Wiley. [JH, DAS]
Heit, E. & Barsalou, L. W. (1996) The instantiation principle in natural categories. Memory 4:413–41. [aLWB]
Helmholtz, H. von (1962) Treatise in physiological optics, vol. III, trans. J. P. Southall. Dover. (Original work published in 1866). [JH]
Herskovits, A. (1997) Language, spatial cognition, and vision. In: Temporal and spatial reasoning, ed. O. Stock. Kluwer. [aLWB]
Hespos, S. J. & Rochat, P. (1997) Dynamic mental representation in infancy. Cognition 64:153–88. [aLWB]
Hesse, M. (1967) Models and analogy in science. In: Encyclopedia of philosophy, vol. 5, ed. P. Edwards. Macmillan. [WFB]
Hillis, A. E. & Caramazza, A. (1995) Cognitive and neural mechanisms underlying visual and semantic processing: Implications from “optic aphasia.” Journal of Cognitive Neuroscience 7:457–78. [aLWB]
Hochberg, J. (1968) In the mind’s eye. In: Contemporary theory and research in visual perception, ed. R. N. Haber. Holt, Rinehart & Winston. [JH]
(1970) Attention, organization, and consciousness. In: Attention: Contemporary theory and analysis, ed. D. I. Mostofsky. Appleton-Century-Crofts. [JH]
(1998) Gestalt theory and its legacy: Organization in eye and brain, in attention and mental representation. In: Perception and cognition at century’s end: Handbook of perception and cognition, 2nd edition, ed. J. Hochberg. Academic Press. [rLWB, JH]
Hochberg, J. & Brooks, V. (1996) The perception of motion pictures. In: Handbook of perception and cognition: Cognitive ecology, ed. M. Friedman & E. Carterette. Academic Press. [JH]
Höffding, H. (1891) Outlines of psychology. Macmillan. [arLWB]
Hoffman, J. E. & Subramaniam, B. (1995) Saccadic eye movements and selective visual attention. Perception and Psychophysics 57:787–95. [JH]
Hofstadter, D. (1995) Fluid concepts and creative analogies. Basic Books. [GF]
Houk, J. C., Buckingham, J. T. & Barto, A. G. (1996) Models of cerebellum and motor learning. Behavioral and Brain Sciences 19(3):368–83. [NN]
Hume, D. (1739/1978) A treatise of human nature, 2nd edition. Oxford University Press. [aLWB]
Hummel, J. E. & Holyoak, K. J. (1997) Distributed representations of structure: A theory of analogical access and mapping. Psychological Review 104:427–66. [rLWB]
Huttenlocher, J. (1973) Language and thought. In: Communication, language, and meaning: Psychological perspectives, ed. G. A. Miller. Basic Books. [aLWB]
(1976) Language and intelligence. In: The nature of intelligence, ed. L. B. Resnick. Erlbaum. [aLWB]
Huttenlocher, J., Jordan, N. C. & Levine, S. C. (1994) A mental model for early arithmetic. Journal of Experimental Psychology: General 123:284–96. [aLWB]
Huttenlocher, P. R. (1994) Synaptogenesis in human cerebral cortex. In: Human behavior and the developing brain, ed. G. Dawson & K. W. Fischer. Guilford. [LM]
Indurkhya, B. (1992) Metaphor and cognition: An interactionist approach. Kluwer. [GF, BI]
(1998) On creation of features and change of representation. Journal of Japanese Cognitive Science Society 5(2):43–56. [BI]
Intraub, H., Gottesman, C. V. & Bills, A. J. (1998) Effects of perceiving and imagining scenes on memory for pictures. Journal of Experimental Psychology: Learning, Memory, and Cognition 24:186–201. [aLWB]
Intraub, H. & Hoffman, J. E. (1992) Reading and visual memory: Remembering scenes that were never seen. American Journal of Psychology 105:101–14. [aLWB]
Irwin, D. E. (1996) Integrating information across saccadic eye movements. Current Directions in Psychological Science 5:94–100. [JH]
Jackendoff, R. (1987) On beyond zebra: The relation of linguistic and visual information. Cognition 26:89–114. [aLWB]
Jacoby, L. (1983) Remembering the data: Analyzing interactive processes in reading. Journal of Verbal Learning and Verbal Behavior 22:485–508. [aLWB]
Jacoby, L. L., Kelley, C. M., Brown, J. & Jasechko, J. (1989) Becoming famous overnight: Limits on the ability to avoid unconscious influences of the past. Journal of Personality and Social Psychology 56:326–38. [aLWB]
James, W. (1890) Principles of psychology. Holt. [RO]
Jeannerod, M. (1994) The representing brain: Neural correlates of motor intention and imagery. Behavioral and Brain Sciences 17:187–245. [aLWB]
(1995) Mental imagery in the motor context. Neuropsychologia 33:1419–32. [aLWB]
(1997) The cognitive science of action. Blackwell. [NN]
Johnson, M. (1987) The body in the mind: The bodily basis of meaning, imagination, and reason. University of Chicago Press. [aLWB, RWL]
Johnson, M. K. & Raye, C. L. (1981) Reality monitoring. Psychological Review 88:67–85. [arLWB]
Johnson-Laird, P. N. (1983) Mental models: Towards a cognitive science of language, inference, and consciousness. Harvard University Press. [aLWB, LCC, RAZ]
Jones, S. S. & Smith, L. B. (1993) The place of perception in children’s concepts. Cognitive Development 8:113–39. [aLWB]
Jonides, J. & Smith, E. E. (1997) The architecture of working memory. In: Cognitive neuroscience, ed. M. D. Rugg. MIT Press. [aLWB]
Jorion, P. (1990) Principes des systèmes intelligents. [Principles of intelligent systems]. Collection “Sciences cognitives.” Masson. (Summary in English: http://cogprints.soton.ac.uk/abs/comp/199806039). [PJMJ]
(1999) Le secret de la chambre chinoise. [The secret of the Chinese room]. L’Homme 150:1–26. [PJMJ]
Just, M. A. & Carpenter, P. A. (1987) The psychology of reading and language comprehension. Allyn & Bacon. [aLWB]
Kahneman, D. & Tversky, A. (1982) The simulation heuristic. In: Judgment under uncertainty: Heuristics and biases, ed. D. Kahneman, P. Slovic & A. Tversky. Cambridge University Press. [aLWB]
Kanizsa, G. (1979) Organization in vision: Essays in Gestalt perception. Praeger Press. [aLWB]
Kant, E. (1787/1965) The critique of pure reason, trans. N. K. Smith. St. Martin’s Press. [aLWB]
Kaplan, S., Sonntag, M. & Chown, E. (1991) Tracing recurrent activity in cognitive elements (TRACE): A model of temporal dynamics in a cell assembly. Connection Science 3:179–206. [DAS]
Kaplan, S., Weaver, M. & French, R. (1990) Active symbols and internal models: Towards a cognitive connectionism. AI and Society 4:51–71. [DAS]
Kauffman, S. A. (1993) Origins of order. Oxford University Press. [LG]
Keil, F. C. (1989) Concepts, kinds, and cognitive development. MIT Press. [aLWB]
Kellman, P. (1993) Kinematic foundations of infant visual perception. In: Visual perception and cognition in infancy, ed. C. Granrud. Erlbaum. [LM]
Kintsch, W. (1974) The representation of meaning in memory. Erlbaum. [aLWB]
Klatzky, R. L., Pellegrino, J. W., McCloskey, B. P. & Doherty, S. (1989) Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. Journal of Memory and Language 28:56–77. [RAZ]
Koestler, A. (1964) The act of creation. (2nd Danube edition, 1976). Hutchinsons of London. [BI]
Kolers, P. A. & Roediger, H. L., III. (1984) Procedures of mind. Journal of Verbal Learning and Verbal Behavior 23:425–49. [aLWB]
Kosslyn, S. M. (1976) Can imagery be distinguished from other forms of internal representation? Evidence from studies of information retrieval time. Memory and Cognition 4:291–97. [aLWB]
(1980) Image and mind. Harvard University Press. [aLWB, DCD]
(1987) Seeing and imaging in the cerebral hemispheres: A computational approach. Psychological Review 94:148–75. [aLWB]
(1994) Image and brain. MIT Press. [aLWB]
Kosslyn, S. M., Cave, C. B., Provost, D. A. & von Gierke, S. M. (1988) Sequential processes in image generation. Cognitive Psychology 20:319–43. [aLWB]
Kosslyn, S. M., Thompson, W. L., Kim, I. J. & Alpert, N. M. (1995) Topographical representations of mental images in primary visual cortex. Nature 378:496–98. [aLWB]
Kowler, E., Anderson, E., Dosher, B. & Blasser, E. (1995) The role of attention in the programming of saccades. Vision Research 35:1897–916. [JH]
Lakoff, G. (1987) Women, fire, and dangerous things: What categories reveal about the mind. University of Chicago Press. [aLWB, GF, RWL]
(1988) Cognitive semantics. In: Meaning and mental representations, ed. U. Eco, M. Santambrogio & P. Violi. Indiana University Press. [aLWB]
(1990) The invariance hypothesis: Is abstract reason based on image-schemas? Cognitive Linguistics 1:39–74. [RWL]
Lakoff, G. & Johnson, M. (1980) Metaphors we live by. University of Chicago Press. [aLWB, RWL]
(1999) Philosophy in the flesh. Basic Books. [GF]
Lakoff, G. & Turner, M. (1989) More than cool reason: A field guide to poetic metaphor. University of Chicago Press. [aLWB, RWL]
Landauer, T. K. & Dumais, S. T. (1997) A solution to Plato’s problem: The Latent Semantic Analysis theory of acquisition, induction, and representation of knowledge. Psychological Review 104:211–40. [aLWB, AMG, TKL]
Landauer, T. K., Foltz, P. W. & Laham, D. (1998a) An introduction to Latent Semantic Analysis. Discourse Processes 25:259–84. [TKL]
Landauer, T. K., Laham, D. & Foltz, P. W. (1998b) Learning human-like knowledge by singular value decomposition: A progress report. Advances in Neural Information Processing Systems 10:45–51. [TKL]
Lang, P. J. (1979) A bio-informational theory of emotional imagery. Psychophysiology 16:495–512. [RAZ]
Langacker, R. W. (1986) An introduction to cognitive grammar. Cognitive Science 10:1–40. [aLWB]
(1987) Foundations of cognitive grammar, vol. 1. Theoretical prerequisites. Stanford University Press. [aLWB, GF, RWL]
(1990) Concept, image, and symbol: The cognitive basis of grammar. Mouton de Gruyter. [RWL]
(1991) Foundations of cognitive grammar, vol. 2. Descriptive application. Stanford University Press. [aLWB, GF, RWL]
(1993) Universals of construal. Proceedings of the Annual Meeting of the Berkeley Linguistics Society 19:447–63. [RWL]
(1995) Viewing in cognition and grammar. In: Alternative linguistics: Descriptive and theoretical modes, ed. P. W. Davis. John Benjamins. [RWL]
(1997) Constituency, dependency, and conceptual grouping. Cognitive Linguistics 8:1–32. [aLWB]
(1998) Conceptualization, symbolization, and grammar. In: The new psychology of language: Cognitive and functional approaches to language structure, ed. M. Tomasello. Erlbaum. [RWL]
Larbig, W., Montoya, P., Flor, H., Bilow, H., Weller, S. & Birbaumer, N. (1996) Evidence for a change in neural processing in phantom limb pain patients. Pain 67:275–83. [SE]
Lassaline, M. E. & Logan, G. D. (1993) Memory-based automaticity in the discrimination of visual numerosity. Journal of Experimental Psychology: Learning, Memory, and Cognition 19:561–81. [aLWB]
Lazarus, R. (1991) Emotion and adaptation. Oxford University Press. [rLWB, LCC]
LeDoux, J. E. (1996) The emotional brain: The mysterious underpinnings of emotional life. Simon & Schuster. [aLWB, LCC]
Lehrer, A. (1974) Semantic fields and lexical structure. American Elsevier. [aLWB]
Lehrer, A. & Kittay, E. F., eds. (1992) Frames, fields, and contrasts: New essays in lexical and semantic organization. Erlbaum. [aLWB]
Lehrer, K. (1989) Thomas Reid. Routledge. [aLWB]
Leslie, A. M. (1987) Pretence and representation: The origins of “theory of mind.” Psychological Review 94:412–26. [RO]
(1994) ToMM, ToBy, and Agency: Core architecture and domain specificity. In: Mapping the mind: Domain specificity in cognition and culture, ed. L. A. Hirschfeld & S. A. Gelman. Cambridge University Press. [RO]
Leslie, A. M. & German, T. P. (1995) Knowledge and ability in “theory of mind”: One-eyed overview of a debate. In: Mental simulation: Evaluations and applications, ed. M. Davies & T. Stone. Blackwell. [RO]
Levin, B. (1995) English verb classes and alternations. University of Chicago Press. [aLWB]
Levine, D. N., Warach, J. & Farah, M. J. (1985) Two visual systems in mental imagery: Dissociation of “What” and “Where” in imagery disorders due to bilateral posterior cerebral lesions. Neurology 35:1010–18. [aLWB]
Libet, B. (1982) Brain stimulation in the study of neuronal functions for conscious sensory experiences. Human Neurobiology 1:235–42. [aLWB]
(1985) Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences 8:529–66. [aLWB]
Locke, J. (1690/1959) An essay concerning human understanding, vols. I and II. 1st edition. Dover. (Original work published in 1690). [aLWB]
Logan, G. (1988) Toward an instance theory of automatization. Psychological Review 95:492–527. [aLWB]
(1995) Linguistic and conceptual control of visual spatial attention. Cognitive Psychology 28:103–74. [aLWB]
Logan, G. D. & Etherton, J. L. (1994) What is learned during automatization? The role of attention in constructing an instance. Journal of Experimental Psychology: Learning, Memory, and Cognition 20:1022–50. [aLWB]
Logan, G. D., Taylor, S. W. & Etherton, J. L. (1996) Attention in the acquisition and expression of automaticity. Journal of Experimental Psychology: Learning, Memory, and Cognition 22:620–38. [aLWB]
Lotman, Y. M. (1983) Asimmetrija i dialog. [Asymmetry and a dialogue]. Trudy po Znakovym Sistemam 16:15–30. [AT]
(1984) O semiosfere. [On a semiosphere]. Trudy po Znakovym Sistemam 17:5–23. [AT]
(1989) O roli sluchainykh faktorov v literaturnoi evoljutsii. [On the role of chance factors in the evolution of literature]. Trudy po Znakovym Sistemam 23:39–48. [AT]
Lowenthal, F. (1978) Logic of natural language and games at primary school. Revue de Phonétique Appliquée 46/47:133–40. [FL]
(1986) Non-verbal communication devices: Their relevance, their use and the mental processes involved. In: Pragmatics and education, ed. F. Lowenthal & F. Vandamme. Plenum Press. [FL]
(1992) Formalism for clinical observations. In: Advances in nonverbal communication, ed. F. Poyatos. John Benjamins. [FL]
Lowenthal, F. & Saerens, J. (1982) Utilisation de formalismes logiques pour l’examen d’enfants aphasiques. [Use of logical formalisms for the assessment of aphasic children]. Acta Neurologica Belgica 16:275–80. [FL]
(1986) Evolution of an aphasic child after the introduction of NVCDs. In: Pragmatics and education, ed. F. Lowenthal & F. Vandamme. Plenum Press. [FL]
Luria, A. R. (1969) Vyshije korkovyje funktsii celoveka i ih narushenija pri lokal’nyh porazenijah mozga. [Higher cortical functions in man and their disturbances in local brain lesions]. Moscow University Press. [AT]
(1973) Osnovy neiropsikhologii. [Foundations of neuropsychology]. Moscow University Press. [AT]
(1979) Jazyk i soznanije. [Language and consciousness]. Moscow University Press. [AT]
Lyons, W. (1980) Emotion. Cambridge University Press. [LCC]
MacDonald, M. C. & Just, M. A. (1989) Changes in activation level with negation. Journal of Experimental Psychology: Learning, Memory, and Cognition 15:633–42. [RAZ]
MacWhinney, B. (1998) The emergence of language. Erlbaum. [arLWB]
Magnuson, J. S. & Nusbaum, H. C. (1993) Talker differences and perceptual normalization. Journal of the Acoustical Society of America 93:2371. [aLWB]
Malt, B. C. (1994) Water is not H2O. Cognitive Psychology 27:41–70. [rLWB]
Malt, B. C., Sloman, S. A., Gennari, S., Shi, M. & Wang, Y. (1999) Knowing versus naming: Similarity and the linguistic categorization of artifacts. Journal of Memory and Language 40:230–62. [rLWB]
Mandelblit, N. (1997) Creativity and schematicity in grammar and translation: The cognitive mechanisms of blending. Doctoral dissertation, Cognitive Science, University of California, San Diego. [GF]
Mandler, G. (1975) Mind and body: Psychology of emotion and stress. W. W. Norton. [aLWB, LCC]
Mandler, J. M. (1992) How to build a baby: II. Conceptual primitives. Psychological Review 99:587–604. [aLWB]
Marcel, A. J. (1983a) Conscious and unconscious perception: Experiments on visual masking and word recognition. Cognitive Psychology 15:197–237. [aLWB]
(1983b) Conscious and unconscious perception: An approach to the relations between phenomenal experience and perceptual processes. Cognitive Psychology 15:238–300. [aLWB]
Marcus, G. F. (1998) Can connectionism save constructivism? Cognition 66:153–82. [ABM]
Markman, A. B. (1999) Knowledge representation. Erlbaum. [ABM]
Markman, A. B. & Gentner, D. (1993) Splitting the differences: A structural alignment view. Journal of Memory and Language 32:517–35. [KW-H]
Markman, E. M. (1989) Categorization and naming in children. MIT Press. [arLWB]
Markman, K. D., Gavinski, I., Sherman, S. J. & McMullen, M. N. (1993) The mental simulation of better and worse possible worlds. Journal of Experimental Social Psychology 29:87–109. [aLWB]
Marr, D. (1982) Vision. W. H. Freeman. [ABM]
Marschark, M., Katz, A. & Paivio, A. (1983) Dimensions of metaphors. Journal of Psycholinguistic Research 12:17–40. [BI]
Marslen-Wilson, W. D. & Tyler, L. K. (1980) The temporal structure of spoken language understanding. Cognition 8:1–71. [aLWB]
Martin, A., Haxby, J. V., Lalonde, F. M., Wiggs, C. L. & Ungerleider, L. G. (1995) Discrete cortical regions associated with knowledge of color and knowledge of action. Science 270:102–105. [aLWB]
Martin, A., Wiggs, C. L., Ungerleider, L. G. & Haxby, J. V. (1996) Neural correlates of category-specific knowledge. Nature 379:649–52. [aLWB]
Martin, M. & Jones, G. V. (1998) Generalizing everyday memory: Signs and handedness. Memory and Cognition 26:193–200. [aLWB]
Mauro, A. (1990) Observation de deux enfants polyhandicapés à l’aide de manipulations concrètes de systèmes formels. [Observation of two children with multiple disabilities using concrete manipulations of formal systems]. Post-graduat en sciences psychopédagogiques, Université de Mons-Hainaut. [FL]
McClelland, J. L., Rumelhart, D. E. & the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition: Vol. 2. Psychological and biological models. MIT Press. [aLWB]
McCloskey, M. (1983) Naive theories of motion. In: Mental models, ed. D. Gentner & A. L. Stevens. Erlbaum. [aLWB]
McCune, L. (1995) A normative study of representational play at the transition to language. Developmental Psychology 31:198–206. [LM]
McCune-Nicolich, L. (1981) The cognitive basis of relational words. Journal of Child Language 8:15–36. [LM]
McDermott, D. (1987) A critique of pure reason. Computational Intelligence 3:151–60. [aLWB]
McLaughlin, B. (1997) Classical constituents in Smolensky’s ICS architecture. In: Structures and norms in science, ed. M. L. Dalla Chiara, K. Doets, D. Mundici & J. van Benthem. Kluwer Academic Publishers. [ABM]
McNamara, T. P. & Sternberg, R. J. (1983) Mental models of word meaning. Journal of Verbal Learning and Verbal Behavior 22:449–74. [KW-H]
Medin, D. L. & Ross, B. H. (1989) The specific character of abstract thought: Categorization, problem-solving, and induction. In: Advances in the psychology of human intelligence, vol. 5, ed. R. J. Sternberg. Erlbaum. [aLWB]
Medin, D. L. & Schaffer, M. (1978) A context theory of classification learning. Psychological Review 85:207–38. [aLWB]
Melara, R. D. & Marks, L. E. (1990) Dimensional interactions in language processing: Investigating directions and levels of crosstalk. Journal of Experimental Psychology: Learning, Memory, and Cognition 16:539–54. [aLWB]
Melzack, R., Israel, R., Lacroix, R. & Schultz, G. (1997) Phantom limbs in people with congenital limb deficiency or amputation in early childhood. Brain 120:1603–20. [SE]
Miikkulainen, R. (1993) Subsymbolic natural language processing: An integrated model of scripts, lexicon, and memory. MIT Press. [SE]
Mill, J. S. (1965) On the permanent possibility of sensation. An examination of Sir William Hamilton’s philosophy. In: A source book in the history of psychology, ed. R. J. Herrnstein & E. G. Boring. Harvard University Press. [JH]
Miller, G. A. & Johnson-Laird, P. N. (1976) Language and perception. Harvard University Press. [aLWB]
Millikan, R. G. (1998) A common structure for concepts of individuals, stuffs, and real kinds: More Mama, more milk, and more mouse. Behavioral and Brain Sciences 21:55–100. [arLWB]
Mills, D. L., Coffey-Corina, S. & Neville, H. J. (1994) Variability in cerebral organization during primary language acquisition. In: Human behavior and the developing brain, ed. G. Dawson & K. W. Fischer. Guilford Press. [LM]
Milner, A. D. & Goodale, M. A. (1995) The visual brain in action. Oxford University Press. [aLWB]
Miner, A. & Reder, L. M. (1994) A new look at feeling-of-knowing: Its metacognitive role in regulating question answering. In: Metacognition: Knowing about knowing, ed. J. Metcalfe & A. P. Shimamura. Bradford Books/MIT Press. [RO]
Minsky, M. L. (1977) A framework for representing knowledge. In: The psychology of computer vision, ed. P. H. Winston. McGraw-Hill. [aLWB]
(1985) The society of mind. Simon & Schuster. [aLWB]
Mitchell, R. W. (1994) The evolution of primate cognition: Simulation, self-knowledge, and knowledge of other minds. In: Hominid culture in primate perspective, ed. D. Quiatt & J. Itani. University of Colorado Press. [RWM]
Mitchell, T. M., Keller, R. M. & Kedar-Cabelli, S. T. (1986) Explanation-based generalization: A unifying view. Machine Learning 1:47–80. [aLWB]
Mitchell, W. T. (1990) The logic of architecture: Design, computation, and cognition. MIT Press. [aLWB]
Montague, R. (1973) The proper treatment of quantification in ordinary language. In: Approaches to natural language, ed. J. Hintikka. Reidel. [JRH]
Morris, C. D., Bransford, J. D. & Franks, J. J. (1977) Levels of processing versus test-appropriate strategies. Journal of Verbal Learning and Verbal Behavior 16:519–33. [aLWB]
Morrow, D. G. & Clark, H. H. (1988) Interpreting words in spatial descriptions. Language and Cognitive Processes 3:275–91. [RAZ]
Morrow, D. G., Greenspan, S. L. & Bower, G. H. (1987) Accessibility and situation models in narrative comprehension. Journal of Memory and Language 26:165–87. [aLWB]
Muhlnickel, W., Elbert, T., Taub, E. & Flor, H. (1998) Reorganization of auditory cortex in tinnitus. Proceedings of the National Academy of Sciences 95:10340–43. [SE]
Murphy, G. L. (1996) On metaphoric representation. Cognition 60:173–204. [aLWB]
Murphy, G. L. & Medin, D. L. (1985) The role of theories in conceptual coherence. Psychological Review 92:289–316. [aLWB]
Narayanan, S. (1997) KARMA: Knowledge-based active representations for metaphor and aspect. Ph. D. dissertation, Computer Science Division, University of California, Berkeley. [GF]
Neisser, U. (1967) Cognitive psychology. Prentice-Hall. [arLWB]
(1976) Cognition and reality. Freeman. [aLWB]
Neisser, U. & Becklen, R. (1975) Selective looking: Attending to visually specified events. Cognitive Psychology 7:480–94. [JH]
Nelson, D. L., Walling, J. R. & McEvoy, C. L. (1979) Doubts about depth. Journal of Experimental Psychology: Human Learning and Memory 5:24–44. [aLWB]
Nelson, K. (1996) Language in cognitive development: Emergence of the mediated mind. Cambridge University Press. [arLWB]
Nelson, T. O. & Narens, L. (1990) Metamemory: A theoretical framework and new findings. In: The psychology of learning and motivation, vol. 26, ed. G. Bower. Academic Press. [RO]
Nersessian, N. J. (1992) In the theoretician’s laboratory: Thought experimenting as mental modeling. Proceedings of the Philosophical Association of America 2:291–301. [aLWB]
Newell, A. (1990) Unified theories of cognition. Harvard University Press. [aLWB]
Newell, A. & Simon, H. A. (1972) Human problem solving. Prentice-Hall. [aLWB]
Newton, N. (1996) Foundations of understanding. John Benjamins. [arLWB, AMG]
Nisbett, R. & Wilson, T. (1977) Telling more than we can know: Verbal reports on mental processes. Psychological Review 84:231–59. [aLWB]
Norman, D. A. (1976) Memory and attention: An introduction to human information processing. Wiley. [aLWB]
Norman, D. A., Rumelhart, D. E. & the LNR Research Group (1975) Explorations in cognition. Freeman. [aLWB]
Nosofsky, R. M. (1984) Choice, similarity, and the context theory of classification. Journal of Experimental Psychology: Learning, Memory, and Cognition 10:104–14. [aLWB]
Nueckles, M. & Jantezko, D. (1997) The role of semantic similarity in the comprehension of metaphor. In: Proceedings of the Nineteenth Annual Conference of the Cognitive Science Society, ed. M. Shafto & P. Langley. Erlbaum. [BI]
Nusbaum, H. C. & Morin, T. M. (1992) Paying attention to differences among talkers. In: Speech perception, production, and linguistic structure, ed. Y. Tohkura, Y. Sagisaka & E. Vatikiotis-Bateson. OHM Publishing. [aLWB]
Oaksford, M. & Chater, N. (1994) A rational analysis of the selection task as optimal data selection. Psychological Review 101:608–31. [aLWB]
Ohlsson, S. (1993) Abstract schemas. Educational Psychologist 28:51–66. [SO]
Ohlsson, S. & Lehtinen, E. (1997) The role of abstraction in the assembly of complex ideas. International Journal of Educational Research 27:37–48. [SO]
O’Regan, J. K. (1992) Solving the real mysteries of visual perception: The world as an outside memory. Canadian Journal of Psychology 46:461–88. [SE]
Paivio, A. (1971/1979) Imagery and verbal processes. 1971 edition, Holt, Rinehart & Winston. [aLWB] 1979 edition, Erlbaum. [BI]
(1986) Mental representations: A dual coding approach. Oxford University Press. [aLWB]
Palm, G. (1982) Neural assemblies: An alternative approach to artificial intelligence. Springer-Verlag. [DAS]
Palmer, S. E. (1978) Fundamental aspects of cognitive representation. In: Cognition and categorization, ed. E. Rosch & B. Lloyd. Erlbaum. [CF]
Pani, J. R. (1996) Mental imagery as the adaptationist views it. Consciousness and Cognition 5:288–326. [rLWB, WFB]
Panksepp, J. (1982) Toward a general psychobiological theory of emotions. Behavioral and Brain Sciences 5(3):407–69. [LCC]
(1998) Affective neuroscience. Oxford University Press. [LCC]
Papert, S. (1980) Mindstorms: Children, computers, and powerful ideas. Basic Books. [FL]
Parsons, L. M. (1987a) Imagined spatial transformations of one’s body. Journal of Experimental Psychology: General 116:172–91. [aLWB]
(1987b) Imagined spatial transformations of one’s hands and feet. Cognitive Psychology 19:178–241. [aLWB]
Peacocke, C. (1992) A study of concepts. MIT Press. [aLWB]
Pennington, N. & Hastie, R. (1992) Explaining the evidence: Tests of the story model for juror decision making. Journal of Personality and Social Psychology 62:189–206. [aLWB]
Pessoa, L., Thompson, E. & Noë, A. (1998) Finding out about filling-in: A guide to perceptual completion for visual science and the philosophy of perception. Behavioral and Brain Sciences 21(6):723–802. [aLWB]
Peterson, M. A. & Gibson, B. S. (1993) Shape recognition inputs to figure-ground organization in three-dimensional displays. Cognitive Psychology 3:383–429. [aLWB]
Piaget, J. (1954) The construction of reality in the child. Basic Books. [LM]
(1945/1962) Play, dreams, and imitation in childhood. W. W. Norton. [LM, RWM]
(1947/1972) The psychology of intelligence. Littlefield, Adams & Co. [RWM]
Pollack, J. (1990) Recursive distributed representations. Artificial Intelligence 46:77–105. [rLWB]
Posner, M. I. (1995) Attention in cognitive neuroscience: An overview. In: The cognitive neurosciences, ed. M. S. Gazzaniga. MIT Press. [aLWB]
Potter, M. C. & Faulconer, B. A. (1975) Time to understand pictures and words. Nature 253:437–38. [aLWB]
Potter, M. C., Kroll, J. F., Yachzel, B., Carpenter, E. & Sherman, J. (1986) Pictures in sentences: Understanding without words. Journal of Experimental Psychology: General 115:281–94. [aLWB]
Potter, M. C., Valian, V. V. & Faulconer, B. A. (1977) Representation of a sentence and its pragmatic implications: Verbal, imagistic, or abstract? Journal of Verbal Learning and Verbal Behavior 16:1–12. [aLWB]
Premack, D. & Woodruff, G. (1978) Does the chimpanzee have a theory of mind? Behavioral and Brain Sciences 4:515–26. [RO]
Price, H. H. (1953) Thinking and experience. Hutchinson’s Universal Library. [aLWB]
Prinz, J. J. (1997) Perceptual cognition: An essay on the semantics of thought. Doctoral dissertation, University of Chicago. [aLWB]
Prinz, J. J. & Barsalou, L. W. (in press a) Acquisition and productivity in perceptual symbol systems: An account of mundane creativity. In: Creativity and computation, ed. D. M. Peterson & T. Dartnall. MIT Press. [aLWB]
(in press b) Steering a course for embodied representation. In: Cognitive dynamics: Conceptual change in humans and machines, ed. E. Dietrich. Erlbaum.
Pulvermüller, F. (1999) Words in the brain’s language. Behavioral and Brain Sciences 22(2):253–336. [aLWB, DAS]
Putnam, H. (1960) Minds and machines. In: Dimensions of mind, ed. S. Hook. Collier Books. [aLWB]
Pylyshyn, Z. W. (1973) What the mind’s eye tells the mind’s brain: A critique of mental imagery. Psychological Bulletin 80:1–24. [aLWB]
(1978) Imagery and artificial intelligence. In: Perception and cognition: Issues in the foundations of psychology, ed. C. W. Savage. University of Minnesota Press. [aLWB]
(1981) The imagery debate: Analogue media versus tacit knowledge. Psychological Review 88:16–45. [aLWB]
(1984) Computation and cognition. MIT Press. [aLWB]
(1999) Is vision continuous with cognition? The case for cognitive impenetrability of visual perception. Behavioral and Brain Sciences 22(3). [aLWB]
Quine, W. V. O. (1953) Two dogmas of empiricism. In: From a logical point of view. Harvard University Press. [aLWB]
(1960) Word and object. MIT Press. [KW-H]
Quinn, P. C. & Eimas, P. D. (1996) Perceptual organization and categorization in young infants. In: Advances in infancy research, vol. 10, ed. C. Rovee-Collier & L. P. Lipsitt. Ablex. [BL]
Ramachandran, V. S. & Hirstein, W. (1998) The perception of phantom limbs. The D. O. Hebb lecture. Brain 121:1603–30. [SE]
Ramachandran, V. S. & Rogers-Ramachandran, D. (1996) Synaesthesia in phantom limbs induced with mirrors. Proceedings of the Royal Society of London B 263(1369):377–86. [SE]
Reder, L. M. & Ritter, F. E. (1992) What determines initial feeling of knowing? Familiarity with question terms, not with the answer. Journal of Experimental Psychology: Learning, Memory, and Cognition 18:435–52. [RO]
Reed, C. L. & Vinson, N. G. (1996) Conceptual effects on representational momentum. Journal of Experimental Psychology: Human Perception and Performance 22:839–50. [aLWB]
Regier, T. (1996) The human semantic potential: Spatial language and constrained connectionism. MIT Press. [aLWB]
Reid, T. (1785/1969) Essays on the intellectual powers of man. MIT Press. (Original work published in 1785). [aLWB]
(1764/1970) An inquiry into the human mind on the principles of common sense. University of Chicago Press. (Original work published in 1764). [aLWB]
Rinck, M., Hahnel, A., Bower, G. H. & Glowalla, U. (1997) The metrics of spatial situation models. Journal of Experimental Psychology: Learning, Memory, and Cognition 23:622–37. [aLWB]
Rips, L. J. (1989) Similarity, typicality, and categorization. In: Similarity and analogical reasoning, ed. S. Vosniadou & A. Ortony. Cambridge University Press. [aLWB]
Roediger, H. L., III. & McDermott, K. B. (1993) Implicit memory in normal human subjects. In: Handbook of neuropsychology, vol. 8, ed. F. Boller & J. Grafman. Elsevier. [aLWB]
Romney, A. K., Weller, S. C. & Batchelder, W. H. (1986) Culture as consensus: A theory of culture and informant accuracy. American Anthropologist 88:318–38. [aLWB]
Rosch, E. & Mervis, C. B. (1975) Family resemblances: Studies in the internal structure of categories. Cognitive Psychology 7:573–605. [rLWB]
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M. & Boyes-Braem, P. (1976) Basic objects in natural categories. Cognitive Psychology 8:382–439. [aLWB]
Rosenstein, M. & Cohen, P. R. (1998) Concepts from time series. In: Proceedings of the Fifteenth National Conference on Artificial Intelligence. AAAI Press. [aLWB]
Rösler, F., Heil, M. & Hennighausen, E. (1995) Distinct cortical activation patterns during long-term memory retrieval of verbal, spatial, and color information. Journal of Cognitive Neuroscience 7:51–65. [aLWB]
Ross, B. H. (1996) Category learning as problem solving. In: The psychology of learning and motivation: Advances in research and theory, vol. 35, ed. D. L. Medin. Academic Press. [aLWB]
Ross, B. H., Perkins, S. J. & Tenpenny, P. L. (1990) Reminding-based category learning. Cognitive Psychology 22:460–92. [aLWB]
Roth, J. D. & Kosslyn, S. M. (1988) Construction of the third dimension in mental imagery. Cognitive Psychology 20:344–61. [aLWB]
Rumelhart, D. E., McClelland, J. L. & the PDP Research Group (1986) Parallel distributed processing: Explorations in the microstructure of cognition: Vol. 1. Foundations. MIT Press. [aLWB]
Rumelhart, D. E. & Norman, D. A. (1988) Representation in memory. In: Stevens’ handbook of experimental psychology: Vol. 2. Learning and cognition, ed. R. C. Atkinson, R. J. Herrnstein, G. Lindzey & R. D. Luce. Wiley. [aLWB]
Rumelhart, D. E. & Ortony, A. (1978) The representation of knowledge in memory. In: Schooling and the acquisition of knowledge, ed. R. C. Anderson, R. J. Spiro & W. E. Montague. Erlbaum. [aLWB]
Rushworth, M. F. S. & Owen, A. M. (1998) The functional organisation of the lateral frontal cortex: Conjecture or conjuncture in the electrophysiology literature? Trends in Cognitive Sciences 2:46–53. [aLWB]
Russell, B. (1919a) Introduction to mathematical philosophy. George Allen & Unwin. [aLWB]
(1919b) On propositions: What they are and how they mean. Aristotelian Society Supplementary 2:1–43. (Reprinted in: The collected papers of Bertrand Russell, vol. 8: The philosophy of logical atomism and other essays, 1914–19, ed. J. G. Slater. George Allen & Unwin.) [aLWB]
Ryle, G. (1949) The concept of mind. Barnes and Noble. [aLWB]
Sachs, J. D. S. (1967) Recognition memory for syntactic and semantic aspects of connected discourse. Perception and Psychophysics 2:437–42. [aLWB]
(1974) Memory in reading and listening to discourse. Memory and Cognition 2:95–100. [aLWB]
Samuel, A. G. (1981) Phonemic restoration: Insights from a new methodology. Journal of Experimental Psychology: General 110:474–94. [aLWB]
(1987) Lexical uniqueness effects on phonemic restoration. Journal of Memory and Language 26:36–56. [aLWB]
(1997) Lexical activation produces potent phonemic percepts. Cognitive Psychology 32:97–127. [aLWB]
Sartre, J. (1948) The psychology of imagination. Philosophical Library. [LM]
Scaife, M. & Rogers, Y. (1996) External cognition: How do graphical representations work? International Journal of Human-Computer Studies 45:185–213. [CF]
Schacter, D. L. (1995) Implicit memory: A new frontier for neuroscience. In: The cognitive neurosciences, ed. M. S. Gazzaniga. MIT Press. [aLWB]
Schacter, D. L., McAndrews, M. P. & Moscovitch, M. (1988) Access to consciousness: Dissociations between implicit and explicit knowledge in neuropsychological syndromes. In: Thought without language, ed. L. Weiskrantz. Oxford University Press. [aLWB]
Schank, R. C. (1982) Dynamic memory. Cambridge University Press. [ABM]
Schank, R. C. & Abelson, R. P. (1977) Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Erlbaum. [aLWB]
Schneider, W. & Shiffrin, R. M. (1977) Controlled and automatic human information processing: I. Detection, search and attention. Psychological Review 84:1–66. [aLWB]
Schön, D. A. (1963) Displacement of concepts. Humanities Press. [BI]
(1979) Generative metaphor: A perspective on problem-setting in social policy. In: Metaphor and thought, ed. A. Ortony. Cambridge University Press. [BI]
Schunn, C. D., Reder, L. M., Nhouyvanisvong, A., Richards, D. R. & Stroffolino, P. J. (1997) To calculate or not calculate: A source activation confusion (SAC) model of problem-familiarity’s role in strategy selection. Journal of Experimental Psychology: Learning, Memory, and Cognition 23:1–27. [RO]
Schwab, E. C. (1981) Auditory and phonetic processing for tone analogs of speech. Doctoral dissertation, State University of New York at Buffalo. [aLWB]
Schwanenflugel, P. J. (1991) Why are abstract concepts hard to understand? In: The psychology of word meaning, ed. P. J. Schwanenflugel. Erlbaum. [rLWB, KW-H]
Schwanenflugel, P. J., Fabricius, W. V., Noyes, C. R., Bigler, K. D. & Alexander, J. M. (1994) The organization of mental verbs and folk theories of knowing. Journal of Memory and Language 33:376–95. [aLWB]
Schwanenflugel, P. J., Harnishfeger, K. K. & Stowe, R. W. (1988) Context availability and lexical decisions for abstract and concrete words. Journal of Memory and Language 27:499–520. [rLWB]
Schwanenflugel, P. J., Martin, M. & Takahashi, T. (in press) The organization of verbs of knowing: Evidence for cultural commonality and variation in theory of mind. Memory and Cognition. [aLWB]
Schwanenflugel, P. J. & Shoben, E. J. (1983) Differential context effects in the comprehension of abstract and concrete verbal materials. Journal of Experimental Psychology: Learning, Memory, and Cognition 9:82–102. [rLWB, KW-H]
Schwartz, R. (1981) Imagery - there’s more to images than meets the eye. In: Imagery, ed. N. Block. MIT Press. [aLWB]
Schyns, P. G., Goldstone, R. L. & Thibaut, J. P. (1998) The development of features in object concepts. Behavioral and Brain Sciences 21:1–54. [arLWB]
Searle, J. R. (1980) Minds, brains, and programs. Behavioral and Brain Sciences 3:417–24. [aLWB]
(1983) Intentionality. Cambridge University Press. [FA]
Seifert, L. S. (1997) Activating representations in permanent memory: Different benefits for pictures and words. Journal of Experimental Psychology: Learning, Memory, and Cognition 23:1106–21. [aLWB]
Shannon, B. (1987) The non-abstractness of mental representations. New Ideas in Psychology 5:117–26. [aLWB]
Shastri, L. & Ajjanagadde, V. (1993) From simple associations to systematic reasoning: A connectionist representation of rules, variables, and dynamic bindings using temporal asynchrony. Behavioral and Brain Sciences 16:417–94. [rLWB, ABM]
Shastri, L. & Grannes, D. (1996) A connectionist treatment of negation and inconsistency. Proceedings of the 18th Annual Conference of the Cognitive Science Society, University of California at San Diego. Erlbaum. [GF]
Shaver, P., Schwartz, J., Kirson, D. & O’Connor, C. (1987) Emotion knowledge: Further exploration of a prototype approach. Journal of Personality and Social Psychology 52:1061–86. [aLWB]
Shepard, R. N. & Cooper, L. A. (1982) Mental images and their transformations. Cambridge University Press. [aLWB]
Shepard, R. N. & Metzler, J. (1971) Mental rotation of three-dimensional objects. Science 171:701–703. [aLWB]
Shevell, S. K. & He, J. C. (1995) Interocular difference in Rayleigh matches of color normals. In: Colour vision deficiencies XII, ed. B. Dunn. Kluwer Academic Press. [aLWB]
Shiffrar, M. & Freyd, J. J. (1990) Apparent motion of the human body. Psychological Science 4:257–64. [aLWB]
(1993) Timing and apparent motion path choice with human body photographs. Psychological Science 6:379–84. [aLWB]
Shiffrin, R. M. (1988) Attention. In: Stevens’ handbook of experimental psychology: Vol. 2. Learning and cognition, ed. R. C. Atkinson, R. J. Herrnstein, G. Lindzey & R. D. Luce. Wiley. [aLWB]
Shiffrin, R. M. & Schneider, W. (1977) Controlled and automatic information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review 84:127–90. [aLWB]
Smith, E. E. & Medin, D. L. (1981) Categories and concepts. Harvard University Press. [aLWB]
Smith, L. B. & Heise, D. (1992) Perceptual similarity and conceptual structure. In: Advances in psychology - percepts, concepts, and categories: The representation and processing of information, ed. B. Burns. Elsevier. [aLWB]
Smith, L. B., Jones, S. S. & Landau, B. (1992) Count nouns, adjectives, and perceptual properties in children’s novel word interpretation. Developmental Psychology 28:273–86. [aLWB]
Smith, V. C., Pokorny, J. & Starr, S. J. (1976) Variability of color mixture data - 1. Interobserver variability in the unit coordinates. Vision Research 16:1087–94. [aLWB]
Smolensky, P. (1990) Tensor product variable binding and the representation of symbolic structures in connectionist systems. Artificial Intelligence 46:159–216. [rLWB, ABM]
(1995) Reply: Constituent structure and explanation in an integrated connectionist/symbolic cognitive architecture. In: Connectionism: Debates on psychological explanation, ed. C. Macdonald & G. Macdonald. Blackwell. [ABM]
Snippe, H. P. & Koenderink, J. J. (1992) Discrimination thresholds for channel-coded systems. Biological Cybernetics 66:543–51. [SE]
Snodgrass, J. G. (1984) Concepts and their surface representations. Journal of Verbal Learning and Verbal Behavior 23:3–22. [aLWB]
Snodgrass, J. G. & Vanderwart, M. (1980) A standardized set of 260 pictures: Norms for name agreement, image agreement, familiarity, and visual complexity. Journal of Experimental Psychology: Human Learning and Memory 6:174–215. [aLWB]
Solomon, K. O. (1997) The spontaneous use of perceptual representations during conceptual processing. Doctoral dissertation, University of Chicago. [aLWB]
Solomon, K. O. & Barsalou, L. W. (1999a) Grounding property verification in perceptual simulation. (Under review). [arLWB]
(1999b) Representing properties locally: Evidence for the perceptual grounding of concepts. (Under review). [arLWB]
Solomon, R. (1976) The passions: The myth and nature of human emotion. Anchor Press, Doubleday. [LCC]
Spalding, T. L. & Ross, B. H. (1994) Comparison-based learning: Effects of comparing instances during category learning. Journal of Experimental Psychology: Learning, Memory, and Cognition 20:1251–63. [aLWB]
Spelke, E. S., Breinlinger, K., Macomber, J. & Jacobson, K. (1992) Origins of knowledge. Psychological Review 99:605–32. [aLWB]
Sperling, G. (1960) The information available in brief visual presentations. Psychological Monographs 74, Whole no. 498. [JH]
Squire, L. R., Knowlton, B. & Musen, G. (1993) The structure and organization of memory. Annual Review of Psychology 44:453–95. [aLWB]
Stein, N. L. & Levine, L. J. (1990) Making sense out of emotion: The representation and use of goal-structured knowledge. In: Psychological and biological approaches to emotion, ed. N. L. Stein, B. Leventhal & T. Trabasso. Erlbaum. [aLWB]
Sweetser, E. (1990) From etymology to pragmatics: Metaphorical and cultural aspects of semantics. Cambridge University Press. [aLWB]
Talmy, L. (1983) How language structures space. In: Spatial orientation: Theory, research, and application, ed. H. Pick & L. Acredelo. Plenum Press. [aLWB]
(1985) Lexicalization patterns: Semantic structure in lexical forms. In: Language typology and syntactic description, vol. III: Grammatical categories and the lexicon, ed. T. Shopen. Cambridge University Press. [LM]
(1988) Force dynamics in language and cognition. Cognitive Science 12:49–100. [aLWB]
(1996) Fictive motion in language and “ception.” In: Language and space, ed. P. Bloom, M. Peterson, L. Nadel & M. Garrett. MIT Press. [GF, RWL]
Tarr, M. J. & Pinker, S. (1989) Mental rotation and orientation-dependence in shape recognition. Cognitive Psychology 21:233–82. [aLWB]
Tetlock, P. & Belkin, A., eds. (1996) Counterfactual thought experiments in world politics: Logical, methodological, and psychological perspectives. Princeton University Press. [GF]
Thagard, P. (1992) Conceptual revolutions. Princeton University Press. [aLWB]
The Neural Theory of Language Project. (n.d.) http://www.icsi.berkeley.edu/NTL/. [GF]
Theios, J. & Amhrein, P. C. (1989) Theoretical analysis of the cognitive processing of lexical and pictorial stimuli: Reading, naming, and visual and conceptual comparisons. Psychological Review 96:5–24. [aLWB]
Tomasello, M. (1992) First verbs: A case study of early grammatical development. Cambridge University Press. [aLWB]
Tomasello, M., ed. (1998) The new psychology of language: Cognitive and functional approaches to language structure. Erlbaum. [GF]
Tomasello, M. & Call, J. (1997) Primate cognition. Oxford University Press. [rLWB]
Tomasello, M., Kruger, A. & Ratner, H. (1993) Cultural learning. Behavioral and Brain Sciences 16:495–552. [aLWB]
Toomela, A. (1996) How culture transforms mind: A process of internalization. Culture and Psychology 2:285–305. [AT]
Tootell, R. B. H., Silverman, M. S., Switkes, E. & De Valois, R. L. (1982) Deoxyglucose analysis of retinotopic organization in primate striate cortex. Science 218:902–904. [aLWB]
Tourangeau, R. & Ripe, L. (1991) Understanding and appreciating metaphors. Cognition 11:203–44. [BI]
Trabasso, T. R. & Bower, G. H. (1968) Attention in learning: Theory and research. Wiley. [aLWB]
Trehub, A. (1977) Neuronal models for cognitive processes: Networks for learning, perception, and imagination. Journal of Theoretical Biology 65:141–69. [rLWB]
(1991) The cognitive brain. MIT Press. [rLWB]
Treisman, A. M. (1969) Strategies and models of selective attention. Psychological Review 76:282–99. [aLWB]
(1993) The perception of features and objects. In: Attention: Selection, awareness, and control: A tribute to Donald Broadbent, ed. A. Baddeley & L. Weiskrantz. Clarendon Press. [aLWB]
Treisman, A. M. & Geffen, G. (1967) Selective attention: Perception or response? Quarterly Journal of Experimental Psychology 19:1–17. [JH]
Turner, M. (1987) Death is the mother of beauty: Mind, metaphor, criticism. University of Chicago Press. [RWL]
(1996) The literary mind. Oxford University Press. [aLWB]
Turner, M. & Fauconnier, G. (1995) Conceptual integration and formal expression. Metaphor and Symbolic Activity 10(3):183–204. [BI]
(1998) Conceptual integration in counterfactuals. In: Discourse and cognition, ed. J.-P. Koenig. CSLI Publications. [GF]
Tversky, B. & Hemenway, K. (1985) Objects, parts, and categories. Journal of Experimental Psychology: General 113:169–93. [aLWB]
Tye, M. (1991) The imagery debate. MIT Press. [aLWB]
(1993) Image indeterminacy. In: Spatial representation, ed. N. Eilen, R. McCarthy & B. Brewer. Blackwell. [aLWB]
Tzourio, N. & Mellet, E. (1998) Imagerie des fonctions cognitives. [Imaging of cognitive functions]. Direction des Sciences du Vivant, lettre d’information scientifique 16. URL: http://www-dsv.cea.fr/dsv/ceabio/16/tzourio16—fr.htm. [FL]
Ullman, S. (1996) High-level vision. MIT Press. [ABM]
Ungerleider, L. G. & Mishkin, M. (1982) Two cortical visual systems. In: Analysis of visual behavior, ed. D. J. Ingle, M. A. Goodale & R. W. J. Mansfield. MIT Press. [aLWB]
Van Dijk, T. A. & Kintsch, W. (1983) Strategies of discourse comprehension. Academic. [aLWB]
Van Essen, D. C. (1985) Functional organization of primate visual cortex. In: Cerebral cortex, ed. A. Peters & E. G. Jones. Plenum. [aLWB]
Van Gelder, T. (1990) Compositionality: A connectionist variation on a classical theme. Cognitive Science 14:355–84. [rLWB]
Vaughn, H. G. & Kurtzberg, D. (1992) Electrophysiologic indices of human brain maturation and cognitive development. In: Developmental behavioral neurosciences: The Minnesota Symposium, vol. 24, ed. M. R. Gunnar & C. A. Nelson. Erlbaum. [LM]
Velmans, M. (1991) Is human information processing conscious? Behavioral and Brain Sciences 14:651–726. [aLWB]
Von Eckardt, B. & Potter, M. C. (1985) Clauses and the semantic representation of words. Memory and Cognition 13:371–76. [aLWB]
Vosniadou, S. & Brewer, W. F. (1992) Mental models of the earth: A study of conceptual change in childhood. Cognitive Psychology 24:535–85. [WFB]
(1994) Mental models of the day/night cycle. Cognitive Science 18:123–83. [WFB]
Vygotsky, L. S. (1925/1982) Soznanije kak problema psikhologii povedenija. [Consciousness as a problem for a psychology of behaviour]. In: Sobranije sochinenii, Tom 1 [Collected works, vol. 1], ed. L. S. Vygotsky. Pedagogika. [AT]
(1934/1996) Myshlenije i rech. [Thinking and speech]. Labirint. [AT]
(1962) Thought and language. MIT Press. [FL]
Warren, R. M. (1970) Perceptual restoration of missing speech sounds. Science 167:392–93. [aLWB]
Warrington, E. K. & Shallice, T. (1984) Category-specific semantic impairments. Brain 107:829–54. [aLWB]
Watson, J. B. (1913) Psychology as the behaviorist views it. Psychological Review 20:158–77. [aLWB]
Wattenmaker, W. D. & Shoben, E. J. (1987) Context and the recallability of concrete and abstract sentences. Journal of Experimental Psychology: Learning, Memory, and Cognition 13:140–50. [rLWB]
Weiskrantz, L. (1986) Blindsight: A case study and implications. Oxford University Press. [aLWB]
Weisstein, N. & Harris, C. S. (1974) Visual detection of line segments: An object-superiority effect. Science 186:752–55. [aLWB]
(1980) Masking and unmasking of distributed representations in the visual system. In: Visual coding and adaptability, ed. C. S. Harris. Erlbaum. [aLWB]
Weisstein, N., Montalvo, F. S. & Ozog, G. (1972) Differential adaptation to gratings blocked by cubes and gratings blocked by hexagons: A test of the neural symbolic activity hypothesis. Psychonomic Science 27:89–91. [aLWB]
Weisstein, N. & Wong, E. (1986) Figure-ground organization and the spatial and temporal responses of the visual system. In: Pattern recognition by humans and machines: Vol. 2. Visual perception, ed. E. C. Schwab & H. C. Nusbaum. Academic Press. [aLWB]
Wellman, H. & Bartsch, K. (1988) Young children’s reasoning about beliefs. Cognition 30:239–77. [RO]
Wellman, H. M. & Gelman, S. A. (1988) Children’s understanding of the non-obvious. In: Advances in the psychology of human intelligence, ed. R. Sternberg. Erlbaum. [aLWB]
Wells, A. J. (1998) Turing’s analysis of computation and theories of cognitive architecture. Cognitive Science 22(3):269–94. [AJW]
Wells, G. L. & Gavinski, I. (1989) Mental simulation of causality. Journal of Personality and Social Psychology 56:161–69. [aLWB]
Werner, H. & Kaplan, B. (1963) Symbol formation. Wiley. [LM]
Wiemer-Hastings, K. (1998) Abstract noun classification: Using a neural network to match word context and word meaning. Behavior Research Methods, Instruments and Computers 30:264–71. [rLWB]
Wiemer-Hastings, K. & Graesser, A. C. (1998) Contextual representation of abstract nouns: A neural network approach. Proceedings of the 20th Annual Conference of the Cognitive Science Society:1036–42. Erlbaum. [rLWB]
Wilson, S. G., Rinck, M., McNamara, T. P., Bower, G. H. & Morrow, D. G. (1993) Mental models and narrative comprehension: Some qualifications. Journal of Memory and Language 32:141–54. [aLWB]
Wimmer, H. & Perner, J. (1983) Beliefs about beliefs: Representation and constraining function of wrong beliefs in young children’s understanding of deception. Cognition 13:103–28. [RO]
Winograd, T. & Flores, F. (1987) Understanding computers and cognition: A new foundation for design. Addison-Wesley. [aLWB]
Wittgenstein, L. (1953) Philosophical investigations. Macmillan. [arLWB]
Wolfe, J. (1998) How quickly they forget: A modest alternative to blinks and blindness. Abstracts of the Psychonomic Society 3:53. [JH]
Wu, L. L. (1995) Perceptual representation in conceptual combination. Doctoral dissertation, University of Chicago. [aLWB]
Wu, L. L. & Barsalou, L. W. (1999) Grounding property listing in perceptual simulation (under review). [arLWB]
Yeh, W. & Barsalou, L. W. (1999a) A review of situation effects in cognition (in preparation). [aLWB]
(1999b) Situation effects in conceptual processing (in preparation). [aLWB]
Younger, B. A. & Cohen, L. B. (1985) How infants form categories. In: The psychology of learning and motivation, vol. 19, ed. G. H. Bower. Academic Press. [aLWB]
Zajonc, R. (1980) Feeling and thinking: Preferences need no inferences. American Psychologist 39:151–75. [LCC]
Zatorre, R. J., Halpern, A. R., Perry, D. W., Meyer, E. & Evans, A. C. (1996) Hearing in the mind’s ear: A PET investigation of musical imagery and perception. Journal of Cognitive Neuroscience 8:29–46. [aLWB]
Zeki, S. (1993) A vision of the brain. Blackwell Scientific. [aLWB]
Zwaan, R. A. (1996) Processing narrative time shifts. Journal of Experimental Psychology: Learning, Memory, and Cognition 22:1196–207. [RAZ]
(in press) Embodied cognition, perceptual symbols, and situation models. Discourse Processes. [RAZ]
Zwaan, R. A. & Radvansky, G. A. (1998) Situation models in language comprehension and memory. Psychological Bulletin 123:162–85. [RAZ]