
Advanced Review

Attractor networks

Edmund T. Rolls∗

An attractor network is a network of neurons with excitatory interconnections that can settle into a stable pattern of firing. This article shows how attractor networks in the cerebral cortex are important for long-term memory, short-term memory, attention, and decision making. The article then shows how the random firing of neurons can influence the stability of these networks by introducing stochastic noise, how these effects are involved in probabilistic decision making, and how they are implicated in some disorders of cortical function such as poor short-term memory and attention, schizophrenia, and obsessive–compulsive disorder. © 2009 John Wiley & Sons, Ltd. WIREs Cogn Sci 2010 1 119–134

An attractor network is a network of neurons with excitatory interconnections that can settle into a stable pattern of firing.1–4 This article shows how attractor networks in the cerebral cortex are important for long-term memory, short-term memory, attention, and decision making. The article then shows how the random firing of neurons can influence the stability of these networks by introducing stochastic noise, how these effects are involved in probabilistic decision making, and how they are implicated in some disorders of cortical function such as poor short-term memory and attention, schizophrenia, and obsessive–compulsive disorder. Each memory pattern stored in an attractor network by associative synaptic modification consists of a subset of the neurons firing. These patterns could correspond to memories, perceptual representations, or thoughts.

ATTRACTOR NETWORK ARCHITECTURE, AND THE STORAGE OF MEMORIES

The architecture of an attractor or autoassociation network is shown in Figure 1. External inputs e_i activate the neurons in the network, and produce firing y_i, where i refers to the i'th neuron. The neurons are connected by recurrent collateral synapses w_ij, where j refers to the j'th synapse on a neuron. By these synapses an input pattern on e_i is associated with itself, and thus the network is referred to as an autoassociation network.

Additional Supporting Information may be found in the online version of this article.
∗Correspondence to: [email protected]
Oxford Centre for Computational Neuroscience, Oxford, UK.

DOI: 10.1002/wcs.1

Because there is positive feedback via the recurrent collateral connections, the network can sustain persistent firing. These synaptic connections are assumed to build up by an associative (Hebbian) learning mechanism5 (according to which the more two neurons are simultaneously active, the stronger the neural connection becomes). The associative learning rule for the change in the synaptic weight is as shown in Eq. (1):

δw_ij = k · y_i · y_j    (1)

where k is a constant, y_i is the activation of the dendrite (the postsynaptic term), y_j is the presynaptic firing rate, and δw_ij is the change of synaptic weight. The inhibitory interneurons are not shown. They receive inputs from the pyramidal cells, and make negative feedback connections onto the pyramidal cells to control their activity.

In order for biologically plausible autoassociative networks to store information efficiently, heterosynaptic long-term depression (LTD) (as well as long-term potentiation) is required.4,6–10 This type of LTD helps to remove the correlations between the training patterns that arise because the neurons have positive-only firing rates. The effect of the LTD can be to enable the effect of the mean presynaptic firing rate to be subtracted from the patterns.4,6,7,9,10
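As an illustration of Eq. (1) together with the mean presynaptic rate subtraction that heterosynaptic LTD provides, here is a minimal numpy sketch; the function name and the choice k = 1 are mine, not from the article:

```python
import numpy as np

def store_patterns(patterns, k=1.0):
    """Build a recurrent weight matrix by associative (Hebbian) learning.

    Subtracting the mean presynaptic rate <y> implements the effect of
    heterosynaptic LTD described in the text: it removes the correlations
    introduced by positive-only firing rates.
    """
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for y in patterns:
        pre = y - y.mean()          # LTD: subtract mean presynaptic rate
        w += k * np.outer(y, pre)   # delta_w_ij = k * y_i * (y_j - <y>)
    np.fill_diagonal(w, 0.0)        # no self-connections
    return w
```

With this rule, two neurons that are coactive in a pattern end up with a positive weight between them, while a neuron active in a pattern acquires a negative weight from the silent neurons of that pattern.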

Volume 1, January/February 2010 © 2009 John Wiley & Sons, Ltd. 119

FIGURE 1 | The architecture of an autoassociative or attractor neural network (see text). [Figure labels: external input e_i; dendritic activation h_i; output firing y_i; recurrent collateral synapses w_ij carrying presynaptic firing y_j.]

RECALL

During recall, the external input e_i is applied, and produces output firing, operating through the nonlinear activation function described below. The firing is fed back by the recurrent collateral axons shown in Figure 1 to produce activation of each output neuron through the modified synapses on each output neuron. The activation h_i produced by the recurrent collateral effect on the i'th neuron is the sum of the activations produced in proportion to the firing rate of each axon y_j operating through each modified synapse w_ij, that is,

h_i = Σ_j y_j w_ij    (2)

where Σ_j indicates that the sum is over the C input axons to each neuron, indexed by j. This is a dot or inner product computation between the input firing vector y_j (j = 1, C) and the synaptic weight vector w_ij (j = 1, C) on neuron i. It is because this is a vector similarity operation, closely related to a correlation, between the input vector and the synaptic weight vector that many of the properties of attractor networks arise, including completion of a memory when only a partial retrieval cue is applied.4

The output firing y_i is a nonlinear function of the activation produced by the recurrent collateral effect (internal recall) and by the external input e_i:

y_i = f(h_i + e_i)    (3)

The activation function should be nonlinear, and may be, for example, binary threshold, linear threshold, sigmoid, etc. The threshold at which the activation function operates is set in part by the effect of the inhibitory neurons in the network (not shown in Figure 1). The threshold prevents the positive feedback inherent in the operation of attractor networks from leading to runaway neuronal firing, and allows optimal retrieval of a memory without interference from other memories stored in the synaptic weights.2,4
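The recall computation of Eqs. (2) and (3) with a binary threshold activation function can be sketched as follows; the threshold value, the synchronous update scheme, and the use of the cue only to set the initial state are illustrative assumptions, not the article's specific implementation:

```python
import numpy as np

def recall(w, cue, theta=0.0, n_iter=10):
    """Iterative recall in a binary-threshold attractor network.

    The external cue e_i sets the initial firing; thereafter the
    recurrent collaterals compute h_i = sum_j y_j w_ij (Eq. 2) and the
    binary threshold f gives the new firing y_i (internal recall),
    completing the stored pattern from a partial cue.
    """
    y = (cue > 0).astype(float)
    for _ in range(n_iter):
        h = w @ y                      # recurrent collateral activation
        y = (h > theta).astype(float)  # nonlinear (binary threshold) output
    return y
```

Run against weights built with a mean-subtracted Hebbian rule, a partial cue containing only part of a stored pattern settles onto the whole pattern, which is the completion property discussed below.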

FIGURE 2 | Energy landscape of an attractor network. There are two types of stable fixed point: a spontaneous state with a low firing rate, and one or more persistent states with high firing rates in which the neurons keep firing. Each one of the high firing rate attractor states can implement a different memory. [Axes: potential (vertical) against firing rate, with the spontaneous and persistent states marked.]

The recall state (which could be used to implement short-term memory, or memory recall) in an attractor network can be thought of as the local minimum in an energy landscape,1 where the energy would be defined as

E = −(1/2) Σ_{i,j} w_ij (y_i − <y>)(y_j − <y>)    (4)

where y_i is the firing of neuron i, and <y> indicates the average firing rate. The intuition here is that if both y_i and y_j are above their average rates, and are exciting each other through a strong synapse, then the firing will tend to be stable and maintained, resulting in a low energy state that is stable. Although this energy analysis applies formally only to a fully connected network with symmetric synaptic strengths between neurons (which would be produced by an associative learning rule), it has been shown that the same general properties apply if the connectivity is diluted and becomes asymmetric.7,9,11–13
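Under these definitions the energy of Eq. (4) can be computed directly. In this sketch (function name and test pattern are mine, not from the article) a stored pattern sits at lower energy than an unrelated state, which is what makes it a stable minimum:

```python
import numpy as np

def energy(w, y):
    """Energy of a network state (Eq. 4).

    E = -1/2 * sum_ij w_ij (y_i - <y>)(y_j - <y>); stored patterns,
    whose above-average neurons excite each other through strong
    synapses, sit in the minima of this landscape.
    """
    d = y - y.mean()       # deviations from the average firing rate <y>
    return -0.5 * d @ w @ d
```

For weights trained on a pattern p with a symmetric associative rule, energy(w, p) is negative, while an unrelated state gives a higher (less negative) value.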

Autoassociation attractor systems have two types of stable fixed point: a spontaneous state with a low firing rate, and one or more persistent states with high firing rates in which the neurons keep firing (Figure 2). Each one of the high firing rate attractor states can implement a different memory. When the system is moved to a position in the space by an external retrieval cue stimulus, it will move to the closest stable attractor state. The area in the space within which the system will move to a stable attractor state is called its basin of attraction. This is the process involved in completion of a whole memory from a partial retrieval cue.

    Properties of Attractor Networks

Completion

An important and useful property of these attractor networks is that they complete an incomplete input vector, allowing recall of a whole memory from a small fraction of it. The memory recalled in response to a fragment is that stored in the memory that is closest in pattern similarity (as measured by the dot product, or correlation). Because the recall is iterative and progressive, the recall can be perfect.

Short-Term Memory

An autoassociation or attractor memory is useful not only as a long-term memory (for example, the memory for particular past episodes; see below), but also as a short-term memory, in which iterative processing round the recurrent collateral loop keeps a representation active until another input cue is received. This is widely used in the brain, and indeed is a prototypical property of cerebral neocortex (see below).

Graceful Degradation or Fault Tolerance

If the synaptic weight vector w_i on each neuron has synapses missing (e.g., during development), or loses synapses (e.g., with brain damage or ageing), then the activation h_i is still reasonable, because h_i is the dot product (correlation) of the input firing rate vector and the weight vector. The same argument applies if whole input axons are lost. If an output neuron is lost, then the network cannot itself compensate for this, but the next network in the brain is likely to be able to generalise or complete if its input vector has some elements missing, as would be the case if some output neurons of a preceding autoassociation network were damaged.

Storage Capacity, and the Sparseness of the Representation

Hopfield, using the approach of statistical mechanics, showed that in a fully connected attractor network with fully distributed binary representations (e.g., for any one pattern, half the neurons in the high firing state of 1, and the other half in the low firing state of 0 or −1), the number of stable attractor states, corresponding to the number of memories that can be successfully retrieved, is approximately 0.14C, where C is the number of connections on each neuron from the recurrent collateral connections.1–3

We (Treves and Rolls) have performed quantitative analyses of the storage and retrieval processes in attractor networks.7,9,11,12 We have extended previous formal models of autoassociative memory [see Ref 2] by analysing a network with graded response units, so as to represent more realistically the continuously variable rates at which neurons fire, and with incomplete connectivity.7,11 We have found that in general the maximum number p_max of firing patterns that can be (individually) retrieved is proportional to the number C^RC of (associatively) modifiable recurrent collateral synapses per neuron, by a factor that increases roughly with the inverse of the sparseness a of the neuronal representation.a The neuronal population sparseness a of the representation can be measured by extending the binary notion of the proportion of neurons that are firing to any one stimulus or event as

a = (Σ_{i=1..N} r_i / N)² / (Σ_{i=1..N} r_i² / N)    (5)

where r_i is the firing rate of the i'th neuron in the set of N neurons. The sparseness ranges from 1/N, when only one of the neurons responds to a particular stimulus (a local or grandmother cell representation), to a value of 1.0, attained when all the neurons are responding to a given stimulus. Approximately,

p_max ≅ [C^RC / (a ln(1/a))] k    (6)

where k is a factor that depends weakly on the detailed structure of the rate distribution, on the connectivity pattern, etc., but is roughly in the order of 0.2–0.3.7

For example, for C^RC = 12,000 (the number of recurrent collateral synapses on a hippocampal CA3 neuron in the rat14) and a = 0.02, p_max is calculated to be approximately 36,000. This analysis emphasises the utility of having a sparse representation in the hippocampus, for this enables many different memories to be stored.15 The sparseness a in this equation is strictly the population sparseness.7,16 The population sparseness a_p would be measured by measuring the distribution of firing rates of all neurons to a single stimulus at a single time. The single neuron sparseness or selectivity a_s would be measured by the distribution of firing rates to a set of stimuli, which would take a long time. The selectivity or sparseness a_s of a single neuron measured across a set of stimuli often takes a similar value to the population sparseness a in the brain, and does so if the tuning profiles of the neurons to the set of stimuli are uncorrelated.16 These concepts are elucidated by Franco et al.16 These quantitative analyses have been confirmed numerically.13
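Eqs. (5) and (6) can be checked numerically. In this sketch the value k = 0.24 is an assumption within the 0.2–0.3 range quoted above; with it, the CA3 example reproduces a capacity of roughly 36,000:

```python
import numpy as np

def population_sparseness(rates):
    """Population sparseness a of a firing rate vector (Eq. 5)."""
    r = np.asarray(rates, float)
    n = r.size
    return (r.sum() / n) ** 2 / (np.square(r).sum() / n)

def capacity(c_rc, a, k=0.24):
    """Approximate storage capacity p_max (Eq. 6).

    c_rc is the number of modifiable recurrent collateral synapses per
    neuron; a is the population sparseness; k = 0.24 is an assumed value
    within the 0.2-0.3 range quoted in the text.
    """
    return c_rc / (a * np.log(1.0 / a)) * k
```

A local ("grandmother cell") rate vector with one active neuron out of N gives a = 1/N, a uniform vector gives a = 1, and capacity(12000, 0.02) is in the region of 36,000, matching the CA3 calculation in the text.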

The Dynamics of the Recurrent Attractor Network—Fast Recall

The analysis described above of the capacity of a recurrent network considered steady state conditions of the firing rates of the neurons. The question arises of how quickly the recurrent network would settle into its final state. If these settling processes took in the order of hundreds of ms, they would be much too slow to contribute usefully to cortical activity, whether in the hippocampus or the neocortex.4,10,17–19

It has been shown that if the neurons are treated not as McCulloch–Pitts neurons, which are simply ‘updated’ at each iteration, or cycle of time steps (and assume the active state if the threshold is exceeded), but instead are analysed and modelled as ‘integrate-and-fire’ neurons in real continuous time, then the network can effectively ‘relax’ into its recall state very rapidly, in one or two time constants of the synapses.9,10,20,21 This corresponds to perhaps 20 ms in the brain. One factor in this rapid dynamics of autoassociative networks with brain-like ‘integrate-and-fire’ membrane and synaptic properties is that, with some spontaneous activity, some of the neurons in the network are close to threshold already before the recall cue is applied; hence some of the neurons are very quickly pushed by the recall cue into firing, so that information starts to be exchanged very rapidly (within 1–2 ms of brain time) through the modified synapses by the neurons in the network. The progressive exchange of information starting early on within what would otherwise be thought of as an iteration period (of perhaps 20 ms, corresponding to a neuronal firing rate of 50 spikes/s) is the mechanism accounting for rapid recall in an autoassociative neuronal network made biologically realistic in this way. Further analysis of the fast dynamics of these networks if they are implemented in a biologically plausible way with ‘integrate-and-fire’ neurons is provided in Section 7.7 of Rolls and Deco,10 in Appendix A5 of Rolls and Treves,9 by Treves,20 by Panzeri et al.,19 and by Rolls.4
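An ‘integrate-and-fire’ neuron of the kind referred to here integrates its input current in continuous time and emits a spike when the membrane potential crosses threshold. A minimal leaky integrate-and-fire sketch follows; all parameter values and the function name are illustrative assumptions, not taken from the models cited:

```python
def lif_spike_times(i_syn, tau=0.02, v_thresh=1.0, v_reset=0.0,
                    dt=0.001, t_max=0.2):
    """Minimal leaky integrate-and-fire neuron (illustrative sketch).

    Integrates dv/dt = (-v + i_syn) / tau with Euler steps of size dt;
    when v crosses v_thresh a spike time is recorded and v is reset.
    Returns the list of spike times in seconds.
    """
    v, t, spikes = 0.0, 0.0, []
    while t < t_max:
        v += dt * (-v + i_syn) / tau   # leaky integration of the input
        if v >= v_thresh:              # threshold crossing -> spike
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes
```

With a subthreshold input the membrane settles below threshold and the neuron stays silent; with a suprathreshold input it fires regularly, the rate rising with the input. Neurons sitting just below threshold before a cue arrives are the ones pushed into firing within a millisecond or two, as described above.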

Continuous Attractor Networks

So far, we have considered attractor networks in which each memory pattern stored in the network is a discrete pattern. An attractor network trained with patterns that are continuous with each other can maintain the firing of its neurons to represent any location along a continuous physical dimension such as spatial position, head direction, etc., and is termed a continuous attractor neural network. It has the same architecture as a discrete attractor network, but uses the excitatory recurrent collateral connections between the neurons to reflect the distance between the neurons in the state space (e.g., head direction space, or the place of the animal in an environment). These networks can maintain the bubble of neural activity constant for long periods wherever it is started to represent the current state (head direction, position, etc.) of the animal, and are likely to be involved in many aspects of spatial processing and memory, including spatial vision.4,22–33 Global inhibition is used to keep the number of neurons in a bubble or packet of actively firing neurons relatively constant, and to help to ensure that (in typical applications) there is only one activity packet (but see Ref 31). Attractor networks can operate with both continuous and discrete patterns, and this is likely to be important in episodic memory, in which typically a spatial position (e.g., a place) and discrete object-related information are components.34
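One common way to realise the distance-dependent excitation plus global inhibition described above is a ring of neurons whose mutual excitation falls off with the angular distance between their preferred states. This is a generic sketch with assumed parameter values, not one of the specific models cited:

```python
import numpy as np

def ring_weights(n, sigma=0.5, j_excit=1.0, j_inhib=0.3):
    """Recurrent weights for a continuous (ring) attractor sketch.

    Excitation falls off as a Gaussian of the angular distance between
    the neurons' preferred states, with uniform global inhibition
    subtracted; this lets a single 'bubble' of activity settle anywhere
    on the ring. Parameter values are illustrative assumptions.
    """
    theta = 2 * np.pi * np.arange(n) / n
    d = np.abs(theta[:, None] - theta[None, :])
    d = np.minimum(d, 2 * np.pi - d)          # angular distance on the ring
    w = j_excit * np.exp(-d**2 / (2 * sigma**2)) - j_inhib
    np.fill_diagonal(w, 0.0)                  # no self-connections
    return w
```

Because the weights depend only on distance in the state space, not on the absolute position, every position on the ring is (approximately) equally stable, which is what allows the bubble to represent a continuous variable such as head direction.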

Attractor Networks for Short-Term Memory

Pyramidal neurons in the cerebral cortex have a relatively high density of excitatory connections to each other within a local area of 1–3 mm.4,35,36

These local recurrent collateral excitatory connections provide a positive-feedback mechanism (which is kept under control by gamma-aminobutyric acid (GABA) inhibitory interneurons) that enables a set of neurons to maintain their activity for many seconds to implement a short-term memory.37 Each memory is formed by the set of the neurons in the local cortical network that were coactive when the memory was formed, resulting in strengthened excitatory connections between that set of neurons through the process of long-term potentiation, which is a property of these recurrent collateral connections.

Attractor networks appear to operate in the prefrontal cortex, an area that is important in attention and short-term memory, as shown, for example, by firing in the delay period of a short-term memory task.4,38–43 Short-term memory is the ability to hold information on-line during a short time period.39,43

It has been proposed that whereas it is a property of all cortical areas that they have an ability to maintain neuronal activity by the attractor properties implemented by the recurrent collateral connections, the prefrontal cortex has a special role in short-term memory because it can act as an off-line store, as follows (see Figure 3).4 First, we note that a perceptual brain area such as the inferior temporal cortex must respond to every new incoming set of objects in the world so that we can see them, and this is inconsistent with maintaining their firing in an attractor state that represents an object or objects seen seconds ago. For this reason, for a short-term memory to be maintained during periods in which new stimuli are to be perceived, there must be separate networks for the perceptual and short-term memory functions. Indeed, two coupled networks, one in the inferior temporal visual cortex for perceptual functions, and another in the prefrontal cortex for maintaining the short-term memory, for example, when intervening stimuli are being shown, provide a precise model of the interaction of perceptual and short-term memory systems44,45 (see Figure 3).

FIGURE 3 | A short-term memory autoassociation network in the prefrontal cortex could hold active a working memory representation by maintaining its firing in an attractor state. The prefrontal module would be loaded with the to-be-remembered stimulus by the posterior module (in the temporal or parietal cortex) in which the incoming stimuli are represented. Backprojections from the prefrontal short-term memory module to the posterior module would enable the working memory to be unloaded, to, for example, influence ongoing perception (see text). RC, recurrent collateral connections. [The figure shows an inferior temporal cortex (IT) module and a prefrontal cortex (PF) module, each with its own recurrent collaterals (RC), coupled between input and output by forward and backward connections.]

This model shows how a prefrontal cortex attractor (autoassociation) network could be triggered by a sample visual stimulus represented in the inferior temporal visual cortex in a delayed match to sample task, and could keep this attractor active during a memory interval in which intervening stimuli are shown. Then, when the sample stimulus reappears in the task as a match stimulus, the inferior temporal cortex module shows a large response to the match stimulus, because it is activated both by the visual incoming match stimulus, and by the consistent backprojected memory of the sample stimulus still being represented in the prefrontal cortex memory module (see Figure 3). The prefrontal attractor can be stimulated into activity by the first stimulus when it is inactive, but once in its high firing rate attractor state, it is relatively stable because of the internal positive feedback, and is not likely to be disturbed by further incoming stimuli. The internal recurrent connections must be stronger than the feedforward and feedback connections between the two cortical areas for this to work.4,44,45

This computational model makes it clear that in order for ongoing perception, implemented by posterior cortex (parietal and temporal lobe) networks, to occur unhindered, there must be a separate set of modules that is capable of maintaining a representation over intervening stimuli. This is the fundamental understanding offered for the evolution and functions of the dorsolateral prefrontal cortex, and it is this ability to provide multiple separate short-term attractor memories that provides, I suggest, the basis for its functions in planning.4

The impairments of attention induced by prefrontal cortex damage may be accounted for in large part by an impairment in the ability to hold the object of attention stably and without distraction in the short-term memory systems in the prefrontal cortex.4,38,46

Attractor Networks Involved in Attention

Short-term memory, and thus attractor networks, are fundamental to top–down attention in the sense that whatever requires attention (e.g., a spatial location) has to be maintained in a short-term memory. The short-term memory then biases competition between the multiple bottom–up items in the stimulus input. The result is an advantage in the neuronal competition between the multiple inputs for the item that receives top–down bias from the short-term memory.4,10,47,48

The overall network architecture within the brain by which this is realised is illustrated in Figure 4, in which the prefrontal cortex acts as the short-term memory, which via the top–down backprojections can bias competition in the perceptual areas such as the inferior temporal visual cortex and parietal cortex to implement object and spatial attention.4,10,48,49

Attractor Networks Formed by Forward and Backward Connections Between Cortical Areas

Although one usually thinks of attractors as being formed in the cerebral neocortex by the recurrent collateral connections within a local area of cerebral cortex, the forward and backward connections between two cortical areas can also potentially form an attractor network, as can be seen from Figure 3, provided that the forward and backward synapses are associatively modifiable, as seems likely. The forward and backward connections between cortical areas in a hierarchy can thus potentially contribute to the attractor properties of connected cortical areas. An interesting implication is that when a decision is taken, by mechanisms described later, a number of connected cortical areas may contribute to the settling process into an attractor state, and thus to the decision.4

FIGURE 4 | The overall architecture of a model of object and spatial processing and attention, including the prefrontal cortical areas that provide the short-term memory required to hold the object or spatial target of attention active. Forward connections are indicated by solid lines; backprojections, which could implement top–down processing, by dashed lines; and recurrent connections within an area by dotted lines. The triangles represent pyramidal cell bodies, with the thick vertical line above them the dendritic trees. The cortical layers in which the cells are concentrated are indicated by s (superficial, layers 2 and 3) and d (deep, layers 5 and 6). The prefrontal cortical areas most strongly reciprocally connected to the inferior temporal cortex ‘what’ processing stream are labelled v to indicate that they are in the more ventral part of the lateral prefrontal cortex, area 46, close to the inferior convexity in macaques. The prefrontal cortical areas most strongly reciprocally connected to the parietal visual cortical ‘where’ processing stream are labelled d to indicate that they are in the more dorsal part of the lateral prefrontal cortex, area 46, in and close to the banks of the principal sulcus in macaques (after Rolls and Deco10). [The figure shows the areas V1, V2, V4, MT, inferior temporal, parietal, and the ventral and dorsal prefrontal cortex, each with superficial (s) and deep (d) layers.]

HIERARCHICALLY CONNECTED ATTRACTOR NETWORKS: ACTION SELECTION IN THE PREFRONTAL CORTEX

    A series of attractor networks can be connectedtogether by forward and backward projections, andinteresting properties can arise if the forward connec-tions are stronger than the backward connections. Onesuch scenario is illustrated in Figure 5, a model of theprefrontal cortex in which a set of neurons closer tothe sensory input can be activated by inputs from othercortical areas. These ‘sensory pools’ or populations of

    neurons can implement continuing firing and thus ashort-term memory of the sensory stimuli, as describedabove. However, these sensory populations projectforward to further populations which represent dif-ferent combinations of sensory inputs, the associativepools in Figure 5. These intermediate pools have short-term memory properties in their own right, but alsoconnect forward to a set of neurons with more motor-related properties, labelled premotor pools in Figure 5.This hierarchical attractor system can if triggered by asensory input select an action or motor output as activ-ity cascades through the system, and can also accountfor the maintenance of neuronal activity during delayperiods. Even more, top–down inputs shown as com-ing from rule attractor pools in Figure 5 can bias theintermediate combination-responding pools of neu-rons to determine which action is selected by a sensory

    124 2009 John Wi ley & Sons, L td. Volume 1, January /February 2010

  • WIREs Cognitive Science Attractor networks

    L R

    Sensory pools

    Object based context

    External input

    Premotor pools

    PFC

    Inhibitory pool

    Object pools Spatial pools

    Associative pools

    Nonspecific neurons

    Space based context

    Output

    Contextbias

    Rule pools

    Reversal trial (absence of reward, orpunishment)

    L R

    S1-L S2-R

    O1 O2 S1 S2

    L RO1-L O2-R

    FIGURE 5 | Network architecture of the prefrontal cortex unified model of attention, working memory, action selection, and decision making.There are sensory neuronal populations or pools for object type (O1 or O2) and spatial position (S1 or S2). These connect hierarchically (with strongerforward than backward connections) to the intermediate or ‘associative’ pools in which neurons may respond to combinations of the inputs receivedfrom the sensory pools for some types of mapping such as reversal, as described by Deco and Rolls.50 For the simulation of the data of Asaad et al.,52

    these intermediate pools respond to O1-L, O2-R, S1-L, or S2-R. These intermediate pools receive an attentional bias, which in the case of thisparticular simulation biases either the O pools or the S pools. The intermediate pools are connected hierarchically to the premotor pools, which in thiscase code for a Left or Right response. Each of the pools is an attractor network in which there are stronger associatively modified synaptic weightsbetween the neurons that represent the same state (e.g., object type for a sensory pool, or response for a premotor pool) than between neurons inthe other pools or populations. However, all the neurons in the network are associatively connected by at least weak synaptic weights. The attractorproperties, the competition implemented by the inhibitory interneurons, and the biasing inputs result in the same network implementing bothshort-term memory and biased competition, and the stronger feed forward than feedback connections between the sensory, intermediate, andpremotor pools results in the hierarchical property by which sensory inputs can be mapped to motor outputs in a way that depends on the biasingcontextual or rule input (after Deco and Rolls50).

input, depending on the rule currently held in the rule attractor. This provides a computational model for action selection in the prefrontal cortex depending on the current rule or context,50 and is consistent with, and indeed was based on, the properties of neurons recorded in the prefrontal cortex during action selection tasks with short-term memory requirements.51,52

    STABILITY OF ATTRACTOR STATES

Using an integrate-and-fire approach, the individual neurons, synapses, and ion channels that comprise an attractor network can be simulated, and when a threshold is reached the cell fires (see Figure 6(a) and Supporting Information). The firing times of the neurons can be approximately like those of neurons in the brain: approximately Poisson distributed, that is, the firing times are close to random for a given mean rate. The random firing times of neurons are one source of noise in the attractor network, and can influence the stability of the network.53–56 The attractor dynamics can be pictured by effective energy landscapes, which indicate the basin of attraction by valleys, and the attractor states or fixed points by the bottoms of the valleys. The stability of an attractor is characterised by the average time for which the system stays in the basin of attraction under the influence of noise, which provokes transitions to other attractor states. Noise results from the interplay between the Poissonian character of the spikes and the finite-size effect caused by the limited number of neurons in the network. Two factors determine the stability. First, if the depth of an attractor is shallow (as in the left compared with the right valley in Figure 6(b)), less force is needed to move a ball from the shallow valley to the next. Second, a high level of noise increases the likelihood that the system will jump over an energy boundary from one state to another. We envision that the brain, as a dynamical system, has characteristics of such an attractor system, including statistical fluctuations.
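The energy-landscape picture can be made concrete with a minimal sketch (not the integrate-and-fire model discussed here, but a standard illustrative simplification): a particle in a double-well potential driven by Gaussian noise, where a shallower well is abandoned sooner on average. The depth and noise parameters are made-up illustrative values.

```python
import math
import random

def escape_time(depth, noise=0.8, dt=0.01, x0=-1.0, seed=0, max_steps=200_000):
    """Noisy gradient descent in the double-well potential U(x) = depth*(x^2 - 1)^2.

    The particle starts in the left well (x0 = -1); we measure the time until
    noise carries it over the barrier at x = 0 into the right well.
    """
    rng = random.Random(seed)
    x = x0
    for step in range(max_steps):
        force = -4.0 * depth * x * (x * x - 1.0)          # -dU/dx
        x += force * dt + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x > 0.5:                                       # settled past the barrier
            return step * dt
    return max_steps * dt

# Average escape time over trials: a shallow basin (low barrier) is left much
# sooner than a deep one, as in the two valleys of the energy-landscape picture.
shallow = sum(escape_time(0.5, seed=s) for s in range(50)) / 50
deep = sum(escape_time(2.0, seed=s) for s in range(50)) / 50
print(f"mean escape time: shallow basin {shallow:.1f}, deep basin {deep:.1f}")
assert shallow < deep
```

The exponential sensitivity of the mean escape time to barrier height (Kramers' law) is why modest changes in attractor depth can produce large changes in stability.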

This type of model can then be applied to the prefrontal cortex and used to link these low-level neuronal properties to the cognitive functions, such as short-term memory, that result from the interactions between thousands of neurons in the whole network. In order to maintain a short-term memory, these interactions have to remain stable, and several factors influence the stability of such a short-term memory attractor state with noise inherent in its operation.

First, the stable states of the network are the 'low energy' states in which one set of the neurons, connected by strengthened recurrent collateral synapses, and representing one memory, is activated (see Figures 1 and 2). The higher the firing rates of this set of neurons, the stronger will be the negative feedback inhibition by the GABA inhibitory interneurons to the other excitatory (pyramidal) neurons in the network. This will keep the short-term memory state stable, and will prevent distracting inputs to the other, inhibited, neurons in the network from taking over.57 Any factor that reduces the currents through the N-Methyl-D-Aspartate (NMDA) receptors (NMDARs) on the pyramidal cells, as appears to be the case in patients with schizophrenia,58 would decrease the firing rates of the set of activated neurons and tend to make the network more distractible.59–61

[Figure 6(a) labels: spike, synapse (Rsyn, Csyn), synaptic current Isyn, EPSP/IPSP, soma (Rm, Cm), spike. Figure 6(b) labels: 'potential' plotted against firing rate, with a shallow, unstable attractor and a deep, stable attractor as fixed points.]

FIGURE 6 | (a) Using an integrate-and-fire approach, the individual neurons, synapses, and ion channels that comprise an attractor network can be simulated, and when a threshold is reached the cell fires. (b) The attractor dynamics can be pictured by effective energy landscapes, which indicate the basin of attraction by valleys, and the attractor states or fixed points by the bottoms of the valleys. The stability of an attractor is characterised by the average time for which the system stays in the basin of attraction under the influence of noise, which provokes transitions to other attractor states. Two factors determine the stability. First, if the depth of an attractor is shallow (as in the left compared with the right valley), less force is needed to move a ball from the shallow valley to the next. Second, a high level of noise increases the likelihood that the system will jump over an energy boundary from one state to another.

Second, the strong synaptic connections implemented by the recurrent collateral synapses between the excitatory neurons in the network (e.g., the pyramidal cells in the prefrontal cortex) also tend to promote stability, by enhancing the firing of the neurons that are active for a short-term memory.62 This helps to keep the energy low in the Hopfield equation [see Eq. (4)], and thus makes it difficult to jump from one energy minimum over a barrier to a different energy minimum that represents a different memory.
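The Hopfield energy can be computed directly. In its standard form for binary units it is E = -(1/2) Σᵢⱼ wᵢⱼ yᵢ yⱼ; a minimal sketch (with ±1 units and Hebbian weights, an illustrative simplification rather than the integrate-and-fire networks discussed here) shows that stored memories sit at the bottoms of the energy landscape:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))        # three stored memories

# Associative (Hebbian) weights built from the stored patterns; no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

def energy(y):
    """Hopfield energy E = -1/2 * sum_ij w_ij y_i y_j."""
    return -0.5 * y @ W @ y

stored = energy(patterns[0])
random_state = energy(rng.choice([-1, 1], size=N))
print(f"energy of a stored memory: {stored:.1f}; of a random state: {random_state:.1f}")
assert stored < random_state   # stored memories lie at the bottoms of basins
```

The gap between the two energies is the barrier that noise must overcome to dislodge the network from a memory state.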

Third, the operation of the network is inherently noisy and probabilistic owing to the random spiking of the individual neurons in the network and the finite size of the network.63–67 The random spiking will sometimes (i.e., probabilistically) be large in neurons that are not among those in the currently active set that represents the short-term memory in mind; this chance effect, perhaps in the presence of a distracting stimulus, might make the network jump over an energy barrier between the memory states into what becomes a different short-term memory, resulting in distraction. In a different scenario, the same type of stochastic noise could make the network jump from a spontaneous state of firing, in which there is no item in short-term memory, to an active state in which one of the short-term memories becomes active. In the context of schizophrenia, this might represent an intrusive thought or hallucination.61 The effects of noise operating in this way would be more evident if the firing rates are low (resulting in a low energy barrier over which to jump); or if the GABA inhibition is reduced, as suggested by post-mortem studies of patients with schizophrenia,68,69 which would make the spontaneous firing state less stable. GABA interneurons normally inhibit the neurons that are not in the active set that represents a memory, but hypofunction of the NMDARs on GABA interneurons could diminish this inhibition.58

Fourth, the stability of the attractor state is enhanced by the long time constants (around 100 ms) of the NMDARs in the network.70–73 The contribution of these long time constants (long in relation to those of the alpha-amino-3-hydroxy-5-methyl-4-isoxazolepropionate (AMPA) excitatory receptors, which are in the order of 5–10 ms) is to smooth out in time the statistical fluctuations that are caused by the random spiking of populations of neurons in the network, and thus to make the network more stable and less likely to jump to a different state. The different state might represent a different short-term memory; or the noise might return the active state back to the spontaneous level of firing, producing failure of the short-term memory and failure to maintain attention. Further, once a neuron is strongly depolarised, the voltage dependence of the NMDAR may tend to promote further firing.70 If the NMDARs were less efficacious, as has been observed in patients with schizophrenia,58 the short-term memory network would be less stable, because the effective time constant of the whole network would be reduced, owing to the greater relative contribution of the short time constant AMPA receptors to the effects implemented through the recurrent collateral excitatory connections between the pyramidal cells.71,72,74
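The smoothing effect of a long synaptic time constant can be seen in a toy calculation: low-pass filter the same Poisson spike train with an NMDA-like time constant (100 ms) and an AMPA-like one (5 ms), and compare the relative size of the fluctuations in the resulting 'synaptic current'. The firing rate and time step are made-up illustrative values.

```python
import random

def filtered_poisson_cv(tau_ms, rate_hz=50.0, dt_ms=1.0, T_ms=100_000, seed=1):
    """Low-pass filter a Poisson spike train with time constant tau (a toy
    'synaptic current'); return its coefficient of variation (sd / mean)."""
    rng = random.Random(seed)
    p_spike = rate_hz * dt_ms / 1000.0
    s, samples = 0.0, []
    for _ in range(int(T_ms / dt_ms)):
        s += -s * dt_ms / tau_ms + (1.0 if rng.random() < p_spike else 0.0)
        samples.append(s)
    mean = sum(samples) / len(samples)
    sd = (sum((x - mean) ** 2 for x in samples) / len(samples)) ** 0.5
    return sd / mean

ampa_cv = filtered_poisson_cv(tau_ms=5.0)    # fast receptor: noisy current
nmda_cv = filtered_poisson_cv(tau_ms=100.0)  # slow receptor: smoothed current
print(f"relative fluctuation: AMPA-like {ampa_cv:.2f}, NMDA-like {nmda_cv:.2f}")
assert nmda_cv < ampa_cv
```

The longer time constant averages over many more spikes, so the relative fluctuation falls roughly as 1/√τ, which is the sense in which NMDARs stabilise the attractor against spiking noise.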

ATTRACTOR NETWORK STABILITY AND PSYCHIATRIC DISORDERS

It is hypothesised that some of the cognitive symptoms of schizophrenia, including poor short-term memory and attention, can be related to a reduced depth in the basins of attraction of the attractor networks in the prefrontal cortex that implement these functions.60,61,75 The reduced depth of the basins of attraction may be related to hypoglutamatergia,58,76 and/or to changes in dopaminergic function which act partly by influencing glutamatergic function.61,77–80 The negative and positive symptoms of schizophrenia may be related to similar underlying changes, but expressed in different parts of the brain, such as the orbitofrontal and anterior cingulate cortex, and the temporal lobes.4,60,61,75

Obsessive–compulsive disorder has been linked to overstability in cortical attractor networks involved in short-term memory, attention, and action selection, related, it is hypothesised, at least in part to hyperglutamatergia.62

Attractor Networks, Noise, and Decision-Making

Recently, a series of biologically plausible models, motivated and constrained by neurophysiological data, have been formulated to establish an explicit link between probabilistic decision-making and the way in which the noisy (i.e., stochastic) firing of neurons influences which attractor state, representing a decision, is reached when there are two or more competing inputs or sources of evidence to the attractor network.55,56,66,81–84 The way in which these decision-making attractor network models operate is as follows.

An attractor network of the type illustrated in Figure 7(a) is set up to have two possible high firing rate attractor states, one for each of the two decisions. The evidence for each decision (1 vs. 2) biases each of the two attractors via the external inputs λ1 and λ2. The attractors are supported by strengthened synaptic connections in the recurrent collateral synapses between the (e.g., cortical pyramidal) neurons activated when λ1 is applied, or when λ2 is applied. (This is an associative or Hebbian process set up during a learning stage by a process like long-term potentiation.) Inhibitory interneurons (not shown in Figure 7(a)) receive inputs from the pyramidal neurons and make negative feedback connections onto the pyramidal cells to control their activity. When inputs λ1 and λ2 are applied, there is positive feedback via the recurrent collateral connections, and competition implemented through the inhibitory interneurons so that there can be only one winner. The network starts in a low spontaneous state of firing. When λ1 and λ2 are applied, there is competition between the two attractors, each of which is pushed towards a high firing rate state, with the noise in each attractor determining which decision is taken on a particular trial. If one of the inputs is larger than the other, then the decision is biased towards it, but is still probabilistic. Because this is an attractor network, it has short-term memory properties implemented by the recurrent collaterals, which tend to promote a state once it is started, and these help it to accumulate evidence over time, an important part of a decision-making mechanism, and also to maintain the firing once it has reached the decision state, enabling a suitable action to be implemented even if this takes some time.

[Figure 7 labels: (a) hi = dendritic activation; yi = output firing; output axons; recurrent collateral axons; cell bodies; dendrites; recurrent collateral synapses wij; inputs λ1, λ2, and λext (AMPA); recurrent connections (AMPA, NMDA); inhibitory pool (GABA). (b) Spontaneous state attractor (S) and decision state attractor (D).]

FIGURE 8 | (a) Firing rates (Hz) as a function of time (ms) of the pool (f1 > f2) (corresponding to a decision that f1 is greater than f2), the pool (f1 < f2), and the inhibitory population. (b) The corresponding rastergrams of 10 randomly selected neurons for each pool (population of neurons) in the network. Each vertical line corresponds to the generation of a spike. The spatio-temporal spiking activity shows the transition to the correct final single-state attractor, i.e., a transition to the correct final attractor encoding the result of the discrimination (f1 > f2) (after Deco and Rolls66).

This approach to decision making shows how noise in the brain can be useful, how we can account for probabilistic choice and the way it is influenced by the odds as in the Matching Law, how we can account for reaction times for easy versus difficult decisions, and even how Weber's Law may be implemented in the brain.4,55,56,66
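The competition just described can be sketched as a reduced two-pool rate model with self-excitation, mutual inhibition, and noise (a strong simplification of the spiking networks cited here; all parameters are made-up illustrative values). It reproduces the qualitative behaviour: with equal evidence the choice is close to 50/50, and biasing the inputs shifts the choice probability without making the outcome certain.

```python
import math
import random

def decide(lam1, lam2, w_self=2.2, w_inh=2.0, noise=0.15, dt=0.01, seed=0):
    """Two competing pools with self-excitation (recurrent collaterals) and
    mutual inhibition (the GABA pool); noisy inputs lam1, lam2 bias the
    competition. Returns 1 or 2 according to which pool wins on this trial."""
    rng = random.Random(seed)
    f = lambda x: 1.0 / (1.0 + math.exp(-4.0 * (x - 1.0)))  # rate function
    r1 = r2 = 0.1                                           # spontaneous firing
    for _ in range(5000):
        in1 = lam1 + w_self * r1 - w_inh * r2 + noise * rng.gauss(0.0, 1.0)
        in2 = lam2 + w_self * r2 - w_inh * r1 + noise * rng.gauss(0.0, 1.0)
        r1 += dt * (-r1 + f(in1))
        r2 += dt * (-r2 + f(in2))
        if r1 > 0.8 and r1 > 2.0 * r2:   # pool 1 has won the competition
            return 1
        if r2 > 0.8 and r2 > 2.0 * r1:   # pool 2 has won
            return 2
    return 1 if r1 > r2 else 2           # fallback: largest rate wins

# Equal evidence: roughly 50/50. Biased evidence: decision 1 becomes much
# more likely, but the outcome on any one trial is still set by the noise.
p_equal = sum(decide(0.7, 0.7, seed=s) == 1 for s in range(200)) / 200
p_biased = sum(decide(0.8, 0.6, seed=s) == 1 for s in range(200)) / 200
print(f"P(decision 1): equal evidence {p_equal:.2f}, biased evidence {p_biased:.2f}")
assert p_biased > p_equal
```

The symmetric state is unstable once the recurrent gain exceeds the leak, so noise alone is enough to break the tie, which is exactly the role it plays in the spiking models.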

Attractor Networks, Noise, and Signal Detection

A similar approach has been taken to the detection of signals, where noise caused by the stochastic firing of neurons influences how an attractor network may or may not fall into a basin of attraction that represents the detection of a signal.55,56

[Figure 9 labels: fornix; to septum, mammillary bodies; dentate gyrus; CA3; CA1; perforant path (1); mossy fibres (2); Schaffer collateral (3); connections to the subiculum (4).]

FIGURE 9 | Representation of connections within the hippocampus. Inputs reach the hippocampus through the perforant path (1), which makes synapses with the dendrites of the dentate granule cells and also with the apical dendrites of the CA3 pyramidal cells. The dentate granule cells project via the mossy fibres (2) to the CA3 pyramidal cells. The well-developed recurrent collateral system of the CA3 cells is indicated. The CA3 pyramidal cells project via the Schaffer collaterals (3) to the CA1 pyramidal cells, which in turn have connections (4) to the subiculum.

Hippocampal Versus Neocortical Attractor Networks

The neocortex has local recurrent collateral connections between the pyramidal cells that achieve a high density only for a few millimetres across the cortex. It is hypothesised that this enables the neocortex to have many local attractor networks, each concerned with a different type of processing: short-term memory, long-term memory, decision making, etc.4 This is important, for recall that the capacity of an attractor network is set, to first order, by the number of connections onto a neuron from other neurons in the network. If there were widespread recurrent collateral connections in the neocortex, so that the whole neocortex operated as a single attractor, the total memory capacity of the neocortex would be only that of a single attractor network (of the order of thousands of memories), and this possibility is thus ruled out.4,85 There are great advantages in having large numbers of local but weakly coupled neocortical attractor networks; some have been described above, and many more are described by Rolls.4
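The first-order capacity argument can be made concrete. For the fully connected binary Hopfield model, the classical critical load is about 0.14 patterns per recurrent connection per neuron; the point made in the text is only the proportionality to the number of connections C. The synapse count below is an assumed order-of-magnitude figure for illustration.

```python
def attractor_capacity(connections_per_neuron: int, load: float = 0.14) -> int:
    """First-order capacity of an autoassociation network: proportional to
    the number of recurrent connections per neuron. The default load of 0.14
    is the classical critical value for the fully connected binary Hopfield
    model; more detailed analyses give different constants, same scaling."""
    return int(load * connections_per_neuron)

# With ~10^4 recurrent synapses per neuron (an assumed figure), one global
# attractor stores only ~1400 memories, regardless of how many neurons it
# contains; 1000 local attractors with the same connectivity store ~1000x more.
C = 10_000
one_global = attractor_capacity(C)
many_local = 1000 * attractor_capacity(C)
print(f"single global attractor: ~{one_global} memories; "
      f"1000 local attractors: ~{many_local} in total")
```

This is why widespread neocortical recurrent connectivity would waste capacity: adding neurons to one big attractor does not add memories unless the connections per neuron also grow.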

However, it has been suggested that one network in the brain, the hippocampal CA3 network, does operate as a single attractor network4,9,15,86–94 (with related approaches, although these do not emphasise the relative importance of a single attractor network or of CA3,95 including Refs 96–99). Part of the anatomical basis for this is that the recurrent collateral connections between the CA3 neurons are very widespread, and have a chance of contacting any other CA3 neuron in the network (see Figure 9).14,100 The underlying theory is that the associativity in the network allows any one set of active neurons, perhaps representing one part of an episodic memory, to have a fair chance of making modifiable synaptic contacts with any other set of CA3 neurons, perhaps representing another part of an episodic memory. (An episodic memory is a memory of a single event or episode, such as where one was at dinner, with whom, what was eaten, and what was discussed.) This widespread connectivity, providing for a single attractor network, means that any one part of an episodic memory can be associated with any other part of an episodic or event memory. (This is what I mean by calling this an arbitrary memory, in that any arbitrary set of events can be associated with any other.) This functionality would be impossible in the neocortex, as the connections are local. This is thus a special contribution that the hippocampus can make to event or episodic memory.4,93,101,102
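Recall of a whole memory from a part (completion) can be illustrated with a minimal binary autoassociator (an illustrative simplification of the CA3 theory, not the article's model): store several patterns by Hebbian learning, cue the network with only half of one of them, and let the recurrent dynamics settle.

```python
import numpy as np

rng = np.random.default_rng(3)
N, n_mem = 200, 5
patterns = rng.choice([-1, 1], size=(n_mem, N))        # stored 'episodes'

# Hebbian (associative) weights on the recurrent collaterals; no self-connections.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0.0)

# Cue with only half of one stored pattern, the other half set to random
# values, then iterate the recurrent dynamics until the state settles.
cue = patterns[0].copy()
cue[N // 2 :] = rng.choice([-1, 1], size=N // 2)
y = cue
for _ in range(10):                                    # synchronous updates
    y = np.sign(W @ y)
    y[y == 0] = 1

overlap = np.mean(y == patterns[0])
print(f"fraction of the stored memory recovered: {overlap:.2f}")
assert overlap > 0.95
```

Completion works here because the stored load (5 patterns for 200 connections per neuron) is far below capacity, so the partial cue lies well inside the correct basin of attraction.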

CONCLUSION

I propose that attractor networks are fundamental design features of the neocortex and hippocampal cortex (and not of, e.g., the cerebellar cortex or basal ganglia). In the neocortex the attractor networks are local and therefore there can be many of them. They allow many items of information to be held on-line, and thus provide the basis and/or underpinning for powerful computations that require short-term memory, working memory (which involves the manipulation of items in short-term memory), planning, attention, and even language (which requires multiple items to be held on-line during the parsing of a sentence). In the hippocampal cortex, an attractor network in the CA3 region allows associations between any events that co-occur, and thus provides a basis for the memory of particular episodes, and the recall of an episodic memory from any part.

NOTES

a. Each memory is precisely defined in the theory: it is a set of firing rates of the population of neurons (which represent a memory) that can be stored and later retrieved, with retrieval being possible from a fraction of the originally stored set of neuronal firing rates.

    REFERENCES

1. Hopfield JJ. Neural networks and physical systems with emergent collective computational abilities. Proc Natl Acad Sci U S A 1982, 79:2554–2558.

2. Amit DJ. Modeling Brain Function. Cambridge: Cambridge University Press; 1989.

3. Hertz J, Krogh A, Palmer RG. An Introduction to the Theory of Neural Computation. Wokingham: Addison-Wesley; 1991.

4. Rolls ET. Memory, Attention, and Decision-Making: A Unifying Computational Neuroscience Approach. Oxford: Oxford University Press; 2008.

5. Hebb DO. The Organization of Behavior: A Neuropsychological Theory. New York: John Wiley & Sons; 1949.

6. Rolls ET, Treves A. The relative advantages of sparse versus distributed encoding for associative neuronal networks in the brain. Network 1990, 1:407–421.

7. Treves A, Rolls ET. What determines the capacity of autoassociative memories in the brain? Network 1991, 2:371–397.

8. Fazeli MS, Collingridge GL, eds. Cortical Plasticity: LTP and LTD. Oxford: Bios; 1996.

9. Rolls ET, Treves A. Neural Networks and Brain Function. Oxford: Oxford University Press; 1998.

10. Rolls ET, Deco G. Computational Neuroscience of Vision. Oxford: Oxford University Press; 2002.

11. Treves A. Graded-response neurons and information encodings in autoassociative memories. Phys Rev A 1990, 42:2418–2430.

12. Treves A, Rolls ET. Computational constraints suggest the need for two distinct input systems to the hippocampal CA3 network. Hippocampus 1992, 2:189–199.

13. Rolls ET, Treves A, Foster D, Perez-Vicente C. Simulation studies of the CA3 hippocampal subfield modelled as an attractor neural network. Neural Netw 1997, 10:1559–1569.

14. Amaral DG, Ishizuka N, Claiborne B. Neurons, numbers and the hippocampal network. Prog Brain Res 1990, 83:1–11.

15. Treves A, Rolls ET. A computational analysis of the role of the hippocampus in memory. Hippocampus 1994, 4:374–391.

16. Franco L, Rolls ET, Aggelopoulos NC, Jerez JM. Neuronal selectivity, population sparseness, and ergodicity in the inferior temporal visual cortex. Biol Cybern 2007, 96:547–560.

17. Rolls ET. Neurophysiological mechanisms underlying face processing within and beyond the temporal cortical visual areas. Philos Trans R Soc Lond B Biol Sci 1992, 335:11–21.

18. Rolls ET. Consciousness absent and present: a neurophysiological exploration. Prog Brain Res 2003, 144:95–106.

19. Panzeri S, Rolls ET, Battaglia F, Lavis R. Speed of information retrieval in multilayer networks of integrate-and-fire neurons. Netw Comput Neural Syst 2001, 12:423–440.

20. Treves A. Mean-field analysis of neuronal spike dynamics. Network 1993, 4:259–284.

21. Battaglia FP, Treves A. Stable and rapid recurrent processing in realistic auto-associative memories. Neural Comput 1998, 10:431–450.

22. Amari S. Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 1977, 27:77–87.

23. Zhang K. Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: a theory. J Neurosci 1996, 16:2112–2126.

24. Taylor JG. Neural "bubble" dynamics in two dimensions: foundations. Biol Cybern 1999, 80:393–409.

25. Samsonovich A, McNaughton BL. Path integration and cognitive mapping in a continuous attractor neural network model. J Neurosci 1997, 17:5900–5920.


26. Battaglia FP, Treves A. Attractor neural networks storing multiple space representations: a model for hippocampal place fields. Phys Rev E 1998, 58:7738–7753.

27. Stringer SM, Trappenberg TP, Rolls ET, Araujo IET. Self-organizing continuous attractor networks and path integration: one-dimensional models of head direction cells. Netw Comput Neural Syst 2002, 13:217–242.

28. Stringer SM, Rolls ET, Trappenberg TP, Araujo IET. Self-organizing continuous attractor networks and path integration: two-dimensional models of place cells. Netw Comput Neural Syst 2002, 13:429–446.

29. Stringer SM, Rolls ET, Trappenberg TP. Self-organizing continuous attractor network models of hippocampal spatial view cells. Neurobiol Learn Mem 2005, 83:79–92.

30. Stringer SM, Rolls ET, Trappenberg TP, de Araujo IET. Self-organising continuous attractor networks and motor function. Neural Netw 2003, 16:161–182.

31. Stringer SM, Rolls ET, Trappenberg TP. Self-organising continuous attractor networks with multiple activity packets, and the representation of space. Neural Netw 2004, 17:5–27.

32. Rolls ET, Stringer SM. Spatial view cells in the hippocampus, and their idiothetic update based on place and head direction. Neural Netw 2005, 18:1229–1241.

33. Stringer SM, Rolls ET. Self-organizing path integration using a linked continuous attractor and competitive network: path integration of head direction. Netw Comput Neural Syst 2006, 17:419–445.

34. Rolls ET, Stringer SM, Trappenberg TP. A unified model of spatial and episodic memory. Proc R Soc Lond B Biol Sci 2002, 269:1087–1093.

35. Braitenberg V, Schütz A. Anatomy of the Cortex. Berlin: Springer-Verlag; 1991.

36. Abeles M. Corticonics: Neural Circuits of the Cerebral Cortex. New York: Cambridge University Press; 1991.

37. Goldman-Rakic PS. Cellular basis of working memory. Neuron 1995, 14:477–485.

38. Goldman-Rakic PS. The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive. Philos Trans R Soc Lond B Biol Sci 1996, 351:1445–1453.

39. Fuster JM. Executive frontal functions. Exp Brain Res 2000, 133(1):66–70.

40. Fuster JM, Alexander GE. Neuron activity related to short-term memory. Science 1971, 173:652–654.

41. Kubota K, Niki H. Prefrontal cortical unit activity and delayed alternation performance in monkeys. J Neurophysiol 1971, 34(3):337–347.

42. Funahashi S, Bruce CJ, Goldman-Rakic PS. Mnemonic coding of visual space in monkey dorsolateral prefrontal cortex. J Neurophysiol 1989, 61:331–349.

43. Fuster JM. Memory in the Cerebral Cortex. Cambridge, MA: MIT Press; 1995.

44. Renart A, Parga N, Rolls ET. A recurrent model of the interaction between the prefrontal cortex and inferior temporal cortex in delay memory tasks. In: Solla SA, Leen TK, Mueller K-R, eds. Advances in Neural Information Processing Systems. Cambridge, MA: MIT Press; 2000, 171–177.

45. Renart A, Moreno R, de la Rocha J, Parga N, Rolls ET. A model of the IT-PF network in object working memory which includes balanced persistent activity and tuned inhibition. Neurocomputing 2001, 38–40:1525–1531.

46. Goldman-Rakic PS, Leung H-C. Functional architecture of the dorsolateral prefrontal cortex in monkeys and humans. In: Stuss DT, Knight RT, eds. Principles of Frontal Lobe Function. New York: Oxford University Press; 2002, 85–95.

47. Desimone R, Duncan J. Neural mechanisms of selective visual attention. Annu Rev Neurosci 1995, 18:193–222.

48. Deco G, Rolls ET. Attention, short-term memory, and action selection: a unifying theory. Prog Neurobiol 2005, 76:236–256.

49. Deco G, Rolls ET. Neurodynamics of biased competition and co-operation for attention: a model with spiking neurons. J Neurophysiol 2005, 94:295–313.

50. Deco G, Rolls ET. Attention and working memory: a dynamical model of neuronal activity in the prefrontal cortex. Eur J Neurosci 2003, 18:2374–2390.

51. Asaad WF, Rainer G, Miller EK. Neural activity in the primate prefrontal cortex during associative learning. Neuron 1998, 21:1399–1407.

52. Asaad WF, Rainer G, Miller EK. Task-specific neural activity in the primate prefrontal cortex. J Neurophysiol 2000, 84:451–459.

53. Tuckwell H. Introduction to Theoretical Neurobiology. Cambridge: Cambridge University Press; 1988.

54. Jackson BS. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons. Neural Comput 2004, 16(10):2125–2195.

55. Deco G, Rolls ET, Romo R. Stochastic dynamics as a principle of brain function. Prog Neurobiol 2009, 88:1–16.

56. Rolls ET, Deco G. The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford: Oxford University Press; 2010.

57. Brunel N, Wang XJ. Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J Comput Neurosci 2001, 11:63–85.

58. Coyle JT. Glutamate and schizophrenia: beyond the dopamine hypothesis. Cell Mol Neurobiol 2006, 26(4–6):365–384.


59. Durstewitz D, Seamans JK, Sejnowski TJ. Neurocomputational models of working memory. Nat Neurosci 2000, 3:1184–1191.

60. Loh M, Rolls ET, Deco G. A dynamical systems hypothesis of schizophrenia. PLoS Comput Biol 2007, 3(11):e228. doi:10.1371/journal.pcbi.0030228.

61. Rolls ET, Loh M, Deco G, Winterer G. Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nat Rev Neurosci 2008, 9:696–709.

62. Rolls ET, Loh M, Deco G. An attractor hypothesis of obsessive-compulsive disorder. Eur J Neurosci 2008, 28:782–793.

63. Brunel N, Hakim V. Fast global oscillations in networks of integrate-and-fire neurons with low firing rates. Neural Comput 1999, 11(7):1621–1671.

64. Mattia M, Del Giudice P. Population dynamics of interacting spiking neurons. Phys Rev E 2002, 66:051917.

65. Mattia M, Del Giudice P. Finite-size dynamics of inhibitory and excitatory interacting spiking neurons. Phys Rev E Stat Nonlin Soft Matter Phys 2004, 70(5 Pt 1):052903.

66. Deco G, Rolls ET. Decision-making and Weber's Law: a neurophysiological model. Eur J Neurosci 2006, 24:901–916.

67. Faisal AA, Selen LP, Wolpert DM. Noise in the nervous system. Nat Rev Neurosci 2008, 9(4):292–303.

68. Benes FM. Emerging principles of altered neural circuitry in schizophrenia. Brain Res Brain Res Rev 2000, 31(2–3):251–269.

69. Hashimoto T, Arion D, Unger T, Maldonado-Avilés JG, Morris HM, et al. Alterations in GABA-related transcriptome in the dorsolateral prefrontal cortex of subjects with schizophrenia. Mol Psychiatry 2008, 13(2):147–161.

70. Lisman JE, Fellous JM, Wang XJ. A role for NMDA-receptor channels in working memory. Nat Neurosci 1998, 1(4):273–275.

71. Wang X-J. Synaptic basis of cortical persistent activity: the importance of NMDA receptors to working memory. J Neurosci 1999, 19(21):9587–9603.

72. Compte A, Brunel N, Goldman-Rakic PS, Wang XJ. Synaptic mechanisms and network dynamics underlying spatial working memory in a cortical network model. Cereb Cortex 2000, 10(9):910–923.

73. Wang XJ. Synaptic reverberation underlying mnemonic persistent activity. Trends Neurosci 2001, 24(8):455–463.

74. Tegner J, Compte A, Wang XJ. The dynamical stability of reverberatory neural circuits. Biol Cybern 2002, 87(5–6):471–481.

75. Rolls ET. Emotion Explained. Oxford: Oxford University Press; 2005.

76. Coyle JT, Tsai G, Goff D. Converging evidence of NMDA receptor hypofunction in the pathophysiology of schizophrenia. Ann N Y Acad Sci 2003, 1003:318–327.

77. Seamans JK, Yang CR. The principal features and mechanisms of dopamine modulation in the prefrontal cortex. Prog Neurobiol 2004, 74(1):1–58.

78. Durstewitz D. A few important points about dopamine's role in neural network dynamics. Pharmacopsychiatry 2006, 39(Suppl 1):S72–S75.

79. Durstewitz D. Dopaminergic modulation of prefrontal cortex network dynamics. In: Tseng K-Y, Atzori M, eds. Monoaminergic Modulation of Cortical Excitability. New York: Springer; 2007, 217–234.

80. Winterer G, Weinberger DR. Genes, dopamine and cortical signal-to-noise ratio in schizophrenia. Trends Neurosci 2004, 27(11):683–690.

81. Wang XJ. Probabilistic decision making by slow reverberation in cortical circuits. Neuron 2002, 36:955–968.

82. Brody CD, Romo R, Kepecs A. Basic mechanisms for graded persistent activity: discrete attractors, continuous attractors, and dynamic representations. Curr Opin Neurobiol 2003, 13:204–211.

83. Machens CK, Romo R, Brody CD. Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science 2005, 307:1121–1124.

84. Wong KF, Wang XJ. A recurrent network mechanism of time integration in perceptual decisions. J Neurosci 2006, 26(4):1314–1328.

85. O'Kane D, Treves A. Why the simplest notion of neocortex as an autoassociative memory would not work. Network 1992, 3:379–384.

86. Rolls ET. Information representation, processing and storage in the brain: analysis at the single neuron level. In: Changeux J-P, Konishi M, eds. The Neural and Molecular Bases of Learning. Chichester: John Wiley & Sons; 1987, 503–540.

87. Rolls ET. Functions of neuronal networks in the hippocampus and neocortex in memory. In: Byrne JH, Berry WO, eds. Neural Models of Plasticity: Experimental and Theoretical Approaches. San Diego, CA: Academic Press; 1989, 240–265.

88. Rolls ET. The representation and storage of information in neuronal networks in the primate cerebral cortex and hippocampus. In: Durbin R, Miall C, Mitchison G, eds. The Computing Neuron. Wokingham: Addison-Wesley; 1989, 125–159.

89. Rolls ET. Functions of neuronal networks in the hippocampus and cerebral cortex in memory. In: Cotterill RMJ, ed. Models of Brain Function. Cambridge: Cambridge University Press; 1989, 15–33.

90. Rolls ET. Theoretical and neurophysiological analysis of the functions of the primate hippocampus in memory. Cold Spring Harb Symp Quant Biol 1990, 55:995–1006.

91. Rolls ET. Functions of the primate hippocampus in spatial processing and memory. In: Olton DS, Kesner RP, eds. Neurobiology of Comparative Cognition. Hillsdale, NJ: L. Erlbaum; 1990, 339–362.

92. Rolls ET. Functions of the primate hippocampus in spatial and non-spatial memory. Hippocampus 1991, 1:258–261.

93. Rolls ET, Kesner RP. A computational theory of hippocampal function, and empirical tests of the theory. Prog Neurobiol 2006, 79:1–48.

94. Rolls ET. An attractor network in the hippocampus: theory and neurophysiology. Learn Mem 2007, 14:714–731.

95. Marr D. Simple memory: a theory for archicortex. Philos Trans R Soc Lond B Biol Sci 1971, 262:23–81.

96. McNaughton BL, Morris RGM. Hippocampal synaptic enhancement and information storage within a distributed memory system. Trends Neurosci 1987, 10(10):408–415.

97. Levy WB. A computational approach to hippocampal function. In: Hawkins RD, Bower GH, eds. Computational Models of Learning in Simple Neural Systems. San Diego, CA: Academic Press; 1989, 243–305.

98. McNaughton BL. Associative pattern completion in hippocampal circuits: new evidence and new questions. Brain Res Rev 1991, 16:193–220.

99. McClelland JL, McNaughton BL, O'Reilly RC. Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychol Rev 1995, 102:419–457.

100. Ishizuka N, Weber J, Amaral DG. Organization of intrahippocampal projections originating from CA3 pyramidal cells in the rat. J Comp Neurol 1990, 295:580–623.

101. Rolls ET. The primate hippocampus and episodic memory. In: Dere E, Easton A, Nadel L, Huston JP, eds. Handbook of Episodic Memory. Amsterdam: Elsevier; 2008, 417–438.

102. Rolls ET, Xiang J-Z. Spatial view cells in the primate hippocampus, and memory recall. Rev Neurosci 2006, 17:175–200.

    FURTHER READING

    Amit DJ. Modeling Brain Function. Cambridge: Cambridge University Press; 1989.

    Deco G, Rolls ET, Romo R. Stochastic dynamics as a principle of brain function. Prog Neurobiol 2009, 88:1–16.

    Hertz J, Krogh A, Palmer RG. An Introduction to the Theory of Neural Computation. Wokingham: Addison-Wesley; 1991.

    Rolls ET. Memory, Attention, and Decision-Making: A Unifying Computational Neuroscience Approach. Oxford: Oxford University Press; 2008.

    Rolls ET, Deco G. Computational Neuroscience of Vision. Oxford: Oxford University Press; 2002.

    Rolls ET, Loh M, Deco G, Winterer G. Computational models of schizophrenia and dopamine modulation in the prefrontal cortex. Nat Rev Neurosci 2008, 9:696–709.

    Rolls ET, Deco G. The Noisy Brain: Stochastic Dynamics as a Principle of Brain Function. Oxford: Oxford University Press; 2010.