9. Continuous attractor and competitive networks


Page 1: 9. Continuous attractor and competitive networks


9. Continuous attractor and competitive networks

Lecture Notes on Brain and Computation

Byoung-Tak Zhang

Biointelligence Laboratory

School of Computer Science and Engineering

Graduate Programs in Cognitive Science, Brain Science and Bioinformatics

Brain-Mind-Behavior Concentration Program

Seoul National University

E-mail: [email protected]

This material is available online at http://bi.snu.ac.kr/

Fundamentals of Computational Neuroscience, T. P. Trappenberg, 2002.

Page 2: 9. Continuous attractor and competitive networks


Outline

9.1 Spatial representations and the sense of direction
9.2 Learning with continuous pattern representations
9.3 Asymptotic states and the dynamics of neural fields
9.4 'Path' integration, Hebbian trace rule, and sequence learning
9.5 Competitive networks and self-organizing maps

Page 3: 9. Continuous attractor and competitive networks


9.1 Spatial representations and the sense of direction

- Auto-associative attractor models
- General memory states in mind
  - The shape of objects, their smell, texture, or color
- Point attractor neural networks (PANNs)
  - Memories represented by independent vectors
- Continuous attractor neural networks (CANNs)
  - The training patterns represent a continuous feature space
  - Spatial location of an object
  - Topographic maps

Page 4: 9. Continuous attractor and competitive networks


9.1.1 Head direction

The sense of direction
- Representation of body or head direction
- A mechanism to update this information without visual cues

Fig. 9.1 (A) Experimental response of a neuron in the subiculum of a rodent when the rodent is heading in different directions in a familiar maze. The dashed line represents the response properties of the same neuron when the rodent is placed in a new, unfamiliar maze. The new response properties will normally be similar to the previous ones; that is, head direction cells tend to maintain approximately their response properties to specific head directions. However, the results shown were produced in experiments with a rodent that had cortical lesions that weakened the ability to maintain the response properties after the rodent was transferred into a new environment. (B) Neuronal responses from many hippocampal neurons in a rodent that responded to the subject's location (place) in a maze. The figure shows the firing rates of the neurons in response to a particular place, whereby the neurons were placed in the figure so that neurons with similar response properties are adjacent to each other.

Page 5: 9. Continuous attractor and competitive networks


9.1.2 Place fields

- Head direction representations are an example of the spatial representation of a one-dimensional feature space in the brain
  - The same scheme applies equally to higher-dimensional representations
- Neurons in the hippocampus of rats
  - Fire in relation to specific locations within a maze (place fields)
  - A specific topography of neurons within the hippocampal tissue with respect to their maximal response to a particular place has not been found
  - The rearrangement of neurons by response similarity (as in Fig. 9.1B)

Page 6: 9. Continuous attractor and competitive networks


9.1.3 Spatial representations in network models

A possible solution to representing head directions:

Fig. 9.2 A proposal as to how the activity of nodes, for clarity arranged into a circle, can represent head directions. With the 20 nodes of this model we can represent head directions with a resolution of 18 degrees when using a single binary node as a representation of a head direction. The single active node in the figure, represented as a solid circle, indicates a head direction of 72 degrees in this example.

Page 7: 9. Continuous attractor and competitive networks


9.1.4 Graded winner-take-all models

Winner-take-all
- Only one node or one activity packet of nodes is active

The dynamic equation for the networks:

$$\tau \frac{\mathrm{d}h_i(t)}{\mathrm{d}t} = -h_i(t) + \frac{1}{N}\sum_j w_{ij}\, r_j(t) + I_i^{\mathrm{ext}}(t) \qquad (9.1)$$

Neural field equation (continuous formulation over feature-space locations $\mathbf{x}$, $\mathbf{y}$):

$$\tau \frac{\partial h(\mathbf{x},t)}{\partial t} = -h(\mathbf{x},t) + \int_{\mathbf{y}} w(\mathbf{x},\mathbf{y})\, r(\mathbf{y},t)\, \mathrm{d}\mathbf{y} + I^{\mathrm{ext}}(\mathbf{x},t) \qquad (9.2)$$

In one dimension:

$$\tau \frac{\partial h(x,t)}{\partial t} = -h(x,t) + \int w(x,y)\, r(y,t)\, \mathrm{d}y + I^{\mathrm{ext}}(x,t) \qquad (9.3)$$

The discretization rules (for numerical simulations with $N$ nodes):

$$x \rightarrow x_i = i\,\Delta x, \qquad h_i(t) \equiv h(x = i\,\Delta x,\, t), \qquad \Delta x = 1/N \qquad (9.4)$$

$$\int \mathrm{d}y \rightarrow \sum_j \Delta x \qquad (9.5)$$
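As a concrete illustration of these discretization rules, the following minimal sketch integrates the discretized dynamics (eqn 9.1) with Euler steps. The ring topology, all parameter values, and the steep sigmoidal gain function are illustrative assumptions, not values given on this slide.

```python
import numpy as np

# Euler integration of the discretized CANN dynamics (eqn 9.1):
#   tau dh_i/dt = -h_i + (1/N) sum_j w_ij r_j + I_i^ext
# All parameters below are illustrative choices.
N, tau, dt = 100, 1.0, 0.1
sigma, c = 5.0, 0.5                       # excitation width, inhibition constant

# Shift-invariant weights on a ring: Gaussian excitation minus constant inhibition
d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
d = np.minimum(d, N - d)                  # periodic distance between nodes
w = np.exp(-d**2 / (2 * sigma**2)) - c

def g(h):
    # steep sigmoidal gain (an assumed choice) so that rates are almost binary
    return 1.0 / (1.0 + np.exp(-100.0 * h))

h = np.zeros(N)
I_ext = np.zeros(N)
I_ext[45:55] = 1.0                        # external input applied at t = 0

for step in range(400):                   # 40 tau in total
    if step == 100:                       # remove the input at t = 10 tau
        I_ext[:] = 0.0
    r = g(h)
    h = h + dt / tau * (-h + w @ r / N + I_ext)

print("nodes still active without input:", np.where(g(h) > 0.5)[0])
```

With these illustrative parameters an activity packet initiated by the input typically persists after the input is removed; raising c makes the activity decay and lowering it makes it spread, as discussed in Section 9.3.1.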

Page 8: 9. Continuous attractor and competitive networks


9.2 Learning with continuous pattern representations

Recurrent neural networks
- Represent a continuous set of patterns
- Hebbian learning

Hebbian rule for the excitatory weights, in the neural field representation:

$$\delta w^E(x,y) \propto r(x,t)\, r(y,t) \qquad (9.6)$$

- The firing rate $r^{\mu}$ is the firing rate of the neural field while dominated by the training example of a pattern $\mu$ presented to the network

The inhibition from inhibitory interneurons is included as a constant subtracted from the excitatory weights:

$$w = w^E - c \qquad (9.7)$$
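In code, eqns 9.6 and 9.7 amount to accumulating outer products of the training firing-rate vectors and then subtracting a global inhibition constant. This is only a sketch: the function name and the normalization by the number of patterns are my own choices (the Gaussian head-direction case of Section 9.2.1 is sketched separately below).

```python
import numpy as np

def train_cann_weights(patterns, c):
    """Hebbian rule for the excitatory weights (eqn 9.6), followed by the
    subtraction of a global inhibition constant c (eqn 9.7).
    `patterns` holds one firing-rate vector r^mu per row."""
    n_nodes = patterns.shape[1]
    wE = np.zeros((n_nodes, n_nodes))
    for r_mu in patterns:
        wE += np.outer(r_mu, r_mu)        # delta w^E(x, y) ~ r^mu(x) r^mu(y)
    wE /= len(patterns)                    # normalization (an assumed choice)
    return wE - c                          # w = w^E - c
```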

Page 9: 9. Continuous attractor and competitive networks


9.2.1 Learning Gaussian head direction patterns

The external input to a node i has a Gaussian profile around a preferred direction:

$$r_i^{\mu} = e^{-(\Delta\alpha_i^{\mu})^2 / 2\sigma^2} \qquad (9.8)$$

The displacement $\Delta\alpha_i^{\mu}$ between
- the head direction $\alpha^{\mathrm{HD}}$ provided by the external input, and
- the optimal firing direction of the cell, $\alpha_i$,

is measured on the circle:

$$\Delta\alpha_i^{\mu} = \min(|\alpha_i - \alpha^{\mathrm{HD}}_{\mu}|,\; 360^{\circ} - |\alpha_i - \alpha^{\mathrm{HD}}_{\mu}|) \qquad (9.9)$$

Each training pattern makes a contribution to each weight component:

$$\delta w^E_{ij} \propto r_i^{\mu}\, r_j^{\mu} = e^{-[(\Delta\alpha_i^{\mu})^2 + (\Delta\alpha_j^{\mu})^2]/2\sigma^2} \qquad (9.10)$$

- for a node with a preferred direction equal to that of the training example ($\alpha^{\mathrm{HD}} = \alpha_i$):

$$\delta w^E_{ij} \propto e^{-(\alpha_i - \alpha_j)^2 / 2\sigma^2} \qquad (9.11)$$

- for a node with a preferred direction different from the direction of the training example ($\alpha^{\mathrm{HD}} = \alpha_i + n\Delta\alpha$):

$$\delta w^E_{ij} \propto e^{-[(n\Delta\alpha)^2 + (\alpha_i - \alpha_j + n\Delta\alpha)^2]/2\sigma^2} \qquad (9.12)$$

Summing the contributions of all training patterns, for infinite resolution of the model, i.e. $\Delta\alpha \rightarrow 0$:

$$w^E_{ij} \propto e^{-(\alpha_i - \alpha_j)^2 / 4\sigma^2} \qquad (9.13)$$

The weight matrix has a Gaussian shape with a width $\sqrt{2}$ times that of the receptive fields of the nodes.

In continuous notation the weight matrix depends only on the distance between nodes:

$$w^E(x,y) = w^E(|x - y|) \qquad (9.14)$$
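A small numerical check of this result (the number of nodes, the tuning width, and the 1-degree spacing of training directions are illustrative assumptions): training on Gaussian head-direction patterns with the periodic displacement of eqn 9.9 yields a weight profile that depends only on the distance between nodes and is close to a Gaussian of width sqrt(2)*sigma.

```python
import numpy as np

# Hebbian learning on Gaussian head-direction patterns (eqns 9.8-9.13).
# N, sigma and the training-direction spacing are illustrative choices.
N, sigma = 100, 18.0                          # nodes, tuning width in degrees
alpha = np.arange(N) * 360.0 / N              # preferred directions alpha_i

def displacement(a, b):
    # periodic displacement (eqn 9.9): min(|a - b|, 360 - |a - b|)
    d = np.abs(a - b)
    return np.minimum(d, 360.0 - d)

wE = np.zeros((N, N))
for alpha_hd in np.arange(0.0, 360.0, 1.0):   # one training pattern per degree
    r = np.exp(-displacement(alpha, alpha_hd)**2 / (2 * sigma**2))   # eqn 9.8
    wE += np.outer(r, r)                      # contribution of eqn 9.10
wE /= wE.max()

# The learned profile depends only on the node distance (eqn 9.14) and is
# close to a Gaussian of width sqrt(2)*sigma (eqn 9.13):
profile = wE[0]
gauss = np.exp(-displacement(alpha, alpha[0])**2 / (4 * sigma**2))
print("max deviation from Gaussian of width sqrt(2)*sigma:",
      float(np.abs(profile - gauss).max()))
```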

Page 10: 9. Continuous attractor and competitive networks


9.2.2 Gaussian interaction profiles in the brain

An effective interaction structure
- Short-distance excitation and long-distance inhibition
- Columnar organizations in the cortex

Such an interaction profile has been estimated for the superior colliculus from cell recordings in monkeys (Fig. 9.3).

Fig. 9.3 Data from cell recordings in the superior colliculus in a monkey that indicate the interaction strength ρw between cells in this midbrain structure. The solid line displays the corresponding measurement from simulations of a CANN model of this brain structure.

Page 11: 9. Continuous attractor and competitive networks


9.2.3 Self-organized interaction structures in CANNs


Fig. 9.4 A recurrent associative attractor network model, similar to the model shown in Fig. 9.2, where the nodes have been arbitrarily placed in physical space on a circle. The relative connection strength between the nodes is indicated by the thickness of the lines between the nodes. Each node responds during learning with a Gaussian firing profile around the stimulus that excites the node maximally. Each node is assigned a center of the receptive field randomly from a pool of centers covering the periodic training domain. (A) Before training all nodes have the same relative weights between them. (B) After training the relative weight structure has changed, with a few strong connections and some weaker connections. (C) The regularities of the interactions can be revealed by reordering the nodes so that nodes with the strongest connections are adjacent to each other.

Page 12: 9. Continuous attractor and competitive networks


9.3 Asymptotic states and the dynamics of neural fields

- The asymptotic states (attractors)
- The weight matrix is shift invariant after training the network on continuous Gaussian patterns
- Local cooperation and global competition
- Activity packet
  - A collection of nodes that are active together
- Shift invariance
  - The activity packet can be stabilized at any location in the network, depending on the initial external stimulus
- Dynamic competition

Page 13: 9. Continuous attractor and competitive networks


9.3.1 Attractor regimes

The different regimes in the CANN model depend on the level of inhibition c:

1. Growing activity
   - The inhibition is weak compared to the excitation
2. Decaying activity
   - The inhibition is strong compared to the excitation
3. Stable activity packet
   - In an intermediate range of the strength of inhibition relative to that of the excitation

Fig. 9.5 (A) Time evolution of the firing rates in a CANN model with 100 nodes. Equal external inputs to nodes 30-70 were applied at t = 0. This external input was removed at t = 10τ. The inhibition was set to three times the average firing rate of a node when driven by a Gaussian external input like that used for training the network. (B) The solid line represents the firing rate profile of the simulation shown in (A) at t = 20τ. The dashed line corresponds to the firing rate profile in a similar simulation with reduced (by a factor of three) inhibition.

Page 14: 9. Continuous attractor and competitive networks


9.3.2 Formal analysis of attractor states

A threshold activation function, $g(x) = 1/(1 + e^{-0.007x})$, is used (effectively a step function for the purposes of this analysis).

The stationary state of the dynamics (eqn 9.3), with the activity packet extending from $x_1$ to $x_2$ where $h(x_1) = h(x_2) = 0$, is

$$h(x) = \int_{x_1}^{x_2} w(x,y)\, \mathrm{d}y \qquad (9.15)$$

so the boundary condition reads

$$\int_{x_1}^{x_2} w(x_1, y)\, \mathrm{d}y = 0 \qquad (9.16)$$

For the weighting function $w = w^E - c$, with the Gaussian $w^E$ of eqn 9.13, this becomes a condition on the width of the packet:

$$\sqrt{\pi}\,\sigma\, \mathrm{erf}\!\left(\frac{x_2 - x_1}{2\sigma}\right) = c\,(x_2 - x_1) \qquad (9.17)$$

Fig. 9.6 (A) Plot of the function on the left-hand side of (9.17) and two linear functions with slope c = 1 and c = 0.4. The intersection of the functions (other than at x2 - x1 = 0) gives the solutions we are seeking of eqn 9.17. (B) The solution of eqn 9.17 as a function of the inhibition constant.
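The width of the stable activity packet can also be obtained numerically from the boundary condition (eqn 9.16), reproducing the kind of curve shown in Fig. 9.6(B). The Gaussian amplitude A, sigma, and the bisection bounds below are illustrative assumptions.

```python
import numpy as np

# Solve the boundary condition (eqn 9.16) for the packet width x2 - x1,
# using w = w^E - c with an (illustrative) Gaussian w^E of amplitude A.
A, sigma = 2.0, 1.0

def boundary_input(width, c):
    # net input at the packet boundary: integral of w^E over the packet minus c*width
    y = np.linspace(0.0, width, 2001)
    wE = A * np.exp(-y**2 / (4 * sigma**2))
    dy = y[1] - y[0]
    integral = 0.5 * dy * (wE[:-1] + wE[1:]).sum()   # trapezoidal rule
    return integral - c * width

def packet_width(c, lo=1e-3, hi=50.0):
    # bisection for the non-trivial root of the boundary condition
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if boundary_input(mid, c) > 0.0:
            lo = mid                                  # net excitation: packet still grows
        else:
            hi = mid
    return 0.5 * (lo + hi)

for c in (0.4, 1.0):                                  # the two slopes of Fig. 9.6(A)
    print(f"c = {c}: stable packet width ~ {packet_width(c):.2f}")
```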

Page 15: 9. Continuous attractor and competitive networks


9.3.3 Stability of the activity packet

The stability of the activity packet with respect to its movement can be studied by calculating the velocity of the packet boundaries.

The velocity of a boundary point (where h = 0) follows from differentiating the condition $h(x_{1,2}(t), t) = 0$:

$$\frac{\mathrm{d}x_{1,2}}{\mathrm{d}t} = -\left.\frac{\partial h/\partial t}{\partial h/\partial x}\right|_{x_{1,2}} \qquad (9.18)$$

The centre of the activity packet:

$$x_c(t) = \frac{1}{2}\,[x_1(t) + x_2(t)] \qquad (9.19)$$

The velocity of the centre of the activity packet:

$$\frac{\mathrm{d}x_c}{\mathrm{d}t} = \frac{1}{2}\left[\frac{1}{|\partial_x h(x_1)|}\int_{x_1}^{x_2} w(x_1,y)\,\mathrm{d}y - \frac{1}{|\partial_x h(x_2)|}\int_{x_1}^{x_2} w(x_2,y)\,\mathrm{d}y\right] \qquad (9.20)$$

For a symmetric (shift-invariant) weight matrix the two integrals and the two boundary gradients are equal in magnitude, so the packet does not move.

Fig. 9.7 (A) Two Gaussian bell curves centred around two different values x1 and x2. The striped and dotted areas are the same due to the symmetry of the bell curve. The integrals from x1 to x2 over the two different curves are therefore the same. This is not true if the two curves are not symmetric and have different shapes. (B) The dashed line outlines the shape of an activity packet from a simulation. The symmetry of this activity packet makes the gradients of the boundaries equal except for a sign.

Page 16: 9. Continuous attractor and competitive networks


9.3.4 Drifting activity packets


Fig. 9.8 (A) Noisy weight matrix. Time evolution of the center of gravity of activity packets in a CANN model with 100 nodes. The model was trained with activity packets at all possible locations. Each component of the resulting weight matrix was then perturbed with some noise. (B) Irregular or partial learning. Partial view of the weight matrix resulting from training the network with activity packets at only a few locations. (C) Time evolution of the center of gravity of activity packets in a CANN model with 100 nodes after training the network on only 10 different locations.
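To quantify the kind of drift shown in this figure, the packet position can be tracked with the centre of gravity of the firing-rate profile; on a ring this is conveniently computed as a circular (population-vector) mean. The helper below is a hypothetical utility, not part of the original model code.

```python
import numpy as np

def packet_centre(r):
    """Centre of gravity (in node units) of a firing-rate vector r on a ring."""
    n = len(r)
    angles = 2.0 * np.pi * np.arange(n) / n
    z = np.sum(r * np.exp(1j * angles))        # population vector
    return (np.angle(z) % (2.0 * np.pi)) * n / (2.0 * np.pi)

# Example: a symmetric packet centred on node 40 of a 100-node ring
n = 100
d = np.minimum(np.abs(np.arange(n) - 40), n - np.abs(np.arange(n) - 40))
r = np.exp(-d**2 / 50.0)
print(packet_centre(r))                         # ~40.0
```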

Page 17: 9. Continuous attractor and competitive networks


9.3.5 Stabilization of the activity packet

The drift in the activity packet can be stabilized by a small increase in the excitability of neurons once they have been recently activated.

NMDA receptor
- Voltage-dependent nonlinearity

An increase of the voltage-dependent nonlinearity would make more states stable.

Fig. 9.8 (D) 'NMDA'-stabilization. The network trained on the 10 locations was augmented with a stabilization mechanism that reduces the firing threshold of active neurons.
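A minimal sketch of such a stabilization mechanism: recently active nodes get a slightly lowered firing threshold. The gain function, the size of the threshold shift, and all parameter values are assumptions for illustration.

```python
import numpy as np

def rates_with_nmda_stabilization(h, recently_active,
                                  theta=0.0, d_theta=0.2, beta=10.0):
    """Sigmoidal firing rates in which nodes that were recently active
    (boolean mask) have their threshold reduced by d_theta, which pins
    the activity packet to its current location."""
    theta_eff = np.where(recently_active, theta - d_theta, theta)
    return 1.0 / (1.0 + np.exp(-beta * (h - theta_eff)))
```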

Page 18: 9. Continuous attractor and competitive networks


9.4 'Path' integration, Hebbian trace rule, and sequence learning

The possibility of 'updating' the state
- A subject might not have an absolute value available
  - e.g. rotating a subject with closed eyes
Path integration
- Calculate the new position from the old position

9.4.1 Path integration with asymmetric weighting functions

- The path integration problem involves using such asymmetries in a systematic way
- The strength of the asymmetry is coupled to the velocity of the movement
- Idiothetic cues

Page 19: 9. Continuous attractor and competitive networks


9.4.2 Idiothetic update of head direction representations


Fig. 9.9 Model for path integration in CANNs. The central nodes are part of the network with collateral connections as used to represent head directions (Fig. 9.2). The rotation nodes represent collections of neurons that signal rotation velocities proportional to their activity. The afferents of these rotation cells can modulate the collateral connections within the head direction network. We symbolized this with synapses close to the synapses of the collateral connections. Each rotation cell can synapse on to each synapse in the head direction network. The separation of the connections, as indicated by the solid and dashed lines in the figure, is self-organized during learning.

Page 20: 9. Continuous attractor and competitive networks


9.4.3 Self-organization of a rotation network

Biologically realistic model
- Self-organize the network
Clockwise rotation
- Clockwise synapse
Short-term memory
- Trace term:

$$\bar{r}_i(t+1) = (1 - \eta)\,\bar{r}_i(t) + \eta\, r_i(t) \qquad (9.21)$$

The weights involving the rotation nodes are learned with a Hebbian rule (see the sketch below):

$$\delta w^{\mathrm{rot}}_{ijk} = k\, r_i\, \bar{r}_j\, r^{\mathrm{rot}}_k \qquad (9.22)$$

The rule strengthens the weights between the rotation node and the appropriate synapses in the recurrent network.
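A minimal sketch of the trace rule and the resulting three-factor Hebbian update. The learning rates eta and k and the array shapes are illustrative; r and r_rot are assumed to come from a training episode in which the head-direction packet moves while one rotation cell fires.

```python
import numpy as np

def trace_update(r_bar, r, eta=0.5):
    # eqn 9.21: short-term memory trace of the head-direction rates
    return (1.0 - eta) * r_bar + eta * r

def rotation_weight_update(w_rot, r, r_bar, r_rot, k=0.01):
    # eqn 9.22: delta w^rot_ijk = k * r_i * rbar_j * r^rot_k
    # During clockwise movement only the 'clockwise' rotation cell is active,
    # so only its slice of w_rot acquires the asymmetric structure.
    return w_rot + k * np.einsum('i,j,k->ijk', r, r_bar, r_rot)

# shapes: r, r_bar -> (N,);  r_rot -> (n_rot,);  w_rot -> (N, N, n_rot)
```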

Page 21: 9. Continuous attractor and competitive networks


9.4.4 Updating the network after learning

The dynamics of the model with the modulated (effective) weights:

$$\tau \frac{\partial h(x,t)}{\partial t} = -h(x,t) + \int_y w^{\mathrm{eff}}(x,y,t)\, r(y,t)\, \mathrm{d}y + I^{\mathrm{ext}}(x,t) \qquad (9.23)$$

$$w^{\mathrm{eff}}_{ij}(t) = w_{ij}\left(1 + c \sum_k w^{\mathrm{rot}}_{ijk}\, r^{\mathrm{rot}}_k(t)\right) \qquad (9.24)$$

Fig. 9.10 (A) Simulation of a CANN model with idiothetic updating mechanisms. The activity packet can be moved with idiothetic inputs in either clockwise or anti-clockwise directions, depending on the firing rates of the corresponding rotation cells. (B) The different weighting functions from node 50 to the other nodes in the network after learning: w, solid line; wrot, dashed line; weff, dotted line.
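A hedged sketch of eqns 9.23-9.24 as reconstructed above: the rotation-cell rates modulate the collateral weights, giving an asymmetric effective weight matrix that pushes the packet around the ring. The modulation constant, the gain function, and the Euler step are illustrative assumptions.

```python
import numpy as np

def effective_weights(w, w_rot, r_rot, c_rot=1.0):
    # eqn 9.24 (as reconstructed): w^eff_ij = w_ij * (1 + c_rot * sum_k w^rot_ijk r^rot_k)
    return w * (1.0 + c_rot * np.einsum('ijk,k->ij', w_rot, r_rot))

def euler_step(h, w_eff, I_ext, dt=0.1, tau=1.0):
    # one step of eqn 9.23 with an assumed sigmoidal gain
    r = 1.0 / (1.0 + np.exp(-h))
    return h + dt / tau * (-h + w_eff @ r / len(h) + I_ext)
```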

Page 22: 9. Continuous attractor and competitive networks


9.4.5 Sequence learning

Sequence learning
- Apply the generic mechanisms of asymmetric weighting functions
- Include a trace term (in pattern space) in the canonical learning rule:

$$w_{ij} = \frac{1}{N}\sum_{\mu}\left(\xi_i^{\mu}\,\xi_j^{\mu} + \lambda\,\xi_i^{\mu+1}\,\xi_j^{\mu}\right) \qquad (9.25)$$

For a sufficient strength of the asymmetric component
- set via the strength parameter $\lambda$
- the network is able to jump between the patterns in a sequence
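A sketch of eqn 9.25 in code; the asymmetry parameter name `lam` and the treatment of the last pattern (which simply has no successor here) are my own choices.

```python
import numpy as np

def sequence_weights(patterns, lam):
    """Eqn 9.25: symmetric Hebbian term plus an asymmetric term that
    associates each pattern xi^mu with its successor xi^(mu+1).
    `patterns` has shape (P, N), one pattern per row."""
    P, N = patterns.shape
    w = np.zeros((N, N))
    for mu in range(P):
        xi = patterns[mu]
        w += np.outer(xi, xi)                          # symmetric term
        if mu + 1 < P:
            w += lam * np.outer(patterns[mu + 1], xi)  # asymmetric term mu -> mu+1
    return w / N
```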

Page 23: 9. Continuous attractor and competitive networks


9.5 Competitive networks and self-organizing maps

9.5.1 Two-dimensional SOM

Two-dimensional feature vectors:

$$\mathbf{r}^{\mathrm{in}} = \begin{pmatrix} r_1^{\mathrm{in}} \\ r_2^{\mathrm{in}} \end{pmatrix} \qquad (9.26)$$

Fig. 9.11 Architecture of a two-dimensional self-organizing map. Each of the two input values $r_1^{\mathrm{in}}$ and $r_2^{\mathrm{in}}$, each representing one of two feature components, is mapped on the map network with individual weight values $w^{\mathrm{in}}$. The nodes in the map network are arranged in a two-dimensional sheet with collateral connections (not shown) corresponding to the distances between nodes in this two-dimensional sheet.

Page 24: 9. Continuous attractor and competitive networks


9.5.2 Simplifying winner-take-all description

The response of the network
- A Gaussian firing rate around the node that receives the strongest input
- The winning node is labelled with '*'; its weight vector is the one closest to the corresponding input vector:

$$|\mathbf{w}^{\mathrm{in}}_{\mathbf{x}^*} - \mathbf{r}^{\mathrm{in}}| \le |\mathbf{w}^{\mathrm{in}}_{\mathbf{x}} - \mathbf{r}^{\mathrm{in}}| \quad \text{for all } \mathbf{x} \qquad (9.27)$$

- The firing rate of the other nodes:

$$r_{ij} = e^{-[(i - i^*)^2 + (j - j^*)^2]/2\sigma^2} \qquad (9.28)$$

- Hebbian learning rule:

$$\delta w^{\mathrm{in}}_{\mathbf{x}i} = k\, r_{\mathbf{x}}\,(r^{\mathrm{in}}_i - w^{\mathrm{in}}_{\mathbf{x}i}) \qquad (9.29)$$

Fig. 9.12 Experiment with two-dimensional self-organizing feature maps. (A) Initial map with random weight values. (B) and (C) Two examples of the resulting feature map after 1000 random training examples with different random initial conditions. These simulations are discussed further in Chapter 12.
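The winner-take-all description of eqns 9.27-9.29 translates directly into a short training loop. Grid size, neighbourhood width, learning rate, and the uniform random training data are illustrative choices; simulations like those in Fig. 9.12 use their own settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, k, n_steps = 10, 2.0, 0.1, 1000      # n x n map; illustrative parameters

w = rng.random((n, n, 2))                      # weight vectors w^in of the map nodes
ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing='ij')

for _ in range(n_steps):
    r_in = rng.random(2)                       # two-dimensional training example
    # winning node: weight vector closest to the input (eqn 9.27)
    dist = np.linalg.norm(w - r_in, axis=2)
    i_star, j_star = np.unravel_index(np.argmin(dist), dist.shape)
    # Gaussian firing-rate profile around the winner (eqn 9.28)
    r = np.exp(-((ii - i_star)**2 + (jj - j_star)**2) / (2 * sigma**2))
    # Hebbian update (eqn 9.29): every node moves towards the input,
    # weighted by its firing rate
    w += k * r[:, :, None] * (r_in - w)
```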

Page 25: 9. Continuous attractor and competitive networks


9.5.3 Other competitive networks (1)


Fig. 9.13 Another example of a two-dimensional self-organizing feature map. In this example we trained the network on 1000 random training examples from the lower-left quadrant. The new training examples were then chosen randomly from the lower-left and upper-right quadrants. The parameter t specifies how many training examples have been presented to the network.

Page 26: 9. Continuous attractor and competitive networks


9.5.3 Other competitive networks (2)


Fig. 9.14 Example of categorization (vector quantization) of two-dimensional input data (two-dimensional training vectors). The training data are represented as dots, and the input vector that would best evoke a response of one of the three output nodes is represented by a cross. (A) Before training there is no correspondence between the groups of input data and the output nodes representing categories. (B) After training we have a 'preferred vector' for each node that corresponds to each of the clusters in the training data set.
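For comparison, a bare competitive network (no neighbourhood function) that performs the vector quantization described in this figure: only the winning node's 'preferred vector' is moved towards each input. The three cluster centres and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# three clusters of two-dimensional training vectors (illustrative data)
centres = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
data = np.concatenate([c + 0.05 * rng.standard_normal((200, 2)) for c in centres])
rng.shuffle(data)

w = rng.random((3, 2))                       # 'preferred vectors' of the output nodes
for x in data:
    winner = np.argmin(np.linalg.norm(w - x, axis=1))
    w[winner] += 0.05 * (x - w[winner])      # move only the winner towards the input

print(w)   # each row typically ends up near one of the cluster centres
```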

Page 27: 9. Continuous attractor and competitive networks


Conclusion

- The continuous attractor model
  - Spatial representation
- Winner-take-all models
- Hebbian learning with continuous patterns
- Gaussian interaction profiles
- Self-organization models
- Attractor regimes
- Path integration and sequence learning
- Competitive networks