
Page 1: Mean field description of neuronal network dynamics - FMI

Mean field description of neuronal network dynamics

Arvind Kumar
Neurobiology and Biophysics, Bernstein Center Freiburg
Faculty of Biology, University of Freiburg, Germany

FMI Basel, Switzerland - Introduction to Computational Neuroscience
September 23, 2013

Arvind Kumar (Bernstein Center Freiburg), Mean field analysis, September 23, 2013, slide 1/28

Page 2: Mean field description of neuronal network dynamics - FMI

Brain structure and dynamics: Some facts

• On average, 80% of neurons are excitatory and 20% inhibitory

• 1 mm³ of brain tissue contains about 100,000 neurons

• Each neuron makes sparse connections, with a connection probability of about 10%

• Each neuron receives 10^5 synapses from its neighbors in mouse and 10^6 in humans

• Neurons and synapses are stochastic

• Synapses can change, depending on the properties of the synapse and on neural activity

• Ongoing activity is highly dynamic, with low firing rates (about 1 Bq, i.e. roughly one spike per second) and weak correlations (<0.01)

• A Poisson or Gamma distribution is a decent approximation of single-neuron and population activity

• Most neurons seem to be silent

• Task-related activity is modulated in both firing rates and correlations


Page 3: Mean field description of neuronal network dynamics - FMI

Dynamics of random recurrent neural networks

From "Chaotic Balanced State in a Model of Cortical Circuits":

[Figure 1: A schematic representation of the network architecture (external, excitatory, and inhibitory populations coupled via J_EE, J_EI, J_IE, J_II). Excitatory connections are shown as open circles; inhibitory ones as filled circles.]

2 The Model

We consider a network of N_1 excitatory and N_2 inhibitory neurons. The network also receives input from excitatory neurons outside it (see Figure 1). We will use either the subscript 1 or E to denote the excitatory population and 2 or I for the inhibitory one. The pattern of connections is random but fixed in time. The connection between the ith postsynaptic neuron of the kth population and the jth presynaptic neuron of the lth population, denoted J_{ij}^{kl}, is J_{kl}/\sqrt{K} with probability K/N_l and zero otherwise. Here k, l = 1, 2. The synaptic constants J_{k1} are positive and J_{k2} negative. Thus, on average, K excitatory and K inhibitory neurons project to each neuron. We will call K the connectivity index. The state of each neuron is described by a binary variable \sigma. The value \sigma = 0 (\sigma = 1) corresponds to a quiescent (active) state. The network has an asynchronous dynamics where only one neuron updates its state at any given time. The updated state of the updating neuron at time t is

\sigma_i^k(t) = \Theta\big(u_i^k(t)\big), \qquad (2.1)

where \Theta(x) is the Heaviside function, \Theta(x) = 0 for x \le 0 and \Theta(x) = 1 for x > 0. The total synaptic input u_i^k to the neuron, relative to the threshold \theta_k, at time t is

u_i^k(t) = \sum_{l=1}^{2} \sum_{j=1}^{N_l} J_{ij}^{kl} \sigma_j^l(t) + u_0^k - \theta_k, \qquad (2.2)

where u_0^k denotes the constant external input to the kth population. As explained in appendix B, the precise definition of the order of updates is not essential. One model is a stochastic model in which each neuron updates its state…
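The asynchronous update rule of Eqs. (2.1)-(2.2) is straightforward to simulate directly. A minimal sketch; the population sizes, connectivity index, coupling constants, external inputs, and thresholds below are illustrative assumptions, not values from the text:

```python
import numpy as np

# Sketch: asynchronous dynamics of the binary network of Eqs. (2.1)-(2.2):
# sigma_i^k(t) = Theta(u_i^k(t)), with random fixed connectivity
# J_ij^kl = J_kl/sqrt(K) with probability K/N_l.
# All parameter values here are illustrative assumptions.

rng = np.random.default_rng(1)
N = [400, 100]                      # N1 excitatory, N2 inhibitory
K = 50                              # connectivity index
Jkl = np.array([[1.0, -2.0],        # J_11 (E<-E), J_12 (E<-I)
                [1.0, -1.8]])       # J_21 (I<-E), J_22 (I<-I)
u0 = np.array([1.2, 1.0])           # constant external input per population
theta = np.array([1.0, 1.0])        # thresholds

# Random, fixed connectivity: J_ij^kl = J_kl/sqrt(K) with probability K/N_l
W = [[(rng.random((N[k], N[l])) < K / N[l]) * Jkl[k, l] / np.sqrt(K)
      for l in range(2)] for k in range(2)]

sigma = [rng.integers(0, 2, n) for n in N]   # random initial binary states

def update_one(k, i):
    """Asynchronously update neuron i of population k (Eqs. 2.1-2.2)."""
    u = sum(W[k][l][i] @ sigma[l] for l in range(2)) + u0[k] - theta[k]
    sigma[k][i] = 1 if u > 0 else 0

for _ in range(20000):               # random sequential updates
    k = 0 if rng.random() < N[0] / sum(N) else 1
    update_one(k, rng.integers(N[k]))

rates = [s.mean() for s in sigma]    # population activity levels in [0, 1]
print(rates)
```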


Page 4: Mean field description of neuronal network dynamics - FMI

Dynamics of random recurrent neural networks

(Same network schematic and model equations as on Page 3.)

[Figure: Raster plots (neuron # vs. time in ms) and population rates (Bq) for three network states: Asynchronous Irregular, Synchronous Irregular, and Synchronous Regular.]

Page 5: Mean field description of neuronal network dynamics - FMI

Dynamics of random recurrent neural networks

From "High-Conductance State of Cortical Networks":

…bombardment (see Figure 4b). For states that exhibit high network synchrony, the observed values were higher than what is normally found in healthy cortical neurons. AI-type states, in contrast, were accompanied by an increase of the total membrane conductance by a factor of 2 to 5 and a corresponding reduction of the membrane time constant from \tau_{rest} = 15 ms to a value of 3 to 8 ms. These results are in very good agreement with physiological observations in vivo (Destexhe et al., 2003; Leger et al., 2005).

(Same network schematic, model equations, and raster plots of the Asynchronous Irregular, Synchronous Irregular, and Synchronous Regular states as on Pages 3-4, here arranged along the recurrent-inhibition axis g.)

Page 6: Mean field description of neuronal network dynamics - FMI

Single neuron approximation

[Schematic: a single neuron receiving external input (Poisson spikes/noise) together with excitatory and inhibitory input from within the network, and producing its output.]

(Same network schematic and model equations as on Page 3.)


Page 8: Mean field description of neuronal network dynamics - FMI

Single neuron approximation

[Schematic: a single neuron receiving external input (Poisson spikes/noise) together with excitatory and inhibitory input from within the network, and producing its output.]

\tau \dot{V}_i(t) = -V_i(t) + R\, I_i^{syn}(t)

I_i^{syn}(t) = I_i^{ext}(t) + I_i^{exc}(t) - I_i^{inh}(t)

\lambda_i(t) = f(V_i(t))

Page 9: Mean field description of neuronal network dynamics - FMI

Single neuron approximation

[Schematic as on Page 8.]

\tau \dot{V}_i(t) = -V_i(t) + R\, I_i^{syn}(t), \qquad I_i^{syn} = I_i^{ext} + I_i^{exc} - I_i^{inh}, \qquad \lambda_i(t) = f(V_i(t))

For uncorrelated Poisson-type spike trains:

\mu(I_{exc}) = \lambda_{exc} K_{exc} \int J_e\,\mathrm{EPSP}(t)\,dt

\sigma^2(I_{exc}) = \lambda_{exc} K_{exc} \int \big(J_e\,\mathrm{EPSP}(t)\big)^2\,dt

\mu(I_{inh}) = \lambda_{inh} K_{inh} \int J_i\,\mathrm{IPSP}(t)\,dt

\sigma^2(I_{inh}) = \lambda_{inh} K_{inh} \int \big(J_i\,\mathrm{IPSP}(t)\big)^2\,dt
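The mean and variance formulas above (Campbell's theorem for shot noise) are easy to evaluate numerically. A minimal sketch assuming exponential PSP kernels; all rates, input counts, amplitudes, and time constants are illustrative assumptions:

```python
import numpy as np

# Sketch: mean and variance of the summed synaptic input via Campbell's theorem:
# mu = rate * K * integral(PSP), sigma^2 = rate * K * integral(PSP^2).
# Exponential kernels and all parameter values are illustrative assumptions.

def psp(t, J, tau_syn):
    """Exponential postsynaptic kernel J * exp(-t / tau_syn)."""
    return J * np.exp(-t / tau_syn)

def shot_noise_moments(rate, K, J, tau_syn, t_max=0.5, dt=1e-5):
    """Campbell's theorem, evaluated with a simple Riemann sum."""
    t = np.arange(0.0, t_max, dt)
    kernel = psp(t, J, tau_syn)
    mu = rate * K * kernel.sum() * dt
    var = rate * K * (kernel**2).sum() * dt
    return mu, var

# Example: 800 excitatory and 200 inhibitory inputs, each firing at 5 Hz (assumed)
mu_e, var_e = shot_noise_moments(rate=5.0, K=800, J=0.1, tau_syn=0.005)
mu_i, var_i = shot_noise_moments(rate=5.0, K=200, J=0.4, tau_syn=0.010)
print(mu_e, var_e, mu_i, var_i)
```

For an exponential kernel the integrals are J·tau_syn and J²·tau_syn/2, which the sums above reproduce.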


Page 10: Mean field description of neuronal network dynamics - FMI

Single neuron approximation

The mean and variance of the synaptic current:

\mu(I_{syn}) = \mu(I_{ext}) + \mu(I_{exc}) - \mu(I_{inh})

\sigma^2(I_{syn}) = \sigma^2(I_{ext}) + \sigma^2(I_{exc}) + \sigma^2(I_{inh})

• Inhibitory inputs reduce the mean input current

• Inhibitory inputs increase the variance of the input current

• Increasing synaptic weights always increases the variance, unless the inputs are correlated

• If the total mean synaptic current is held constant, an increase in network size (i.e. in the number of connections) reduces the variance


Page 11: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks

When the variance of the input can be ignored (e.g. in large sparse networks), the average firing rates of the neuronal populations can be described as

\tau_e \frac{d\lambda_e}{dt} = -\lambda_e + J_{ee}K_{ee}\lambda_e - J_{ei}K_{ei}\lambda_i

\tau_i \frac{d\lambda_i}{dt} = -\lambda_i + J_{ie}K_{ie}\lambda_e - J_{ii}K_{ii}\lambda_i

or, in more general form,

\tau_e \frac{d\lambda_e}{dt} = -\lambda_e + \phi_e(\lambda_e, \lambda_i)

\tau_i \frac{d\lambda_i}{dt} = -\lambda_i + \phi_i(\lambda_e, \lambda_i)

where
\tau_{e,i} — effective time constant of the population
\lambda_{e,i} — average firing rate of the population
K_{1,2} — number of connections from population '2' to '1'
J_{1,2} — connection strength from population '2' to '1'
\phi_{e,i} — effective transfer function of the population
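The two-population rate equations above can be integrated with forward Euler. A minimal sketch; the coupling matrix, time constants, and initial rates are illustrative assumptions, and rates are clipped at zero:

```python
import numpy as np

# Sketch: forward-Euler integration of the two-population rate equations
# tau_e dle/dt = -le + Jee*Kee*le - Jei*Kei*li   (and likewise for li),
# with rates clipped at zero. Parameter values are illustrative assumptions.

def simulate_rates(J, K, tau_e=0.010, tau_i=0.005, T=0.5, dt=1e-4,
                   le0=1.0, li0=1.0, u_e=0.0, u_i=0.0):
    """Integrate the E-I rate model; J and K are 2x2 arrays [[ee, ei], [ie, ii]]."""
    n = int(T / dt)
    le, li = le0, li0
    trace = np.empty((n, 2))
    for step in range(n):
        dle = (-le + J[0, 0] * K[0, 0] * le - J[0, 1] * K[0, 1] * li + u_e) / tau_e
        dli = (-li + J[1, 0] * K[1, 0] * le - J[1, 1] * K[1, 1] * li + u_i) / tau_i
        le = max(le + dt * dle, 0.0)   # firing rates cannot be negative
        li = max(li + dt * dli, 0.0)
        trace[step] = (le, li)
    return trace

# Weak recurrence and no external drive: both rates decay toward zero
J = np.array([[0.2, 0.5], [0.5, 0.2]])
K = np.ones((2, 2))
trace = simulate_rates(J, K)
print(trace[-1])
```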


Page 12: Mean field description of neuronal network dynamics - FMI

Stability analysis of a linear system: Geometrical approach

[Phase-plane sketch: the (\lambda_e, \lambda_i) plane with the nullclines d\lambda_e/dt = 0 and d\lambda_i/dt = 0.]

• For stability, the inhibitory nullcline must intersect the excitatory nullcline from below.

• The slope at the intersection point determines whether the fixed point is stable or metastable.

Let's assume linear dynamics:

0 = -\lambda_e + J_{ee}\lambda_e - J_{ei}\lambda_i + U_e

0 = -\lambda_i + J_{ie}\lambda_e - J_{ii}\lambda_i + U_i


Page 13: Mean field description of neuronal network dynamics - FMI

Stability analysis of a linear system

Let's assume linear dynamics:

\tau_e \frac{d\lambda_e}{dt} = -\lambda_e + J_{ee}\lambda_e - J_{ei}\lambda_i + U_e

\tau_i \frac{d\lambda_i}{dt} = -\lambda_i + J_{ie}\lambda_e - J_{ii}\lambda_i + U_i

In matrix form, \dot{\Lambda} = A\Lambda + BU with

A = \begin{pmatrix} \frac{J_{ee}-1}{\tau_e} & -\frac{J_{ei}}{\tau_e} \\ \frac{J_{ie}}{\tau_i} & -\frac{J_{ii}+1}{\tau_i} \end{pmatrix}

The eigenvalues of this matrix completely describe the stability, and the nature of the instabilities, of the system: for \dot{y} = Ay, solutions have the form y(t) = e^{\lambda t}\nu with \nu = (\nu_1, \nu_2)^T, and

\lambda_{\pm} = \frac{1}{2}\left[ \frac{J_{ee}-1}{\tau_e} - \frac{J_{ii}+1}{\tau_i} \pm \sqrt{\left( \frac{J_{ee}-1}{\tau_e} + \frac{J_{ii}+1}{\tau_i} \right)^{2} - 4\,\frac{J_{ei}J_{ie}}{\tau_e\tau_i}} \right]

(signs of the inhibitory couplings J_{ei}, J_{ii} written explicitly, as in the rate equations above).
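The stability of the linearized system can also be checked numerically from the eigenvalues of the matrix. A minimal sketch, with the inhibitory couplings entering with a negative sign as in the rate equations; parameter values are illustrative assumptions:

```python
import numpy as np

# Sketch: stability of the linearized E-I rate model via the eigenvalues of
# A = [[(Jee-1)/tau_e, -Jei/tau_e], [Jie/tau_i, -(Jii+1)/tau_i]].
# Parameter values are illustrative assumptions.

def stability_matrix(Jee, Jei, Jie, Jii, tau_e, tau_i):
    return np.array([[(Jee - 1.0) / tau_e, -Jei / tau_e],
                     [Jie / tau_i, -(Jii + 1.0) / tau_i]])

def classify(A):
    """Stable if all eigenvalues have negative real part; oscillatory if complex."""
    ev = np.linalg.eigvals(A)
    stable = bool(np.all(ev.real < 0))
    oscillatory = bool(np.any(np.abs(ev.imag) > 1e-12))
    return ev, stable, oscillatory

A = stability_matrix(Jee=0.8, Jei=1.0, Jie=1.0, Jii=0.5, tau_e=0.010, tau_i=0.005)
ev, stable, oscillatory = classify(A)
print(ev, stable, oscillatory)
```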



Page 16: Mean field description of neuronal network dynamics - FMI

Eigenvalues and Stability

\tau = \mathrm{Tr}(A), \qquad \Delta = \det(A)

[Trace-determinant diagram of equilibrium types (from Scholarpedia, "Equilibrium").]
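The trace-determinant classification can be written down directly. A minimal sketch following the standard diagram (saddle for \Delta < 0; node vs. spiral from the sign of \tau^2 - 4\Delta; center for \tau = 0, \Delta > 0):

```python
import numpy as np

# Sketch: classifying a 2D equilibrium from tau = Tr(A) and Delta = det(A),
# following the standard trace-determinant diagram.

def equilibrium_type(A):
    tau, delta = np.trace(A), np.linalg.det(A)
    if delta < 0:
        return "saddle"
    disc = tau**2 - 4 * delta
    if tau < 0:
        return "stable node" if disc >= 0 else "stable spiral"
    if tau > 0:
        return "unstable node" if disc >= 0 else "unstable spiral"
    return "center"

print(equilibrium_type(np.array([[-20.0, -100.0], [200.0, -300.0]])))  # stable spiral
```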


Page 17: Mean field description of neuronal network dynamics - FMI

Non-linear system

\tau_e \frac{d\lambda_e}{dt} = f_e(\lambda_e, \lambda_i)

\tau_i \frac{d\lambda_i}{dt} = f_i(\lambda_e, \lambda_i)

Example:

\tau_e \frac{d\lambda_e}{dt} = k_1 a \lambda_e - k_2 \lambda_e \lambda_i = f_e(\lambda_e, \lambda_i)

\tau_i \frac{d\lambda_i}{dt} = k_2 \lambda_e \lambda_i - k_3 \lambda_i = f_i(\lambda_e, \lambda_i)

The nonlinear interactions can be approximated by the Jacobian:

J = \begin{pmatrix} \partial f_e/\partial\lambda_e & \partial f_e/\partial\lambda_i \\ \partial f_i/\partial\lambda_e & \partial f_i/\partial\lambda_i \end{pmatrix} = \begin{pmatrix} k_1 a - k_2\lambda_i & -k_2\lambda_e \\ k_2\lambda_i & k_2\lambda_e - k_3 \end{pmatrix}

Linearization of this system can be done at any point (\lambda_e^*, \lambda_i^*) in the state space.


Page 18: Mean field description of neuronal network dynamics - FMI

Non-linear system

(Same general form, example, and Jacobian as on Page 17; linearization can be done at any point (\lambda_e^*, \lambda_i^*) in the state space.)

Hyperbolic equilibrium: all the eigenvalues of the Jacobian have non-zero real part. Small perturbations then do not change the phase portrait near the equilibrium.
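For the example above, the nonzero fixed point is (\lambda_e^*, \lambda_i^*) = (k_3/k_2, k_1 a/k_2), and evaluating the Jacobian there gives purely imaginary eigenvalues, i.e. precisely a non-hyperbolic equilibrium. A minimal check with illustrative parameter values:

```python
import numpy as np

# Sketch: linearizing the example f_e = k1*a*le - k2*le*li, f_i = k2*le*li - k3*li
# at its nonzero fixed point (le* = k3/k2, li* = k1*a/k2). Parameter values are
# illustrative assumptions. The eigenvalues come out purely imaginary, so this
# equilibrium is NOT hyperbolic (a center of the linearized system).

k1, k2, k3, a = 1.0, 2.0, 0.5, 1.0
tau_e, tau_i = 0.010, 0.005

le_star, li_star = k3 / k2, k1 * a / k2    # f_e = f_i = 0 here

# Jacobian of (f_e, f_i), each row divided by its time constant
J = np.array([[(k1 * a - k2 * li_star) / tau_e, (-k2 * le_star) / tau_e],
              [(k2 * li_star) / tau_i, (k2 * le_star - k3) / tau_i]])

ev = np.linalg.eigvals(J)
print(ev)   # purely imaginary pair +/- 100i
```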


Page 19: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks

[Figure: the effective transfer functions \phi_e(\lambda_e, 0) and \phi_i(\lambda_i, 0) plotted against \lambda_e and \lambda_i on [0, 1].]

\tau_e \frac{d\lambda_e}{dt} = -\lambda_e + \phi_e(\lambda_e, \lambda_i)

\tau_i \frac{d\lambda_i}{dt} = -\lambda_i + \phi_i(\lambda_e, \lambda_i)


Page 20: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks

[Figure: the transfer functions \phi_e(\lambda_e, 0) and \phi_i(\lambda_i, 0), and the (\lambda_e, \lambda_i) phase plane with fixed points labeled S (stable), U (unstable), and M.]

\tau_e \frac{d\lambda_e}{dt} = -\lambda_e + \phi_e(\lambda_e, \lambda_i)

\tau_i \frac{d\lambda_i}{dt} = -\lambda_i + \phi_i(\lambda_e, \lambda_i)

To solve, let's look at the nullclines:

0 = -\lambda_e + \phi_e(\lambda_e, \lambda_i)

0 = -\lambda_i + \phi_i(\lambda_e, \lambda_i)
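One way to locate fixed points of the nullcline equations numerically is a coarse grid scan followed by Newton refinement. A minimal sketch; the sigmoidal transfer functions, gains, and thresholds below are assumptions for illustration, not the functions plotted on the slide:

```python
import numpy as np

# Sketch: find a fixed point of 0 = -le + phi_e(le, li), 0 = -li + phi_i(le, li)
# by a grid scan plus Newton refinement with a finite-difference Jacobian.
# The transfer functions and their gains/thresholds are illustrative assumptions.

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def F(v):
    le, li = v
    return np.array([-le + sigmoid(8.0 * (1.5 * le - 1.0 * li) - 2.0),
                     -li + sigmoid(8.0 * (1.2 * le - 0.5 * li) - 3.0)])

def jac(v, h=1e-6):
    """Central finite-difference Jacobian of F."""
    J = np.zeros((2, 2))
    for k in range(2):
        dv = np.zeros(2)
        dv[k] = h
        J[:, k] = (F(v + dv) - F(v - dv)) / (2 * h)
    return J

# Coarse grid scan for the best candidate, then Newton refinement
grid = np.linspace(0.0, 1.0, 101)
best = min(((le, li) for le in grid for li in grid),
           key=lambda v: np.linalg.norm(F(np.array(v))))
v = np.array(best)
for _ in range(50):
    v = v - np.linalg.solve(jac(v), F(v))
print(v, np.linalg.norm(F(v)))
```

Since the map (\phi_e, \phi_i) sends [0, 1]^2 into itself, at least one fixed point is guaranteed to exist.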


Page 21: Mean field description of neuronal network dynamics - FMI

Stability analysis from the nullclines

[Figure: the (\lambda_e, \lambda_i) phase plane with the nullclines d\lambda_e/dt = 0 and d\lambda_i/dt = 0 and fixed points labeled S, U, and M.]

• For stability, the inhibitory nullcline must intersect the excitatory nullcline from below.

• The slope at the intersection point determines whether the fixed point is stable or metastable.


Page 22: Mean field description of neuronal network dynamics - FMI

Self-sustained activity in random networks

[Figure: two (\lambda_e, \lambda_i) phase planes with fixed points labeled S, U, and M, illustrating conditions for self-sustained activity.]


Page 23: Mean field description of neuronal network dynamics - FMI

Self-sustained activity in random networks

From "High-Conductance State of Cortical Networks":

Figure 10: Self-sustained activity in conductance-based networks. Inducing self-sustained activity in a network comprising N = 30,000 neurons (see Table 1 for parameters). External input was necessary only to ignite network activity; here we used external inputs firing at a rate of 1 spike per second per neuron. When stationary firing had established itself (here after 100 ms), external input was gradually decreased (exponential decay, time constant 50 ms). Interestingly, the network remained active. In the example shown, self-sustained activity ceased spontaneously after about a second. The small spontaneous "population burst" that terminated self-sustained activity presumably induced too much inhibition for the network to remain active. (a) Spiking activity of 100 (black ticks) and 1000 (gray dots) excitatory neurons randomly selected from the network. (b) Peristimulus time histogram depicting the firing rate (spikes/s) averaged over all neurons in the network (bin size 1 ms). (c) Temporal protocol for the firing rates (in spikes/s) of the external inputs.

3.3.2 Survival of Self-Sustained Activity. In simulations we observed that the networks could sustain their active state for a certain period of time. Typically this self-sustained activity ended by a "spontaneous" transition to the zero-rate state (see Figure 10). What determines the survival time of (nonzero) persistent network activity?

The main assumption underlying the mean field approach taken here is the independence of input channels for each neuron. In a recurrent network, however, the finite size of the system and the unavoidable overlap of input…

Figure 11: Survival probability of persistent activity increases for larger networks. (a) Survival probability estimated from multiple simulations of large conductance-based networks. For each network size (N = 20,000/30,000/40,000/50,000 neurons) numerical simulations (black symbols) were performed, all based on the same connectivity parameters (see text and Table 1). Resorting to the mean survival time \tau_{survival} from 90 simulations for each network size, the data fit an exponential distribution P(\tau > t) = \exp(-t/\tau_{survival}) very well (gray line). (b) Mean survival time \tau_{survival} for different network sizes. The exponential increase in survival probability for larger networks suggests a stabilizing influence of large populations with less shared input. Extrapolation predicts a lifetime on the order of 1 hour for networks as large as a single cortical column. Population PSTHs (bin size 1 ms) were generated from 90 simulations of a network with N = 30,000 neurons, triggered either (c) on its spontaneous death or (d) on subcritical short periods of relative silence ("near death", with mean rate < 0.05 spikes/s for at least 1 ms). The much higher peak(s) in (c) supports an explanation of spontaneous death in terms of "lethal synchrony" (see text for more details).

Latham et al. 2000, Kumar et al. 2008


Page 24: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks: fluctuations

[Schematic: a single neuron receiving external input (Poisson spikes/noise) together with excitatory and inhibitory input from within the network, and producing its output.]

\tau \dot{V}_i(t) = -V_i(t) + R\, I_i^{syn}(t), \qquad \lambda_i(t) = f(V_i(t))

When the input is Poisson-type, the synaptic input is shot noise:

R\, I_i^{syn}(t) = \mu(t) + \sigma\sqrt{\tau}\,\eta_i(t)

\mu(t) = \mu_{net}(t) + \mu_{ext}

\mu_{net}(t) = K_{exc} J_E \lambda(t-D)\,\tau - K_{inh} J_I \lambda(t-D)\,\tau

\mu_{ext} = K_{exc} J_E \lambda_{ext}\,\tau

\sigma^2(t) = \sigma^2_{net}(t) + \sigma^2_{ext}

\sigma^2_{net}(t) = K_{exc} J_E^2 \lambda(t-D)\,\tau + K_{inh} J_I^2 \lambda(t-D)\,\tau

\sigma^2_{ext} = K_{exc} J_E^2 \lambda_{ext}\,\tau

Find a self-consistent solution of

\frac{1}{\lambda_0} = \tau_{ref} + \tau\sqrt{\pi} \int_{(V_r-\mu_0)/\sigma_0}^{(V_{th}-\mu_0)/\sigma_0} e^{u^2}\,\big(1 + \mathrm{erf}(u)\big)\, du
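The self-consistency problem above can be solved by damped fixed-point iteration on \lambda_0: compute \mu and \sigma from the current rate, evaluate the first-passage-time integral, and update. A minimal sketch; units and all parameter values are illustrative assumptions (here the excitatory and inhibitory mean inputs are chosen to cancel exactly):

```python
import numpy as np
from scipy.special import erf
from scipy.integrate import quad

# Sketch: self-consistent firing rate of a LIF neuron in a recurrent network,
# using mu, sigma of the shot-noise input and the first-passage-time formula
# 1/lambda0 = tau_ref + tau*sqrt(pi)*int exp(u^2)(1+erf(u)) du.
# All parameter values are illustrative assumptions (units: mV, s).

tau, tau_ref = 0.020, 0.002       # membrane time constant, refractory period
V_th, V_r = 20.0, 10.0            # threshold and reset
K_exc, K_inh = 800, 200
J_E, J_I = 0.1, 0.4               # synaptic efficacies (balanced: 80 - 80 = 0)
lam_ext = 15.0                    # external rate per input

def siegert_rate(mu, sigma):
    """Stationary rate from the first-passage-time integral."""
    lo, hi = (V_r - mu) / sigma, (V_th - mu) / sigma
    integral, _ = quad(lambda u: np.exp(u**2) * (1.0 + erf(u)), lo, hi)
    return 1.0 / (tau_ref + tau * np.sqrt(np.pi) * integral)

def self_consistent_rate(lam0=5.0, iters=200, eta=0.5):
    """Damped fixed-point iteration lambda -> rate(mu(lambda), sigma(lambda))."""
    lam = lam0
    for _ in range(iters):
        mu = (K_exc * J_E - K_inh * J_I) * lam * tau + K_exc * J_E * lam_ext * tau
        var = (K_exc * J_E**2 + K_inh * J_I**2) * lam * tau \
              + K_exc * J_E**2 * lam_ext * tau
        lam = (1 - eta) * lam + eta * siegert_rate(mu, np.sqrt(var))
    return lam

lam_star = self_consistent_rate()
print(lam_star)
```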



Page 27: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks: fluctuations

[Figure: a membrane potential trace (mem. pot., mV, versus time in s) and, alongside it, the probability distribution of the membrane potential.]

From "High-Conductance State of Cortical Networks":

…firing rates and the "trivial" one for zero rate (gray curve)—but a third one for some in-between rate (black curve). The two fixed points common to most input-output curves are typically attracting, whereas the potential third fixed point is normally repelling. It indicates a threshold for recurrent…

Find a self-consistent solution of

\frac{1}{\lambda_0} = \tau_{ref} + \tau\sqrt{\pi} \int_{(V_r-\mu_0)/\sigma_0}^{(V_{th}-\mu_0)/\sigma_0} e^{u^2}\,\big(1 + \mathrm{erf}(u)\big)\, du


Page 28: Mean field description of neuronal network dynamics - FMI

Firing rate dynamics of random recurrent networks: fluctuations

[Schematic: a single neuron receiving external input (Poisson spikes/noise) together with excitatory and inhibitory input from within the network, and producing its output.]

From "High-Conductance State of Cortical Networks":

Figure 9: Mean field theory versus network simulations. (a) Average firing rates estimated from network simulations for different combinations of external input \nu_{ext} and relative strength of inhibition g. (b) Firing rates at the fixed point in single-neuron simulations, where network inputs were replaced by independent Poisson inputs (mean field). (c) Absolute difference between the average firing rate measured in network simulations (a) and the firing rate obtained from mean field theory (b). As long as the network is in the asynchronous irregular regime (see Figure 4), the single-neuron approximation predicts the firing rates in the network very well. However, networks in the excitation-dominated regime (g < 1.5) or networks with strong external inputs (\nu_{ext} > 4.5) exhibit synchronous activity, and the mean field prediction fails to match the network simulations.


Page 29: Mean field description of neuronal network dynamics - FMI

Wilson-Cowan equations for networks with spatial connectivity: Neural field models

[FIG. 1: Phase diagram of the rate model, Eq. (1), in the (J_0, J_1) plane for D = 0.1. The states are: stationary uniform SU, stationary bump SB, oscillatory bump OB, oscillatory uniform OU, traveling waves TW, standing waves SW, lurching waves LW, and aperiodic patterns A. All solid lines have been determined analytically. Stability lines of other states (dotted lines) have been determined by numerical simulations. Regions of bistability are indicated by hyphens, e.g. OU-SW. Symbols refer to the patterns in Fig. 2. (For the case D = 0 see [6], figure 13.7.)]

tions of the SU state, and whose stability can be solvedanalytically. Secondary instabilities of these states areinvestigated numerically as are the dynamical patternsto which the instabilities lead.

I. Stationary Bumps. As J1 crosses the value of 2 frombelow a Turing instability of the SU state to a stationarybump of activity occurs, SB (Fig.2 !) . The stabilityboundary of such bumps can be calculated analyticallyin Eq. 1, revealing two mechanisms of destabilization.For sufficiently strong local excitatory connectivity a rateinstability occurs (m → ∞ as t → ∞). For sufficientlystrong inhibition an oscillatory instability occurs. Thisinstability may lead to the OU state, or to an oscillatorybump state, OB for J1 sufficiently large (Fig.2 #).

II. Oscillatory Uniform. Sufficiently strong global inhibition (i.e. J0 negative enough) leads to a Hopf bifurcation to an OU state (Fig. 2). For small D, this Hopf bifurcation occurs for J0 ∼ −π/(2D), and the frequency of the unstable mode at the bifurcation is f ∼ 1/(4D). The amplitude of the oscillatory instability grows until the input current crosses the threshold of the transfer function from above. The emerging limit cycle thus consists of a period, e.g. 0 < t < T1, in which the input current is negative and during which m(x, t) ∝ e^{−t}. If the duration of this initial period is greater than the delay, T1 > D, then in the subsequent time period, T1 < t < T1 + D, the solution will consist of a homogeneous exponential solution and a particular solution driven by the value of m

in the preceding epoch. The complete limit cycle can be constructed by extending this reasoning to solve Eq. (1) for as many epochs as are required to cover the full period of oscillation, T. The latter is determined by the condition m(T) = m(0). Once the limit cycle mlc(t) has been found, its stability is determined by considering the ansatz m(t) = mlc(t) + δm0(t) + δm1(t) cos(x), where δm0 and δm1 are small. The conditions δmi(T) = βi δmi(0) yield the Floquet multipliers βi for i = 0, 1. If D < T − T1 < 2D, then β0 = 1 and

β1 = e^{−T} [ 1 + (J1/2) R e^{D} + (J1²/8) (R − D)² e^{2D} ]    (2)

where R = T − T1. The homogeneous oscillations are stable if |β1| < 1. For β1 = −1, a period doubling instability of the spatially heterogeneous mode occurs, leading to SW, in which two distinct regions of the network oscillate out of phase with one another (Fig. 2). Numerical simulations show that further decreasing J1 leads to additional instabilities to aperiodic patterns A (Fig. 2). A phase instability occurs for β1 = 1. It can be shown that this condition is met in particular for J1 = 2J0, leading to SW. This condition is also met on an additional curve in the region J1 > 0. The instability which occurs as one crosses this line from below leads either to an OB or a SB state, depending on J0.
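The stability condition |β1| < 1 is easy to evaluate numerically. A small sketch, assuming the reconstruction β1 = e^{−T}[1 + (J1/2) R e^{D} + (J1²/8)(R − D)² e^{2D}] of Eq. (2); the parameter values are illustrative, not taken from the paper:

```python
import math

def beta1(T, T1, J1, D):
    # Floquet multiplier of the spatially heterogeneous cos(x) mode,
    # Eq. (2), valid for D < T - T1 < 2D; R = T - T1.
    R = T - T1
    return math.exp(-T) * (1.0 + 0.5 * J1 * R * math.exp(D)
                           + (J1**2 / 8.0) * (R - D)**2 * math.exp(2.0 * D))

def ou_stable(T, T1, J1, D):
    """Homogeneous oscillations are stable against this mode iff |beta1| < 1."""
    return abs(beta1(T, T1, J1, D)) < 1.0
```

For J1 = 0 the mode simply decays (β1 = e^{−T} < 1); making J1 strongly negative drives β1 through −1, the period-doubling route to SW described in the text.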

III. Traveling Waves. When J1 is sufficiently negative (J1 ∼ −π/D for small D), the SU state undergoes a bifurcation to TW. The profile of the wave can be derived, yielding a relation for its velocity, v = −tan(vD). The TW state can destabilize along three curves in Fig. 1. If the global inhibition increases, an oscillatory instability to a OU state occurs. If local inhibition and long-range excitation are strengthened (i.e. decreasing J1), an oscillatory instability of the waves leads to a lurching wave state [11, 20], LW, in which the waves slow down and speed up periodically (see supplementary information). Simulations reveal that LW become unstable to SW as J1 decreases (Fig. 2). Further decreasing J1 leads to additional bifurcations to more complex patterns with aperiodic dynamics, A (Fig. 2).
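The velocity relation v = −tan(vD) is transcendental; besides the trivial root v = 0, the relevant branch lies just beyond vD = π/2 and can be bracketed and bisected with the standard library alone (a sketch with D = 0.1, as in the phase diagram):

```python
import math

D = 0.1  # delay, as in Fig. 1

def f(v):
    # A root of f gives a solution of the velocity relation v = -tan(v D).
    return v + math.tan(v * D)

# On (pi/(2D), pi/D), tan(vD) runs from -infinity up to 0, so f changes
# sign exactly once there; plain bisection suffices.
lo, hi = math.pi / (2 * D) + 1e-6, math.pi / D
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
v_wave = 0.5 * (lo + hi)
```

For D = 0.1 this yields a wave speed a little above π/(2D) ≈ 15.7, consistent with fast waves at small delays.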

The stability boundaries of bumps, homogeneous oscillations and waves indicate several regions of bistability (indicated by a hyphen in Fig. 1, e.g. OU-SW). Additional regions of bistability are found in numerical simulations, bringing the total to 8 for the phase diagram in Figure 1.

In the limit D → 0 all the bifurcation lines except the Turing instability line at J1 = 2 and the rate instability lines go to negative infinity. Hence, only the SU and SB states survive in that limit. Beyond D = 0.155 the line defining the period doubling instability from the OU to the SW state moves towards more negative values of J0 and J1. This line goes to infinity as D approaches 0.365, at which point the corresponding SW and A regions disappear.

The results presented thus far are for a threshold-linear transfer function and simplified connectivity.

FIG. 2: Space-time plots of typical patterns of activity in the different regions of Figure 1, shown over 5 units of time. Left-hand column, from top to bottom: J0 = −80 and J1 = 15, 5, −46, −86, corresponding to OB, OU, SW and A in Fig. 1. Right-hand column, from top to bottom: J0 = −10 and J1 = 5, −38, −70, −80, corresponding to SB, TW, SW and A. D = 0.1 and I is varied to maintain the mean firing rate at 0.1. Dark regions indicate higher levels of activity in gray scale. Symbols refer to the location of the patterns in the phase diagram, Fig. 1.

FIG. 3: Typical firing patterns in a NSN. See text for details. Compare with Fig. 2. The network consists of two populations of 2000 neurons each. Left-hand column, from top to bottom: a localized bump with oscillations, homogeneous oscillations, a period-doubled state of oscillating bumps, and a chaotic state. Parameters (top to bottom): p^E_0 = 0.2, 0, 0, 0; p^I_0 = 0.2, 0.5, 0.5, 0.5; p^E_1 = 0.1, 0, 0, 0; p^I_1 = 0, 0, 0.2, 0.5; gE = 0.1, 0, 0, 0; gI = 0.28, 0.1, 0.1, 0.1; νext = 2000, 15000, 15000, 15000; gext = 0.01; δ = 0 ms. Right-hand column, from top to bottom: a steady and localized bump, the stationary uniform state, oscillating bumps and a chaotic state. Parameters (top to bottom): p^E_0 = 0.2; p^I_0 = 0.2; p^E_1 = 0.2, 0, 0, 0; p^I_1 = 0, 0, 0.2, 0.2; gE = 0.01, 0.01, 0.01, 0.1; gI = 0.028, 0.028, 0.028, 0.28; νext = 500, 500, 5000, 500; gext = 0.01, 0.01, 0.001, 0.01; δ = 0, 0, 0, 2.0 ms.

Simulations of Eq. (1) with other nonlinear transfer functions Φ reveal a qualitatively similar phase diagram in which all the dynamical regimes seen in Fig. 1 are present. Nonetheless, the nonlinearity of the transfer function determines the nature of the bifurcation and will thus alter the regions of bistability. A general, symmetric function J may introduce Turing and Turing-Hopf instabilities at higher wavenumber. While the simplicity of Eq. 1 allows for analysis, firing rate models do not necessarily provide an accurate description of the dynamics of more realistic networks of spiking neurons (NSN). To what extent are the dynamics in Eq. 1 relevant for understanding the patterns of activity observed in the NSN? We consider a 1-D network of conductance-based neurons with periodic boundary conditions, composed of 2 populations of N neurons: excitatory E and inhibitory I. All neurons are described by a Hodgkin-Huxley type model [21] with one somatic compartment. Na and K currents shape the action potentials. The probability of connection from a neuron in population A (= E, I) to a neuron in population B is p^{BA}, where p depends on the distance r between them as p^{BA} = p^{BA}_0 + p^{BA}_1 cos(r). Synaptic currents are modeled as Isyn,A = −gA s(t)(V − VA), A ∈ {E, I}, where V is the voltage of the post-synaptic neuron, VA is the reversal potential of the synapse (0 mV for A = E and −80 mV for A = I), gA is the maximum conductance change and s(t) is a variable which, given a presynaptic spike at time t* − δ, takes the form s(t) = (1/(τ1 − τ2)) (e^{−(t−t*)/τ1} − e^{−(t−t*)/τ2}), where δ is the delay and τ1 and τ2 are the rise and decay times. Finally, each neuron receives external, excitatory synaptic input as a Poisson process with a rate νext, modeled as the synaptic currents above with a maximum conductance change of gext. To compare the dynamical patterns found in the NSN and their dependence on the parameters with those of the firing rate model we choose pIE = pEE = pE, pEI = pII = pI, and identical synaptic time constants for excitatory and inhibitory connections (τ1 = 1 ms and τ2 = 3 ms). This creates an effective one-population network with an effective coupling similar to that of the rate model. Fig. 3 shows eight typical firing patterns in the NSN. The figures have been arranged to allow comparison with those in Fig. 2. The patterns show good qualitative agreement, and were obtained by altering the network parameters in a way analogous to those changes made in the rate model to produce Fig. 2. The only
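The double-exponential synaptic variable s(t) defined in the text can be written directly (a sketch; τ1 = 1 ms rise and τ2 = 3 ms decay as chosen above, with the delay δ assumed already absorbed into the spike time t*):

```python
import math

def s(t, t_spike, tau1=1.0, tau2=3.0):
    """Double-exponential synaptic variable, all times in ms:
    s(t) = (exp(-(t - t*)/tau1) - exp(-(t - t*)/tau2)) / (tau1 - tau2)."""
    dt = t - t_spike
    if dt < 0.0:
        return 0.0  # no effect before the (delayed) spike arrives
    return (math.exp(-dt / tau1) - math.exp(-dt / tau2)) / (tau1 - tau2)
```

With τ1 < τ2 both the prefactor and the bracket are negative, so s(t) ≥ 0; it peaks at t − t* = τ1 τ2 ln(τ2/τ1)/(τ2 − τ1) ≈ 1.65 ms for these values and then decays with the slower time constant τ2.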

Neural field models – Wilson-Cowan eq. with spatial connectivity

∂λ(x, t)/∂t = −λ(x, t) + ∫_{−∞}^{∞} dy J(y) f(λ(x − y, t − |x − y|/v))

J(|x − y|) = J0 + J1 cos(x − y)

J0 > |J1| – All excitatory

J0 < − |J1| – All inhibitory

J1 > |J0| – Mexican hat

J1 < −|J0| – Inverted Mexican hat
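A minimal discretized sketch of this model can make the Turing instability visible: delays are dropped (v → ∞), f is threshold-linear, and the kernel is normalized as J(y) = (J0 + J1 cos y)/2π so that the uniform state destabilizes at J1 = 2. Parameter values are illustrative, not from the lecture.

```python
import numpy as np

def simulate(J0, J1, I=1.0, N=128, T=30.0, dt=0.01, seed=0):
    """Euler-integrate the ring rate model dlam/dt = -lam + [J*lam + I]_+."""
    rng = np.random.default_rng(seed)
    dx = 2 * np.pi / N
    y = np.arange(N) * dx                      # connection offsets on the ring
    J = (J0 + J1 * np.cos(y)) / (2 * np.pi)    # cosine connectivity kernel
    lam = 0.1 + 1e-3 * rng.standard_normal(N)  # near-uniform initial state
    for _ in range(int(T / dt)):
        # circular convolution J * lam via FFT, times dx to approximate the integral
        drive = np.real(np.fft.ifft(np.fft.fft(J) * np.fft.fft(lam))) * dx
        lam = lam + dt * (-lam + np.maximum(drive + I, 0.0))
        lam = np.minimum(lam, 50.0)            # crude guard against runaway rates
    return lam

bump = simulate(J0=-10.0, J1=5.0)  # Mexican-hat regime: a stationary bump forms
flat = simulate(J0=-10.0, J1=1.0)  # below the Turing threshold: stays uniform
```

Below threshold the profile relaxes to the uniform rate I/(1 − J0); above it, the cos(x) mode grows until rectification balances it, giving the SB state of the phase diagram.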

Roxin et al. 2005


Page 31: Mean field description of neuronal network dynamics - FMI

Linear dynamics with delays

τe dλe(t)/dt = −λe(t) + Jee λe(t − ∆ee) − Jei λi(t − ∆ei) + ue(t)

τi dλi(t)/dt = −λi(t) + Jie λe(t − ∆ie) − Jii λi(t − ∆ii) + ui(t)

We can Taylor-expand the delayed terms to first order, λ(t − ∆) ≈ λ(t) − ∆ λ̇(t):

τe λ̇e(t) = −λe(t) + Jee (λe(t) − ∆ee λ̇e(t)) − Jei (λi(t) − ∆ei λ̇i(t)) + ue(t)

τi λ̇i(t) = −λi(t) + Jie (λe(t) − ∆ie λ̇e(t)) − Jii (λi(t) − ∆ii λ̇i(t)) + ui(t)

Collecting the λ̇ terms on the left, this can be reorganised as:

[ τe + Jee∆ee    −Jei∆ei      ] [λ̇e]   [ Jee − 1    −Jei     ] [λe]   [ue]
[ Jie∆ie         τi − Jii∆ii  ] [λ̇i] = [ Jie        −Jii − 1 ] [λi] + [ui]

It is easier to solve these equations in the Laplace domain.
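Before going to the Laplace domain, the Taylor-expanded system can already be probed numerically: written as A λ̇ = B λ (input terms dropped), its stability is decided by the eigenvalues of A⁻¹B. A sketch with illustrative coupling values (assumptions, not from the lecture):

```python
import numpy as np

def stability(J, tau, delta):
    """Largest real part of eig(inv(A) @ B) for the Taylor-expanded E-I system.

    J = (Jee, Jei, Jie, Jii), tau = (tau_e, tau_i),
    delta = (d_ee, d_ei, d_ie, d_ii): first-order delay terms.
    """
    Jee, Jei, Jie, Jii = J
    te, ti = tau
    dee, dei, die, dii = delta
    A = np.array([[te + Jee * dee, -Jei * dei],
                  [Jie * die,      ti - Jii * dii]])
    B = np.array([[Jee - 1.0, -Jei],
                  [Jie,       -Jii - 1.0]])
    eig = np.linalg.eigvals(np.linalg.solve(A, B))  # eig of inv(A) @ B
    return eig.real.max()

J = (3.0, 4.0, 4.0, 2.0)
no_delay   = stability(J, (1.0, 1.0), (0.0, 0.0, 0.0, 0.0))  # stable focus
with_delay = stability(J, (1.0, 1.0), (0.5, 0.5, 0.5, 0.5))  # delays destabilize
```

With zero delays this parameter set is a stable focus (damped E-I oscillations); pushing all delays to 0.5 flips the leading eigenvalue's real part positive, the delay-induced instability that motivates the oscillation analysis above.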


Page 32: Mean field description of neuronal network dynamics - FMI

Further reading

• Izhikevich E (2007) Dynamical Systems in Neuroscience: The Geometry of Excitability and Bursting. The MIT Press

• Wilson and Cowan (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12:1-24

• Latham et al. (2000) Intrinsic dynamics in neuronal networks. I. Theory. J. Neurophysiol. 83:808-827

• Brunel (2000) Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J. Comp. Neurosci. 8:183-208

• Roxin et al. (2005) The role of delays in shaping spatio-temporal dynamics of neuronal activity in large networks. Phys. Rev. Lett. 94:238103

• Kumar et al. (2008) The high-conductance state of cortical networks. Neural Computation 20:1-43
