TNI: Computational Neuroscience

TRANSCRIPT

Page 1:

TNI: Computational Neuroscience

Instructors: Peter Latham, Maneesh Sahani, Peter Dayan

TA: Phillipp Hehrmann, [email protected]
Website: http://www.gatsby.ucl.ac.uk/~hehrmann/TN1/

(slides will be on website)

Lectures: Tuesday/Friday, 11:00-1:00.
Review: Friday, 1:00-3:00.

Homework: Assigned Friday, due Friday (1 week later).
First homework: 2 weeks later (no class Oct. 12).

Page 2:

What is computational neuroscience?

Our goal: figure out how the brain works.

Page 3:

[figure: brain tissue at the 10-micron scale]

There are about 10 billion cubes of this size in your brain!

Page 4:

How do we go about making sense of this mess?

David Marr (1945-1980) proposed three levels of analysis:

1. the problem (computational level)
2. the strategy (algorithmic level)
3. how it’s actually done by networks of neurons (implementational level)

Page 5:

Example #1: vision.

the problem (Marr): 2-D image on retina → 3-D reconstruction of a visual scene.

Page 6:

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

[figure: a drawn scene, labeled "house", "sun", "tree", "bad artist"]

Page 7:

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

the algorithm: graphical models.

[figure: three-layer graphical model: latent variables x1, x2, x3 generate peripheral spikes r1, r2, r3, r4, from which the brain computes estimates x̂1, x̂2, x̂3 of the latent variables]
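To make the algorithm level concrete, here is a minimal sketch in Python of inference in a graphical model like the one above: a discrete latent variable x generates noisy peripheral spike counts r through Poisson tuning curves, and the estimate x̂ is the posterior mode. The latent values, tuning curves, and rates are invented for illustration; they are not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

latents = ["house", "sun", "tree"]            # possible values of x
# mean spike counts of 4 peripheral neurons under each latent (made up)
rates = np.array([[8., 2., 1., 3.],           # house
                  [1., 9., 2., 2.],           # sun
                  [2., 1., 7., 4.]])          # tree
prior = np.ones(3) / 3

x_true = 1                                    # the scene contains a sun
r = rng.poisson(rates[x_true])                # observed spike counts r1..r4

# Poisson log posterior: log P(x|r) = log P(x) + sum_i [r_i log rate_i(x) - rate_i(x)] + const
log_post = np.log(prior) + (r * np.log(rates) - rates).sum(axis=1)
post = np.exp(log_post - log_post.max())
post /= post.sum()

print("spike counts r:", r)
for name, p in zip(latents, post):
    print(f"P({name} | r) = {p:.3f}")
print("estimate x-hat:", latents[int(np.argmax(post))])
```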

Page 8:

Example #1: vision.

the problem (modern version): 2-D image on retina → reconstruction of latent variables.

the algorithm: graphical models.

implementation in networks of neurons: no clue.

Page 9:

Example #2: memory.

the problem: recall events, typically based on partial information.

Page 10:

Example #2: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

the algorithm: dynamical systems with fixed points.

[figure: trajectories in activity space (axes r1, r2, r3) flowing to fixed points]

Page 11:

Example #2: memory.

the problem: recall events, typically based on partial information. Associative or content-addressable memory.

the algorithm: dynamical systems with fixed points.

neural implementation: Hopfield networks.

x_i = sign( ∑_j J_ij x_j )
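A minimal sketch of a Hopfield network in Python, using the update rule on the slide. The slide gives only the update rule, so the Hebbian outer-product weights and the network/pattern sizes below are the textbook choices, not something specified here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 100, 3                                  # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))    # memories to store

J = patterns.T @ patterns / N                  # Hebbian outer-product rule
np.fill_diagonal(J, 0)                         # no self-connections

# recall from partial information: corrupt 20% of pattern 0
x = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)
x[flip] *= -1

for _ in range(10):                            # asynchronous updates
    for i in rng.permutation(N):
        x[i] = 1 if J[i] @ x >= 0 else -1      # x_i = sign(sum_j J_ij x_j)

print("overlap with stored pattern:", x @ patterns[0] / N)   # 1.0 = perfect recall
```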

Page 12:

Comment #1:

the problem:
the algorithm:
neural implementation:

Page 13:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

often ignored!!!

Page 14:

Comment #1:

the problem: easier
the algorithm: harder
neural implementation: harder

My favorite example: CPGs (central pattern generators)

[figure: two oscillating firing-rate traces produced by a central pattern generator]

Page 15:

Comment #2:

the problem: easier
the algorithm: harder
neural implementation: harder

You need to know a lot of math!!!!!

[figures repeated: the graphical model (x1, x2, x3 → r1, r2, r3, r4 → x̂1, x̂2, x̂3) and the activity-space picture (axes r1, r2, r3)]

Page 16:

Comment #3:

the problem: easier
the algorithm: harder
neural implementation: harder

This is a good goal, but it’s hard to do in practice.

We shouldn’t be afraid to just mess around with experimental observations and equations.

Page 17:

[figure: neuron schematic (dendrites, soma, axon) and a voltage trace: spikes from -50 mV to +40 mV, ~1 ms wide, over a 100 ms window]

A classic example: Hodgkin and Huxley.

Page 18:

A classic example: Hodgkin and Huxley.

C dV/dt = –g_L(V – V_L) – g_Na m^3 h (V – V_Na) – …
dm/dt = …
…
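For concreteness, a sketch that integrates the Hodgkin-Huxley equations with forward Euler. The slide elides everything past the sodium term; the potassium current, the m/h/n kinetics, and the squid-axon parameters below are the standard textbook forms, and the injected current is an arbitrary value chosen to make the model spike.

```python
import numpy as np

C = 1.0                                    # membrane capacitance, uF/cm^2
gL, gNa, gK = 0.3, 120.0, 36.0             # peak conductances, mS/cm^2
VL, VNa, VK = -54.4, 50.0, -77.0           # reversal potentials, mV

def rates(V):
    """Standard HH opening/closing rates for the gates m, h, n."""
    am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    bm = 4.0 * np.exp(-(V + 65) / 18)
    ah = 0.07 * np.exp(-(V + 65) / 20)
    bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
    an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    bn = 0.125 * np.exp(-(V + 65) / 80)
    return am, bm, ah, bh, an, bn

dt, T = 0.01, 100.0                        # time step and duration, ms
V, m, h, n = -65.0, 0.05, 0.6, 0.32        # roughly the resting state
I_ext = 10.0                               # injected current, uA/cm^2 (arbitrary)

spikes = 0
for _ in range(int(T / dt)):
    am, bm, ah, bh, an, bn = rates(V)
    # C dV/dt = -gL(V-VL) - gNa m^3 h (V-VNa) - gK n^4 (V-VK) + I_ext
    dV = (-gL*(V - VL) - gNa*m**3*h*(V - VNa) - gK*n**4*(V - VK) + I_ext) / C
    m += dt * (am * (1 - m) - bm * m)      # dm/dt = am (1 - m) - bm m, etc.
    h += dt * (ah * (1 - h) - bh * h)
    n += dt * (an * (1 - n) - bn * n)
    V_new = V + dt * dV
    if V < 0.0 <= V_new:                   # upward crossing of 0 mV = a spike
        spikes += 1
    V = V_new

print(f"{spikes} spikes in {T:.0f} ms")
```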

Page 19:

the problem: easier
the algorithm: harder
neural implementation: harder

A lot of what we do as computational neuroscientists is turn experimental observations into equations.

The goal here is to understand how networks or single neurons work.

We should always keep in mind that: a) this is less than ideal, b) we’re really after the big picture: how the brain works.

Page 20:

Basic facts about the brain

Page 21:

Your brain

Page 22:

Your cortex unfolded

[figure: cortex unfolded into a sheet, ~30 cm across and ~0.5 cm thick, with 6 layers; neocortex (cognition) overlies subcortical structures (emotions, reward, homeostasis, much much more)]

Page 23:

Your cortex unfolded

1 cubic millimeter, ~3×10^-5 oz

Page 24:

1 mm^3 of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

Page 25:

1 mm^3 of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

1 mm^2 of a CPU:

1 million transistors
2 connections/transistor (=> 2 million connections)
0.002 km of wire

Page 26:

1 mm^3 of cortex:

50,000 neurons
10,000 connections/neuron (=> 500 million connections)
4 km of axons

whole brain (2 kg):

10^11 neurons
10^15 connections
8 million km of axons

1 mm^2 of a CPU:

1 million transistors
2 connections/transistor (=> 2 million connections)
0.002 km of wire

whole CPU:

10^9 transistors
2×10^9 connections
2 km of wire


Page 28:

[figure: neuron schematic with dendrites (input), soma (spike generation), and axon (output), and a voltage trace: spikes from -50 mV to +40 mV, ~1 ms wide, over a 100 ms window]

Page 29:

Page 30:

[figure: current flow at a synapse]

Page 31:

[figure: current flow at a synapse]

Page 32:

[figure: voltage trace: spikes from -50 mV to +40 mV over a 100 ms window]

Page 33:

[figure: two connected cells, neuron j → neuron i]

neuron j emits a spike:

[figure: V on neuron i vs. t: a small depolarizing bump lasting ~10 ms, an EPSP]

Page 34:

[figure: two connected cells, neuron j → neuron i]

neuron j emits a spike:

[figure: V on neuron i vs. t: a small hyperpolarizing dip lasting ~10 ms, an IPSP]

Page 35:

[figure: two connected cells, neuron j → neuron i]

neuron j emits a spike:

[figure: V on neuron i vs. t: an IPSP lasting ~10 ms]

amplitude = w_ij

Page 36:

[figure: two connected cells, neuron j → neuron i]

neuron j emits a spike:

[figure: V on neuron i vs. t: an IPSP lasting ~10 ms]

amplitude = w_ij, which changes with learning
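A small sketch of this picture in Python: the postsynaptic potential is modeled as a stereotyped kernel whose amplitude is the weight w_ij, so positive weights give EPSPs and negative weights give IPSPs. The double-exponential kernel and its time constants are illustrative assumptions, not from the lecture.

```python
import numpy as np

dt = 0.1                                   # ms
t = np.arange(0, 50, dt)                   # time after the presynaptic spike

def psp(w_ij, tau_rise=1.0, tau_decay=5.0):
    """Voltage deflection of neuron i after neuron j spikes at t = 0."""
    k = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
    return w_ij * k / k.max()              # normalize so the peak equals w_ij

epsp = psp(+0.5)                           # excitatory synapse, w_ij > 0
ipsp = psp(-0.5)                           # inhibitory synapse, w_ij < 0

print("EPSP peak (mV):", epsp.max())       # +0.5
print("IPSP trough (mV):", ipsp.min())     # -0.5
```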

Page 37:

[figure: current flow at a synapse]

Page 38:

A bigger picture view of the brain

Page 39:

[figure: information flow through the brain:
x (latent variables) → r (peripheral spikes) → sensory processing → r̂, a "direct" code for latent variables → action selection (interacting with emotions, cognition, memory) → r̂', a "direct" code for motor actions → motor processing → r' (peripheral spikes) → x' (motor actions)]

Pages 40-44: [figure only, built up over five slides: a stick-figure cartoon of the representation r; caption: "you are the cutest stick figure ever!"]

Page 45:

[figure: the flow diagram again: x → r → brain (action selection; emotions, cognition, memory) → r̂' → r' → x']

Questions:
1. How does the brain re-represent latent variables?
2. How does it manipulate re-represented variables?
3. How does it learn to do both?

Ask at three levels:
1. What are the properties of task x?
2. What are the algorithms?
3. How are they implemented in neural circuits?

Page 46:

Knowing the algorithms is a critical, but often neglected, step!!!

We know the algorithms that the vestibular system uses.
We know (sort of) how it’s implemented at the neural level.

We know the algorithm for echolocation.
We know (mainly) how it’s implemented at the neural level.

We know the algorithm for computing x+y.
We know (mainly) how it might be implemented in the brain.

Page 47:

Knowing the algorithms is a critical, but often neglected, step!!!

We don’t know the algorithms for anything else.
We don’t know how anything else is implemented at the neural level.

This is not a coincidence!!!!!!!!

Page 48:

What we know about the brain

(highly biased)

Page 49:

1. Anatomy. We know a lot about what is where. But be careful about labels: neurons in motor cortex sometimes respond to color.

Connectivity. We know (more or less) which area is connected to which. We don’t know the wiring diagram at the microscopic level.

[figure: microscopic wiring diagram, connection weights w_ij]

Page 50:

2. Single neurons. We know very well how point neurons work (think Hodgkin-Huxley).

Dendrites. Lots of potential for incredibly complex processing. My guess: they make neurons bigger and reduce wiring length.

Page 51:

3. The neural code. We’re pretty sure that information is carried in action potentials. We’re not sure what aspects of action potentials carry the information. The two main candidates:

precise timing
firing rate

Once you get away from the periphery, it’s mainly firing rate.
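A toy illustration of the distinction: the two spike trains below have identical firing rates but different spike times, so a rate code cannot tell them apart while a timing code can. The spike times are made up.

```python
import numpy as np

window = 1.0                                   # seconds
train_a = np.array([0.11, 0.24, 0.38, 0.52, 0.67, 0.81])
train_b = np.array([0.05, 0.19, 0.33, 0.55, 0.72, 0.94])

# rate code: only the spike count in the window matters
print("rates (Hz):", len(train_a) / window, len(train_b) / window)  # identical

# timing code: the precise times matter, e.g. down to 1-ms bins
bins = np.arange(0, window + 1e-9, 0.001)
same = np.array_equal(np.histogram(train_a, bins)[0],
                      np.histogram(train_b, bins)[0])
print("identical at 1 ms resolution?", same)                        # False
```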

Page 52:

4. Recurrent networks of spiking neurons. This is a field that is advancing rapidly! There were two absolutely seminal papers about a decade ago:

van Vreeswijk and Sompolinsky (Science, 1996)
van Vreeswijk and Sompolinsky (Neural Comp., 1998)

We now understand very well randomly connected networks (harder than you might think), and (I believe) we are on the verge of:

i) understanding networks that have interesting computational properties.
ii) computing the correlational structure in those networks.
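As a rough illustration of what a randomly connected network looks like, here is a toy binary-neuron network loosely in the spirit of the balanced-network papers above: strong (order 1/√K) sparse random excitation and inhibition that settle into steady, moderate activity. The sizes, gains, and external drive are invented and are not the parameters of the original papers.

```python
import numpy as np

rng = np.random.default_rng(2)
NE = NI = 1000                                  # excitatory / inhibitory cells
N = NE + NI
K = 100                                         # average number of inputs per cell
JE, JI = 1.0 / np.sqrt(K), -2.0 / np.sqrt(K)    # strong coupling, O(1/sqrt(K))

# sparse random connectivity: each connection present with probability K/N
W = (rng.random((N, N)) < K / N).astype(float)
W[:, :NE] *= JE                                 # columns index presynaptic cells
W[:, NE:] *= JI

x = rng.integers(0, 2, N).astype(float)         # binary states, 0 or 1
drive = 0.2 * np.sqrt(K)                        # external input, also O(sqrt(K))

for _ in range(20 * N):                         # random asynchronous updates
    i = rng.integers(N)
    x[i] = float(W[i] @ x + drive > 0)          # threshold update

# excitation and inhibition cancel at leading order, leaving moderate activity
print("mean activity:", x.mean())
```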

Page 53:

5. Learning. We know a lot of facts (LTP, LTD, STDP), but it’s not clear which, if any, are relevant.

Theorists are starting to develop unsupervised learning algorithms, mainly ones that maximize mutual information.

These are promising, but the link to the brain has not been fully established.


Page 55:

A word about learning (remember these numbers!!!):

You have about 10^15 synapses.

If it takes 1 bit of information to set a synapse, you need 10^15 bits to set all of them.

30 years ≈ 10^9 seconds.

To set 1/10 of your synapses in 30 years, you must absorb 100,000 bits/second.
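A quick check of that arithmetic (the force of the argument is that no external teacher could plausibly supply bits at this rate, so most of those bits must come from the raw sensory stream):

```python
# Re-deriving the numbers on this slide.
synapses = 1e15                    # total synapses
bits_needed = synapses / 10        # set 1/10 of them, at 1 bit each
seconds = 30 * 365 * 24 * 3600     # 30 years, about 1e9 seconds
print(f"{bits_needed / seconds:,.0f} bits/second")   # roughly 100,000
```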

Learning in the brain is almost completely unsupervised!!!

Page 56:

6. Where we know algorithms we know the neural implementation (sort of): vestibular system, sound localization, echolocation, x+y.

Page 57:

1. What we know: my score (1 = low, 10 = high).

a. Anatomy: 7
b. Single neurons: 7
c. The neural code: 5
d. Recurrent networks of spiking neurons: 4
e. Learning: 2

Questions: all answers are “we don’t know”.
1. How does the brain re-represent latent variables? 0.001
2. How does it manipulate re-represented variables? 0.002
3. How does it learn to do both? 0.001

Page 58:

Outline:

1. Basics: single neurons/axons/dendrites/synapses. Latham
2. Language of neurons: neural coding. Sahani
3. What we know about networks (very little). Latham
4. Learning at network and behavioral level. Dayan

Page 59:

Outline for this part of the course (biophysics):

1. What makes a neuron spike.
2. How current propagates in dendrites.
3. How current propagates in axons.
4. How synapses work.
5. Lots and lots of math!!!