
1

CSE/NEURO 528 Lecture 13: Reinforcement Learning & Course Review (Chapter 9)

Animation: Tom Creed, SJU

2

Early Results: Pavlov and his Dog

• Classical (Pavlovian) conditioning experiments
• Training: Bell → Food
• After: Bell → Salivate
• Conditioned stimulus (bell) predicts future reward (food)

Image: Wikimedia Commons; Animation: Tom Creed, SJU


3

Predicting Delayed Rewards

• How do we predict rewards delivered some time after a stimulus is presented?
• Given: many trials, each of length T time steps
• Time within a trial: 0 ≤ t ≤ T, with stimulus u(t) and reward r(t) at each time step t (note: r(t) can be zero for some t)
• We would like a neuron whose output v(t) predicts the expected total future reward starting from time t:

  v(t) = ⟨ Σ_{τ=0}^{T-t} r(t + τ) ⟩  (average over trials)

4

Learning to Predict Future Rewards

• Use a set of synaptic weights w(τ) and predict based on all past stimuli u(t) (a linear filter!):

  v(t) = Σ_{τ=0}^{t} w(τ) u(t - τ)

  [Diagram: v(t) computed from u(t), u(t-1), …, u(0) via weights w(0), …, w(T)]

• Learn weights w(τ) that minimize the error:

  ⟨ ( Σ_{τ=0}^{T-t} r(t + τ) - v(t) )² ⟩

(Can we minimize this using gradient descent and the delta rule?)
Yes, BUT future rewards are not yet available!
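As a concrete illustration of the linear-filter prediction, here is a minimal NumPy sketch; the variable names and the toy stimulus are my own, not from the slides:

```python
import numpy as np

def predict(w, u):
    """Linear-filter prediction: v(t) = sum_{tau=0..t} w(tau) * u(t - tau)."""
    T = len(u)
    v = np.zeros(T)
    for t in range(T):
        for tau in range(t + 1):
            v[t] += w[tau] * u[t - tau]
    return v

u = np.array([0., 0., 1., 0., 0., 0.])   # a stimulus pulse at t = 2
w = np.array([0., 0., 1., 0., 0., 0.])   # a weight that echoes the stimulus two steps later
print(predict(w, u))                     # the prediction peaks at t = 4
```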


5

Temporal Difference (TD) Learning

• Key Idea: Rewrite the error function to get rid of the future terms:

  Σ_{τ=0}^{T-t} r(t + τ) = r(t) + Σ_{τ=0}^{T-t-1} r(t + 1 + τ) ≈ r(t) + v(t + 1)

  so the error to minimize becomes ⟨ ( r(t) + v(t + 1) - v(t) )² ⟩, where r(t) + v(t + 1) is the expected future reward and v(t) is the prediction. Minimize this using gradient descent!

• Temporal Difference (TD) Learning:

  w(τ) → w(τ) + ε [ r(t) + v(t + 1) - v(t) ] u(t - τ)
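Below is a minimal sketch of TD learning on the delayed-reward setup shown on the next slide (stimulus at t = 100, reward at t = 200). The learning rate and trial count are illustrative choices of mine, not values from the lecture:

```python
import numpy as np

T, n_trials, eps = 250, 1000, 0.2

u = np.zeros(T); u[100] = 1.0     # stimulus at t = 100
r = np.zeros(T); r[200] = 1.0     # reward at t = 200

w = np.zeros(T)                   # weights w(tau)

for _ in range(n_trials):
    v = np.convolve(u, w)[:T]                 # v(t) = sum_tau w(tau) u(t - tau)
    v_next = np.append(v[1:], 0.0)            # v(t+1), zero beyond the trial
    delta = r + v_next - v                    # TD error delta(t)
    # batch form of the TD rule: w(tau) += eps * sum_t delta(t) u(t - tau)
    w += eps * np.correlate(delta, u, mode="full")[T - 1:]

v = np.convolve(u, w)[:T]
delta = r + np.append(v[1:], 0.0) - v
print(round(v[150], 2))        # ~1: predicted future reward between stimulus and reward
print(int(np.argmax(delta)))   # the TD error now peaks near the stimulus, not the reward
```

The batch update uses the fact that summing the per-time-step rule over a trial amounts to cross-correlating the TD error with the stimulus.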

6

Predicting Future Rewards: TD Learning

[Figure: stimulus at t = 100 and reward at t = 200; prediction error δ at each time step, over many trials of training]

Image Source: Dayan & Abbott textbook


7

Possible Reward Prediction Error Signal in the Primate Brain

Dopaminergic cells in the Ventral Tegmental Area (VTA)

Before training: the cells respond at the time of reward. Reward prediction error δ = r(t) + v(t + 1) - v(t)?

After training: no error at the time of reward, because the reward is now fully predicted: v(t) = r(t) + v(t + 1), so δ = [r(t) + v(t + 1) - v(t)] = 0.

Image Source: Dayan & Abbott textbook

8

More Evidence for Prediction Error Signals

Dopaminergic cells in VTA after training, when the reward is expected but not delivered:

r(t) = 0 and v(t + 1) = 0, so δ = [r(t) + v(t + 1) - v(t)] = -v(t) < 0  (negative error)

Image Source: Dayan & Abbott textbook


9

Reinforcement Learning: Acting to Maximize Rewards

[Diagram: the agent observes state u_t from the environment, takes action a_t, and receives reward r_t]

10

The Problem

[Same agent-environment loop: state u_t, action a_t, reward r_t]

Learn a state-to-action mapping or "policy", u → a, which maximizes the expected total future reward:

  ⟨ Σ_{τ=0}^{T-t} r(t + τ) ⟩  (average over trials)
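In code, this agent-environment loop might look like the following skeleton; the class, the toy dynamics, and the state names are hypothetical, for illustration only:

```python
import random

class Environment:
    """A toy environment: the agent is in a state u, acts, and gets a reward and a new state."""
    def __init__(self):
        self.state = "A"
    def step(self, action):
        # made-up dynamics: from A, L leads to B and R leads to C; both return to A
        if self.state == "A":
            self.state = "B" if action == "L" else "C"
            reward = 0.0
        else:
            reward = 1.0 if (self.state, action) == ("B", "R") else 0.0
            self.state = "A"
        return self.state, reward

def random_policy(state):
    """A policy maps states u to actions a; here, uniformly at random."""
    return random.choice(["L", "R"])

env = Environment()
u, total = env.state, 0.0
for t in range(10):            # one short trial
    a = random_policy(u)       # choose action a_t given state u_t
    u, r = env.step(a)         # environment returns state u_{t+1} and reward r_t
    total += r
print(total)                   # total reward; the RL problem is to choose actions that maximize its expectation
```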


11

Example: Rat in a barn

• States = locations A, B, or C
• Actions = L (go left) or R (go right)
• If the rat chooses L or R at random (a random "policy"), what is the expected reward (or "value") v for each state?

Image Source: Dayan & Abbott textbook

12

Policy Evaluation

For the random policy:

  v(B) = ½ (5 + 0) = 2.5
  v(C) = ½ (2 + 0) = 1
  v(A) = ½ v(B) + ½ v(C) = 1.75

Can learn the value of states using TD learning. Let the value of state u be v(u) = weight w(u); each (location, action) pair leads to a new location, i.e., (u, a) → u'. Then:

  w(u) → w(u) + ε [ r(u) + v(u') - v(u) ]
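Here is a small sketch of TD policy evaluation on the barn example. The reward layout (branches below B paying 0 and 5, branches below C paying 0 and 2) is inferred from the values on the slide; which side pays which is my assumption, and the learning rate is illustrative:

```python
import random

transitions = {          # (state, action) -> (next state, reward); assumed layout
    ("A", "L"): ("B", 0.0), ("A", "R"): ("C", 0.0),
    ("B", "L"): ("end", 0.0), ("B", "R"): ("end", 5.0),
    ("C", "L"): ("end", 2.0), ("C", "R"): ("end", 0.0),
}

w = {"A": 0.0, "B": 0.0, "C": 0.0, "end": 0.0}   # value of state u: v(u) = w(u)
eps = 0.1

for trial in range(5000):
    u = "A"
    while u != "end":
        a = random.choice(["L", "R"])            # random policy
        u_next, r = transitions[(u, a)]
        # TD rule: w(u) <- w(u) + eps * [r + v(u') - v(u)]
        w[u] += eps * (r + w[u_next] - w[u])
        u = u_next

print({s: round(w[s], 2) for s in "ABC"})
# fluctuates around the slide's values: A ~ 1.75, B ~ 2.5, C ~ 1.0
```

With a constant learning rate the estimates fluctuate around the true values rather than settling exactly on them.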


13

TD Learning of Values for Random Policy

[Figure: learned values converge over trials to v(A) = 1.75, v(B) = 2.5, v(C) = 1 (for all three, ε = 0.5)]

Once I know the values, I can pick the action that leads to the higher-valued state!

Image Source: Dayan & Abbott textbook

14

Selecting Actions based on Values

Values (2.5 vs. 1) act as surrogate immediate rewards. A locally optimal choice then leads to a globally optimal policy for "Markov" environments. (Related to Dynamic Programming.)


15

Putting it all together: Actor-Critic Learning

• Two separate components: the Actor (selects actions and maintains the policy) and the Critic (maintains the value of each state)

1. Critic Learning ("Policy Evaluation"): value of state u is v(u) = w(u), updated with the TD rule (same as before):

   w(u) → w(u) + ε [ r(u) + v(u') - v(u) ]

2. Actor Learning ("Policy Improvement"): probabilistically select an action a at state u using the softmax policy

   P(a; u) = exp(β Q_a(u)) / Σ_b exp(β Q_b(u))

   and then, for all actions a', update

   Q_a'(u) → Q_a'(u) + ε [ r(u) + v(u') - v(u) ] [ δ_{a,a'} - P(a'; u) ]

3. Repeat 1 and 2
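A minimal actor-critic sketch for the same barn example follows; the reward layout, learning rate, and β are illustrative assumptions:

```python
import numpy as np

transitions = {          # assumed barn layout, as in the policy-evaluation sketch
    ("A", "L"): ("B", 0.0), ("A", "R"): ("C", 0.0),
    ("B", "L"): ("end", 0.0), ("B", "R"): ("end", 5.0),
    ("C", "L"): ("end", 2.0), ("C", "R"): ("end", 0.0),
}
actions = ["L", "R"]
v = {s: 0.0 for s in ["A", "B", "C", "end"]}     # critic: state values
Q = {s: np.zeros(2) for s in ["A", "B", "C"]}    # actor: action values Q_a(u)
eps, beta = 0.1, 1.0

def policy(u):
    """Softmax action selection: P(a; u) proportional to exp(beta * Q_a(u))."""
    p = np.exp(beta * Q[u]); p /= p.sum()
    return np.random.choice(2, p=p), p

for trial in range(5000):
    u = "A"
    while u != "end":
        a_idx, p = policy(u)
        u_next, r = transitions[(u, actions[a_idx])]
        delta = r + v[u_next] - v[u]             # TD error
        v[u] += eps * delta                      # critic: policy evaluation
        for b in range(2):                       # actor: policy improvement
            Q[u][b] += eps * delta * ((b == a_idx) - p[b])
        u = u_next

_, pA = policy("A"); _, pB = policy("B")
print(round(pA[actions.index("L")], 2), round(pB[actions.index("R")], 2))
# both probabilities approach 1: go Left at A (toward B) and Right at B (toward the reward of 5)
```

Because v(B) ends up larger than v(C), policy improvement pushes the probability of going Left at A, and then Right at B, toward 1.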

16

Actor-Critic Learning in our Barn Example

Probability of going Left at each location

Image Source: Dayan & Abbott textbook


17

Possible Implementation of the Actor-Critic Model in the Basal Ganglia

[Diagram: a possible actor-critic circuit in the basal ganglia. The cortex provides a state estimate to the striatum (a hidden layer); an actor pathway through GPi/SNr, STN, GPe, and thalamus produces the action, and a critic pathway computes the value, with dopamine (DA) from SNc carrying the TD error]

(See Supplementary Materials for references)

18

Reinforcement learning has been applied to many real-world problems!

Examples:
• Google's AlphaGo beats a human champion in Go
• Autonomous helicopter flight (learned from human demonstrations)
  (Videos and papers at: http://heli.stanford.edu/)


19

Course Summary

• Where have we been? Course Highlights
• Where do we go from here? Challenges and Open Problems
• Further Reading

20

What is the neural code?

• What is the nature of the code? Representing the spiking output: single cells vs. populations; rates vs. spike times vs. intervals
• What features of the stimulus does the neural system represent?


21

Encoding and decoding neural information

• Encoding: building functional models of neurons/neural systems and predicting the spiking output given the stimulus
• Decoding: what can we say about the stimulus given what we observe from the neuron or neural population?

22

Information maximization as a design principle of the nervous system


23

Biophysical Models of Neurons

• Voltage-dependent conductances
• Transmitter-dependent (synaptic) conductances
• Ca-dependent conductances

24

The neural equivalent circuit

Ohm's law and Kirchhoff's law relate the capacitive current, the ionic currents, and the externally applied current.


25

Simplified models: integrate-and-fire

Integrate-and-Fire Model:

  τ_m dV/dt = E_L - V + R_m I_e

  If V > V_threshold: spike
  Then reset: V = V_reset
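A forward-Euler simulation of this model takes only a few lines; the parameter values below are typical textbook choices, not taken from the slide:

```python
import numpy as np

tau_m, E_L, R_m = 10.0, -65.0, 10.0      # ms, mV, MOhm
V_th, V_reset = -50.0, -65.0             # mV
I_e = 2.0                                # nA, constant injected current
dt, T = 0.1, 200.0                       # ms

V, spikes = E_L, []
for step in range(int(T / dt)):
    # tau_m dV/dt = E_L - V + R_m * I_e
    V += dt / tau_m * (E_L - V + R_m * I_e)
    if V > V_th:                          # threshold crossing -> spike
        spikes.append(step * dt)
        V = V_reset                       # reset after the spike
print(len(spikes), "spikes in", T, "ms")
```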

26

Modeling Networks of Neurons

  τ dv/dt = -v + F(W u + M v)

  (output v; decay -v; feedforward input W u; recurrent feedback M v)
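A direct Euler integration of this network equation, with made-up sizes, weights, and a rectifying choice for F, might look like:

```python
import numpy as np

def F(x):
    return np.maximum(x, 0.0)            # rectification nonlinearity (an assumption)

rng = np.random.default_rng(0)
N_in, N_out = 5, 3
W = rng.normal(0, 0.5, (N_out, N_in))    # feedforward input weights
M = np.full((N_out, N_out), -0.2)        # recurrent weights (mutual inhibition)
np.fill_diagonal(M, 0.0)

u = rng.random(N_in)                     # fixed input
v = np.zeros(N_out)                      # output rates
tau, dt = 10.0, 0.1                      # ms

for _ in range(2000):                    # integrate to steady state
    v += dt / tau * (-v + F(W @ u + M @ v))
print(np.round(v, 3))                    # steady-state output rates
```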


27

Unsupervised Learning

• For a linear neuron: v = wᵀu = uᵀw
• Basic Hebb Rule: τ_w dw/dt = v u
• Average effect over many inputs: τ_w dw/dt = Q w
• Q is the input correlation matrix: Q = ⟨u uᵀ⟩

The Hebb rule performs principal component analysis (PCA).
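The following sketch illustrates the PCA claim numerically. The plain Hebb rule lets |w| grow without bound, so here the weight vector is renormalized after each update (one common fix; Oja's rule is another). The input statistics and learning rate are made up:

```python
import numpy as np

rng = np.random.default_rng(1)

# correlated, zero-mean 2-D inputs
C = np.array([[3.0, 1.5],
              [1.5, 1.0]])
u_samples = rng.multivariate_normal([0, 0], C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for u in u_samples:
    v = w @ u                 # linear neuron: v = w . u
    w += eta * v * u          # Hebb rule: dw proportional to v u
    w /= np.linalg.norm(w)    # keep |w| = 1

Q = (u_samples.T @ u_samples) / len(u_samples)   # input correlation matrix
eigvals, eigvecs = np.linalg.eigh(Q)
pc1 = eigvecs[:, -1]                             # principal eigenvector of Q
print(np.round(abs(w @ pc1), 3))                 # ~1.0: w aligns with the first PC
```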

28

The Connection to Statistics

Causes v ↔ Data u

• Generative model: p[u | v; G]  (data likelihood)
• Recognition model: p[v | u; G]  (posterior)

Unsupervised learning = learning the hidden causes of input data (G denotes the model parameters). Examples: causes of clustered data; "causes" of natural images. Use the EM algorithm for learning.
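As a reminder of how EM fits such a generative model, here is a minimal sketch for the "causes of clustered data" case: a two-cause Gaussian mixture in one dimension, with all numbers chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
u = np.concatenate([rng.normal(-2.0, 0.5, 300),    # data generated by cause 1
                    rng.normal(3.0, 1.0, 300)])    # data generated by cause 2

mu = np.array([-1.0, 1.0])       # initial guesses for the cluster means
sigma = np.array([1.0, 1.0])
pi = np.array([0.5, 0.5])        # mixing proportions (prior over causes v)

for _ in range(50):
    # E step: recognition model p[v | u; G], the posterior over the cause of each point
    lik = pi * np.exp(-(u[:, None] - mu) ** 2 / (2 * sigma ** 2)) / sigma
    resp = lik / lik.sum(axis=1, keepdims=True)
    # M step: re-fit the generative model parameters G = (pi, mu, sigma)
    Nk = resp.sum(axis=0)
    pi = Nk / len(u)
    mu = (resp * u[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (u[:, None] - mu) ** 2).sum(axis=0) / Nk)

print(np.round(mu, 2), np.round(sigma, 2))   # recovers means near -2 and 3
```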


29

Generative Models

[Example graphical model with nodes: "Droning lecture", "Mathematical derivations", "Lack of sleep"]

30

Supervised Learning

Goal: Find W and w that minimize the errors:

  v_i^m = g( Σ_j W_ij g( Σ_k w_jk u_k^m ) ),  with hidden-layer activity x_j^m = g( Σ_k w_jk u_k^m )

  E(W, w) = ½ Σ_{m,i} ( d_i^m - v_i^m )²   (d^m is the desired output)

Backpropagation for Multilayered Networks

Gradient descent learning rules:

  W_ij → W_ij - ε ∂E/∂W_ij   (delta rule)

  w_jk → w_jk - ε ∂E/∂w_jk,  with ∂E/∂w_jk = Σ_m (∂E/∂x_j^m)(∂x_j^m/∂w_jk)   (chain rule)
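A small numerical sketch of these backpropagation rules, with g = tanh and made-up data and layer sizes, is shown below:

```python
import numpy as np

rng = np.random.default_rng(3)
g = np.tanh
g_prime = lambda x: 1.0 - np.tanh(x) ** 2

U = rng.normal(size=(100, 4))                 # inputs u^m (100 samples, 4 features)
D = np.sin(U @ rng.normal(size=(4, 2)))       # desired outputs d^m (toy targets)

w = rng.normal(scale=0.5, size=(4, 3))        # input -> hidden weights w_jk
W = rng.normal(scale=0.5, size=(3, 2))        # hidden -> output weights W_ij
eta = 0.05

for epoch in range(2000):
    a = U @ w;  x = g(a)                      # hidden layer x_j^m
    b = x @ W;  v = g(b)                      # output layer v_i^m
    err = v - D                               # dE/dv for E = 1/2 sum (d - v)^2
    dW = x.T @ (err * g_prime(b))             # delta rule for the output weights
    dw = U.T @ ((err * g_prime(b)) @ W.T * g_prime(a))   # chain rule for the hidden weights
    W -= eta * dW / len(U)                    # gradient descent: W <- W - eta dE/dW
    w -= eta * dw / len(U)

print(round(0.5 * np.mean((D - g(g(U @ w) @ W)) ** 2), 4))   # final training error (half the mean squared error)
```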


31

Reinforcement Learning

• Learning to predict rewards:  w → w + ε (r - v) u
• Learning to predict delayed rewards (TD learning):  w(τ) → w(τ) + ε [ r(t) + v(t + 1) - v(t) ] u(t - τ)
• Actor-Critic Learning:
  • Critic learns the value of each state using TD learning
  • Actor learns the best actions based on the value of the next state (using the TD error)

(Animation: http://employees.csbsju.edu/tcreed/pb/pdoganim.html)
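For completeness, here is a sketch of the first rule (a Rescorla-Wagner style delta-rule update for immediate reward prediction) on a toy two-stimulus problem; the stimuli and reward schedule are made up:

```python
import numpy as np

rng = np.random.default_rng(4)
w = np.zeros(2)
eps = 0.1

for trial in range(200):
    u = np.array([1.0, rng.integers(0, 2)])   # stimulus 1 always present, stimulus 2 sometimes
    r = 1.0                                   # reward delivered on every trial
    v = w @ u                                 # prediction v = w . u
    w += eps * (r - v) * u                    # delta rule: w -> w + eps * (r - v) * u

print(np.round(w, 2))   # w ~ [1, 0]: stimulus 1 takes the prediction, stimulus 2 adds nothing
```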

32

The Future: Challenges and Open Problems

• How do neurons encode information?
  • Topics: synchrony, spike-timing based learning, dynamic synapses
• Does a neuron's structure confer computational advantages?
  • Topics: role of channel dynamics, dendrites, plasticity in channels and their density
• How do networks implement computational principles such as efficient coding and Bayesian inference?
• How do networks learn "optimal" representations of their environment and engage in purposeful behavior?
  • Topics: unsupervised/reinforcement/imitation learning


33

Further Reading (for Spring and beyond)

• Spikes: Exploring the Neural Code, F. Rieke et al., MIT Press, 1997

• The Biophysics of Computation, C. Koch, Oxford University Press, 1999

• Large-Scale Neuronal Theories of the Brain, C. Koch and J. L. Davis, MIT Press, 1994

• Probabilistic Models of the Brain, R. Rao et al., MIT Press, 2002

• Bayesian Brain, K. Doya et al., MIT Press, 2007

• Reinforcement Learning: An Introduction, R. Sutton and A. Barto, MIT Press, 1998

34

Next two classes: Project presentations!

• Keep your presentation short: ~7-8 slides, 8 minutes + 3 minutes for questions per group, covering Introduction, Background, Methods, Results, Conclusion
• Slides: bring your slides on a USB stick to use the class laptop (a Windows machine), OR bring your own laptop (especially if you have videos, etc.)
• Project reports (10-15 pages total) are due March 12 (by email to Adrienne, Rich, and Raj before midnight)


35

Have a great weekend!