Page 1:

Lecture 14: Batch RL

Emma Brunskill

CS234 Reinforcement Learning.

Winter 2020. Slides drawn from Philip Thomas with modifications


*Note: we only went carefully through slides before slide 34. The remaining slides are kept for those interested but will not be material required for the quiz. See the last slide for a summary of what you should know

Page 2:

Refresh Your Understanding: Fast RL III

Select all that are true:
• For Thompson sampling for MDPs, the posterior over the dynamics can be updated after each transition
• When using a Beta prior for a Bernoulli reward parameter for an (s, a) pair, the posterior after N samples of that pair can be the same as after N+2 samples
• The optimism bonuses discussed for MBIE-EB depend on the maximum reward but not on the maximum value function
• In class we discussed adding a bonus term to the policy gradient update for a (s, a, r, s') tuple using Q-learning with function approximation. Adding this bonus term will ensure all Q estimates used to make decisions online using DQN are optimistic with respect to Q*
• Not sure

Page 3:

Class Structure

• Last time: Fast Reinforcement Learning
• This time: Batch RL
• Next time: Guest Lecture

Page 4:

A Scientific Experiment

Page 5:

A Scientific Experiment

Page 6:

What Should We Do For a New Student?

Page 7:

Involves Counterfactual Reasoning

Page 8:

Involves Generalization

Page 9:

Batch Reinforcement Learning

Page 10:

Batch RL

Page 11:

Batch RL

Page 12:

The Problem

• If you apply an existing method, do you have confidence that it will work?

Page 13:

A property of many real applications

• Deploying "bad" policies can be costly or dangerous

Page 14:

What property should a safe batch reinforcement learning algorithm have?

• Given past experience from current policy/policies, produce a new policy
• "Guarantee that with probability at least 1 − δ, will not change your policy to one that is worse than the current policy."
• You get to choose δ
• Guarantee not contingent on the tuning of any hyperparameters

Page 15:

Table of Contents

1 Notation
2 Create a safe batch reinforcement learning algorithm
  • Off-policy policy evaluation (OPE)
  • Safe policy improvement (SPI)

Page 16:

Notation

• Policy π: π(a | s) = P(a_t = a | s_t = s)
• Trajectory: T = (s_1, a_1, r_1, s_2, a_2, r_2, ..., s_L, a_L, r_L)
• Historical data: D = {T_1, T_2, ..., T_n}
• Historical data from behavior policy, π_b
• Objective:

V^π = E[ Σ_{t=1}^{L} γ^t R_t | π ]
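To pin down the objective in code form, here is a trivial sketch of my own (not from the slides), indexing t from 1 as the slide does: the quantity inside the expectation is just the discounted sum of rewards along one trajectory.

def discounted_return(rewards, gamma):
    """sum_{t=1}^{L} gamma^t * r_t for one trajectory's reward sequence r_1, ..., r_L."""
    return sum(gamma ** t * r for t, r in enumerate(rewards, start=1))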

Page 17:

Safe batch reinforcement learning algorithm

• Reinforcement learning algorithm, A
• Historical data, D, which is a random variable
• Policy produced by the algorithm, A(D), which is a random variable
• A safe batch reinforcement learning algorithm, A, satisfies:

Pr(V^{A(D)} ≥ V^{π_b}) ≥ 1 − δ

or, in general,

Pr(V^{A(D)} ≥ V_min) ≥ 1 − δ

Page 18:

Table of Contents

1 Notation
2 Create a safe batch reinforcement learning algorithm
  • Off-policy policy evaluation (OPE)
  • Safe policy improvement (SPI)

Page 19:

Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy π_e, convert historical data D into n independent and unbiased estimates of V^{π_e}
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the n independent and unbiased estimates of V^{π_e} into a 1 − δ confidence lower bound on V^{π_e}
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm
• The methods today focus on work by Philip Thomas (UAI and ICML 2015 papers)

Page 20:

Off-policy policy evaluation (OPE)

Page 21:

High-confidence off-policy policy evaluation (HCOPE)

Page 22:

Safe policy improvement (SPI)

Page 23:

Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy π_e, convert historical data D into n independent and unbiased estimates of V^{π_e}
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the n independent and unbiased estimates of V^{π_e} into a 1 − δ confidence lower bound on V^{π_e}
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm

Page 24:

Monte Carlo (MC) Off Policy Evaluation

• Aim: estimate value of policy π_1, V^{π_1}(s), given episodes generated under behavior policy π_2
  • s_1, a_1, r_1, s_2, a_2, r_2, ... where the actions are sampled from π_2
• G_t = r_t + γ r_{t+1} + γ^2 r_{t+2} + γ^3 r_{t+3} + ... in MDP M under policy π
• V^π(s) = E_π[G_t | s_t = s]
• Have data from a different policy, the behavior policy π_2
• If π_2 is stochastic, can often use it to estimate the value of an alternate policy (formal conditions to follow)
• Again, no requirement that we have a model, nor that the state is Markov

Page 25:

Monte Carlo (MC) Off Policy Evaluation: Distribution Mismatch

• Distribution of episodes & resulting returns differs between policies

Page 26:

Importance Sampling

• Goal: estimate the expected value of a function f(x) under some probability distribution p(x), E_{x∼p}[f(x)]
• Have data x_1, x_2, ..., x_n sampled from distribution q(x)
• Under a few assumptions, we can use these samples to obtain an unbiased estimate of E_{x∼p}[f(x)]

Page 27:

Importance Sampling

E_{x∼p}[f(x)] = ∫_x p(x) f(x) dx = ∫_x q(x) [p(x)/q(x)] f(x) dx ≈ (1/n) Σ_{i=1}^{n} [p(x_i)/q(x_i)] f(x_i), with x_i ∼ q
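As a concrete illustration (mine, not from the slides), here is a minimal Python sketch of this estimator, assuming the densities can be evaluated pointwise; the names p_pdf, q_pdf and the example distributions are my own choices.

import numpy as np

def importance_sampling_estimate(f, p_pdf, q_pdf, xs):
    """Estimate E_{x~p}[f(x)] from samples xs drawn from q, using weights p(x)/q(x)."""
    xs = np.asarray(xs)
    weights = p_pdf(xs) / q_pdf(xs)
    return float(np.mean(weights * f(xs)))

# Example: estimate E[x^2] under p = N(1, 1) using samples from q = N(0, 2); true value is 2.
rng = np.random.default_rng(0)
xs = rng.normal(0.0, 2.0, size=100_000)
p_pdf = lambda x: np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
q_pdf = lambda x: np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))
print(importance_sampling_estimate(lambda x: x ** 2, p_pdf, q_pdf, xs))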

Page 28:

Checking Your Understanding: Importance Sampling

We can use importance sampling to do batch bandit policy evaluation. Consider a dataset of pulls from 3 arms. Arm 1 is a Bernoulli arm: with probability 0.98 we get 0 and with probability 0.02 we get 100. Arm 2 is a Bernoulli arm: with probability 0.55 the reward is 2, else the reward is 0. Arm 3 yields a reward of 1 with probability 0.5, else 0. Select all that are true.

• Data is sampled from π_1, which pulls arm 3 with probability 0.8 and otherwise pulls arm 2. The policy we wish to evaluate, π_2, pulls arm 2 with probability 0.5 and otherwise pulls arm 1. π_2 has higher true reward than π_1.
• We cannot use π_1 to get an unbiased estimate of the average reward of π_2 using importance sampling.
• We can use π_1 to get a lower bound on the average reward of π_2 using importance sampling.
• If rewards can be positive or negative, we can still get a lower bound on π_2 using data from π_1 using importance sampling.
• Now assume π_1 selects arm 1 with probability 0.2 and arm 2 with probability 0.8. We can use importance sampling to get an unbiased estimate of π_2 using data from π_1.
• Still with the same π_1, it is likely with N = 20 pulls that the estimate using IS for π_2 will be higher than the empirical value of π_1.
• Not sure

Page 29:

Importance Sampling (IS) for RL Policy Evaluation

• Let h_j be episode j (a history of states, actions and rewards):

h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, ..., s_{j,L_j} (terminal))

Page 30:

Importance Sampling (IS) for Policy Evaluation

• Let h_j be episode j (a history of states, actions and rewards):

h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, ..., s_{j,L_j} (terminal))

p(h_j | π, s = s_{j,1}) = p(a_{j,1}|s_{j,1}) p(r_{j,1}|s_{j,1}, a_{j,1}) p(s_{j,2}|s_{j,1}, a_{j,1}) · p(a_{j,2}|s_{j,2}) p(r_{j,2}|s_{j,2}, a_{j,2}) p(s_{j,3}|s_{j,2}, a_{j,2}) · · ·
  = Π_{t=1}^{L_j−1} p(a_{j,t}|s_{j,t}) p(r_{j,t}|s_{j,t}, a_{j,t}) p(s_{j,t+1}|s_{j,t}, a_{j,t})
  = Π_{t=1}^{L_j−1} π(a_{j,t}|s_{j,t}) p(r_{j,t}|s_{j,t}, a_{j,t}) p(s_{j,t+1}|s_{j,t}, a_{j,t})

Page 31:

Importance Sampling (IS) for Policy Evaluation

• Let h_j be episode j (a history of states, actions and rewards), where the actions are sampled from π_2:

h_j = (s_{j,1}, a_{j,1}, r_{j,1}, s_{j,2}, a_{j,2}, r_{j,2}, ..., s_{j,L_j} (terminal))

V^{π_1}(s) ≈ (1/n) Σ_{j=1}^{n} [ p(h_j | π_1, s) / p(h_j | π_2, s) ] G(h_j)

Note that, using the trajectory probability from the previous slide, the unknown reward and dynamics terms cancel in this ratio, so the weight reduces to the product of policy ratios Π_t π_1(a_{j,t} | s_{j,t}) / π_2(a_{j,t} | s_{j,t}).

Page 32:

Importance Sampling for Policy Evaluation

• Aim: estimate V^{π_1}(s) given episodes generated under policy π_2
  • s_1, a_1, r_1, s_2, a_2, r_2, ... where the actions are sampled from π_2
• Have access to G_t = r_t + γ r_{t+1} + γ^2 r_{t+2} + γ^3 r_{t+3} + ... in MDP M under policy π_2
• Want V^{π_1}(s) = E_{π_1}[G_t | s_t = s]
• IS = Monte Carlo estimate given off-policy data
  • Model-free method
  • Does not require Markov assumption
  • Under some assumptions, unbiased & consistent estimator of V^{π_1}
• Can be used while the agent is interacting with the environment, to estimate the value of policies different from the agent's control policy

Page 33:

Leveraging Future Can’t Influence Past Rewards

• Importance sampling (IS):

IS(D) = (1/n) Σ_{i=1}^{n} ( Π_{t=1}^{L} π_e(a_t | s_t) / π_b(a_t | s_t) ) ( Σ_{t=1}^{L} γ^t R^i_t )

• Per-decision importance sampling (PDIS):

PDIS(D) = Σ_{t=1}^{L} γ^t (1/n) Σ_{i=1}^{n} ( Π_{τ=1}^{t} π_e(a_τ | s_τ) / π_b(a_τ | s_τ) ) R^i_t
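For concreteness, here is a minimal Python sketch of both estimators (my own illustration, not lecture code). It assumes each logged episode is a list of (s, a, r) tuples and that pi_e(a, s) and pi_b(a, s) return the probability of action a in state s; those names and the data layout are my assumptions.

import numpy as np

def is_estimate(trajectories, pi_e, pi_b, gamma=1.0):
    """Ordinary importance sampling: weight each episode's return by the full-trajectory ratio."""
    weighted_returns = []
    for episode in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(episode):
            weight *= pi_e(a, s) / pi_b(a, s)   # product of per-step policy ratios
            ret += (gamma ** t) * r             # discounted return (discount indexed from 0 here)
        weighted_returns.append(weight * ret)
    return float(np.mean(weighted_returns))

def pdis_estimate(trajectories, pi_e, pi_b, gamma=1.0):
    """Per-decision IS: the reward at step t is weighted only by the ratio over the first t steps."""
    total = 0.0
    for episode in trajectories:
        weight = 1.0
        for t, (s, a, r) in enumerate(episode):
            weight *= pi_e(a, s) / pi_b(a, s)
            total += (gamma ** t) * weight * r
    return total / len(trajectories)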

Page 34:

Off-policy policy evaluation

• Importance sampling (IS):

IS(D) = (1/n) Σ_{i=1}^{n} w_i ( Σ_{t=1}^{L} γ^t R^i_t )

• Weighted importance sampling (WIS):

WIS(D) = [ 1 / Σ_{i=1}^{n} w_i ] Σ_{i=1}^{n} w_i ( Σ_{t=1}^{L} γ^t R^i_t )
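Continuing the earlier sketch under the same assumptions (again an illustration, not lecture code), the weighted variant simply normalizes by the sum of the per-trajectory importance weights rather than by n:

import numpy as np

def wis_estimate(trajectories, pi_e, pi_b, gamma=1.0):
    """Weighted importance sampling: self-normalized version of the IS estimator."""
    weights, returns = [], []
    for episode in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r) in enumerate(episode):
            weight *= pi_e(a, s) / pi_b(a, s)
            ret += (gamma ** t) * r
        weights.append(weight)
        returns.append(ret)
    weights, returns = np.array(weights), np.array(returns)
    return float((weights * returns).sum() / weights.sum())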


*Note: we only went carefully through slides before this point. The remaining slides are kept for those interested but will not be material required for the quiz. See the last slide for a summary of what you should know

Page 35:

Off-policy policy evaluation

• Weighted importance sampling (WIS):

WIS(D) = [ 1 / Σ_{i=1}^{n} w_i ] Σ_{i=1}^{n} w_i ( Σ_{t=1}^{L} γ^t R^i_t )

• Biased or unbiased?

Page 36:

Off-policy policy evaluation

• Weighted importance sampling (WIS):

WIS(D) = [ 1 / Σ_{i=1}^{n} w_i ] Σ_{i=1}^{n} w_i ( Σ_{t=1}^{L} γ^t R^i_t )

• Biased. When n = 1, E[WIS] = V^{π_b}
• Strongly consistent estimator of V^{π_e}
  • i.e. Pr( lim_{n→∞} WIS(D) = V^{π_e} ) = 1
• If:
  • Finite horizon
  • One behavior policy, or bounded rewards

Page 37:

Control variates

• Given: X
• Estimate: μ = E[X]
  • μ̂ = X
  • Unbiased: E[μ̂] = E[X] = μ
  • Variance: Var(μ̂) = Var(X)

Page 38:

Control variates

• Given: X, Y, E[Y]
• Estimate: μ = E[X]
  • μ̂ = X − Y + E[Y]
  • Unbiased: E[μ̂] = ?
  • Variance: Var(μ̂) = Var(X − Y + E[Y]) = Var(X − Y)

Page 39:

Control variates

• Given: X, Y, E[Y]
• Estimate: μ = E[X]
  • μ̂ = X − Y + E[Y]
  • Unbiased: E[μ̂] = E[X − Y + E[Y]] = E[X] − E[Y] + E[Y] = E[X] = μ
  • Variance:

Var(μ̂) = Var(X − Y + E[Y]) = Var(X − Y) = Var(X) + Var(Y) − 2 Cov(X, Y)

• Lower variance if 2 Cov(X, Y) > Var(Y)
• We call Y a control variate
• We saw this idea before: the baseline term in policy gradient estimation
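A quick numerical illustration of the variance reduction (my own example, not from the slides): X and Y are correlated, E[Y] is known, and the control-variate estimator X − Y + E[Y] keeps the same mean while shrinking the variance.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
y = rng.normal(0.0, 1.0, n)                 # control variate with known mean E[Y] = 0
x = 2.0 + y + rng.normal(0.0, 0.5, n)       # variable of interest; mu = E[X] = 2

plain = x                                   # ordinary estimator of mu
cv = x - y + 0.0                            # control-variate estimator: X - Y + E[Y]
print(plain.mean(), plain.var())            # approx 2.0, variance approx 1.25
print(cv.mean(), cv.var())                  # approx 2.0, variance approx 0.25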

Page 40:

Off-policy policy evaluation

• Idea: add a control variate to importance sampling estimators
  • X is the importance sampling estimator
  • Y is a control variate built from an approximate model of the MDP
• Called the doubly robust estimator (Jiang and Li, 2015)
• Robust to (1) a poor approximate model, and (2) error in estimates of π_b
  • If the model is poor, the estimates are still unbiased
  • If the sampling policy is unknown, but the model is good, MSE will still be low
• Non-recursive and weighted forms, as well as a control variate view, provided by Thomas and Brunskill (ICML 2016)

Page 41:

Off-policy policy evaluation

DR(π_e | D) = (1/n) Σ_{i=1}^{n} Σ_{t=0}^{∞} γ^t [ w^i_t ( R^i_t − q̂^{π_e}(S^i_t, A^i_t) ) + w^i_{t−1} v̂^{π_e}(S^i_t) ],

where w^i_t = Π_{τ=1}^{t} π_e(a_τ | s_τ) / π_b(a_τ | s_τ)
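As an illustration of how this formula can be computed from logged data (a sketch under my own assumptions, not the lecture's or the papers' code), with q_hat(s, a) and v_hat(s) coming from an approximate model and pi_e / pi_b returning action probabilities:

def dr_estimate(trajectories, pi_e, pi_b, q_hat, v_hat, gamma=1.0):
    """Doubly robust OPE sketch: importance-weighted corrections around a model's q_hat / v_hat."""
    total = 0.0
    for episode in trajectories:
        w_prev = 1.0                                  # weight through step t-1 (taken to be 1 at t = 0)
        for t, (s, a, r) in enumerate(episode):
            w = w_prev * pi_e(a, s) / pi_b(a, s)      # weight through step t
            total += (gamma ** t) * (w * (r - q_hat(s, a)) + w_prev * v_hat(s))
            w_prev = w
    return total / len(trajectories)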

Page 42:

Empirical Results (Gridworld)

Page 43:

Empirical Results (Gridworld)

Page 44:

Empirical Results (Gridworld)

Page 45:

Empirical Results (Gridworld)

Page 46:

Empirical Results (Gridworld)

Page 47:

Off-policy policy evaluation : Blending

• Importance sampling is unbiased but high variance
• Model-based estimate is biased but low variance
• Doubly robust is one way to combine the two
• Can also trade between importance sampling and model-based estimates within a trajectory
  • MAGIC estimator (Thomas and Brunskill, ICML 2016)
• Can be particularly useful when part of the world is non-Markovian in the given model, and other parts of the world are Markov

Page 48:

Can Need an Order of Magnitude Less Data To Get Good Estimates

Page 49:

Off-policy policy evaluation

• What if supp(π_e) ⊂ supp(π_b)?
  • There is a state-action pair (s, a) such that π_e(a | s) = 0, but π_b(a | s) ≠ 0.
  • If we see a history where (s, a) occurs, what weight should we give it?

IS(D) = (1/n) Σ_{i=1}^{n} ( Π_{t=1}^{L} π_e(a_t | s_t) / π_b(a_t | s_t) ) ( Σ_{t=1}^{L} γ^t R^i_t )

Page 50:

Off-policy policy evaluation

• What if there are zero samples (n = 0)?
  • The importance sampling estimate is undefined
• What if no samples are in supp(π_e) (or supp(p) in general)?
  • Importance sampling says: the estimate is zero
  • Alternate approach: undefined
• Importance sampling estimator is unbiased if n > 0
• Alternate approach will be unbiased given that at least one sample is in the support of p
• Alternate approach detailed in Importance Sampling with Unequal Support (Thomas and Brunskill, AAAI 2017)

Page 51:

Off-policy policy evaluation

• Thomas et al., Predictive Off-Policy Policy Evaluation for Nonstationary Decision Problems, with Applications to Digital Marketing (AAAI 2017)

Page 52:

Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy π_e, convert historical data D into n independent and unbiased estimates of V^{π_e}
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the n independent and unbiased estimates of V^{π_e} into a 1 − δ confidence lower bound on V^{π_e}
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm

Page 53:

High-confidence off-policy policy evaluation

• Consider using IS + Hoeffding’s inequality for HCOPE on mountain car

Page 54:

Hoeffding’s inequality

• Let X_1, ..., X_n be n independent, identically distributed random variables such that X_i ∈ [0, b]
• Then with probability at least 1 − δ:

E[X_i] ≥ (1/n) Σ_{i=1}^{n} X_i − b √( ln(1/δ) / (2n) ),

where, in our case, X_i = w_i Σ_{t=1}^{L} γ^t R^i_t (so that (1/n) Σ_{i=1}^{n} X_i is the IS estimate).
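A minimal sketch (mine) of this bound applied to the per-trajectory weighted returns, assuming each X_i lies in [0, b] for a known b:

import math

def hoeffding_lower_bound(xs, b, delta):
    """1 - delta lower confidence bound on E[X] for i.i.d. X_i in [0, b] (Hoeffding)."""
    n = len(xs)
    return sum(xs) / n - b * math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# xs would be the per-trajectory IS returns w_i * sum_t gamma^t R^i_t, and b their maximum possible value.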

Page 55:

High-confidence off-policy policy evaluation

• Using 100,000 trajectories
• The evaluation policy's true performance is 0.19 ∈ [0, 1]
• We get a 95% confidence lower bound of: −5,8310,000

Page 56:

What went wrong

w_i = Π_{t=1}^{L} π_e(a_t | s_t) / π_b(a_t | s_t)

Page 57:

High-confidence off-policy policy evaluation

• Removing the upper tail only decreases the expected value.

Page 58:

High-confidence off-policy policy evaluation

• Thomas et al., High Confidence Off-Policy Evaluation, AAAI 2015

Page 59:

High-confidence off-policy policy evaluation

Page 60:

High-confidence off-policy policy evaluation

• Use 20% of the data to optimize the cutoff c (see the sketch after this list)
• Use 80% to compute the lower bound with the optimized c
• Mountain car results:
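The bound actually used in the paper is tighter than what follows; as a rough illustration of the role of the cutoff (my own simplification): cap each per-trajectory IS return at c, which can only lower the mean (consistent with the earlier remark that removing the upper tail only decreases the expected value), then bound the capped mean, whose range is now [0, c]. This assumes the per-trajectory IS returns are nonnegative.

import math

def truncated_lower_bound(xs, c, delta):
    """Rough sketch: Hoeffding bound on E[min(X, c)], which also lower-bounds E[X] when X >= 0."""
    ys = [min(x, c) for x in xs]                     # cap the heavy-tailed IS returns at the cutoff c
    n = len(ys)
    return sum(ys) / n - c * math.sqrt(math.log(1.0 / delta) / (2.0 * n))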

Page 61:

High-confidence off-policy policy evaluation

Digital marketing:

Page 62:

High-confidence off-policy policy evaluation

Cognitive dissonance:

E[X_i] ≥ (1/n) Σ_{i=1}^{n} X_i − b √( ln(1/δ) / (2n) )

Page 63:

High-confidence off-policy policy evaluation

• Student's t-test (see the sketch after this list)
  • Assumes that IS(D) is normally distributed
  • By the central limit theorem, it is (as n → ∞)

Pr( E[ (1/n) Σ_{i=1}^{n} X_i ] ≥ (1/n) Σ_{i=1}^{n} X_i − [ √( (1/(n−1)) Σ_{i=1}^{n} (X_i − X̄_n)² ) / √n ] · t_{1−δ, n−1} ) ≥ 1 − δ

• Efron's bootstrap methods (e.g., BCa)
• Also, without importance sampling: Hanna, Stone, and Niekum, AAMAS 2017
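A small sketch (mine, not from the slides) of the corresponding one-sided lower bound, using SciPy's Student-t quantile:

import numpy as np
from scipy import stats

def t_test_lower_bound(xs, delta):
    """One-sided 1 - delta lower confidence bound on E[X], assuming approximate normality of the mean."""
    xs = np.asarray(xs, dtype=float)
    n = len(xs)
    sample_std = xs.std(ddof=1)                      # the 1/(n-1) normalization from the slide
    return float(xs.mean() - sample_std / np.sqrt(n) * stats.t.ppf(1.0 - delta, df=n - 1))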

Page 64:

High-confidence off-policy policy evaluation

Page 65:

Create a safe batch reinforcement learning algorithm

• Off-policy policy evaluation (OPE)
  • For any evaluation policy π_e, convert historical data D into n independent and unbiased estimates of V^{π_e}
• High-confidence off-policy policy evaluation (HCOPE)
  • Use a concentration inequality to convert the n independent and unbiased estimates of V^{π_e} into a 1 − δ confidence lower bound on V^{π_e}
• Safe policy improvement (SPI)
  • Use the HCOPE method to create a safe batch reinforcement learning algorithm

Page 66:

Safe policy improvement

Thomas et al., ICML 2015
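The figure from this slide is not reproduced here. As a rough sketch of the overall shape of such an algorithm (my paraphrase; the exact split, candidate search, and confidence adjustments differ in Thomas et al.): part of the data is used to pick a promising candidate policy and the rest is used for a high-confidence safety test. The names lower_bound, candidate_policies, and v_min are hypothetical.

def safe_policy_improvement(D, pi_b, v_min, delta, candidate_policies, lower_bound):
    """Sketch of a safe batch RL loop; lower_bound(pi, data, delta) is an HCOPE 1 - delta lower bound."""
    split = len(D) // 5
    D_select, D_test = D[:split], D[split:]          # e.g. 20% for candidate selection, 80% for the safety test
    candidate = max(candidate_policies, key=lambda pi: lower_bound(pi, D_select, delta))
    if lower_bound(candidate, D_test, delta) >= v_min:
        return candidate                             # certified with probability at least 1 - delta
    return pi_b                                      # otherwise "no solution found": keep the current policy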

Page 67:

Empirical Results: Digital Marketing

[Figure: agent-environment interaction loop; the agent takes action a, the environment returns state s and reward r]

Page 68:

Empirical Results: Digital Marketing

[Figure: expected normalized return vs. dataset size (n = 10000, 30000, 60000, 100000) for None/k-Fold data splits with CUT and BCa bounds; values shown include 0.002715 and 0.003832]

Page 69:

Empirical Results: Digital Marketing

Page 70:

Empirical Results: Digital Marketing

Page 71:

Other Relevant Work

• How to deal with long horizons? (Guo, Thomas, Brunskill, NIPS 2017)
• How to deal with importance sampling being "unfair"? (Doroudi, Thomas and Brunskill, best paper UAI 2017)
• What to do when the behavior policy is not known? (Liu, Gottesman, Raghu, Komorowski, Faisal, Doshi-Velez, Brunskill, NeurIPS 2018)
• What to do when the behavior policy is deterministic?
• What to do when we care about safe exploration?
• What to do when we care about performance on a single trajectory?
• Many others are also doing great work in this space, including the groups of Yisong Yue, Susan Murphy, Finale Doshi-Velez, Marco Pavone, Pieter Abbeel, Shie Mannor, Sergey Levine and Claire Tomlin, amongst others

Page 72:

Off Policy Policy Evaluation and Selection

• Very important topic: healthcare, education, marketing, ...
• Insights are relevant to on-policy learning
• Big focus of my lab
• A number of others on campus are also working in this area (e.g. Stefan Wager, Susan Athey, ...)
• Very interesting area at the intersection of causality and control
• Our Science 2019 paper shows how to do safe policy improvement for a high-fidelity diabetes simulator and discusses the need for ensuring good behavior

Page 73:

What You Should Know: Off Policy Policy Evaluation and Selection

• Be able to define and apply importance sampling for off-policy policy evaluation
• Define some limitations of IS (variance)
• List a couple of alternatives (weighted IS, doubly robust)
• Define why we might want safe reinforcement learning
• Define the scope of the guarantees implied by safe policy improvement as defined in this lecture

Page 74:

Class Structure

• Last time: Fast Reinforcement Learning
• This time: Batch RL
• Next time: Guest Lecture
