
Reinforcement learning – A gentle introduction

Uwe Friedrichsen (codecentric AG) – JUG Saxony Day – Radebeul, 13. September 2019

Uwe Friedrichsen

CTO @ codecentric
https://twitter.com/ufried
https://www.speakerdeck.com/ufried
https://medium.com/@ufried

What can I expect from this talk?

Goals of this talk

• Understand the most important conceptsand how they are connected to each other

• Know the most important terms

• Get some understanding how to translate concepts to code

• Lose fear of math … ;)

• Spark interest

• Give you a little head start if you decide to dive deeper

Why – Some success stories and a prognosis

(Timeline of success stories: 2013 · 2016 · 2017 · 2018 · 2019)

… to be continued in the near future …

Do you really think that an average white collar job has more degrees of freedom than StarCraft II?

(Deep) Reinforcement Learning has the potential to affect white collar workers similar to how robots affected blue collar workers

What – The basic idea

Reinforcement learning (RL) is the study of how an agent can interact with its environment to learn a policy

which maximizes expected cumulative rewards for a task

-- Henderson et al., Deep Reinforcement Learning that Matters

Source: https://arxiv.org/abs/1709.06560

Eh, what do you mean?

(Diagram: an Agent interacts with its Environment in order to solve a Task)

Core idea: An agent tries to solve a task by interacting with its environment

Problem: How can we model the interaction in a way that allowsthe agent to learn how to solve the given task?

Approach: Apply concepts from learning theory, particularly fromoperant conditioning *

* learn a desired behavior (which solves the task) based on rewards and punishment

(Diagram: the Agent observes (State, Reward) from the Environment and manipulates it via Actions, trying to maximize rewards over time while working on the Task)

Core idea: An agent tries to maximize the rewards received over time from the environment
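To make the interaction loop concrete, here is a minimal sketch of the observe/act cycle, assuming the Gymnasium fork of OpenAI Gym [OAIGym] and its CartPole environment; the "agent" simply picks random actions.

```python
# Minimal sketch of the agent-environment loop (assumes the Gymnasium API,
# `pip install gymnasium`); the "agent" just samples random actions.
import gymnasium as gym

env = gym.make("CartPole-v1")
state, info = env.reset()

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                              # manipulate (Action)
    state, reward, terminated, truncated, info = env.step(action)   # observe (State, Reward)
    total_reward += reward
    done = terminated or truncated

print("Cumulative reward for this episode:", total_reward)
env.close()
```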

Observations so far

• Approach deliberately limited to reward-based learning

• Results in very narrow interaction interface

• Good modeling is essential and often challenging

• Rewards are crucial for ability to learn

• States must support deciding on actions

• Actions need to be effective in the environment

How – 6 questions from concepts to implementation

(Diagram: the Agent observes (State, Reward), manipulates via Actions, maximizing rewards over time for the Task)

Goal: Learn the best possible actions (with respect to the cumulative rewards) in response to the observed states

Question 1: Where do states, rewards and possible actions come from?

(Diagram: between Agent and Environment sits a Model that maps State and Action and calculates the Reward. The interface Observe(State, Reward) / Manipulate(Action) is known to the agent; the model is sometimes known to the agent, but usually not; the environment itself is unknown to the agent.)

Model

• Maps environment to narrow interface

• Makes complexity of environment manageable for agent

• Responsible for calculating rewards (tricky to get right)

• Usually not known to the agent

• Usually only interface visible

• Sometimes model also known (e.g., dynamic programming)

• Creating a good model can be challenging

• Representation learning approaches in DL can help
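As a rough illustration of how a model exposes an environment through this narrow interface, here is a minimal sketch of a hand-rolled toy environment; the corridor world and its reward scheme are invented for this example.

```python
# Illustrative sketch of a model exposing an environment through the narrow
# (State, Reward, Action) interface; the corridor world is made up.
class Corridor:
    """1-D corridor: the agent starts in cell 0, the goal is the last cell."""

    def __init__(self, length=5):
        self.length = length
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state                       # initial State

    def step(self, action):
        """action: 0 = move left, 1 = move right."""
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.length - 1, self.state + move))
        done = self.state == self.length - 1
        reward = 1.0 if done else -0.01         # reward design is the tricky part
        return self.state, reward, done         # (State, Reward, episode finished?)
```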

Question 2: How can we represent the (unknown) behavior of a model in a general way?

Represent the behavior using a probability distribution function

Transition probability function (based on MDP)

p(s', r | s, a) = Pr{ S_t = s', R_t = r | S_{t-1} = s, A_{t-1} = a }

Read: Probability that you will observe s' and r after you sent a as a response to observing s

Reward function (derived from transition probability function)

r(s, a) = E[ R_t | S_{t-1} = s, A_{t-1} = a ] = Σ_r r Σ_{s'} p(s', r | s, a)

Read: Expected reward r after you sent a as a response to observing s

Question 3: How can we represent the behavior of the agent? (It's still about learning the best possible actions)

Represent the behavior using a probability distribution function

Policy (stochastic)

π(a | s) = Pr{ A_t = a | S_t = s }

Read: Probability that you will choose a after you observed s

Policy (deterministic)

a = π(s)

Read: You know how to read that one … ;)
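A small illustration of what a stochastic policy looks like in code: a table of action probabilities per state that the agent samples from (states, actions and numbers below are made up).

```python
# Minimal sketch of a tabular stochastic policy pi(a | s); all states,
# actions and probabilities are made up for illustration.
import random

policy = {
    "s0": {"left": 0.2, "right": 0.8},
    "s1": {"left": 0.5, "right": 0.5},
}

def sample_action(state):
    actions = list(policy[state].keys())
    probabilities = list(policy[state].values())
    return random.choices(actions, weights=probabilities, k=1)[0]

print(sample_action("s0"))   # "right" in roughly 80% of the calls
```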

Question 4: How can we learn a good (or even optimal) policy?

Well, it depends …

Learning a policy

• Many approaches available

• Model-based vs. model-free

• Value-based vs. policy-based

• On-policy vs. off-policy

• Shallow backups vs. deep backups

• Sample backups vs. full backups

… plus a lot of variants

• Here focus on 2 common (basic) approaches

Approach 1: Monte Carlo

model-free – value-based – on-policy – deep backups – sample backups

Return

• Goal is to optimize future rewards

• Return describes the discounted future rewards from step t:

G_t = R_{t+1} + γ R_{t+2} + γ² R_{t+3} + … = Σ_{k=0}^∞ γ^k R_{t+k+1}

• Why a discounting factor γ?

• Future rewards may have higher uncertainty

• Makes handling of infinite episodes easier

• Controls how greedy or foresighted an agent acts
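For illustration, the return of a short, made-up reward sequence with γ = 0.9:

```python
# Computing the discounted return G_t for a made-up reward sequence.
GAMMA = 0.9
rewards = [1.0, 0.0, 2.0, 1.0]          # R_{t+1}, R_{t+2}, R_{t+3}, R_{t+4}

G_t = sum(GAMMA ** k * r for k, r in enumerate(rewards))
print(G_t)                               # 1.0 + 0.0 + 0.81*2.0 + 0.729*1.0 = 3.349
```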

Value functions

• State-value function

v_π(s) = E_π[ G_t | S_t = s ]

• Read: Value of being in state s under policy π

• Action-value function (also called Q-value function)

q_π(s, a) = E_π[ G_t | S_t = s, A_t = a ]

• Read: Value of taking action a in state s under policy π

In value-based learning an optimal policy is a policy that results in an optimal value function

Optimal policy and value function

• Optimal state-value function: v_*(s) = max_π v_π(s)

• Optimal action-value function: q_*(s, a) = max_π q_π(s, a)

How can we learn an optimal policy?

Generalized policy iteration

• Learning an optimal policy usually not feasible

• Instead, an approximation is targeted

• The following algorithm is known to converge

1. Start with a random policy

2. Evaluate the policy

3. Update the policy greedily

4. Repeat from step 2

(Diagram: the policy π and the value function alternate between evaluation and greedy improvement until they stabilize)

How can we evaluate a policy?

Use a Monte Carlo method

Monte Carlo methods

• Umbrella term for class of algorithms

• Use repeated random sampling to obtain numerical results

• Algorithm used for policy evaluation (idea)

• Play many episodes, each with random start state and action

• Calculate returns for all state-action pairs seen

• Approximate q-value function by averaging returns for each state-action pair seen

• Stop if change of q-value function becomes small enough

Source: [Sut2018]

Monte Carlo methods – pitfalls

• Exploring enough state-action pairs to learn properly

• Known as explore-exploit dilemma

• Usually addressed by exploring starts or epsilon-greedy

• Very slow convergence

• Lots of episodes needed before policy gets updated

• Trick: Update policy after each episode

• Not yet formally proven, but empirically known to work

For code example, e.g., see https://github.com/lazyprogrammer/machine_learning_examples/blob/master/rl/monte_carlo_es.py
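In the same spirit as the linked example, here is a condensed sketch of Monte Carlo control with exploring starts; it assumes a hypothetical tabular environment object with `states`, `actions(s)` and a `step(state, action)` method, and it uses every-visit averaging to keep the code short.

```python
# Condensed sketch of Monte Carlo control with exploring starts; the
# tabular `env` interface (env.states, env.actions(s), env.step(s, a))
# is an assumption made for this example.
import random
from collections import defaultdict

GAMMA = 0.9

def play_episode(env, policy, state, action):
    """Play one episode from a forced (state, action) start."""
    trajectory = []                                  # (state, action, reward)
    done = False
    while not done:
        next_state, reward, done = env.step(state, action)
        trajectory.append((state, action, reward))
        state = next_state
        if not done:
            action = policy[state]
    return trajectory

def monte_carlo_es(env, episodes=50_000):
    Q = defaultdict(float)
    visits = defaultdict(int)
    policy = {s: random.choice(env.actions(s)) for s in env.states}

    for _ in range(episodes):
        # Exploring starts: a random state-action start guarantees exploration
        s0 = random.choice(env.states)
        a0 = random.choice(env.actions(s0))
        trajectory = play_episode(env, policy, s0, a0)

        # Walk the episode backwards and accumulate the discounted return G
        G = 0.0
        for state, action, reward in reversed(trajectory):
            G = reward + GAMMA * G
            visits[(state, action)] += 1
            Q[(state, action)] += (G - Q[(state, action)]) / visits[(state, action)]
            # Greedy policy improvement for the visited state
            policy[state] = max(env.actions(state), key=lambda a: Q[(state, a)])
    return policy, Q
```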

Approach 2: Temporal-Difference learning (TD learning)

model-free – value-based – on/off-policy – shallow backups – sample backups

Bellman equations

• Allow describing the value function recursively:

v_π(s) = Σ_a π(a | s) Σ_{s', r} p(s', r | s, a) [ r + γ v_π(s') ]

Bellman equations – consequences

• Enables bootstrapping

• Update value estimate on the basis of another estimate

• Enables updating policy after each single step

• Proven to converge for many configurations

• Leads to Temporal-Difference learning (TD learning)

• Update value function after each step just a bit

• Use estimate of next step’s value to calculate the return

• SARSA (on-policy) or Q-learning (off-policy) as control

Source: [Sut2018]
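As an illustration of TD control, here is a minimal sketch of tabular Q-learning (the off-policy variant mentioned above), assuming a Gymnasium-style environment with discrete states and actions.

```python
# Minimal sketch of tabular Q-learning (off-policy TD control), assuming a
# Gymnasium-style environment with discrete observations and actions.
import random
from collections import defaultdict

def q_learning(env, episodes=5000, alpha=0.1, gamma=0.99, epsilon=0.1):
    Q = defaultdict(float)
    n_actions = env.action_space.n

    def greedy_action(state):
        return max(range(n_actions), key=lambda a: Q[(state, a)])

    for _ in range(episodes):
        state, _ = env.reset()
        done = False
        while not done:
            # Epsilon-greedy behavior policy: mostly exploit, sometimes explore
            if random.random() < epsilon:
                action = env.action_space.sample()
            else:
                action = greedy_action(state)

            next_state, reward, terminated, truncated, _ = env.step(action)
            done = terminated or truncated

            # TD update: bootstrap from the estimate of the next state's value
            best_next = 0.0 if terminated else max(
                Q[(next_state, a)] for a in range(n_actions))
            td_target = reward + gamma * best_next
            Q[(state, action)] += alpha * (td_target - Q[(state, action)])
            state = next_state
    return Q
```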

Question 5: When do we need all the other concepts?

Deep Reinforcement Learning

• Problem: Very large state spaces

• Default for non-trivial environments

• Make tabular representation of value function infeasible

• Solution: Replace value table with Deep Neural Network

• Deep Neural Networks are great function approximators

• Implement q-value-function as DNN

• Use Stochastic Gradient Descent (SGD) to train DNN
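A minimal sketch of what "replace the value table with a DNN" can look like, assuming PyTorch; a full DQN additionally uses a replay buffer and a target network, which are omitted here. A matching optimizer could be, e.g., torch.optim.Adam(q_net.parameters(), lr=1e-3).

```python
# Minimal sketch of a Q-value function as a neural network plus one
# SGD-style TD update on a single transition; assumes PyTorch and omits
# the replay buffer / target network of a full DQN.
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """Maps a state vector to one Q-value per discrete action."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return self.net(state)

def td_update(q_net, optimizer, state, action, reward, next_state, done, gamma=0.99):
    """One gradient step on a single (s, a, r, s', done) transition."""
    q_value = q_net(state)[action]                       # current estimate Q(s, a)
    with torch.no_grad():
        next_q = torch.zeros(()) if done else q_net(next_state).max()
        target = reward + gamma * next_q                 # TD target
    loss = nn.functional.mse_loss(q_value, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```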

Policy Gradient learning

• Learning the policy directly (without value function “detour”)

• Actual goal is learning an optimal policy, not a value function

• Sometimes learning a value function is not feasible

• Leads to Policy Gradient learning

• Parameterize policy with θ which stores “configuration”

• Learn optimal policy using gradient ascent

• Lots of implementations, e.g., REINFORCE, Actor-Critic, …

• Can easily be extended to DNNs
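A minimal sketch of REINFORCE, the simplest of the policy gradient implementations mentioned above, again assuming PyTorch and a Gymnasium-style environment with a discrete action space.

```python
# Minimal sketch of REINFORCE (vanilla policy gradient); assumes PyTorch and
# a Gymnasium-style environment with a discrete action space.
import torch
import torch.nn as nn

class PolicyNetwork(nn.Module):
    """Maps a state vector to a distribution over actions (the policy pi_theta)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state):
        return torch.distributions.Categorical(logits=self.net(state))

def reinforce_episode(env, policy, optimizer, gamma=0.99):
    """Play one episode, then take one gradient-ascent step on its returns."""
    log_probs, rewards = [], []
    state, _ = env.reset()
    done = False
    while not done:
        dist = policy(torch.as_tensor(state, dtype=torch.float32))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        state, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Discounted return G_t for every time step of the episode
    returns, G = [], 0.0
    for r in reversed(rewards):
        G = r + gamma * G
        returns.insert(0, G)
    returns = torch.as_tensor(returns, dtype=torch.float32)

    # Gradient ascent on expected return == descent on the negated objective
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)
```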

Question 6: How can I best start my own journey into RL?

No one-size-fits-all approach – people are different. Here is what worked quite well for me …

Your own journey – foundations

• Learn some of the foundations

• Read some blogs, e.g., [Wen2018]

• Do an online course at Coursera, Udemy, …

• Read a book, e.g., [Sut2018]

• Code some basic stuff on your own

• Online courses often offer nice exercises

• Try some simple environments on OpenAI Gym [OAIGym]

Your own journey – Deep RL

• Pick up some of the basic Deep RL concepts

• Read some blogs, e.g., [Sim2018]

• Do an online course at Coursera, Udemy, …

• Read a book, e.g., [Goo2016], [Lap2018]

• Revisit OpenAI Gym

• Retry previous environments using a DNN

• Try more advanced environments, e.g., Atari environments

Your own journey – moving on

• Repeat and pick up some new stuff on each iteration

• Complete the books

• Do advanced online courses

• Read research papers, e.g., [Mni2013]

• Try to implement some of the papers(if you have enough computing power at hand)

• Try more complex environments, e.g., Vizdoom [Vizdoom]

Outlook – Moving on: evolution, challenges and you

Remember my claim from the beginning?

(Deep) Reinforcement Learning has the potential to affect white collar workers similar to how robots affected blue collar workers

Challenges of (Deep) RL

• Massive training data demands

• Hard to provide or generate

• One of the reasons games are used so often as environments

• Probably the reason white collar workers are still quite unaffected

• Hard to stabilize and get production-ready

• Research results are often hard to reproduce [Hen2019]

• Hyperparameter tuning and a priori error prediction is hard

• Massive demand for computing power

Status quo of (Deep) RL

• Most current progress based on brute force and trial & error

• Lack of training data for most real-world problems becomes a huge issue

• Research (and application) limited to few companies

• Most other companies have neither the comprehension nor the skills nor the resources to drive RL solutions

Potential futures of (Deep) RL

• Expected breakthrough happens soon

• Discovery how to easily apply RL to real-world problems

• Market probably dominated by few companies

• All other companies just use their solutions

• Expected breakthrough does not happen soon

• Inflated market expectations do not get satisfied

• Next “Winter of AI”

• AI will become invisible parts of commodity solutions

• RL will not see any progress for several years

And what does all that mean for me?

Positioning yourself

• You rather believe in the breakthrough of (Deep) RL

• Help democratize AI & RL – become part of the community

• You rather do not believe in the breakthrough of (Deep) RL

• Observe and enjoy your coffee … ;)

• You are undecided

• It’s a fascinating topic after all

• So, dive in a bit and decide when things become clearer

Enjoy and have a great time!

References – Books

[Goo2016] I. Goodfellow, Y. Bengio, A. Courville, “Deep Learning”, MIT Press, 2016

[Lap2018] M. Lapan, “Deep Reinforcement Learning Hands-On”, Packt Publishing, 2018

[Sut2018] R. S. Sutton, A. G. Barto, “Reinforcement Learning – An Introduction”, 2nd edition, MIT Press, 2018

References – Papers

[Hen2019] P. Henderson et al., “Deep Reinforcement Learning that Matters”, arXiv:1709.06560

[Mni2013] V. Mnih et al., “Playing Atari with Deep Reinforcement Learning”, arXiv:1312.5602v1

References – Blogs

[Sim2018] T. Simonini, “A Free Course in Deep Reinforcement Learning from Beginner to Expert”, https://simoninithomas.github.io/Deep_reinforcement_learning_Course/

[Wen2018] L. Weng, “A (Long) Peek into Reinforcement Learning”, https://lilianweng.github.io/lil-log/2018/02/19/a-long-peek-into-reinforcement-learning.html

References – Environments

[OAIGym] OpenAI Gym, http://gym.openai.com

[Vizdoom] Vizdoom, Doom-based AI Research Platform, http://vizdoom.cs.put.edu.pl

Uwe Friedrichsen

CTO @ codecentric
https://twitter.com/ufried
https://www.speakerdeck.com/ufried
https://medium.com/@ufried
