
Page 1:

Reinforcement Learning Methods for Military Applications

Malcolm Strens

Centre for Robotics and Machine Vision
Future Systems Technology Division

Defence Evaluation & Research Agency, U.K.

19 February 2001

© British Crown Copyright, 2001

Page 2:

RL & Simulation

Trial-and-error in a real system is expensive
– learn with a cheap model (e.g. CMU autonomous helicopter)

– or ...

– learn with a very cheap model (a high-fidelity simulation)

– analogous to human learning in a flight simulator

Why is RL now viable for application?
– most theory developed in the last 12 years

– computers have got faster

– simulation has improved

Page 3:

RL Generic Problem Description

States
– hidden or observable, discrete or continuous.

Actions (controls)
– discrete or continuous, often arranged in a hierarchy.

Rewards/penalties (cost function)
– delayed numerical value for goal achievement.

– return = discounted reward or average reward per step.

Policy (strategy/plan)
– maps observed/estimated states to action probabilities.

RL problem: “find the policy that maximizes the expected return”
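To make the problem statement concrete, the following minimal sketch (not from the original slides; the env and policy interfaces are hypothetical) shows the agent-environment loop and the discounted return the policy is trained to maximize:

    import random

    def discounted_return(env, policy, gamma=0.95, max_steps=200):
        """Run one episode and accumulate the discounted reward.

        `env` is assumed to expose reset() -> observation and
        step(action) -> (observation, reward, done); `policy` maps an
        observation to a probability distribution over discrete actions.
        Both interfaces are placeholders used only for illustration.
        """
        obs = env.reset()
        total, discount = 0.0, 1.0
        for _ in range(max_steps):
            probs = policy(obs)                       # action probabilities
            action = random.choices(range(len(probs)), weights=probs)[0]
            obs, reward, done = env.step(action)
            total += discount * reward                # return = sum of discounted rewards
            discount *= gamma
            if done:
                break
        return total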

Page 4:

Existing applications of RL

Game-playing
– backgammon, chess etc.

– learn from scratch by simulation (win = reward)

Network routing and channel allocation
– maximize throughput

Elevator scheduling
– minimize average wait time

Traffic light scheduling
– minimize average journey time

Robotic control
– learning balance and coordination in walking, juggling robots

– nonlinear flight controllers for aircraft

Page 5:

Characteristics of problems amenable to RL solution

Autonomous/automatic control & decision-making

Interaction (outputs affect subsequent inputs)

Stochasticity
– different consequences each time an action is taken

– e.g. non-deterministic behavior of an opponent

Decision-making over time
– a sequence of actions over a period of time leads to reward

– i.e. planning

Why not use standard optimization methods?
– e.g. genetic algorithms, gradient descent, heuristic search

– because the cost function is stochastic

– because there is hidden state

– because temporal reasoning is essential

Page 6:

Potential military applications of RL: examples

Autonomous decision-making over time
– guidance against a reacting target

– real-time mission/route planning and obstacle avoidance

– trajectory optimization in changing environment

– sensor control & dynamic resource allocation

Automatic decision-making
– rapid reaction

• electronic warfare

– low-level control

• flight control for UAVs (especially micro-UAVs)

• coordination for legged robots

Logistic planning
– resource allocation

– scheduling

Page 7:

4 current approaches to the RL problem

Value-function approximation methods
– estimate the discounted return for every (state, action) pair

– actor-critic methods (e.g. TD-Gammon)

Estimate a working model
– estimate a model that explains the observations

– solve for optimal behavior in this model

– full Bayesian treatment (intractable) would provide convergence and robustness guarantees

– certainty-equivalence methods tractable but unreliable

– the eventual winner in 20+ dimensions?

Direct policy search
– apply stochastic optimization in a parameterized space of policies

– effective up to at least 12 dimensions (see pursuer-evader results)

Policy gradient ascent
– policy search using a gradient estimate

Page 8:

Learning with a simulation

[Diagram: a reinforcement learner interacts with the physical system or, in its place, a simulation. The learner receives the observed state and a reward and sends back an action; the simulation maintains hidden state and is controlled by a restart state and a random seed.]
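A minimal sketch of the simulation interface the diagram implies is given below; the class and method names are assumptions, but they make explicit how the restart state and random seed allow a scenario to be replayed exactly (which the paired-comparison and PEGASUS slides later rely on):

    import random
    from dataclasses import dataclass

    @dataclass
    class Step:
        observed_state: tuple   # what the learner is allowed to see
        reward: float
        done: bool

    class Simulation:
        """Hypothetical wrapper around a high-fidelity simulation.

        The hidden state stays inside the object; the learner only sees
        observations and rewards. Fixing `restart_state` and `seed`
        reproduces a scenario exactly.
        """
        def reset(self, restart_state, seed):
            self._rng = random.Random(seed)   # private random number sequence
            self._hidden = restart_state      # full (hidden) system state, e.g. a tuple
            return self._observe()

        def step(self, action) -> Step:
            # Propagate the hidden state with the chosen action plus noise;
            # the dynamics here are a placeholder, not the real model.
            noise = self._rng.gauss(0.0, 1.0)
            self._hidden = (self._hidden[0] + action + 0.1 * noise,)
            reward = -abs(self._hidden[0])    # e.g. penalty on distance from a goal
            return Step(self._observe(), reward, done=False)

        def _observe(self):
            # Only part of the hidden state is exposed to the learner.
            return (round(self._hidden[0], 2),)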

Page 9:

2D pursuer evader example

Page 10:

Learning with discrete states and actions

[Figure: the relative geometry is discretized into numbered cells (1–32 shown), at increasing resolutions labelled 4D, 16D, 64D and 256D.]

States: relative position & motion of evader.

Actions: turn left / turn right / continue.

Rewards: based on distance between pursuer and evader.
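The slides do not state which learning rule was used for the discrete case; as one standard possibility, a tabular Q-learning sketch for this discretization might look like the following (the env interface is hypothetical):

    import random
    from collections import defaultdict

    ACTIONS = ["turn_left", "turn_right", "continue"]

    def q_learning_episode(env, Q, alpha=0.1, gamma=0.95, epsilon=0.1, max_steps=500):
        """One episode of standard tabular Q-learning.

        `env` is the hypothetical discretized pursuer-evader simulation:
        reset() -> discrete state id, step(action) -> (next state id,
        reward based on pursuer-evader distance, done). Q maps
        (state, action) -> estimated return.
        """
        s = env.reset()
        for _ in range(max_steps):
            # epsilon-greedy selection over the three turn commands
            if random.random() < epsilon:
                a = random.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: Q[(s, act)])
            s_next, reward, done = env.step(a)
            # one-step temporal-difference update
            best_next = max(Q[(s_next, act)] for act in ACTIONS)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next
            if done:
                break

    Q = defaultdict(float)   # usage: call q_learning_episode(env, Q) repeatedly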

Page 11:

Markov Decision Process

[Figure: a five-state chain MDP. Action a steps along the chain 1 → 2 → 3 → 4 → 5 with reward 0 and earns reward 10 in state 5; action b earns reward 2 in every state.]

An MDP is a tuple (S, A, T, R):

S: set of states

A: set of actions

T: transition probabilities T(s, a, s')

R: reward distributions R(s, a)

Q(s,a): Expected discounted reward for taking action a in state s and following an optimal policy thereafter.
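For reference, the Q(s, a) defined above satisfies the standard Bellman optimality equation (standard MDP theory, not written out on the slide):

    Q(s,a) = \mathbb{E}[R(s,a)] + \gamma \sum_{s'} T(s,a,s') \, \max_{a'} Q(s',a')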

Page 12:

2 pursuers – identical strategies

[Figure: trajectory plot of the two pursuers and the evader; x (m) from 0 to 1000, z (m) from -1000 to 100.]

Page 13:

Learning by policy iteration / fictitious play

[Figure: success rate versus number of trials (0 to 32000). Curves shown: baseline, single pursuer, 2 pursuers (independent), and 2 pursuers learning together, with alternating "pursuer 1 learning" / "pursuer 2 learning" phases marked.]

Page 14:

Different strategies learnt by policy iteration (no communication)

[Figure: trajectory plot of the strategies learnt without communication; x (m) from 0 to 1100, z (m) from -600 to 100.]

Page 15:

Model-based vs model-free for MDPs

% of maximum reward (phase 2):

                           Chain   Loop   Maze
Q-learning (Type 1)          43     98     60
IEQL+ (Type 1)               69     73     13
Bayes VPI + MIX (Type 2)     66     85     59
Ideal Bayesian (Type 2)      98     99     94

Sources: Dearden, Friedman & Russell (1998); Strens (2000).

Page 16:

Direct policy search for pursuer evader

Continuous state: measurements of evader position and motion

Continuous action: acceleration demand

Policy is a parameterized nonlinear function

Goal: find optimal pursuer policies

[Diagram: the policy is a parameterized nonlinear function f of the measurements z, with weights w1 ... w6, producing the acceleration demand a.]
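The exact functional form of the policy is only shown diagrammatically; the sketch below is an illustrative stand-in with six weights mapping a measurement vector z to an acceleration demand (the tanh squashing and the 4-component z are assumptions):

    import math

    def policy(weights, z):
        """Illustrative parameterized nonlinear policy (assumed form).

        `z` holds continuous measurements of the evader's relative position
        and motion; `weights` is the 6-vector optimized by direct policy
        search. Returns a continuous acceleration demand, squashed with
        tanh and scaled by w5.
        """
        w1, w2, w3, w4, w5, w6 = weights
        # weighted nonlinear combination of the measurements
        hidden = math.tanh(w1 * z[0] + w2 * z[1] + w3 * z[2] + w4)
        return w5 * math.tanh(hidden + w6 * z[3])

    # Example: evaluate one candidate parameter vector on one measurement vector.
    a_demand = policy([0.5, -0.2, 0.1, 0.0, 2.0, 0.3], [1.0, 0.2, -0.5, 0.1])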

Page 17:

Policy Search for Cooperative Pursuers

[Figure: performance versus trial number (0 to 600). Curves shown: 2 pursuers with symmetrical policies (6D search) and 2 pursuers with separate policies (12D search).]

Page 18:

Policy search: single pursuer

after 200 trials

Page 19:

2 aware pursuers, symmetrical policies

untrained / after 200 trials

Page 20:

2 aware pursuers, asymmetric policies

after a further 400 trials

Page 21:

How to perform direct policy search

Optimization Procedures for Policy Search
– Downhill Simplex Method

– Random Search

– Differential Evolution

Paired statistical tests for comparing policies
– Policy search = stochastic optimization

– Pegasus

– Parametric & non-parametric paired tests

Evaluation
– Assessment of Pegasus

– Comparison between paired tests

– How can paired tests speed up learning?

Conclusions

Page 22:

Downhill Simplex Method (Amoeba)
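The slide itself is a figure; as a rough illustration of how the downhill simplex method can drive policy search, here is a sketch using SciPy's Nelder-Mead implementation, with a hypothetical evaluate_policy standing in for the averaged simulation return:

    import numpy as np
    from scipy.optimize import minimize

    def evaluate_policy(weights, n_trials=64, seed=0):
        """Hypothetical noisy objective: average return of `weights` over
        n_trials simulated pursuits (a quadratic stand-in is used here).
        A fixed seed makes the noisy evaluation repeatable (PEGASUS-style)."""
        rng = np.random.default_rng(seed)
        noise = rng.normal(0.0, 0.05, size=n_trials)
        return float(np.mean(-np.sum((np.asarray(weights) - 1.0) ** 2) + noise))

    # Nelder-Mead only compares objective values, so it tolerates noise
    # better than gradient-based methods and fits policy comparison.
    result = minimize(lambda w: -evaluate_policy(w),   # minimize negative return
                      x0=np.zeros(6),
                      method="Nelder-Mead",
                      options={"maxiter": 200, "xatol": 1e-3, "fatol": 1e-3})
    print("best weights:", result.x)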

Page 23:

Random Search
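Again the slide is a figure; one common form of random search over policy parameters (an assumption, since no algorithmic details are given) is sketched below:

    import numpy as np

    def random_search(evaluate, dim=6, iterations=500, scale=1.0, seed=0):
        """Simple random search: propose Gaussian perturbations of the best
        point so far and keep a proposal only if it evaluates better.
        `evaluate` is the (noisy) policy-return estimator; this variant is
        an assumption, not the exact procedure from the slides."""
        rng = np.random.default_rng(seed)
        best = np.zeros(dim)
        best_value = evaluate(best)
        for _ in range(iterations):
            candidate = best + rng.normal(0.0, scale, size=dim)
            value = evaluate(candidate)
            if value > best_value:          # a paired test could replace this comparison
                best, best_value = candidate, value
        return best, best_value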

Page 24:

Differential Evolution

State
– a population of search points

Proposals
– choose a candidate for replacement

– take vector differences between 2 or more pairs of points

– add the weighted differences to a random parent point

– perform crossover between this and the candidate

Replacement
– test whether the proposal is better than the candidate

[Diagram: the proposal is built from a parent point plus weighted vector differences, then crossed over with the candidate.]
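A sketch of the proposal/replacement loop described above follows; the population size, F, CR and the objective are placeholder choices, not values from the slides:

    import numpy as np

    def differential_evolution(evaluate, dim=6, pop_size=20, F=0.5, CR=0.9,
                               generations=100, seed=0):
        """Sketch of the differential evolution loop on this slide:
        proposal = parent + F * (difference of two other population members),
        crossed over with the candidate, which is replaced only if the
        proposal evaluates better. `evaluate` is the (noisy) return estimate."""
        rng = np.random.default_rng(seed)
        pop = rng.normal(0.0, 1.0, size=(pop_size, dim))
        values = np.array([evaluate(p) for p in pop])
        for _ in range(generations):
            for i in range(pop_size):                         # candidate for replacement
                r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                        size=3, replace=False)
                proposal = pop[r3] + F * (pop[r1] - pop[r2])  # parent + weighted difference
                cross = rng.random(dim) < CR                  # crossover mask
                cross[rng.integers(dim)] = True               # keep at least one new component
                trial = np.where(cross, proposal, pop[i])
                value = evaluate(trial)                       # could use a paired test here
                if value > values[i]:                         # replacement test
                    pop[i], values[i] = trial, value
        best = int(np.argmax(values))
        return pop[best], values[best]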

Page 25:

Modeling return
– Return from a simulation trial: F(\theta)

– + (hidden) starting state x: F(\theta, x)

– + random number sequence y: f(\theta, x, y)

True objective function:

    V(\theta) = E_x[F(\theta, x)] = \lim_{N \to \infty} \frac{1}{N} \sum_{i=1}^{N} f(\theta, x_i, y_i)

Noisy objective: the same average with N finite.

PEGASUS objective (fixed scenario set \{x_i, y_i\}):

    V_{PEG}(\theta) = \frac{1}{N} \sum_{i=1}^{N} f(\theta, x_i, y_i)

Policy search = stochastic optimization
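The difference between the noisy objective and the PEGASUS objective can be illustrated as follows; run_trial and its toy dynamics are hypothetical stand-ins for a single simulation trial:

    import numpy as np

    def run_trial(theta, start_state, seed):
        """Hypothetical single simulation trial: returns f(theta, x, y) for
        start state x and the random number sequence generated from `seed`."""
        rng = np.random.default_rng(seed)
        return float(-np.sum((np.asarray(theta) - start_state) ** 2)
                     + rng.normal(0.0, 0.1))

    def noisy_objective(theta, n=64, rng=np.random.default_rng()):
        """Fresh start states and seeds every evaluation: an unbiased but
        noisy estimate of the true objective V(theta)."""
        starts = rng.normal(0.0, 1.0, size=n)
        seeds = rng.integers(0, 2**31, size=n)
        return np.mean([run_trial(theta, x, s) for x, s in zip(starts, seeds)])

    def pegasus_objective(theta, scenarios):
        """PEGASUS-style objective: the *same* fixed (start state, seed) pairs
        are reused for every theta, turning the evaluation into a deterministic
        function (at the risk of overfitting to those scenarios)."""
        return np.mean([run_trial(theta, x, s) for x, s in scenarios])

    # Fixed scenario set shared by all policy evaluations.
    master = np.random.default_rng(0)
    scenarios = [(master.normal(), int(master.integers(0, 2**31))) for _ in range(64)]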

Page 26:

Policy comparison: N trials per policy

[Figure: distributions of return for policy 1 and policy 2 over N = 8 independent trials each; the two sample means are compared to decide which policy is better.]

Page 27:

Policy comparison: paired scenarios

[Figure: the same comparison with paired scenarios — both policies are evaluated on the same N = 8 start states, so their returns can be compared pair by pair before averaging.]

Page 28:

Policy comparison: Paired statistical tests

Optimizing with policy comparisons only
– DSM, random search, DE, grid search

– but not quadratic approximation, simulated annealing, gradient methods

Paired statistical tests
– model changes in individuals (e.g. before and after treatment)

– or the difference between 2 policies evaluated with the same start state:

    \Delta(\theta_1, \theta_2) \sim F(\theta_1, x) - F(\theta_2, x)

– allows calculation of a significance level or automatic selection of N

Paired t test:
– "is the expected difference non-zero?"

– the natural statistic; assumes Normality

Wilcoxon signed rank sum test:
– non-parametric: "is the median non-zero?"

– biased, but works with an arbitrary symmetrical distribution
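Applied to per-scenario return differences, the two tests above map directly onto SciPy's ttest_rel and wilcoxon functions; the return values below are illustrative numbers only:

    import numpy as np
    from scipy.stats import ttest_rel, wilcoxon

    # Returns of two candidate policies on the SAME 8 start states (illustrative).
    returns_1 = np.array([0.62, 0.55, 0.71, 0.40, 0.66, 0.59, 0.48, 0.70])
    returns_2 = np.array([0.58, 0.49, 0.69, 0.44, 0.60, 0.51, 0.47, 0.63])

    # Paired t test: is the expected per-scenario difference non-zero?
    t_stat, t_p = ttest_rel(returns_1, returns_2)

    # Wilcoxon signed-rank test: non-parametric check on the median difference.
    w_stat, w_p = wilcoxon(returns_1 - returns_2)

    # A comparison-based optimizer could accept policy 1 over policy 2 only
    # when the paired test is significant at some chosen level.
    if t_p < 0.05 and returns_1.mean() > returns_2.mean():
        print("accept policy 1 (paired t, p = %.3f)" % t_p)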

Page 29:

Experimental Results (Downhill Simplex Method & Random Search)

RETURN (%), N = 64      2048 TRIALS              65536 TRIALS
                        TRAINING     TEST        TRAINING    TEST
RANDOM SEARCH           1.5 ± 0.2    7.2 ± 2.0   13 ± 0      28 ± 2
PEGASUS                 28 ± 1       33 ± 1      60 ± 3      34 ± 2
PEGASUS (WX)            5.3 ± 0.1    34 ± 3      46 ± 2      42 ± 1
SCENARIOS (WX)          4.3 ± 0.2    17 ± 0      44 ± 1      41 ± 1
UNPAIRED                4.6 ± 0.2    20 ± 1      40 ± 2      40 ± 2

Pairing accelerates learning

Pegasus overfits (to the particular random seeds)

Wilcoxon test reduces overfitting

Only the start states need to be paired

Page 30:

Adapting N in Random Search

Paired t test: 99% significance (accept); 90% (reject)
– Adaptive N used on average 24 trials for each policy

[Figure: test-set performance (0 to 0.6) versus number of trials (0 to 65536) for random search with N = 16, N = 64 and adaptive N.]
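One plausible reading of the adaptive-N rule above (the batching, bounds and exact stopping logic are assumptions beyond what the slide states) is sketched below: keep adding paired trials until the paired t test is conclusive either way:

    import numpy as np
    from scipy.stats import ttest_rel

    def adaptive_compare(eval_pair, n_min=4, n_max=64, accept_p=0.01, reject_p=0.10):
        """Adaptive choice of N for a paired policy comparison (sketch).

        `eval_pair(i)` runs scenario i for both policies and returns
        (return_of_new_policy, return_of_incumbent). Trials are added one
        at a time until the paired t test accepts the new policy (p < 0.01
        with a positive mean difference) or suggests no detectable
        improvement (p > 0.10); n_min and n_max are assumed bounds.
        """
        diffs = []
        for i in range(n_max):
            r_new, r_old = eval_pair(i)
            diffs.append(r_new - r_old)
            if len(diffs) >= n_min:
                _, p = ttest_rel(np.array(diffs), np.zeros(len(diffs)))
                if p < accept_p and np.mean(diffs) > 0:
                    return True, len(diffs)      # accept the new policy
                if p > reject_p:
                    return False, len(diffs)     # give up on this candidate
        return np.mean(diffs) > 0, len(diffs)    # fall back to the mean at n_max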

Page 31:

Adapting N in the Downhill Simplex Method

Paired t test: 95% confidence

Upper limit on N increases from 16 to 128 during learning

[Figure: training and test performance (0 to 0.6) versus number of trials (0 to 65536), with restarts of the simplex marked.]

Page 32:

Differential Evolution: N=2

Very small N can be used
– because the population has an averaging effect

– decisions only have to be >50% reliable

With unpaired comparisons: 27% performance
With paired comparisons: 47% performance

– different Pegasus scenarios for every comparison

The challenge: find a stochastic optimization procedure that
– exploits this population averaging effect

– but is more efficient than DE.

Page 33:

2D pursuer evader: summary

Relevance of results
– non-trivial cooperative strategies can be learnt very rapidly

– major performance gain against maneuvering targets compared with 'selfish' pursuers

– awareness of the position of the other pursuer improves performance

Learning is fast with direct policy search
– success on a 12D problem

– paired statistical tests are a powerful tool for accelerating learning

– learning was faster if policies were initially symmetrical

– policy iteration / fictitious play was also highly effective

Extension to 3 dimensions
– feasible

– policy space much larger (perhaps 24D)

Page 34:

Conclusions

Reinforcement learning is a practical problem formulation for training autonomous systems to complete complex military tasks.

A broad range of potential applications has been identified.

Many approaches are available; 4 types identified.

Direct policy search methods are appropriate when:
– the policy can be expressed compactly

– extended planning / temporal reasoning is not required

Model-based methods are more appropriate for:
– discrete-state problems

– problems requiring extended planning (e.g. navigation)

– a robustness guarantee