


Reinforcement Learning For Adaptive Traffic Signal Control With Limited Information
Machine Learning (CS 229) Final Project, Fall 2015

Jeff Glick ([email protected]), M.S. Candidate, Department of Management Science and Engineering, Stanford University

Motivation

Recent research has applied reinforcement learning effectively to adaptive traffic signal control. A learning agent performs best when it can observe the true state of the environment while choosing its actions, and most prior work gives the agent this perfect state information. In a real-world deployment, however, that level of observability would require extensive physical instrumentation. This study explores training a control agent that observes only a limited subset of vehicles via cell-phone geolocation data, and compares its performance against a legacy fixed-phase timing policy and against an agent with access to perfect information.

Algorithm & Key Parameters

Reinforcement Learning Cycle

• Q-learning develops quality values $Q(s,a)$ for each state-action pair $(s,a)$ as an estimate of the true optimal action value $Q^*(s,a)$

• Q-values are updated continuously and asynchronously for on-line learning; convergence assumes every state-action pair is visited infinitely often

• Q-learning update (see the Python sketch after this list): $Q(s,a) := (1-\alpha)\,Q(s,a) + \alpha\left(r + \gamma \max_{a' \in A} Q(s',a')\right)$

• States: discretized state space; the number of states for this problem is $(\#\,\text{light phases}) \times [\#\,\text{queue-size buckets} \times \#\,\text{waiting-time buckets}]^{\#\,\text{edges}}$

• Learning rate $\alpha$: initially $\alpha = 1$, which ignores previous experience; as $\alpha \to 0$, previous experience is weighted more heavily. Here $\alpha(s,a) = \frac{1}{n(s,a)} = 1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \dots$

• Discount factor $\gamma$: use $\gamma \approx 1$ to prevent myopic decision making
• Control policy: given a state $s$, try action $a$ with probability $p_s(a) = \dfrac{e^{Q(s,a)/\tau}}{\sum_{a' \in A} e^{Q(s,a')/\tau}}$ (a softmax distribution), where $\tau$ controls exploration; if $\tau$ is large, actions are chosen with approximately equal $p_s(a)$; as $\tau \to 0$, the policy becomes deterministic and chooses $\arg\max_{a \in A} Q(s,a)$ with probability 1

• Reward $r$: determined by the objective function $r = \sum_{i=1}^{4} \left[\beta_q\,(\text{queue size})_i^{\theta_q} + \beta_w\,(\text{waiting time})_i^{\theta_w}\right]$, summed over the intersection's four edges
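The poster does not include the learning code itself; the following is a minimal sketch of the update rule, the softmax control policy, and the reward defined above, using the $1/n(s,a)$ learning rate. The reward coefficients ($\beta_q$, $\theta_q$, $\beta_w$, $\theta_w$) are illustrative placeholders, not the tuned values.

```python
import math
import random
from collections import defaultdict

ACTIONS = [0, 1]        # 0 = continue current phase, 1 = transition to the next phase
GAMMA = 0.99            # discount factor; gamma ~ 1 avoids myopic decisions
TAU = 1.0               # softmax temperature; larger tau -> more exploration

Q = defaultdict(float)  # Q[(s, a)]: estimated action value
N = defaultdict(int)    # N[(s, a)]: visit count, so alpha(s, a) = 1 / n(s, a)

def softmax_action(s):
    """Sample an action with probability proportional to exp(Q(s, a) / tau)."""
    weights = [math.exp(Q[(s, a)] / TAU) for a in ACTIONS]
    r = random.random() * sum(weights)
    for a, w in zip(ACTIONS, weights):
        r -= w
        if r <= 0:
            return a
    return ACTIONS[-1]

def reward(queue_sizes, waiting_times, beta_q=-1.0, theta_q=1.0, beta_w=-0.1, theta_w=1.0):
    """r = sum over the 4 edges of beta_q * queue^theta_q + beta_w * wait^theta_w.

    The coefficients are placeholders; the poster only indicates that the
    resulting rewards are negative.
    """
    return sum(beta_q * q ** theta_q + beta_w * w ** theta_w
               for q, w in zip(queue_sizes, waiting_times))

def q_update(s, a, r, s_next):
    """Q(s,a) := (1 - alpha) Q(s,a) + alpha (r + gamma max_a' Q(s', a'))."""
    N[(s, a)] += 1
    alpha = 1.0 / N[(s, a)]          # 1, 1/2, 1/3, ... per state-action pair
    target = r + GAMMA * max(Q[(s_next, a2)] for a2 in ACTIONS)
    Q[(s, a)] = (1.0 - alpha) * Q[(s, a)] + alpha * target
```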


Simulation Build & Data Generation

Simulation Setup:
• OpenStreetMap and the Java OpenStreetMap Editor
• Simulation of Urban MObility (SUMO)
• Realistic, variable arrival and turn rates for a single 8-phase, 4-way intersection, generated by arrivalGen.py (a sketch of this arrival generation follows this list)
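arrivalGen.py is not reproduced on the poster; the sketch below shows one way its described steps could look, assuming a hypothetical polynomial rate function lam(t) standing in for the fitted arrival rate, with a standard thinning sampler for the non-homogeneous arrivals and the ~30% geolocation tagging from the pipeline description.

```python
import random

LAM_MAX = 0.2  # upper bound on lam(t) over the horizon, required for thinning

def lam(t):
    """Hypothetical polynomial arrival rate (vehicles/second) at second t.

    The real coefficients come from arrivalGen.py's fit to synthetic data;
    these placeholders are chosen only so that lam(t) stays below LAM_MAX.
    """
    return max(0.0, 0.05 + 1e-4 * t - 2e-8 * t * t)

def generate_arrivals(horizon_s, geo_share=0.30, seed=0):
    """Sample a non-homogeneous Poisson arrival schedule by thinning.

    Returns (arrival_time, geolocated) pairs; roughly geo_share of vehicles
    are tagged as providing cell-phone geolocation data (the poster's ~30%).
    """
    rng = random.Random(seed)
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(LAM_MAX)          # candidate from a rate-LAM_MAX process
        if t > horizon_s:
            break
        if rng.random() < lam(t) / LAM_MAX:    # accept with probability lam(t)/LAM_MAX
            arrivals.append((t, rng.random() < geo_share))
    return arrivals
```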

Simulation Architecture & Learning Pipeline

Inputs:
• map.osm: intersection map prepared in the Java OSM Editor; converted to a SUMO network with SUMO NETCONVERT
• in.net.xml: lanes, intersections, lights, speed limits
• in.add.xml: induction loops and miscellaneous simulation inputs
• in.rou.xml: vehicle routes and times
• in.set.xml: SUMO GUI settings
• arrivalGen.py: fits polynomial arrival rate functions to synthetic data, generates a random vehicle arrival schedule, and tags selected vehicles (~30%) when GEOLOCATION is ON
• palm.sumocfg: simulation control file; run in the SUMO GUI or from the command line

Outputs (from the SUMO server):
• out.queue.xml: lane queue sizes at time t
• out.fcd.xml: vehicle status at time t
• out.full.xml: full simulation output (lane throughput, occupancy)
• out.ind.xml: induction loop counts and status at time t

Clients (communicating with the SUMO server through the Traffic Control Interface (TraCI) API):
• controller.py: decides light phase changes, collects the reward based on the objective function, and learns the optimal policy via Q-learning; controls the stop light through TraCI (a sketch follows this list). Action: every s seconds, either transition to the next phase (1) or continue the current phase (0)
• detectState.py: maximum likelihood estimates of non-homogeneous arrival rates, queue sizes, and waiting times
• parseFull.py: validates the simulation, analyzes and visualizes the output, and assesses the performance of the learning algorithms to adjust tuning parameters
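controller.py itself is not shown on the poster; the loop below is a minimal client-side sketch of the TraCI interaction it describes. The traffic-light ID, lane IDs, and bucket thresholds are placeholders, the softmax_action, q_update, and reward helpers are reused from the Q-learning sketch above, and the call names follow recent SUMO releases (the 2015-era API exposed traci.trafficlights rather than traci.trafficlight).

```python
import traci  # SUMO Traffic Control Interface client, shipped with SUMO

TLS_ID = "center"                                  # placeholder traffic-light ID from in.net.xml
LANES = ["n_in_0", "s_in_0", "e_in_0", "w_in_0"]   # placeholder incoming lane IDs (one per edge)
N_PHASES = 8                                       # the poster's single 8-phase intersection
DECISION_INTERVAL = 5                              # the poster's "every s seconds"

def observe_state():
    """Discretized (light phase, queue-size buckets, waiting-time buckets) state."""
    phase = traci.trafficlight.getPhase(TLS_ID)
    queues = tuple(min(traci.lane.getLastStepHaltingNumber(l) // 5, 4) for l in LANES)
    waits = tuple(min(int(traci.lane.getWaitingTime(l)) // 30, 4) for l in LANES)
    return (phase, queues, waits)

traci.start(["sumo", "-c", "palm.sumocfg"])        # launch SUMO with the poster's config file
state = observe_state()
for _ in range(3600 // DECISION_INTERVAL):         # roughly one simulated hour
    action = softmax_action(state)                 # 0 = keep phase, 1 = advance to next phase
    if action == 1:
        traci.trafficlight.setPhase(TLS_ID, (traci.trafficlight.getPhase(TLS_ID) + 1) % N_PHASES)
    else:
        traci.trafficlight.setPhaseDuration(TLS_ID, DECISION_INTERVAL)  # hold the current phase
    for _ in range(DECISION_INTERVAL):             # advance s seconds of simulated time
        traci.simulationStep()
    next_state = observe_state()
    r = reward([traci.lane.getLastStepHaltingNumber(l) for l in LANES],
               [traci.lane.getWaitingTime(l) for l in LANES])
    q_update(state, action, r, next_state)
    state = next_state
traci.close()
```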

Initial Results

• Validated the traffic dynamics; selected bucket thresholds for the discrete queue sizes and waiting times
• Queues blowing up
• Learning rate $\alpha$ shrinking too quickly
• Crude discretization (most of the ~25k states are never visited)
• Challenges with volatility
• The reward should be the change in the objective function (reward improvement)


• Throttled the learning rate $\alpha(s,a)$ (one possible schedule is sketched below); the system still performs better during off-peak periods
• Some improvement from increasing the bucket thresholds, delaying the progression of the learning rate
• Increased $\tau$ (important when rewards are negative)
• Still performance issues; the MDP assumption may not hold
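The poster does not give the exact throttling rule used for the learning rate; the snippet below shows one common choice, $\alpha(s,a) = c / (c + n(s,a))$, which delays the decay relative to plain $1/n$ while still tending to zero.

```python
def throttled_alpha(n, c=50):
    """Learning-rate schedule that decays more slowly than the plain 1/n.

    c is a hypothetical tuning constant (the actual throttling rule is not
    given on the poster): larger c keeps alpha near 1 for longer before the
    decay takes over, while alpha still tends to 0 as the visit count n grows.
    """
    return c / (c + n)

# Example progression: n = 1 -> 0.98, n = 50 -> 0.50, n = 500 -> 0.09 (vs. 1/n = 0.002)
```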

Next Steps

• Continue to experiment with the learning strategy, parameters, and objective function; improve the discretization

• Work on state detection problem (limited information); learn arrival rates or use hour of day in the state space

• Change the arrival rate dynamics to test the robustness of the process

Acknowledgements: Michael Bennon (Stanford Global Projects Center), Allen Huang (CS229 Student), Jesiska Tandy (CS229 Student)