A Comparative Analysis of Agent-Based Pedestrian Modelling Approaches
by
Gregory Braeden Patchell Hoy
A thesis submitted in conformity with the requirements for the degree of Master of Applied Science
Department of Civil Engineering University of Toronto
© Copyright by Gregory Braeden Patchell Hoy 2017
A Comparative Analysis of Agent-Based Pedestrian Modelling
Approaches
Gregory Braeden Patchell Hoy
Master of Applied Science
Department of Civil Engineering
University of Toronto
2017
Abstract
Pedestrian simulation models are becoming more frequently relied upon by researchers and
practitioners when designing or evaluating transit stations, event venues, and other large facilities.
However, the computational performance of these models can vary greatly and their relative
accuracy is generally unexplored, leaving users with little guidance when selecting a model for
their needs. Here, models based on the Social Forces, Optimal Steps, and static graph concepts are
coded into MassMotion, then calibrated against observed pedestrian flow data using Genetic
Algorithms. Using two transit station scenarios, each model's ability to match observed and
increased-volume pedestrian flows is explored and their relative speeds are compared, generally
showing significant speed advantages over the default MassMotion model, but also indicating that
the graph and Optimal Steps models are not sufficiently sensitive to congestion. Additionally,
this work explores the behaviour of these models within various facility elements, including
corridors, corners, and open spaces.
Acknowledgments
Ever since I began my undergraduate studies, I knew that I wanted to focus on transportation
engineering and get an MASc degree in the field, and by completing this thesis, I have finally
achieved my goal. Not only does this work represent the end of my master's studies, but it
concludes my seven years at the University of Toronto. This may be the beginning of my thesis,
but it is the end of a very meaningful chapter of my life.
Of course, I could not have made it to this point without the support of many people. First and
foremost, I would like to thank my supervisor, Professor Amer Shalaby, for all his help and
guidance over the years. From discussions on data collection to debating the merits of various
pedestrian models, I greatly value the time you spent with me and the insights you provided, all of
which helped bring this project to life. I would also like to thank my industrial supervisor, Erin
Morrow, for his expertise on all things MassMotion and his help with the more technical aspects
of this project. Thank you both!
I must also thank Siva Srikukenthiran for all of his support and help with this project, as well as
Ian Mackenzie, Lachlan Miles, Micah Zarnke, and Daniel Park for their assistance with
MassMotion and the MassMotion Software Development Kit, two pieces of software that were
essential to completing this work. Thank you as well to the Natural Sciences and Engineering
Research Council of Canada (NSERC) and the Transportation Association of Canada (TAC) for
funding this research.
Finally, I would like to thank some people who have always supported and motivated me
throughout these past years. To my parents, thank you for always being there for me, and for
supporting me on this path – who knew my passion for transportation would go this far? To my
friends and colleagues in the ITS Lab, thank you for being there to celebrate the successes, keep
me going when things got rough, and of course help collect my data! Last but certainly not least,
thank you to my best friend and partner, Wendy – I couldn’t have done this without you.
Table of Contents
Acknowledgments.......................................................................................................................... iii
Table of Contents ........................................................................................................................... iv
List of Tables ................................................................................................................................ vii
List of Figures ................................................................................................................................ ix
List of Appendices ........................................................................................................................ xii
List of Acronyms ......................................................................................................................... xiii
1 Introduction .................................................................................................................................1
1.1 Research Objectives .............................................................................................................3
1.2 Thesis Organization .............................................................................................................4
2 Literature Review ........................................................................................................................6
2.1 The Evolution of Pedestrian Simulation ..............................................................................6
2.1.1 Early Pedestrian Models ..........................................................................................6
2.1.2 Research Developments and Current Models ..........................................................8
2.1.3 Commercial Models and Tools ..............................................................................10
2.2 Calibration and Comparison of Agent-Based Models .......................................................11
2.2.1 Calibration of Models ............................................................................................11
2.2.2 Comparison of Models ...........................................................................................13
2.3 Data Collection for Pedestrians .........................................................................................15
2.3.1 Video Data .............................................................................................................16
2.3.2 Wireless Tracking ..................................................................................................17
3 Motivation and Context ............................................................................................................19
3.1 The Nexus Platform ...........................................................................................................19
3.2 Comparison of MassMotion and Legion ...........................................................................21
3.2.1 Bottleneck Test ......................................................................................................22
3.2.2 Stair Test ................................................................................................................26
3.2.3 Waiting Test ...........................................................................................................28
3.2.4 Conclusions ............................................................................................................31
4 Data Collection..........................................................................................................................33
4.1 Data Collection Methods ...................................................................................................33
4.2 Site Selection .....................................................................................................................35
4.2.1 Osgoode Station .....................................................................................................36
4.2.2 Queen’s Park Station..............................................................................................38
4.3 Data Processing ..................................................................................................................40
5 Pedestrian Model Implementation ............................................................................................44
5.1 Graph Model ......................................................................................................................45
5.2 Social Forces Model ..........................................................................................................46
5.3 Optimal Steps Model .........................................................................................................51
5.4 MassMotion .......................................................................................................................54
5.5 Cellular Automata Model ..................................................................................................55
6 Model Calibration .....................................................................................................................59
6.1 Calibration Method ............................................................................................................59
6.2 Preliminary Calibration and Revision ................................................................................62
6.2.1 Calibration of Social Forces Parameters ................................................................62
6.2.2 Developing a New Calibration Procedure .............................................................65
6.3 Final Calibration Results....................................................................................................66
7 Model Comparison ....................................................................................................................71
7.1 Base Case Performance......................................................................................................71
7.1.1 Osgoode Station Concourse ...................................................................................72
7.1.2 Queen’s Park Station Concourse ...........................................................................76
7.1.3 Station Elements ....................................................................................................79
7.2 Increased Volume Scenarios ..............................................................................................85
7.2.1 Osgoode Station Concourse ...................................................................................85
7.2.2 Queen’s Park Station Concourse ...........................................................................89
8 Conclusions and Future Work ...................................................................................................94
8.1 Conclusions ........................................................................................................................94
8.2 Future Work .......................................................................................................................96
References ......................................................................................................................................98
List of Tables
Table 3.1: Level of Service colours and values (Fruin, 1971) ...................................................... 26
Table 4.1: Osgoode Station summary data from August 25th, 2016 ............................................. 38
Table 4.2: Queen’s Park Station summary data from September 14th, 2016 ................................ 40
Table 4.3: Sample of raw data from Advanced Tally Counter (left) and Counter + (right) ......... 41
Table 4.4: Osgoode Station origin/destination matrix .................................................................. 42
Table 4.5: Queen’s Park Station origin/destination matrix ........................................................... 43
Table 4.6: Average pedestrian speeds in Osgoode and Queen’s Park Stations ............................ 43
Table 5.1: Common variables used in model equations ............................................................... 45
Table 5.2: Common parameter values used in model equations .................................................. 45
Table 5.3: Values of key parameters for Social Forces model ..................................................... 49
Table 5.4: Values of key parameters for Optimal Steps model .................................................... 53
Table 5.5: Component forces in MassMotion Social Forces model (Arup, 2015) ....................... 55
Table 6.1: Key parameters and functions used in Genetic Algorithm .......................................... 61
Table 6.2: Social Forces parameters from literature and calibration ............................................ 63
Table 6.3: Uncalibrated and calibrated model speed parameters ................................................. 66
Table 7.1: R-squared and SSE fits for all models, Osgoode scenario .......................................... 74
Table 7.2: Setup and run times for all pedestrian models, Osgoode scenario .............................. 76
Table 7.3: R-squared and SSE fits for all models, Queen’s Park scenario ................................... 78
Table 7.4: Setup and run times for all models, Queen’s Park scenario ........................................ 79
Table 7.5: Model R-squared and SSE against MassMotion, Osgoode scenarios ......................... 86
Table 7.6: Set-up and run times for all models in higher-volume Osgoode scenarios ................. 89
Table 7.7: Model R-squared and SSE against MassMotion, Queen’s Park scenarios .................. 90
Table 7.8: Set-up and run times for all models in higher-volume Queen’s Park scenarios .......... 93
List of Figures
Figure 2.1: Movement of an agent represented by three different pedestrian movement models
(Seitz, Dietrich, Köster, & Bungartz, 2016) ................................................................................... 9
Figure 3.1: Main components of the Nexus platform and their organization (Srikukenthiran &
Shalaby, 2017) .............................................................................................................................. 20
Figure 3.2: Geometry for bottleneck scenario test ........................................................................ 22
Figure 3.3: Cumulative volume plots for bottleneck test, screenlines 1 and 2 ............................. 23
Figure 3.4: Agent trip time (creation to exit) distribution for bottleneck test............................... 24
Figure 3.5: Maximum agent densities for bottleneck test ............................................................. 25
Figure 3.6: Average agent densities for bottleneck test ................................................................ 25
Figure 3.7: Geometry for stair test ................................................................................................ 27
Figure 3.8: Cumulative volume plots for stair test, screenlines 1 and 4 ....................................... 27
Figure 3.9: Maximum agent densities for stair test....................................................................... 28
Figure 3.10: Geometry for waiting scenario test .......................................................................... 29
Figure 3.11: Agent trip time distribution for waiting test ............................................................. 30
Figure 3.12: Average agent densities for waiting test................................................................... 31
Figure 4.1: Map of TTC subway network, highlighting Osgoode and Queen’s Park stations
(Toronto Transit Commission, 2017) ........................................................................................... 36
Figure 4.2: Map of Osgoode Station concourse with screenlines ................................................. 37
Figure 4.3: Map of Queen’s Park station concourse with screenlines .......................................... 39
Figure 4.4: Cumulative flow plot for Osgoode screenline A ........................................................ 41
Figure 5.1: Diagram of elliptical force fields used in Social Forces model (Johansson, Helbing, &
Shukla, 2007) ................................................................................................................................ 48
Figure 5.2: Impact of increasing relaxation parameter from 0.5 (left) to 0.75 (right) .................. 50
Figure 5.3: Determination of potential positions in Optimal Steps model ................................... 52
Figure 5.4: Comparison of hexagonal, Von Neumann, and Moore neighbourhoods ................... 56
Figure 5.5: Discretized corridor with centroids in Osgoode Station ............................................ 57
Figure 5.6: Disconnected and irregularly spaced centroids at junction between elements .......... 57
Figure 6.1: Comparison of Social Forces parameter sets for street exit flows ............................. 64
Figure 6.2: Comparison of Social Forces parameters sets for all concourse exit flows ............... 64
Figure 6.3: Observations vs. calibrated Graph model, 60 second flows....................................... 67
Figure 6.4: Observations vs. calibrated Social Forces model, 60 second flows ........................... 68
Figure 6.5: Observations vs. calibrated Optimal Steps model, 60 second flows .......................... 69
Figure 6.6: Observations vs. MassMotion, 60 second flows ........................................................ 70
Figure 7.1: Comparison of observations and all model flows at screenlines A-C ........................ 73
Figure 7.2: Comparison of observations and all model flows at screenlines D & E .................... 75
Figure 7.3: Comparison of observations and all model flows at screenlines A-D ....................... 77
Figure 7.4: Comparison of observations and all model flows at screenlines E & F ..................... 78
Figure 7.5: Experienced density maps of Osgoode corridor......................................................... 80
Figure 7.6: Experienced density maps of Queen’s Park T-junction ............................................. 81
Figure 7.7: Maximum density maps of Osgoode concourse......................................................... 83
Figure 7.8: Maximum density maps for Queen’s Park concourse ................................................ 84
Figure 7.9: Cumulative exit plot at Osgoode screenline A, 160% volume ................................... 87
Figure 7.10: Cumulative exit plot at Osgoode screenline E, 160% volume ................................. 87
Figure 7.11: Cumulative exit plot at Osgoode screenline E, 170% volume ................................. 88
Figure 7.12: Cumulative exit plot at Queen’s Park screenlines C and E, 170% volume .............. 90
Figure 7.13: Cumulative exit plot at Queen’s Park screenline F, 170% volume .......................... 91
Figure 7.14: Cumulative exit plot at Queen’s Park screenline B, 180% volume ......................... 92
Figure 7.15: Cumulative exit plot at Queen’s Park screenline E, 180% volume.......................... 92
List of Appendices
Appendix A – Osgoode Station Calibration Data ........................................................................108
Appendix B – Pedestrian Model Code.........................................................................................110
Appendix C – Model Calibration Code .......................................................................................130
Appendix D – Model Evaluation Code ........................................................................................143
Appendix E – GNU General Public License ...............................................................................148
List of Acronyms
CA Cellular Automata
GA Genetic Algorithm
GRE Global Relative Error
ITS Intelligent Transportation Systems
LOS Level of Service
MAC Media Access Control
SDK Software Development Kit
SSE Sum of Squared Errors
TTC Toronto Transit Commission
Chapter 1
1 Introduction
As populations in urban centres grow, so does congestion and crowding in major venues and
facilities. Not only can this negatively impact the operation of these facilities and the experience
of pedestrians moving through them, but severe crowding has led to numerous disasters involving
injuries and fatalities (Helbing, Farkas, Molnár, & Vicsek, 2002). To better understand how people
behave in these situations and what impacts various levels of crowding have on the operation and
safety of a facility, pedestrian models are employed. These models often rely on complex
algorithms to reproduce specific attributes of pedestrian movements in both low- and high-density
scenarios. With recent increases in available software and computational power, these models have
enabled researchers and practitioners to assess the performance of transit stations (King,
Srikukenthiran, & Shalaby, 2014), simulate evacuations in buildings (Zheng, Zhong, & Liu, 2009),
and propose safety improvements at a major religious site (Ball, 2007). Undoubtedly, the results
of these investigations depend on the quality of the geometric models and input data used, but the
pedestrian movement model at the heart of each simulation is not often considered. Further, the
amount of time it takes to simulate a scenario can vary drastically between movement models
(Castle, Waterson, Pellissier, & Le Bail, 2011), as can their accuracy and the level of detail with
which they simulate pedestrian movements.
With continued improvements in computer technology, detailed pedestrian movement models
have grown in favour, but simple pedestrian simulation methods are thought to still be of great
use. This is particularly true for applications and studies where detailed models are not required
for some elements of the system under investigation, or at times when pedestrian volumes are low.
For example, some high-volume transfer stations in a subway system or network may demand the
use of a more complex pedestrian model, but stations that are geometrically simple and feature
lower volumes, or are investigated during off-peak times, may be sufficiently simulated with a
simple model. However, before any methods can be employed, it is important to understand how
well these models replicate real-world conditions and behaviours. It is also key to investigate when
the accuracy of such models becomes unacceptable relative to their reduced computational
demands – even if “all models are wrong, but some are useful” (Box & Draper, 1987), it seems
inappropriate to use a very wrong model in the name of saving time.
Although some studies have been conducted to evaluate and compare pedestrian microsimulation
models, none could be found that compared a range of models using identical real-world scenarios
within a consistent computational framework. To perform such a comparison, a novel approach
was developed using MassMotion, a commercial pedestrian microsimulation package, and its
Software Development Kit (SDK), a tool that allows external code to take full control of agent
movement and behaviour in a MassMotion simulation. On its own, MassMotion was used to
develop three dimensional models of two local Toronto Transit Commission (TTC) subway
stations from detailed floor plans, including all stairs, escalators, fare gates, and other elements.
Each of these models was then supplemented with real-world passenger movement data collected
from the corresponding station to serve as an input timetable, providing agent creation times and
locations. With geometric models for each station established and initial demand scenarios
configured, a set of simple pedestrian movement models was chosen, and using MassMotion’s
SDK, coded in C# to interface with the MassMotion core. By using the SDK, the geometric models
of each station, agent timetables, and MassMotion-generated data such as cost and distance maps
could remain identical, regardless of whether an external pedestrian model or the integrated
MassMotion movement model was being tested. This also allowed for an identical computational
framework, ensuring that all model runs were limited to a specified number of CPU threads and
given the same resources, so any comparisons of computational speed would be fair.
Testing these models began with selecting them based on existing methods in the literature,
ranging from mesoscopic to microscopic simulation methods. Although technical limitations
influenced the final choice of models, such as the need to operate in continuous space and work
with an agent-based framework, the models selected each represented a different level of
abstraction or detail with which pedestrian movements are simulated. Specifically, the models
chosen were a graph-type mesoscopic model, a simple Social Forces microscopic model, and a
hybrid Optimal Steps model, in addition to the default MassMotion microscopic Social Force-
based model. Once chosen, the movement models were coded to interface with the MassMotion
SDK, and necessary modifications were made to ensure that each model was compatible with
MassMotion geometry. Speed parameters for each model were then calibrated to best match the
observed data from Osgoode Station using an optimization approach based on Genetic Algorithms.
The full set of calibrated movement models were used to simulate base case scenarios in each
station, generating cumulative flow plots at key screenlines, agent density maps for concourse
areas and smaller elements, and simulation run times. Finally, higher-volume tests were conducted
to compare model behaviours under congested scenarios, providing some insight into the limits of
these models.
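The Genetic Algorithm calibration step mentioned above is detailed in Chapter 6; as a rough illustration of the idea, the sketch below tunes a single walking-speed parameter to minimize the sum of squared errors against observed flows. Here `simulate_flows()` is a stand-in for an actual MassMotion run, and all numbers (flow counts, parameter bounds, GA settings) are illustrative assumptions, not values from the thesis.

```python
import random

# Toy Genetic Algorithm: tune one walking-speed parameter to minimize the
# sum of squared errors (SSE) against observed screenline flows.

OBSERVED = [12, 18, 25, 21, 15]  # hypothetical 60-second flow counts

def simulate_flows(speed):
    # Placeholder for a simulation run: pretend flows scale with
    # walking speed (1.35 m/s nominal).
    return [round(c * speed / 1.35) for c in OBSERVED]

def fitness(speed):
    # SSE between observed and "simulated" flows; lower is better.
    return sum((o - s) ** 2 for o, s in zip(OBSERVED, simulate_flows(speed)))

def calibrate(pop_size=20, generations=30, seed=42):
    rng = random.Random(seed)
    pop = [rng.uniform(0.8, 2.0) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]           # truncation selection + elitism
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.05)    # crossover + mutation
            children.append(min(max(child, 0.5), 2.5))  # keep within bounds
        pop = parents + children
    return min(pop, key=fitness)

best_speed = calibrate()
```

In the actual workflow each fitness evaluation is far more expensive (a full simulation run), which is why the population size and generation count matter so much for total calibration time.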
1.1 Research Objectives
First and foremost, this research was driven by the goal of understanding how the four chosen
pedestrian movement models compared to one another in three respects: their ability
accurately simulate low- to medium-volume transit station flows, differences in their
representation of pedestrian movement in different geometric spaces, and limitations on their
ability to simulate higher-volume pedestrian scenarios. By comparing each model’s ability to
simulate flows, one can get a good idea of how well the model reproduces macroscopic
phenomena, while more microscopic details of each model’s operation and accuracy can be
obtained by inspecting agent paths and densities in specific geometric elements. Higher-volume
tests, on the other hand, help to establish boundaries for the reasonable and accurate operation of
each model. In addition to comparing the accuracy of the models, emphasis was placed on
comparing each model’s computational speed during these tests, providing another important
difference to consider when selecting a model for a specific task.
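The flow-matching comparisons described here are reported in Chapter 7 as R-squared and SSE fits against observed counts. As a minimal sketch of the two measures applied to screenline flow counts (the example values below are invented, not thesis data):

```python
# Two goodness-of-fit measures between observed and simulated flow counts:
# sum of squared errors (SSE) and the coefficient of determination (R-squared).

def sse(observed, simulated):
    return sum((o - s) ** 2 for o, s in zip(observed, simulated))

def r_squared(observed, simulated):
    # 1 minus the ratio of residual error to total variance of the observations.
    mean_obs = sum(observed) / len(observed)
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1 - sse(observed, simulated) / ss_tot

observed = [10, 14, 22, 19, 13]   # hypothetical counts per interval
simulated = [11, 13, 20, 21, 12]

fit_sse = sse(observed, simulated)
fit_r2 = r_squared(observed, simulated)
```

An R-squared of 1 indicates a perfect match, while values near zero (or negative) indicate the model does no better than predicting the mean observed flow.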
This work also involved a number of technical objectives, mostly regarding the use of MassMotion
and the development of the external models. Since the MassMotion SDK is quite new, this project
presented an opportunity to exercise it in an academic setting, helping the development team fix
errors and add new features. More importantly, a major goal with the use of the
MassMotion SDK was to demonstrate its use as a platform for implementing and testing various
external pedestrian movement models, ensuring that all models could use the same geometry and
demand data, and that the computational performance of the models could be fairly assessed.
Another objective was to show that pedestrian movement models could be coded with relative ease
using information readily available in the literature and integrated into this simulation platform.
With numerous pedestrian models detailed in journals and conference proceedings, demonstrating
the ability to use and test these models within MassMotion would set the stage for significant
future work. The last objective of this research was to calibrate each of the simple pedestrian
models against observed data, showing the applicability of both a specific calibration algorithm
and dataset.
1.2 Thesis Organization
The body of this thesis is divided into several chapters, each of which details a significant piece of
this research project. These chapters are as follows:
• Chapter 2: Literature Review – The origins of pedestrian modelling and simulation are
described in detail, and more recent trends and developments in the field are highlighted.
Existing efforts by the research community to compare pedestrian models are explored,
highlighting the need for this research and the gap it fills. New methods of collecting
pedestrian movement data are also noted, in addition to the results of some studies using
these technologies.
• Chapter 3: Motivation and Context – Details are provided about the Nexus platform, an
integrated simulation tool combining pedestrian, rail, and surface transit simulation to
better understand network performance, and how this research fits in to Nexus. The results
of a study comparing two commercial pedestrian simulation tools, MassMotion and
Legion, are also presented, reinforcing the need to test pedestrian models before using
them.
• Chapter 4: Data Collection – Methods considered and used for collecting pedestrian data
are described, and details are provided on each of the stations from which data was
collected. A summary of the data from each station is presented.
• Chapter 5: Pedestrian Model Implementation – The rationale behind selecting each
pedestrian model is presented, along with details of how it simulates pedestrian movement
and how it was implemented within MassMotion.
• Chapter 6: Model Calibration – Details are provided on the Genetic Algorithm procedure
used to calibrate all external models, including issues encountered with the first calibration
effort, and the promising results of the second calibration.
• Chapter 7: Model Comparison – Each model's performance in reproducing macroscopic
flows is detailed, and a relative comparison of element-level behaviours is provided.
Higher-volume tests are also performed, offering insight into the limitations that should be
considered when choosing one of the models.
• Chapter 8: Conclusions and Future Work – The findings of this research are summarized,
and suggestions for extending and improving this work are provided.
Chapter 2
2 Literature Review
To better understand the world of pedestrian simulation and modelling, three key areas have been
explored in detail. First, the evolution of pedestrian simulation is described, from its simple origins
as an extension of vehicular modelling to new research-based and commercial models. This is
followed by a review of studies calibrating and comparing both vehicular and pedestrian
microsimulation models, identifying knowledge gaps this research aims to fill. Finally, a number
of new and innovative pedestrian data collection techniques are covered, suggesting some ways in
which necessary base-case and calibration data may be obtained.
2.1 The Evolution of Pedestrian Simulation
Over the last 50 years, there has been a broad range of research performed on pedestrian and crowd
dynamics. Although some simple models of pedestrian movement signified the inception of this
field years earlier (Batty, 2001), researchers and practitioners began to pay more attention to
pedestrians following Fruin’s (1971) work on defining pedestrian Level of Service metrics and
highlighting the importance of detailed pedestrian planning. With this spark, a wide range of
pedestrian models were developed, ranging from simple macroscopic approaches to detailed
microscopic simulators. Further, these tools were made accessible to practitioners with the
development and introduction of commercial pedestrian simulation software. The following
sections highlight some of the main breakthroughs in this field, as well as more recent advances
concerning both research and practical applications of pedestrian modelling.
2.1.1 Early Pedestrian Models
The first pedestrian simulation models were derived directly from existing models of vehicular
movement, with minimal changes to handle the differences in behaviour between vehicles and
pedestrians (Batty, 2001). Many of these models neglected the impact of interactions between
pedestrians, instead using simple statistical regression models to approximate the effects of
location and geometry on movement, while those that did consider interactions relied on aggregate
data and macroscopic networks (Batty, 2001). At that time, modellers were also constrained by
the lack of available computing power (Schadschneider, Chowdhury, & Nishinari, 2011), forcing
the use of macroscopic models instead of more detailed microscopic ones. One popular approach
was network modelling, where more complex geometric spaces were simplified into graphs of
links and nodes, with links often representing doorways or hallways, while nodes represented
rooms, shops, or other points of interest (Borgers & Timmermans, 1986). These simplified
structures made it easy for researchers to apply and test models, such as one based on probabilities
for pedestrian route choice in shopping centres, developed by Borgers and Timmermans (1986).
Later, Løvås (1994) used a similar framework to construct a model of pedestrian movement by
applying queueing equations to the links and nodes of a small building network. This model was
used to simulate how a crowd of pedestrians in one room of the building would move through
other rooms to the exit, demonstrating density-dependent route choices and reductions in speeds
caused by the congestion on links (doorways).
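The queueing idea can be illustrated with a deliberately crude sketch, far simpler than Løvås's actual queueing equations: if each link (doorway) passes people at a fixed rate, the time for a group to clear a route is the sum of the queueing delays at each link. The function name and record formats here are hypothetical.

```python
def clearing_time(occupants, route, capacity):
    """Rough network-evacuation estimate (illustrative only): occupants
    file through each link on the route at that link's service rate
    (persons/second), so total time is the sum of per-link delays."""
    t = 0.0
    for link in route:
        t += occupants / capacity[link]
    return t
```

A real queueing model would also track density-dependent walking speeds on links and allow route choice to respond to congestion, as Løvås's model did.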
Around the same time that work on network models was coming to a close, other macroscopic and
microscopic approaches with a greater emphasis on agent interactions were being developed and
tested. At the macroscopic scale, Helbing (1992) proposed a fluid dynamic model to describe the
movement of pedestrians, using gas kinetic equations and Boltzmann principles to represent
multiple types of pedestrian movements as different interacting fluids. Using these equations, it
was possible to demonstrate some observed phenomena, such as the automatic formation of lanes
in a corridor, as well as make recommendations for optimizing pedestrian flow in the contexts of
town- and traffic-planning (Helbing, 1992). At the same time, one of the first Cellular Automata
(CA) models of pedestrian movement was developed, representing blocks of space around the
Jamarah ring in Makkah as cells and exploring how different rates of arrival and loading schemes
affected volumes and clearing times for the space (AlGhadi & Mahmassani, 1991). Using this
model, the authors were not only able to test multiple scenarios, but they suggested ways in which
this research could be applied to other congested, high-volume crowd scenarios, emphasizing the
importance of ongoing research in this field (AlGhadi & Mahmassani, 1991).
Finally, this decade played host to arguably one of the most significant developments in pedestrian
simulation – the proposal of the Social Force model. Based on the idea of behavioural changes
being caused by ‘forces,’ this model sought to apply this concept at the microscopic scale by
calculating these forces and their resulting changes in velocity and position for each pedestrian
many times during a simulation. These forces were originally calculated as a sum of three
components: an acceleration term to bring the agent to their desired velocity, a repulsive term
based on forces from nearby agents (neighbours), and a repulsive term based on forces from nearby
obstacles (walls, columns, etc.) (Helbing & Molnár, 1995). Unlike CA, this model did not require
a discretized space in which to operate, instead allowing agents to move in continuous space. By
performing computer simulations with this model, the authors were able to empirically
demonstrate its ability to represent observed pedestrian phenomena, such as lane formation in a
corridor, simply due to the interaction of pedestrians and subsequent application of forces (Helbing
& Molnár, 1995).
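As a rough illustration of the three-term force sum (not Helbing and Molnár's exact formulation), the per-agent calculation might be sketched as follows; the exponential repulsion form and all parameter values are simplified placeholders rather than the calibrated originals:

```python
import math

def social_force(pos, vel, desired_dir, neighbours, obstacles,
                 v0=1.34, tau=0.5, a_rep=2.0, b_rep=0.3):
    """One agent's total force as the sum of the three original terms:
    relaxation toward the desired velocity, repulsion from neighbours,
    and repulsion from obstacle points. Parameters are illustrative."""
    # Acceleration term: relax toward the desired velocity over time tau
    fx = (v0 * desired_dir[0] - vel[0]) / tau
    fy = (v0 * desired_dir[1] - vel[1]) / tau
    # Repulsive terms: exponential decay with distance, directed away
    # from each neighbour or obstacle point
    for px, py in list(neighbours) + list(obstacles):
        dx, dy = pos[0] - px, pos[1] - py
        dist = math.hypot(dx, dy)
        mag = a_rep * math.exp(-dist / b_rep) / dist
        fx += mag * dx
        fy += mag * dy
    return fx, fy
```

Integrating this force at each time step updates the agent's velocity and position in continuous space, which is what distinguishes the approach from grid-bound CA models.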
2.1.2 Research Developments and Current Models
Following the emergence of the first Social Force-driven model, research on pedestrian simulation
began to accelerate with many new models being developed from both existing concepts and new
ones. Due in part to significant increases in available computing power, the majority of these
models operated on the microscopic scale, considering individual agent behaviours and
movements. Improving upon earlier CA models, Blue and Adler (2001) and Lämmel and Flötteröd
(2015) both developed sets of behavioural rules allowing for bidirectional pedestrian movement
within a cellular grid, taking advantage of agent-based simulation to demonstrate simulated speed-
density relationships. Blue and Adler (2001) also indicate the computational performance of their
model – a surprising but welcome inclusion – but admit that the provided times are inherently tied
to the scenarios tested and computer system used. Other improved models were developed from
Helbing and Molnár’s core Social Force concept, in some cases simply by modifying how the
neighbour and obstacle forces were calculated (Helbing & Johansson, 2009; Rudloff, Matyus,
Seer, & Bauer, 2011). In another case, Extended Decision Field Theory was combined with the
Social Force model to represent micro-level agent interactions and their effect on trip building
within a shopping centre, such as agents changing the order of their planned trip or choosing to
shop at different stores due to congestion in a corridor (Xi, Son, & Lee, 2010).
In addition to models developed from a single pedestrian simulation concept, some authors have
detailed their work on bringing aspects of multiple models together into novel methods. By
combining the continuous space of Social Force models with the discrete movement of CA models,
Seitz and Köster (2012) produced a hybrid model known as Optimal Steps (Seitz, Dietrich, Köster,
& Bungartz, 2016). This model operates by evaluating each agent’s current position and a number
of positions one “step” length away, selecting the one that minimizes the distance to their current
goal, avoids other nearby agents, and avoids nearby obstacles (Seitz & Köster, 2012). The impact
of this movement scheme is demonstrated in Figure 2.1, where it is also compared to movement
in Social Forces and Cellular Automata models. Further research on this concept led to an even
more flexible model, optimizing the agent’s next position anywhere on a disc within one “step”
length of their current position (von Sivers & Köster, 2013). Another hybrid solution relied upon
joining microscopic ordinary differential equations to macroscopic partial differential equations in
a mathematical framework, allowing some level of microscopic behaviour to affect macroscopic
simulation results (Cristiani, Piccoli, & Tosin, 2011).
Figure 2.1: Movement of an agent represented by three different pedestrian movement
models (Seitz, Dietrich, Köster, & Bungartz, 2016)
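The step-selection rule of the Optimal Steps model can be sketched roughly as follows; the circular candidate discretization, weight, and comfort distance below are illustrative inventions, and the obstacle-avoidance term is omitted for brevity:

```python
import math

def next_position(pos, goal, neighbours, step_len, n_candidates=12,
                  w_agent=1.0, comfort=0.5):
    """Evaluate the agent's current position plus n_candidates positions
    one step length away, and pick the one with the lowest cost
    (distance to goal plus a penalty for crowding other agents)."""
    def cost(p):
        c = math.dist(p, goal)
        for n in neighbours:
            d = math.dist(p, n)
            if d < comfort:
                c += w_agent * (comfort - d)  # penalize crowding
        return c
    candidates = [pos] + [
        (pos[0] + step_len * math.cos(2 * math.pi * k / n_candidates),
         pos[1] + step_len * math.sin(2 * math.pi * k / n_candidates))
        for k in range(n_candidates)
    ]
    return min(candidates, key=cost)
```

The later disc-based variant (von Sivers & Köster, 2013) replaces the discrete candidate set with an optimization over the entire disc of radius one step length.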
Finally, some new movement models have emerged within the research community – many
inspired by the original Social Force and CA concepts, but unique enough to stand on their own.
By configuring a system of ordinary differential equations to manipulate agent velocities based on
goal, neighbour, and obstacle proximities, Dietrich and Köster (2014) created the Gradient
Navigation model and successfully removed the oscillatory behaviours that plagued some force-based models. Although the Gradient model is similar to the Social Force model, the direct
manipulation of agent velocities as opposed to agent accelerations produces different results.
Another model recently developed by Ji, Zhang, Hu, and Ran (2016) uses force-driven
manipulations of either agent acceleration or velocity, applying these changes to agent positions
constrained within a cellular grid. It could be argued that this model is yet another combination of
Social Forces and CA concepts, but this implementation further applies fuzzy logic mathematics
to maintain the effects of pedestrian inertia as agents move through space (Ji, Zhang, Hu, & Ran,
2016). A final emerging model gives consideration to another often overlooked factor in pedestrian
movement and crowd dynamics – the tendency for pedestrians to travel with others in small groups.
Working with video recordings of crowds in urban areas, Karamouzas and Overmars (2012)
defined three common group geometries (line-abreast, v-like, and leader-follower) and developed
a model that held agent groups to these formations while incorporating avoidance and adjustment
maneuvers for other groups and obstacles.
2.1.3 Commercial Models and Tools
Presently, there are many commercial models available for simulating pedestrian movements at
varying levels of complexity. Some of these models are based on popular methods developed in
the research community, including VISWALK (PTV Group, 2013) and MassMotion (Arup, 2015),
which are based on the Social Forces concept. In both cases, these models extend the basic Social
Forces model by implementing new forces, such as the queueing and drift forces used in
MassMotion, and including route choice models. This allows a user to simulate a scenario by
providing only a geometric model and demand table, then observe how pedestrians behave – of
course, these software tools also include numerous parameters to adjust, allowing users to obtain
more accurate results by calibrating their models to the population and scenario they are simulating
(Oasys Ltd., 2017). Another popular commercial tool, Legion, was not developed from an existing
model of pedestrian movement, but instead grew from Dr. G. Keith Still’s custom pedestrian model
based on event ingress and egress movements (Ronald, Sterling, & Kirley, 2007). Much like
VISWALK and MassMotion, Legion also provides flexible parameters with which to calibrate a
model. In all cases, these commercial models are more user-friendly than some of their research-based counterparts, in part due to graphical user interfaces, built-in geometry creation and editing
tools, and integrated simulation environments.
While commercial and research-developed models should all offer reasonable simulations of
pedestrian movements, it must be acknowledged that no two methods will perform identically, and
without direct comparison, the magnitude of the differences remains unknown. It is also important
to note that some methods are more advanced and computationally demanding than others,
especially when comparing two models operating at different scales (e.g. macroscopic vs.
microscopic). Unfortunately, these differences in performance are rarely emphasized, providing
no clues as to whether the potential improvements in accuracy from a more demanding model
outweigh its increased computational costs, a key consideration in applications involving large-
scale pedestrian systems.
2.2 Calibration and Comparison of Agent-Based Models
With such a great variety of pedestrian simulation models available to use, it is important that both
researchers and practitioners understand how to set up and calibrate these models. Additionally,
understanding how models differ with respect to speed and accuracy, especially under different
geometric and volume scenarios, helps these users select an appropriate model for their specific
needs. In the following two sections, research on the calibration and comparison of transportation
simulation models is described, starting with the origins of each process in traffic simulation and
proceeding to its current application for pedestrian simulation.
2.2.1 Calibration of Models
The earliest studies involving the calibration of transportation simulation models were performed
using vehicular or traffic simulation models, with a mix of research-based and commercial models
being calibrated. In Hong Kong, a segment of a freeway modelled using FRESIM was simulated
and calibrated to match recorded speed and flow data (Cheu, Jin, Ng, Ng, & Srinivasan, 1998).
Near-optimal parameters for the traffic model (e.g. free-flow speed, car-following sensitivity,
acceleration lag, lane change probability) were all found using a Genetic Algorithm (GA)
approach, which mimics the biological concepts of ‘survival of the fittest’ and mating to produce
more successful offspring (Abdulhai, 2015). Similarly, Ma and Abdulhai (2002) calibrated a larger
model of a roadway network in Toronto, testing a number of fitness functions for a GA approach
including Point Mean Absolute Error, Point Mean Relative Error, and General Relative Error
(GRE). Due to the size and complexity of their model, the calibration was performed using turning
movement counts for 15 key intersections in the network, comparing simulated hourly counts to
those recorded in the field (Ma & Abdulhai, 2002). More recently, the calibration of a VISSIM
traffic model against speed-density curves has been detailed, noting that volume and speed data
collected from the field can also offer a useful set of data against which to calibrate (Fellendorf &
Vortisch, 2010). A similar approach was also described for users of Aimsun, noting the importance
of having two distinct sets of data for the process – one against which to calibrate the model, and
a second with which the model can be validated (Casas, Ferrer, Garcia, Perarnau, & Torday, 2010).
From these studies, it is clear that there are many methods that can be used to reasonably calibrate
a traffic model, as well as a number of data sources that can provide the necessary ground truth
against which to calibrate.
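As a schematic of the GA approach used in these studies (not any cited author's actual implementation), a minimal calibration loop might look like the following, with mean absolute error between simulated and observed counts serving as the fitness measure:

```python
import random

def calibrate_ga(simulate, observed, bounds, pop_size=20, generations=30):
    """Minimal GA parameter search (illustrative). `simulate` maps a
    parameter vector to counts comparable with `observed`; lower mean
    absolute error means a fitter individual."""
    def fitness(params):
        sim = simulate(params)
        return sum(abs(s - o) for s, o in zip(sim, observed)) / len(observed)
    def random_individual():
        return [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)             # 'survival of the fittest'
        parents = pop[: pop_size // 2]    # keep the fitter half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(bounds))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:               # occasional mutation
                i = random.randrange(len(bounds))
                child[i] = random.uniform(*bounds[i])
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)
```

In practice, `simulate` would be a full microsimulation run, making each fitness evaluation expensive; this is why the choice of fitness function and data source (counts, speeds, fundamental diagrams) matters so much in the studies above.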
Although there are many established methods for calibrating traffic models, users of pedestrian
simulation models have faced different challenges. Regardless, researchers have had good success
calibrating models to individual scenarios, such as Helbing and Johansson’s (2009) calibration of
their Social Force models to pedestrian movements to and from escalators. Specifically, the authors
recorded video footage of pedestrians in those areas, processed it to extract the paths taken by
individual agents, and minimized the deviation of their simulated agents from those paths by
evolutionary parameter optimization (Helbing & Johansson, 2009). Others have applied similar
evolutionary optimization techniques to calibrate a variety of models against speed-density
relationships, pedestrian paths, and other datasets. Using a walker model, Campanella,
Hoogendoorn, and Daamen (2011) adjusted parameters using a hybrid Simplex-GA optimization
method to match agent paths to observed data, first calibrating each of three scenarios
independently and finally calibrating all scenarios together. Similarly, a CA model was calibrated
at both the qualitative and quantitative levels by Schadschneider, Eilhardt, Nowak, and Will (2011)
considering metrics such as lane formation and distance from walls, as well as comparing
simulated fundamental diagrams to those extracted from experimental data. Fundamental diagrams
were also used as a trusted data source when calibrating other models using GA optimization
methods, including a hex-lattice CA model (Davidich & Köster, 2012) and a small set of
continuous-space, force-driven models (Wolinski, et al., 2014).
2.2.2 Comparison of Models
To better understand the performance of a model, or to see where results from two or more models
begin to deviate, many studies have been performed comparing varieties of simulation models. As
with the calibration of models, these efforts began with the comparison of traffic simulation
models, especially commercial software such as Paramics and Aimsun (Cheu, Tan, & Lee, 2003),
or more recently TRANSIMS, SUMO, and VISSIM (Maciejewski, 2010). In these cases, the main
points of comparison often related to the capabilities of each software package (e.g. support for bus routes
or dynamic networks) or its usability (e.g. navigability of a graphical user interface). Similarly,
some comparisons of pedestrian simulation software also focused on the differences in user
interfaces, visualizations, and model construction (Alexandersson & Johansson, 2013). While this
information can be useful, it does not offer any insight into how the results of a simulated scenario
may differ, or which software package produces the most true-to-life results.
Other recent investigations of pedestrian models have provided more useful comparisons of their
abilities to reproduce observed behaviours and handle various geometric scenarios, and to a certain
degree, their computational performance. Papadimitriou, Yannis, and Golias (2009) initially
explored the differences and similarities between a range of pedestrian flow, interaction, and route
choice models found in the literature, making note of the rules governing each model’s operation
as well as the simulation level (macroscopic or microscopic) and timing (discrete or continuous).
Duives, Daamen, and Hoogendoorn (2013) took this concept a step further, providing readers with
a detailed table describing the motion base cases tested, self-organization behaviours generated,
applicability, and relative computational burden of nearly 30 pedestrian models. Another study by
Zheng, Zhong, and Liu (2009) compared the relative merits of a set of crowd modelling
methodologies for simulating evacuations, providing many useful equations and factors to
consider when selecting a model. Although these studies offer thorough reviews of pedestrian
models based on details and descriptions available in their accompanying literature, it must be
acknowledged that the models were not directly tested by the study authors.
Around the same time, a number of other studies were performed on smaller sets of models,
comparing their results to one another, observed data, or both. Hoogendoorn and Daamen (2007)
compared three walker-type models: one simple and two complex, with each complex model
adding a new feature (i.e. anisotropy, finite reaction time). Using experimental data from simple
scenarios, the three models were tested to see which one could best reproduce realistic behaviours,
identifying features necessary for a good walker model. Steffen and Seyfried (2009) compared
agent trajectories computed with CA and Social Force models against experimentally-collected
paths of pedestrians moving around corners and switchback stairs (90° and 180° bends,
respectively), discovering that conventional CA floor fields can unrealistically limit the capacity
of such bends. A study performed by Seitz, Dietrich, Köster, and Bungartz (2016) compared a
wider set of models, consisting of Optimal Steps, Gradient Navigation, CA, and Social Forces, in
a few conceptual scenarios, but did not compare their results to real-world data. Similarly,
Wolinski, et al. (2014) compared agent paths generated by three pedestrian modelling algorithms
(Boids-like, Social Forces, and Reciprocal Velocity Obstacles (RVO2)) to simple experimental
data and speed-density relationships, but did not touch on the relative computational performance
of the algorithms.
Non-experimental data and scenarios have also been used as a basis for model comparisons. Work
by Castle, Waterson, Pellissier, and Le Bail (2011) explored the differences between STEPS and
Legion pedestrian software using real-world geometry from two transit stations, but estimated
pedestrian demand for future years. Accordingly, pedestrian flow rates and LOS plots from each
model could only be compared against each other, but the authors supplemented this data with a
comparison of model set-up and run times (Castle, Waterson, Pellissier, & Le Bail, 2011).
Viswanathan, Lee, Lees, Cheong, and Sloot (2014) compared flow rates and densities of three
pedestrian models (lattice gas, Social Forces, and RVO2) against evacuation data from the 2008
Sichuan earthquake, but only for a unidirectional movement scenario and without considering the
computational speeds of each model. Finally, Bauer (2011) compared a Social Force model against
a set of simpler methods, comparing paths and densities predicted by each model to observed
behaviours in the hall of Westbahnhof train station in Vienna. Although this research described how
the simple models did not account for density as well as the Social Forces approach, it failed to
consider differences in computational speed, which may be the most compelling reason to choose
a simple model (Bauer, 2011). While each of these comparative efforts provided details on the
performance of one or more pedestrian movement models, no single study compared a set of
models against each other and real-world data, both in terms of accuracy and computational
performance.
2.3 Data Collection for Pedestrians
Unlike automobiles, which strictly drive on roads and highways in defined lanes, pedestrians do
not have many constraints on their movement – they can weave in and out of other pedestrian
streams, take unique paths across open space, and move on three-dimensional objects (e.g. stairs
and escalators). Accordingly, collecting data on how pedestrians move through facilities and open
space is challenging, especially if the facility of interest features complex geometry. While some
existing methods for recording vehicular data have been adapted to record pedestrian data, more
detailed data on pedestrian volumes, movements, and interactions were often required. To collect
this data, new technologies have been implemented and tested at street festivals, inside buildings,
and within transit stations. These technologies include the application of advanced pedestrian
tracking algorithms to video recordings, providing a detailed view of pedestrian interactions, as
well as the use of Wi-Fi and Bluetooth-based mobile device tracking, providing a sample of
pedestrian movements and flows between locations.
In NCHRP Report 797 (2014), methods for collecting pedestrian and bicycle data are explored and
compared, ranging from simple manual counting procedures to more advanced automatic
solutions. While some counting solutions such as pneumatic tubes and inductive loops only work
for bicycles, the report describes ways to use technology such as passive infrared counters, laser
scanning devices, and video recording to obtain pedestrian data, as well as the staffing
requirements for each solution. Further, appropriate locations and time ranges for using each count
method are noted, as well as the costs and benefits (with respect to setup time, financial cost, data
processing, etc.) of each method. An important takeaway from this report is that manual counts
are often the most cost-effective for recording lower volumes (< 600 pedestrians/hour/location),
and by providing manual counters with assistive technology, higher volumes can be reliably
recorded (National Cooperative Highway Research Program, 2014). These arguments are backed
up by Auberlet, et al. (2015), who further note that manual counting excels when automatic options
fail or are deemed impractical.
2.3.1 Video Data
As noted when describing some pedestrian model calibration efforts, video recordings focusing on
pedestrian movements can be excellent sources of data. Not only does such footage allow
individual movements to be traced, providing a useful set of paths against which to calibrate
models, but responses to other pedestrians and obstacles can be clearly identified. However, the
challenge with video data is twofold – obtaining clear videos from appropriate angles can be
difficult, and processing these videos in a meaningful way requires great manual effort or advanced
computer software. To solve the first issue, a team at Jülich Supercomputing Centre has been
running experiments where volunteer pedestrians move through small- to medium-sized
environments, ranging from corridors to stairs, and have their movements recorded by an overhead
camera (Forschungszentrum Jülich, 2017). As described in Section 2.2.2, some of these videos
and their accompanying flow paths have been used to test and compare models in these specific
scenarios (Steffen & Seyfried, 2009). Over the years, these experiments have been saved in a
database, consisting of both video and extracted pedestrian path data from all scenarios. Applying
these methods to non-experimental setups, Bera, Galoppo, Sharlet, Lake, and Manocha (2014)
developed a real-time pedestrian tracking algorithm that can extract movement from angled video
of crowds in public spaces. Unfortunately, the uneven lighting and non-uniform pedestrian shapes
and sizes found in public space limited the algorithm’s use to moderate-density conditions (Bera,
Galoppo, Sharlet, Lake, & Manocha, 2014).
Although many researchers find the Jülich pedestrian data set and other similar videos useful for
understanding pedestrian behaviours in specific scenarios, Berrou, Beecham, Quaglia, Kagarlis,
and Gerodimos (2007) raised an issue with such data, noting that organized experiments “shed
some light on particular issues but often cover small and not sufficiently diverse population
samples and contexts” (p. 168). To avoid this pitfall in their own work, the authors captured data
from 32 video cameras simultaneously, compiling the footage to gain a more complete picture of
pedestrian behaviours in New York, London, and Hong Kong subway stations. From the compiled
footage, pedestrian speeds, flow rates, and densities were extracted manually, as the high volumes
and densities within each station were beyond the capabilities of automatic tracking software
(Berrou, Beecham, Quaglia, Kagarlis, & Gerodimos, 2007). Although these studies have yielded
positive results using video recording and tracking to collect pedestrian data, Auberlet et al. (2015)
suggest that due to the challenges encountered and computational demands of such procedures,
video recording should be considered for “future” instead of present use.
2.3.2 Wireless Tracking
With the prevalence of smartphones for both personal and business use, as well as vehicles
featuring built-in Bluetooth and Wi-Fi radios, wireless tracking has recently become a powerful
tool for gaining access to a sample of pedestrian or vehicular movement data. In general, such
tracking is performed using multiple sensors, each containing a wireless radio (Wi-Fi,
Bluetooth, or both) that constantly searches for devices with which to
connect. When a Wi-Fi or Bluetooth-enabled device comes within range of a sensor, it records the
device’s unique media access control (MAC) address – an alphanumeric code permanently
assigned to the device’s wireless radio – and the time of the connection (Beaulieu & Farooq, 2016).
Similarly, the sensors can record when each device goes out of range. By knowing the range of
the sensor, as well as these times, one can calculate how long the device was near the sensor. By
using multiple sensors, this concept can be extended by measuring the time between a device
leaving one sensor’s range and entering another’s, providing a travel time between two or more
points (Poucin, Farooq, & Patterson, 2016).
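The two-sensor matching step can be sketched as follows; the sighting record format is hypothetical, and a real deployment would also need to handle repeat detections, clock synchronization, and MAC address hashing for privacy:

```python
def travel_times(sensor_a, sensor_b):
    """Match device sightings between two sensors to estimate travel
    times. Hypothetical record format: {mac_address: last_seen_time}.
    Only devices seen at sensor A and later at sensor B are matched."""
    times = {}
    for mac, t_leave in sensor_a.items():
        if mac in sensor_b and sensor_b[mac] > t_leave:
            times[mac] = sensor_b[mac] - t_leave
    return times
```

Devices seen at only one sensor (or seen at B before A) produce no match, which is one reason these systems yield a sample of travel times rather than a complete count.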
Initially, Bluetooth-based data collection was employed to sample vehicular travel times by setting
up one sensor at each end of a study corridor and measuring the time between unique device
connections to each sensor, as detailed by Haghani, Hamedi, Sadabadi, Young, and Tarnoff
(2009). Aliari and Haghani (2012) further tested these sensors using ground truth data from probe
vehicles, successfully refining their methods for calculating vehicle speeds from the collected
wireless data. More recently, Friesen and McLeod (2015) used Bluetooth sampling as a proxy for
vehicular traffic data in a number of ITS applications, noting that previous work had found
Bluetooth to yield a better sample than Wi-Fi. In all cases, the authors demonstrated the accuracy
of this technology for sampling travel times and speeds and also highlighted the ease of setup and
low cost compared to running probe vehicles or implementing other data collection technologies.
However, concerns were raised over limits to data collection durations due to the battery life of
the sensors, and the potential for inaccurate results in dense, urban settings or along corridors with
nearby parallel routes.
Much like studies where vehicles were tracked with Bluetooth, using radios in the vehicles
themselves or smartphones carried by their occupants, pedestrians can be tracked if they are
carrying smartphones, tablets, or other devices with Wi-Fi or Bluetooth radios enabled. In
Indianapolis International Airport, a system was developed and implemented to measure wait
times in security lines based on this concept (Bullock, Haseman, Wasson, & Spitler, 2010). Using
two sensors, with one at the queue entrance and one at the end of the security checkpoint, Bluetooth
device connections were recorded and matched to measure travel and wait times with good
accuracy, and MAC addresses were encoded to reduce privacy concerns. This system was left in
place for a number of years and revisited, further validating the wait times reported by the
Bluetooth sensors by comparing them with data from manually distributed timing cards (Remias,
Hainen, & Bullock, 2013).
Outside of the airport, a multi-sensor approach has been tested in Montreal, tracking pedestrians
as they moved through a street festival. Due to the larger area being investigated, Wi-Fi tracking
was employed, providing a better pedestrian sample size (20-30%) than comparable Bluetooth
sensors (sampling only 5-10%) (Beaulieu & Farooq, 2016). Additionally, infra-red sensors were
used within the same area to provide an estimate of the total population size. Unfortunately, the
battery life of the wireless sensors prevented their continuous use throughout the festival, but the
data they collected provided details on how pedestrians moved between attractions at daily, hourly,
and 15-minute scales (Beaulieu & Farooq, 2016). Another multi-sensor tracking project was
performed in a campus building at Concordia University in Montreal, this time using the pre-
existing Wi-Fi router network in the building as a set of sensors (Poucin, Farooq, & Patterson,
2016). Due to the overlapping ranges of each Wi-Fi router, individual travel times could not be
obtained, but combining the collected data with knowledge of campus and building activities
(moving between classes, working in an office, etc.) allowed the authors to map out trips within
the building over a full day.
Chapter 3
3 Motivation and Context
When new pedestrian movement models are developed, their creators tend to focus on finding
more accurate ways to represent how people move through space and interact with each other.
Unfortunately, this often comes at the cost of computational speed – for example, updating agent
positions more frequently and precisely in a model can easily double or triple its run time. Parallel
computing approaches can be a powerful solution for reducing the runtime of a single model, but
models that rely on sequential processes to function cannot benefit from such techniques. Further,
some situations demand that multiple models are run simultaneously, preventing the allocation of
multiple processing threads to each model. In either case, it is important to understand the
computational demands of the available pedestrian models. To help choose an optimal model for
a specific use case, it is important to understand how accurate the results of each model will be –
there is little benefit to using a fast model that is wrong when a somewhat slower model can offer
much better results. As indicated in research by Castle, Waterson, Pellissier, and Le Bail (2011),
results from discrete space pedestrian models can differ noticeably from those operating in
continuous space, but the question remains of what differences could be expected when comparing
two models operating in the same realm.
In the following chapter, the two main sources of motivation for this thesis are described. First,
information is provided on the Nexus platform, an integrated simulation tool using pedestrian
simulation on a large set of transit station models. This is followed by the details of a study
comparing two commercial continuous-space pedestrian simulation software packages,
MassMotion and Legion, demonstrating the similarities and differences in their results for a set of
simple tests.
3.1 The Nexus Platform
Initially, the idea of integrating simplified pedestrian models with MassMotion emerged from the
desire to simulate an entire network of transit stations simultaneously, one of the core requirements
of the Nexus platform. Nexus is an innovative application of microsimulation modelling,
simulating train and subway movements on rail networks, streetcar and bus movements on surface
networks, and pedestrian movements within stations. Born from the lack of analytical and
modelling methods that could analyze network-level impacts of changes to transit systems, Nexus
more accurately simulates these impacts by connecting traditionally separate modelling methods
with a modular approach (Srikukenthiran & Shalaby, 2017). On their own, each of the external
models can only provide insight into the performance of a piece of a transit network. However,
when the models and server are connected as shown in Figure 3.1, information can be exchanged
between all of the simulators, a transit network analyzer, a visualization engine, and a data server,
making Nexus into a powerful, integrated simulation tool. This configuration allows for the
impacts of a train delay, for example, to be observed as they propagate throughout a network.
Figure 3.1: Main components of the Nexus platform and their organization
(Srikukenthiran & Shalaby, 2017)
Since the platform depends on exchanging time-dependent information between all models, a
limiting factor on its overall speed is the speed of the slowest, most computationally demanding
model. In most cases, this bottleneck comes from simulating pedestrian movements
within stations. Currently, Nexus only calls on MassMotion to simulate complex interchange
stations in networks due to the software’s computational demands and limited speed. Simple
stations, on the other hand, are modelled using a crude pedestrian model based on agents travelling
at uniform speeds and jumping between a few key points in each station (subway trains, turnstiles,
exit doors, etc.) (Srikukenthiran & Shalaby, 2017). Although such a model can offer considerable
time savings, it fails to consider any variance in pedestrian speeds or paths and ignores congestion
effects entirely. This may be acceptable for uncongested, low-volume stations under normal
operating conditions, but it has not been tested or validated under any high-volume scenarios. To
avoid relying on this model, a pedestrian movement model that could interface with Nexus was
desired, providing more accurate simulation results without the computational cost of using
MassMotion for all stations in the network. However, before a specific model could be chosen, an
investigation and comparison of existing methods was required.
3.2 Comparison of MassMotion and Legion
In addition to the need for a fast and accurate pedestrian simulation tool, this research was also
motivated by the results of a simple study comparing two popular commercial pedestrian
simulation software packages: MassMotion and Legion. Since their initial development, both tools
have been used to model a variety of pedestrian facilities and spaces, ranging from event venues
and stadiums (Oasys Limited, n.d.) (Legion Limited, n.d.) to transit stations (Hoy, Morrow, &
Shalaby, 2016) (Berrou, Beecham, Quaglia, Kagarlis, & Gerodimos, 2007). However, each tool is
based around a different core model of pedestrian movement – MassMotion uses a Social Force-
based model, while Legion’s model is based on ingress and egress movements – leading to
questions about how the calculation of certain model performance metrics may differ between
them. To help answer these questions and gain a better understanding of what other differences
may exist, a set of simple scenario tests were devised to isolate various pedestrian behaviours, and
results from each test were compared. Measures were taken to ensure that all tests were performed
using the same scenario geometry and agent demand profiles, and population variables (e.g. agent
size and speed distributions) were matched in both software packages. Surprisingly, these tests
identified a number of microscopic and macroscopic differences between these two models,
highlighting the importance of comparing and understanding pedestrian movement models before
selecting one to use.
3.2.1 Bottleneck Test
The first scenario test was designed to demonstrate how agents in each of the software packages
behaved in free-flow conditions, how they organized themselves when queueing at a bottleneck,
and how they responded to dense, congested conditions. To isolate these behaviours, a long, wide
corridor was required, which would allow the agents to move under free-flow conditions when
entering and exiting the model. Between the entry and exit points, one or more narrow segments
were required to create congestion and trigger the desired queueing behaviours.
To meet these requirements, a 65-metre long and 10-metre wide corridor was constructed,
featuring two narrow segments, each of which was 2.5 metres long. The first segment, located 20
metres from the left side of the corridor, was 3.5 metres wide, while the second segment, located
20 metres beyond the first, was 2.5 metres wide. In MassMotion, this geometry was represented
by a set of three floors and two links, while a single, continuous floor was used in Legion, as the
software eschews multiple objects in favour of single, continuous pieces of geometry. In both
cases, 500 agents were uniformly created over a one-minute timespan at the left side portal, located
approximately 2 metres from the wall, and proceeded through the model to reach the right-side
portal where they were removed. To ensure all agent movements were captured from start to finish,
each simulation run lasted 5 minutes and two screenlines were placed in each model, labeled S1
and S2 in Figure 3.2.
Figure 3.2: Geometry for bottleneck scenario test
Considering the macro-level results from this test, the most obvious difference is that MassMotion
maintains a higher flow rate through both bottlenecks; roughly 74 agents per metre per minute,
compared to 64 agents per metre per minute in Legion. As a result, the last agent in Legion crosses
each screenline between 66 and 72 seconds after the last agent in MassMotion has crossed the same
screenline, as shown in Figure 3.3. Over a five-minute simulation, these differences are
considerable. To better understand whether this difference stemmed from a generally slower
population or the way each model’s agents responded to the geometry, a distribution of agent travel
times was plotted for each model in Figure 3.4. These distributions show that MassMotion agent
travel times lie within a narrower band, and are closer to being normally distributed than travel
times from Legion.
Figure 3.3: Cumulative volume plots for bottleneck test, screenlines 1 and 2
Figure 3.4: Agent trip time (creation to exit) distribution for bottleneck test
In addition to gauging performance based on flows and agent travel times, spatial distributions of
agents were considered. In Figure 3.5, Level of Service (LOS) plots for the two models are
presented, showing the worst LOS at each point during the model run. There are two key points to
note from these plots: first, agents in MassMotion reach LOS F (the highest agent density) as they
move through the bottlenecks, while Legion agents only reach LOS E; second, the same MassMotion
agents spread out to use more of the available space in the central and right-side segments of the
corridor. It is surprising that these two behaviours coexist, since one would
expect spread-out agents to experience a better LOS, but in this case, it seems that MassMotion
agents are simply willing to get closer to each other than Legion agents. This conclusion is
reinforced by Figure 3.6, where plots of average LOS for each model are shown – it is clear that
Legion agents are spreading out further along the corridor (front-to-back) while MassMotion
agents congregate closer to the bottlenecks. For reference, a table of LOS values and their
equivalent numerical agent densities is provided in Table 3.1.
Figure 3.5: Maximum agent densities for bottleneck test
Figure 3.6: Average agent densities for bottleneck test
Table 3.1: Level of Service colours and values (Fruin, 1971)
LOS Colour Space (m2/agent) Density (agents/m2)
A Blue x ≥ 3.24 x ≤ 0.309
B Light Blue 3.24 > x ≥ 2.32 0.309 < x ≤ 0.431
C Green 2.32 > x ≥ 1.39 0.431 < x ≤ 0.719
D Yellow 1.39 > x ≥ 0.93 0.719 < x ≤ 1.075
E Orange 0.93 > x ≥ 0.46 1.075 < x ≤ 2.174
F Red 0.46 > x 2.174 < x
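The thresholds in Table 3.1 amount to a simple density lookup. As a minimal sketch (an illustrative helper, not code from either software package), a measured density in agents/m² can be classified into a Fruin LOS letter as follows:

```python
# Illustrative classifier for Fruin (1971) walkway Level of Service,
# using the density thresholds (agents/m^2) from Table 3.1.
LOS_THRESHOLDS = [
    (0.309, "A"),
    (0.431, "B"),
    (0.719, "C"),
    (1.075, "D"),
    (2.174, "E"),
]

def fruin_los(density: float) -> str:
    """Return the LOS letter for a density in agents per square metre."""
    for upper_bound, letter in LOS_THRESHOLDS:
        if density <= upper_bound:
            return letter
    return "F"  # any density above 2.174 agents/m^2
```

A function of this form could, for example, be used to colour each cell of a density map like those in Figure 3.6.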
3.2.2 Stair Test
Much like the bottleneck test, this scenario was constructed to demonstrate pedestrian queueing
behaviours and responses to congestion, but instead of a flat corridor featuring narrow segments,
this model featured a raised central floor connected by narrow stairs. To see how behaviours
differed when ascending or descending stairs, both staircases were made the same size, with
equivalent open space around their entrances allowing for the formation of queues.
The geometry for this scenario shares its overall dimensions with that of the bottleneck test (65
metres long by 10 metres wide), but the layout and individual segments differ internally as shown
in Figure 3.7. On each end of the corridor, a 20-metre-long by 10-metre-wide floor is present,
located at ground level, while the central floor is 18 metres long, 10 metres wide, and 2 metres
above ground level. Connecting these floors are two identical staircases, each of which is 3.5
metres long, 4 metres wide, and 2 metres tall. Since multiple levels are present in this model, both
Legion and MassMotion geometries are constructed identically, using three floor elements and
two stair elements. Again, the entrance and exit portals were located 2 metres from the far edges
of the corridor and the simulation runs lasted 5 minutes, but only 400 agents were created over the
first minute of the simulation, proceeding from the left-hand portal to the right-hand portal. In this
scenario, four screenlines, labeled S1 through S4 in Figure 3.7, were used to monitor the progress
of agents through the model and determine flow rates.
Figure 3.7: Geometry for stair test
Looking at flows across different screenlines, the stair test confirms many of the trends shown in
the bottleneck test. Screenline 1 in Figure 3.8 shows MassMotion agents moving through the space
at a consistent flow rate, while the tendency of Legion agents to spread out causes their flow rate
to drop once congestion extends to the screenline. Similarly, MassMotion's higher sustained flow
rate through narrow spaces is confirmed at Screenline 4 in Figure 3.8, where MassMotion's
flow rate averages 43 agents per metre per minute, compared to 36 agents per metre per minute in
Legion.
Figure 3.8: Cumulative volume plots for stair test, screenlines 1 and 4
It is also interesting to see how the initial differences in pedestrian behaviour due to the first set of
stairs impact behaviour further along the corridor. As shown in Figure 3.9, congestion before the
leftmost staircase in Legion spreads out much more than that in MassMotion, but the congestion
on approach to the rightmost staircase is worse in MassMotion, as is congestion and the LOS on
the rightmost section of floor. Although these results may seem contradictory, the initial
congestion in Legion acts as a flow metering measure, reducing the rate of agents entering the
following floor segments and thereby reducing downstream congestion.
Figure 3.9: Maximum agent densities for stair test
3.2.3 Waiting Test
The final scenario test was developed to fully isolate pedestrian behaviours in congested conditions
and further explore how agents arranged themselves when presented with a closed gate that would
eventually open. To avoid compounding the effects of waiting with the effects of a narrow door or
bottleneck, a wide doorway was specified, and a shorter overall corridor was used.
This model was constructed as a 21-metre long by 10-metre wide room, featuring a 1-metre long
and 9-metre wide 'doorway' in the middle that was opened at a specific time, shown in
Figure 3.10. In MassMotion, this doorway was represented by a gated link, set to remain closed
for the first 2 minutes of the simulation and open thereafter, forcing agents to wait on the left side
of this link until the time had elapsed. In Legion, the geometry consisted of a single piece of floor
with an identically located and configured gate. When testing both models, agents entered from
the leftmost portal, located 5 metres from the left wall, and exited through the rightmost portal,
located 1 metre from the right wall. To prevent waiting crowds from becoming too large, only 100
agents were created in each simulation, and each simulation lasted 5 minutes.
Figure 3.10: Geometry for waiting scenario test
When comparing screenline (S1 and S2) counts for both movement models, differences were
negligible, signifying that both MassMotion and Legion simulate agents similarly under these
conditions. This was further confirmed when comparing agent travel times from each model,
presented in Figure 3.11, which highlights very similar distributions between models. The only
notable difference between the way MassMotion and Legion handled this scenario was in the
waiting behaviour of agents when the gate is closed – MassMotion allows agents to spread out
within their available space while Legion agents cluster towards the gate, as shown in Figure 3.12.
Although this difference in crowd organization is not reflected in the macroscopic results from the
scenario, it can have significant impacts when other agents must cross through such a space, since
the dispersed crowd modelled in MassMotion obstructs others differently than the compressed
crowd in Legion.
Figure 3.11: Agent trip time distribution for waiting test
Figure 3.12: Average agent densities for waiting test
3.2.4 Conclusions
Clearly, MassMotion and Legion have a few differences when it comes to agent waiting
behaviours, peak flow rates through narrow passages, and even dispersion under free-flow
conditions. Although it was not possible to compare these simulations to real-world behaviours in
identical scenarios and determine which set of results is closer to reality, it is important to
acknowledge these inconsistencies and the impacts they may have on both microscopic and
macroscopic results. Further, it is interesting to see how noticeable some of these differences are,
especially since both agent-based pedestrian simulators are well-regarded commercial products. If
two major simulation tools cannot fully agree on the results of simple scenarios, it seems that one
cannot expect a range of other pedestrian models, especially those operating at differing levels of
detail, to produce identical results either. Accordingly, the following research serves to address
this issue by comparing the results of multiple pedestrian models to each other and real-world data.
Chapter 4
4 Data Collection
To perform a comparison of pedestrian movement models in the context of transit station
simulations, a set of real-world pedestrian data was required. Although an excellent set of
pedestrian movement data is available from Jülich Supercomputing Centre (Forschungszentrum
Jülich, 2017), it exclusively consists of experimental data. This data is highly detailed, but it must
be acknowledged that these scenarios are each limited to a single flow/density regime and
generally focus on small spatial elements. For this research, a more diverse dataset was desired,
incorporating a variety of geometric elements (e.g. corridors, stairs, open spaces), a range of flow
rates, and a longer time frame. Such data was compiled by observing pedestrians in two transit
stations as they moved between entrances and subway platforms. This data was obtained by
performing a set of pedestrian flow counts at all entry and exit points from the station concourses,
yielding sets of second-by-second flows accounting for all pedestrians moving through the
stations. In addition, samples of pedestrian travel times were recorded in each station, providing
information on local pedestrian speeds.
4.1 Data Collection Methods
A number of data collection methods, ranging from simple manual counts to detailed video
recording, were considered for this task in order to capture pedestrian movements throughout each
subway station. Unfortunately, the use of a Wi-Fi or Bluetooth-based tracking solution was
deemed impractical due to the need for specialized equipment, since built-in station wireless
networks were not accessible. Concerns that the captured sample size would be small and
necessitate additional data collection (Beaulieu & Farooq, 2016), as well as privacy concerns
arising from the collection of MAC addresses, led to these methods being dismissed. Video
recording was also not practical in these scenarios due to numerous factors: accessing existing
security cameras was not possible, the cost of obtaining and setting up a separate full-coverage
video camera system was prohibitive, and the low ceilings in the stations would either reduce the
range of overhead cameras or lead to low-angle footage that is difficult to process. In addition,
there are very few open-source pedestrian tracking algorithms available that could be used to
process such footage, and those that exist require very still videos (not handheld) with good
lighting at angles where pedestrians do not overlap (Williamson & Williamson, 2014).
In the end, pedestrian data was recorded manually by a group of researchers from the University
of Toronto, using smartphones equipped with timestamp-enabled counting applications. While this
method was not able to capture detailed pedestrian movements and interactions that could be
extracted from video data, it has been previously used to obtain screenline counts at escalators and
stairs (Srikukenthiran, Fisher, Shalaby, & King, 2013), as well as entry and exit points in TTC
subway stations (King, Srikukenthiran, & Shalaby, 2014). As noted in Section 2.3, such methods
are supported in the literature for their flexibility, low cost, and practicality where other advanced
methods may fail (Auberlet, et al., 2015) (National Cooperative Highway Research Program,
2014). It is also important to note that the recording of pedestrian data was performed as an
“unannounced experiment,” ensuring that the collected data would be authentic and not influenced
by outside factors such as specifically chosen pedestrians or customized geometry (Berrou,
Beecham, Quaglia, Kagarlis, & Gerodimos, 2007).
Pedestrian movements were observed at between 6 and 8 screenlines per station, accounting for
all pedestrians entering or exiting the station at either the street or platform levels, as well as other
internal movements. Each screenline was monitored by one counter for a one-hour duration
between 4:30 and 5:30 PM, and pedestrian data was recorded using a smartphone application. For
data collectors with an Android smartphone, the application “Advanced Tally Counter” was used
(Ying Wen Technologies, 2015), and for iPhone (iOS) users, “Counter +” was used (Leung, 2017).
Both applications were configured to record pedestrian movements and corresponding timestamps
in both directions using a single counter – pedestrians crossing a screenline who were moving
towards platform level (‘in’ to the station) were recorded by incrementing the counter, while
pedestrians moving towards street level (‘out’ of the station) were recorded by decrementing the
counter. After each count session, data recorded by each device was collected and processed into
a set of cumulative flow profiles for each screenline. In addition to screenline counts, a straight
corridor was identified in each station and pedestrians were timed moving from one end to the
other, providing average pedestrian speeds.
4.2 Site Selection
The ideal sites for collecting a range of pedestrian data have a few key features. Most importantly,
pedestrian volumes in these facilities should be elevated during peak hours, but not so high that
collected data is unreliable. To help find such facilities, a list of Toronto Transit Commission
(TTC) subway stations and their corresponding average daily pedestrian volumes was consulted
(Toronto Transit Commission, 2015). Stations with volumes between 20,000 and 50,000 daily
passengers were considered, as this range represents low- to medium-volume stations within the
system. This range also represents the stations that are currently modelled using an extremely
simple method in the Nexus platform, where an improved pedestrian model is thought to offer
accuracy benefits while maintaining low computational demands. Within this set of stations,
preference was given to stations featuring a variety of geometric elements and convenient locations
from which data collectors could observe pedestrian flows without getting in the way. The
accessibility of these stations was also considered when making the final selection – locations
south of Line 2, as shown in Figure 4.1, were preferred.
The two stations chosen for observation were Osgoode Station and Queen’s Park Station,
representing the lower and upper bounds of the noted passenger volume range, respectively.
Queen’s Park is ranked as the 14th busiest TTC station with an average of 48,070 daily passengers,
while Osgoode is ranked as the 44th busiest station with 22,488 daily passengers (Toronto Transit
Commission, 2015). In each station, most data of interest can be collected on the concourse level.
Following the selection of the sites, measurements were taken of various corridor widths and
turnstile dimensions to calibrate existing MassMotion geometric models of each station.
Figure 4.1: Map of TTC subway network, highlighting Osgoode and Queen’s Park stations
(Toronto Transit Commission, 2017)
4.2.1 Osgoode Station
Osgoode Station is located on the western leg of TTC Line 1, also known as the Yonge-University
Line, and is built around a standard centre platform design (Toronto Transit Commission, 2017).
Located at the intersection of University Avenue and Queen Street West, the station offers staircase
connections to each corner of the intersection, allowing passengers to easily transfer to and from
the busy 501 Queen streetcar, as well as access nearby offices, shops, and the Four Seasons Centre.
Descending from each of the four staircase connections, passengers enter the concourse level of
the station, shown in Figure 4.2, where they can access token vending machines and Presto fare
card machines, as well as the collector booth and fare gates. Since the concourse includes a public
(not fare-paid) area, roughly 4% of entering pedestrians use the concourse and stairs as a tunnel
system for crossing the Queen/University intersection, while the remaining 96% proceed to the
platform level to board a subway. The station features two sets of stairs and escalators that connect
the concourse level to the platform, as well as one central elevator.
Figure 4.2: Map of Osgoode Station concourse with screenlines
Data was collected twice at Osgoode station – first as a low volume test on Thursday August 25th,
2016, and again under slightly busier conditions on Tuesday September 13th, 2016. When
collecting data in Osgoode Station, six screenlines were established to capture the main pedestrian
movements in and out of the concourse, as shown in Figure 4.2. Two of these screenlines (D and
E) were located at the base of the stair/escalator sets at platform level, while the remaining four
were located on the concourse level. Of these screenlines, one was located at the base of the
southwest entrance corridor (A), one was located at the base of the northwest entrance staircase
(B), and one was located at the eastern edge of the concourse, capturing flows from both eastern
entrances (C). The final screenline (F) was located where the southwest entrance corridor joins the
northwest entrance corridor, allowing for the isolation of flows in a straight corridor when
combined with screenline A. It must be noted that pedestrians boarding or alighting the concourse-
platform elevator were not counted, as no current pedestrian models can adequately simulate
elevator behaviour, and the number of pedestrians using elevators during peak periods is
comparatively small (King, Srikukenthiran, & Shalaby, 2014).
Due to a technical limitation with the Counter + application, which is detailed in Section 4.3, only
37.5 minutes of usable data were obtained on the first count date, and 21 minutes of usable data
were obtained on the second count date. Accordingly, only screenline data from the first Osgoode
count was used, but pedestrian speed timing data from the second count remained unaffected. A
summary of the data collected at each screenline is presented in Table 4.1.
Table 4.1: Osgoode Station summary data from August 25th, 2016
Location A B C D E F
Entering Concourse
(from street/platform)
Total Volume (peds) 716 392 389 208 341 718
Average Flow Rate (peds/min) 20.2 10.8 10.9 5.5 8.6 20.4
Peak Flow Rate (peds/min) 34 18 20 22 34 39
Exiting Concourse
(to street/platform)
Total Volume (peds) 279 149 195 252 1,175 272
Average Flow Rate (peds/min) 6.3 3.8 5.3 7.2 32.8 6.2
Peak Flow Rate (peds/min) 45 14 22 12 46 48
4.2.2 Queen’s Park Station
Data was collected once at Queen’s Park Station, on Wednesday September 14th, 2016. Much like
Osgoode Station, Queen’s Park Station is also located on the western leg of TTC Line 1 and built
around a centre platform, but it is located two stops north of Osgoode at the intersection of
University Avenue/Queen’s Park and College Street (Toronto Transit Commission, 2017). Four
street-level entrances provide this station with access to the 506 College streetcar, as well as the
University of Toronto campus, nearby hospitals, and office buildings. Additional underground
connections link the station’s concourse level directly with an office tower at 700 University
Avenue, the MaRS Centre at 661 University Avenue, and nearby Government of Ontario
buildings. All street-level and underground entrances lead to the central station concourse, from
which passengers can access token vending machines and Presto fare card machines, as well as
the collector booth and fare gates. Passengers then descend to the platform level using one of two
sets of escalators and stairs, or via the elevator. Again, a small percentage of pedestrians use the
concourse to cross the Queen’s Park/College intersection, and passengers using elevators in the
station were not explicitly observed.
Figure 4.3: Map of Queen’s Park station concourse with screenlines
Within this station, eight screenlines were monitored to capture all movements in and out of the
concourse, as well as isolate additional station elements. As with Osgoode Station, two screenlines
(E and F) are located at platform level, each at the base of a set of stairs and escalators, as shown
in Figure 4.3. One screenline (B) is located at street level, at the top of the northwest stairs to the
concourse. Finally, the remaining five screenlines are located at concourse level – one at each
entrance to the main concourse from the west (G) and east (C and D) sides, one at the entrance to
the southwest tunnel (A), and one at the top of the northern stair/escalator set (H). With the
combination of screenlines A, B, and G, a T-junction can be isolated, and with the combination of
screenlines E and H, a stair/escalator set can be isolated. A summary of the data collected from
screenlines in Queen’s Park Station is presented in Table 4.2.
Table 4.2: Queen’s Park Station summary data from September 14th, 2016
Location A B C D E F G H
Entering Concourse
(from street/platform)
Total Volume (peds) 1,571 1,018 883 1,119 446 294 2,562 310
Average Flow Rate (peds/min) 27.0 18.0 15.0 19.2 7.2 4.8 44.4 5.1
Peak Flow Rate (peds/min) 43 34 24 31 20 19 70 16
Exiting Concourse
(to street/platform)
Total Volume (peds) 199 270 207 163 1,589 2,885 465 2,928
Average Flow Rate (peds/min) 3.2 4.4 3.2 2.6 27.0 49.2 7.2 49.8
Peak Flow Rate (peds/min) 10 15 11 9 38 72 23 68
4.3 Data Processing
Following the completion of the data collection in Osgoode and Queen’s Park stations, data from
individual collectors was compiled and processed into usable sets for model inputs and calibration.
Raw data from each counting application came in the form of a table, examples of which are shown
in Table 4.3, which included the date and time of each count, as well as the new count value (for
Counter +) or whether the count was incremented or decremented (for Advanced Tally Counter).
Following the initial counts at Osgoode Station, it was discovered that Counter + only keeps a
history of the last 1000 count changes for each tally, leading to data loss. For the following Queen’s
Park data collection, staff using this application were instructed to switch to a new tally every 10
to 15 minutes to keep all records. Accordingly, multiple tables had to be spliced together to form
a full hour of data for these count positions.
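The splicing of successive tallies can be sketched as follows. This is an illustrative reconstruction (the actual processing scripts are not reproduced in this thesis); it assumes each exported tally is a list of (timestamp, count) rows and treats the first row of each tally as a baseline, since that row's change from the tally's unknown starting value cannot be recovered:

```python
from datetime import datetime

def splice_tallies(tallies):
    """Merge several Counter + tally exports into one chronological list
    of (datetime, net_change) events. Each tally is a list of
    (timestamp_string, count) rows; the running total is differenced
    within each tally, and events from all tallies are then merged."""
    events = []
    for tally in tallies:
        previous = None
        for stamp, count in tally:
            when = datetime.strptime(stamp, "%Y-%m-%d %H:%M:%S")
            if previous is not None:
                events.append((when, count - previous))
            previous = count
    events.sort(key=lambda pair: pair[0])  # chronological order across tallies
    return events
```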
Table 4.3: Sample of raw data from Advanced Tally Counter (left) and Counter + (right)
Advanced Tally Counter:
Date        Time       + / –
2016-09-13  16:58:27   –
2016-09-13  16:58:33   +
2016-09-13  16:58:33   +
2016-09-13  16:59:13   +
2016-09-13  16:59:14   +

Counter +:
Date        Time       Count
2016-09-13  16:37:27   01060
2016-09-13  16:37:29   01061
2016-09-13  16:37:30   01062
2016-09-13  16:37:34   01063
2016-09-13  16:37:36   01062
Using the complete records from each screenline, pedestrian flow tables were created from the
data at each station by separating each direction of flow at each screenline and recording the
second-by-second changes. From these tables, two sets of data were subsequently extracted and
used – one counting all pedestrians entering the concourse (from street level or platform level) and
one counting all pedestrians exiting the concourse (to street or platform level). The dataset of
pedestrians exiting the concourse (found in Appendix A) was used for calibration and validation
purposes as sets of 60-second cumulative values and as sets of one-second values. In both cases,
minimal processing was required, consisting of summing data into 60-second bins and converting
instantaneous flows into cumulative flows as needed. An example of a second-by-second
cumulative flow plot generated from Osgoode Station data is shown in Figure 4.4.
Figure 4.4: Cumulative flow plot for Osgoode screenline A
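The conversion from timestamped count events to the directional flow series described above can be sketched as follows. This is an illustrative reconstruction, assuming events are supplied as (second-of-count, ±1) pairs and the count duration is given in seconds:

```python
# Illustrative conversion of timestamped +/- count events into the two
# directional series described above: one for pedestrians entering the
# concourse (increments) and one for those exiting (decrements).
def directional_cumulative_flows(events, duration):
    entering = [0] * (duration + 1)   # one slot per second of the count
    exiting = [0] * (duration + 1)
    for second, change in events:
        if change > 0:
            entering[second] += 1
        else:
            exiting[second] += 1
    # Convert instantaneous per-second flows into cumulative flows.
    for series in (entering, exiting):
        for t in range(1, duration + 1):
            series[t] += series[t - 1]
    return entering, exiting

def sixty_second_bins(cumulative):
    """Cumulative totals sampled every 60 seconds, as used for calibration."""
    return [cumulative[t] for t in range(60, len(cumulative), 60)]
```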
On the other hand, the dataset of pedestrians entering the concourse was used as input to the
Osgoode and Queen’s Park MassMotion models, and a more significant amount of processing was
required. First, an origin-destination matrix was constructed based on both directions of flow in
each station, accounting for pedestrians moving to and from the subway and those using the
concourse as an underground tunnel. To build this matrix for each station, the following
assumptions were made:
• Pedestrians do not cross the same screenline twice;
• Pedestrians exiting a subway train to the concourse will only exit to the street; and
• Pedestrians entering from the street on one side of the concourse (east/west) and not
boarding a train will exit on the opposite side of the concourse.
Accordingly, percentages of pedestrians entering the concourse without proceeding to the platform
were calculated based on observed differences between net flows in and out of the concourse. In
Osgoode Station, it was found that 4.3% of pedestrians entering the concourse do not take the
subway, while in Queen’s Park Station this number was lower at 2.6%. The resulting
origin/destination matrices are presented in Table 4.4 and Table 4.5 – note that these matrices
account for a slightly shorter duration (and thus lower volumes) than the full count data. Finally,
the origin totals and destination percentage splits from these tables were provided to MassMotion,
as well as the second-by-second origin flows, which were used to construct a timetable for each
scenario that would closely match the conditions observed from each data collection.
Table 4.4: Osgoode Station origin/destination matrix

From \ To      A       B       C       D        E    Total
A              0       0    12.2   120.5    551.3      684
B              0       0     6.7    65.7    300.6      373
C           27.4    14.7       0    59.0    269.9      371
D           90.7    48.7    62.6       0        0      202
E          149.9    80.6   103.5       0        0      334
Total        268     144     185   245.2  1,121.8        -
Table 4.5: Queen’s Park Station origin/destination matrix

From \ To      A       B       C       D        E        F    Total
A              0       0    13.1    10.0    497.8    914.1    1,435
B              0       0     8.6     6.6    327.8    602.0      945
C            8.2    11.6       0       0    283.5    520.7      824
D           10.2    14.3       0       0    350.3    643.3    1,018
E           97.2   136.7    99.9    76.2        0        0      410
F           66.7    93.7    68.4    52.2        0        0      281
Total      182.3   256.2   190.1   145.0  1,459.3  2,680.1        -
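The derivation of MassMotion inputs from these matrices (origin totals and destination percentage splits) can be sketched as below; the matrix values are illustrative, not the observed station data:

```python
# Sketch: derive origin totals and destination splits from an
# origin/destination matrix, as supplied to MassMotion.

def od_to_inputs(od):
    """od: dict mapping origin -> {destination: volume}.
    Returns (origin_totals, destination_splits)."""
    totals = {o: sum(dests.values()) for o, dests in od.items()}
    splits = {
        o: {d: v / totals[o] for d, v in dests.items() if totals[o] > 0}
        for o, dests in od.items()
    }
    return totals, splits
```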
The final pieces of collected data requiring processing were the pedestrian timings, where the data
of most interest was the average pedestrian speed and its standard deviation. In each station, the
distance between the two measurement points (as indicated in Figure 4.2 and Figure 4.3) was
divided by each pedestrian’s recorded travel time, yielding a speed for each
individual. Average population speeds were then determined for each direction of travel and for
all agents regardless of direction, providing the values shown in Table 4.6. While pedestrian speeds
in Osgoode Station match well with those found in the literature and used in MassMotion (~1.35
m/s), speeds in Queen’s Park seem notably higher. However, this is likely due to the times when
these measurements were taken – those from Osgoode were taken between 4:30 and 4:45 PM,
while those from Queen’s Park were taken between 4:45 and 5:00 PM, a busier time in the station.
For consistency with the literature and the ‘unconstrained’ conditions in Osgoode Station, an
average desired pedestrian speed of 1.35 m/s was used for all simulations.
Table 4.6: Average pedestrian speeds in Osgoode and Queen’s Park Stations

Location      Distance  Direction      Time (s)       Speed (m/s)
                                       µ      σ       µ      σ
Osgoode       12.30 m   To Platform    9.51   1.42    1.32   0.19
                        To Street      9.38   1.00    1.33   0.14
                        Bidirectional  9.44   1.23    1.32   0.17
Queen’s Park  20.30 m   To Platform    13.69  1.36    1.50   0.16
                        To Street      12.97  1.40    1.58   0.16
                        Bidirectional  13.33  1.43    1.54   0.17
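The per-pedestrian speed calculation behind Table 4.6 can be sketched as follows; the distance and timing values passed in the example are illustrative:

```python
# Sketch: each speed is the fixed screenline distance divided by an
# individual's traversal time; population mean and standard deviation
# are then computed over the individual speeds.
import statistics

def speed_stats(distance_m, times_s):
    speeds = [distance_m / t for t in times_s]
    return statistics.mean(speeds), statistics.stdev(speeds)
```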
Chapter 5
5 Pedestrian Model Implementation
The implementation of pedestrian movement models within the MassMotion SDK framework
began with the selection of three additional models based on simulation methods found in the
literature. To achieve the goal of testing a variety of models, each of these methods was chosen to
represent a different level of detail or complexity in simulating pedestrian movements. The
simplest method in this set is the Graph model, operating at the mesoscopic level, while the
remainder of the models operate at the microscopic level, fully considering the geometry of each
scenario and the movements and interactions of individual agents. These microscopic models
include, from simplest to most advanced, a model based on the Optimal Steps concept (a hybrid
of CA and Social Forces), a simple Social Force-based model, and the advanced Social Force-
based model built in to MassMotion. In addition to this set of models, attempts were made to
implement a CA model within the MassMotion framework, and although unsuccessful, the process
and proposed model are detailed at the end of this chapter.
To best compare the results and performance of this set of pedestrian movement models, steps
were taken to ensure that each model operated on identical input data and geometry and ran within
the same computational framework. In the following sections, specifics of each model’s operation
and implementation are provided. For reference, recurring variables used in equations for the
models are summarized in Table 5.1, and recurring values and distributions are summarized in
Table 5.2. The code used to implement these models via the MassMotion SDK can be found in
Appendix B.
Table 5.1: Common variables used in model equations
Variable Description
∆𝑡 Time step of the model
𝛼 (as subscript) Relating to the current agent
𝛽 (as subscript) Relating to a neighbouring agent
𝑖 (as subscript) Relating to a nearby obstacle
𝑔 (as subscript) Relating to a current goal
𝒓𝑥 Position of agent/object x
𝒗𝑥 / 𝑣𝑥 Velocity of x / Speed of x
𝑣𝑥0 Desired unconstrained speed of x
𝑣𝑥𝑚𝑎𝑥 Maximum speed of x
𝒅𝑥𝑦 Distance vector pointing from y to x
𝒇𝑥 / 𝒇𝑥𝑦 Force applied to x / Force applied to x due to y
𝑷𝑥(𝑧) Potential at position z due to x
Table 5.2: Common parameter values used in model equations

Parameter  Value                       Description
v_α⁰       µ = 1.35 m/s, σ = 0.25 m/s  Desired agent speed (normally distributed)
V_sa       0.402                       Speed factor for ascending stairs*
V_sd       0.536                       Speed factor for descending stairs*
V_e        0.8 m/s                     Escalator tread speed

*Based on average values of parameters used in MassMotion (Oasys Ltd., 2017)
5.1 Graph Model
To understand the operation and appeal of this model, it is first important to understand how a
MassMotion model of pedestrian geometry is constructed. Unlike some modelling suites, where
connected, walkable space on a single level is represented by a single geometric shape,
MassMotion allows this space to be constructed from multiple floor objects with link objects
connecting them. Additionally, vertical transport elements such as escalators, stairs, and ramps can
be added to connect floors on varying levels. The impact of this space discretization is that agents
will target intermediate goal lines (waypoints) on each connection element when moving through
the model, as opposed to simply targeting their final destination. Akin to more detailed versions
of the graph networks produced by Løvås (1994), these MassMotion goal lines represent nodes
while the walkable elements themselves act as links, creating a simple graph based on each model’s
geometry.
With this network established, the operation of the Graph model is relatively simple. When an
agent is created in the model, their desired unconstrained speed is queried, followed by a
calculation of their local desired speed based on their current object – the agent’s speed remains
unaltered if they are on a floor or link, but will be adjusted based on Equation 5.1 if they are on
stairs or an escalator. Using this desired speed, as well as the location of the agent’s next waypoint,
Equation 5.2 is used to calculate the agent’s time of arrival at that waypoint, and this data is stored
in a dictionary. As the model runs to completion, each agent’s time of arrival is queried at every
time step (once per second), and when the movement time for one or more agents is reached, they
are immediately relocated to their corresponding waypoint. The agent speed and time-of-arrival
calculations are then repeated for the agents, and their new times and waypoints are added back to
the dictionary. Although this model does not capture congestion effects (it does not demonstrate
any speed-density relationship), it has a low computational burden and can offer more discrete
movement data than macroscopic approaches.
$$v_\alpha = \begin{cases} v_\alpha^0 & \text{if on flat ground} \\ v_\alpha^0 \cdot V_{sa} & \text{if ascending stairs} \\ v_\alpha^0 \cdot V_{sd} & \text{if descending stairs} \\ V_e \cdot \sqrt{1 - \left( \dfrac{\boldsymbol{d}_{\alpha g}}{\| \boldsymbol{d}_{\alpha g} \|} \right)_{Y}^{2}} & \text{if on escalator} \end{cases} \tag{5.1}$$

$$t_{arrival} = t_{current} + \frac{1}{\Delta t} \cdot \frac{\| \boldsymbol{r}_\alpha - \boldsymbol{r}_g \|}{v_\alpha} \tag{5.2}$$
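The scheduling loop described above can be sketched as below. The `speed` and `dist` arguments stand in for the MassMotion SDK queries of local desired speed (Equation 5.1) and distance to the next waypoint; they are assumptions of this sketch, not SDK calls:

```python
# Sketch of the Graph model's dictionary-based scheduler: arrival times
# are stored per agent (Equation 5.2 with dt = 1 s) and checked once per
# time step; agents whose time has been reached are relocated.

def schedule(agent, now, arrivals, speed, dist, dt=1.0):
    """Store the time step at which `agent` reaches its next waypoint."""
    arrivals[agent] = now + (dist / speed) / dt

def step(now, arrivals):
    """Return agents whose waypoint arrival time has been reached."""
    return [a for a, t in arrivals.items() if t <= now]
```

After `step` returns, each relocated agent would be re-scheduled towards its next waypoint, repeating until it exits the model.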
5.2 Social Forces Model
This model is based on the initial Social Forces approach developed by Helbing and Molnár
(1995), but uses the more advanced elliptical force specification presented by Helbing and
Johansson (2009). Much like all Social Force models, this method operates in continuous space
and calculates a force to be applied to each agent at each time step in the form of an acceleration.
This force is then applied to the agent’s velocity, leading to a new velocity and position as shown
in Equations 5.3 and 5.4. For simplicity, the random fluctuation term (ξ_α(t)) was
set to zero for all model runs. Note also that the speed
changes based on escalators and stairs, as shown in Equation 5.1 and implemented in the Graph
model, have been implemented here as well.
$$\boldsymbol{v}_\alpha(t+1) = \boldsymbol{v}_\alpha(t) + \left( \boldsymbol{f}_\alpha(t) + \boldsymbol{\xi}_\alpha(t) \right) \cdot \Delta t \cdot g\!\left( v_\alpha^{max} / \| \boldsymbol{f}_\alpha(t) \cdot \Delta t \| \right) \tag{5.3}$$

$$\boldsymbol{r}_\alpha(t+1) = \boldsymbol{r}_\alpha(t) + \boldsymbol{v}_\alpha(t+1) \cdot \Delta t \tag{5.4}$$
The single force term shown in Equation 5.3 is actually calculated as the sum of three separate
forces, consisting of an attractive goal force, a repulsive neighbour force, and a repulsive obstacle
force, detailed in Equation 5.5. The calculation of the goal force is rather straightforward – the
difference between the agent’s desired velocity (composed of desired speed and goal direction)
and their current velocity is determined, and this vector is divided by a relaxation term to smooth
the acceleration of the agent. The neighbour force calculation is more complex, consisting of the
calculation of a repulsive force vector for each neighbour within the agent’s range. Each repulsive
force is calculated using Equation 5.6, whose components (𝑏𝛼𝛽, 𝒅𝛼𝛽, and 𝒚𝛼𝛽) are calculated in
Equations 5.7 and 5.8. These equations represent an elliptical interpretation of the agent’s personal
space, as described in (Helbing & Johansson, 2009), which practically means that each agent is
more sensitive to others directly in front of them than those at their side, with the strength of the
repulsion growing exponentially as the two agents get closer together, shown visually in Figure
5.1. This calculation not only considers distance, but also relative velocities of both agents,
meaning that a stopped or slow neighbour in front of an agent is considered to be more repulsive
than a neighbour moving at a similar speed in a similar direction. Obstacle forces for each agent
are also calculated using the same three equations, except considering the position of the obstacle
instead of the position of the neighbour (𝒓𝑖 instead of 𝒓𝛽), setting 𝒗𝛽 to zero, and using obstacle-
specific parameters (scale and range). A set of all parameters used in this model are presented in
Table 5.3.
$$\boldsymbol{f}_\alpha(t) = \frac{1}{\tau_\alpha} \left( v_\alpha^0 \boldsymbol{e}_\alpha^0 - \boldsymbol{v}_\alpha \right) + \sum_{\beta \neq \alpha} w\!\left( \varphi_{\alpha\beta}(t) \right) \boldsymbol{f}_{\alpha\beta}(t) + \sum_i \boldsymbol{f}_{\alpha i}(t) \tag{5.5}$$

$$\boldsymbol{f}_{\alpha\beta}\!\left( \boldsymbol{d}_{\alpha\beta}(t) \right) = A_\beta e^{-b_{\alpha\beta}/B_\beta} \cdot \frac{\| \boldsymbol{d}_{\alpha\beta} \| + \| \boldsymbol{d}_{\alpha\beta} - \boldsymbol{y}_{\alpha\beta} \|}{2 b_{\alpha\beta}} \cdot \frac{1}{2} \left( \frac{\boldsymbol{d}_{\alpha\beta}}{\| \boldsymbol{d}_{\alpha\beta} \|} + \frac{\boldsymbol{d}_{\alpha\beta} - \boldsymbol{y}_{\alpha\beta}}{\| \boldsymbol{d}_{\alpha\beta} - \boldsymbol{y}_{\alpha\beta} \|} \right) \tag{5.6}$$

$$b_{\alpha\beta} = \frac{1}{2} \sqrt{ \left( \| \boldsymbol{d}_{\alpha\beta} \| + \| \boldsymbol{d}_{\alpha\beta} - \boldsymbol{y}_{\alpha\beta} \| \right)^2 - \| \boldsymbol{y}_{\alpha\beta} \|^2 } \tag{5.7}$$

$$\boldsymbol{d}_{\alpha\beta} = \boldsymbol{r}_\alpha - \boldsymbol{r}_\beta , \quad \boldsymbol{y}_{\alpha\beta} = \left( \boldsymbol{v}_\beta - \boldsymbol{v}_\alpha \right) \Delta t \tag{5.8}$$

Figure 5.1: Diagram of elliptical force fields used in Social Forces model (Johansson, Helbing, & Shukla, 2007)

The final calculations required for this model relate to the agent’s visual field and its effects on
perceived neighbours, and to practical limits on the agent’s velocity. In Equation 5.5, a prefactor
precedes each neighbour force within the summation; this prefactor represents the
agent’s visual field and is calculated in Equation 5.9. Accordingly, it considers the angle between
the agent’s velocity and the distance vector between both agents, using this to calculate a reduction
in force. Practically, this factor is 1.0 (no effect) when the agent’s velocity points directly to their
neighbour, and it drops off to 𝜆𝛼 as the two vectors rotate to point in opposite directions. Through
a number of tests, Helbing and Johansson (2009) found that 0.1 is a reasonable value for 𝜆𝛼, and
this value is used here. Finally, Equation 5.10 shows the calculation of the force limiting factor
applied in Equation 5.3, as originally presented by Helbing and Molnár (1995). This equation
serves to cap agent speed at a maximum value (1.3 times desired speed, as used by Helbing and
Molnár) to prevent agents from reaching unrealistic speeds in cases when forces become high.
$$w\!\left( \varphi_{\alpha\beta}(t) \right) = \lambda_\alpha + \left( 1 - \lambda_\alpha \right) \cdot \frac{1 + \cos\varphi_{\alpha\beta}}{2} , \quad \cos\varphi_{\alpha\beta} = \frac{\boldsymbol{v}_\alpha}{\| \boldsymbol{v}_\alpha \|} \cdot \frac{-\boldsymbol{d}_{\alpha\beta}}{\| \boldsymbol{d}_{\alpha\beta} \|} \tag{5.9}$$

$$g\!\left( v_\alpha^{max} / \| \boldsymbol{f}_\alpha(t) \cdot \Delta t \| \right) = \begin{cases} 1 & \text{if } \| \boldsymbol{f}_\alpha(t) \cdot \Delta t \| < v_\alpha^{max} \\ v_\alpha^{max} / \| \boldsymbol{f}_\alpha(t) \cdot \Delta t \| & \text{otherwise} \end{cases} \tag{5.10}$$
Table 5.3: Values of key parameters for Social Forces model

Parameter  Value         Description
Δt         0.5 s         Model time step
τ_α        0.75          Relaxation on matching desired velocity
A_β        0.1845 m/s²   Neighbour force scale
B_β        5.9334 m      Neighbour force range
A_i        1.9534 m/s²   Obstacle force scale
B_i        0.1366 m      Obstacle force range
λ_α        0.10          Visual field effect parameter
v_α^max    1.3 × v_α⁰    Maximum agent speed
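One Social Forces update step can be illustrated as below. This is a simplified sketch using only the goal force of Equation 5.5 and the cap of Equation 5.10 applied through Equations 5.3 and 5.4; the elliptical neighbour and obstacle terms are omitted for brevity, and the vectors are plain 2-tuples rather than SDK objects:

```python
# Simplified Social Forces update (goal force + speed cap only).
import math

def sf_update(r, v, goal, v0=1.35, tau=0.75, dt=0.5, vmax_factor=1.3):
    dx, dy = goal[0] - r[0], goal[1] - r[1]
    d = math.hypot(dx, dy) or 1.0
    e = (dx / d, dy / d)                      # unit direction to goal
    # goal force: relaxation towards desired velocity (Equation 5.5, first term)
    f = ((v0 * e[0] - v[0]) / tau, (v0 * e[1] - v[1]) / tau)
    dv = (f[0] * dt, f[1] * dt)
    # Equation 5.10: limit the velocity change to vmax
    vmax = vmax_factor * v0
    n = math.hypot(*dv)
    g = 1.0 if n < vmax else vmax / n
    v_new = (v[0] + dv[0] * g, v[1] + dv[1] * g)   # Equation 5.3
    r_new = (r[0] + v_new[0] * dt, r[1] + v_new[1] * dt)  # Equation 5.4
    return r_new, v_new
```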
The final step of setting up the Social Forces model involved the selection of appropriate
parameters, including those for pedestrian interactions, velocity relaxation, and a model time step.
To match current work using the same Social Forces approach by Helbing and Johansson (2009)
and Seer, Rudloff, Matyus, and Brändle (2014), a time step of 0.5 seconds was chosen, striking a
balance between reducing model run times and providing sufficient detail through frequent agent
position updating. Similarly, agent interaction parameters (neighbour force scale and range, and
obstacle force scale and range) were taken from the study by Seer, Rudloff, Matyus, and Brändle
(2014), which involved tracking pedestrians at the Massachusetts Institute of Technology as they
moved through a corridor featuring an obstacle. While the authors acknowledge that this scenario
did not involve high volumes or densities, the parameter set they found is one of few that includes
both neighbour and obstacle parameters, and the neighbour parameters obtained are similar to
those from other studies in transit station scenarios (Johansson, Helbing, & Shukla, 2007).
Figure 5.2: Impact of increasing relaxation parameter from 0.5 (left) to 0.75 (right)
The only parameter that differs from those found in the literature is the relaxation, which was
adjusted to ensure that pedestrians in the Social Force model exhibited a reasonable speed-density
relationship. Originally, this parameter was set to 0.5 s, meaning agents accelerate towards their
desired velocity at a rate of twice the remaining difference per second; with the 0.5 s time step,
this closes the entire difference in a single step.
While this value is used by Helbing and Molnár (1995) and Helbing and Johansson (2009), initial
observations indicated that in combination with the chosen agent interaction parameters,
pedestrians were not forming any sort of congestion in a simple bottleneck scenario. Since it has
been noted that decreasing the relaxation parameter causes agents to walk “more aggressively”
(Helbing & Molnár, 1995), and that forces within the model can become unstable in densities
above 2 pedestrians/m2 (Wolinski, et al., 2014), it was decided that the parameter should be
increased slightly. In addition to reducing agent aggression by smoothing their acceleration, this
also serves to re-balance the forces that are summed in Equation 5.5, increasing the effect of other
agents and obstacles without directly adjusting their parameters. The impact of this change on
pedestrians in a simple bottleneck scenario is shown in Figure 5.2, where the slight increase in the
relaxation parameter produces a more expected behaviour at, and reduction in flow through, the
bottleneck.
5.3 Optimal Steps Model
The Optimal Steps model was originally proposed by Seitz and Köster (2012) as a hybrid between
the established Cellular Automata and Social Forces approaches. By combining the flexibility of
a model operating in continuous space with the simplicity of using discrete movements (both in
terms of direction and distance), the authors aimed to represent the stepwise movement of
pedestrians without relying on steering behaviours or force-type equations. Practically, this was
accomplished by assigning each agent a discrete step length based on their current speed, checking
a specified number of positions one ‘step’ away from the agent at every time step, and selecting
the position with the lowest potential value. The number of positions to be checked (𝑞) could
theoretically be infinite (to fully cover an equidistant circle around the agent), but Seitz and Köster
recommended checking between 8 and 32 points, and here a set of 18 points has been chosen to
reduce computation time. In this configuration of the model, each agent’s step length (𝜇𝑟) is simply
calculated by multiplying their current speed by the time step (∆𝑡). To prevent artifacts in agent
movement, these points are disturbed slightly by adding a uniformly distributed random noise term
(𝑢) when calculating each point’s (𝑘) location on the circle, as shown in Equation 5.11 and visually
presented in Figure 5.3. In addition to these points on a circle, the agent’s current position was also
checked at every time step, allowing them to remain in their current position if that was the most
desirable location.
$$\varphi = \frac{2\pi}{q} \left( k + u \right) , \quad u \sim U(0,1) \tag{5.11}$$
Figure 5.3: Determination of potential positions in Optimal Steps model
Potential values were assigned to these positions using Equation 5.12, which is similar to Equation
5.5 from the Social Forces model, since both equations sum three components based on attraction
to the agent’s goal, repulsion from nearby neighbours, and repulsion from nearby obstacles. In the
Optimal Steps model, the attractive goal potential is simply the distance to the agent’s next
waypoint in metres, which is provided by testing the floor field calculated by MassMotion at that
point. However, these fields are calculated in such a way that floor edges have a distance to
waypoint of 0, which can misguide and trap agents, so a check was implemented to identify and
avoid these undesirable locations. Neighbour avoidance was implemented using Equation 5.13,
where an arbitrarily high potential is added for each neighbour within one body diameter of the
tested position, or an exponentially decreasing potential is added for each neighbour outside one
body diameter, but within a specified cut-off. Similarly, Equation 5.14 was used to calculate
obstacle potentials, which were arbitrarily high if an obstacle was overlapping the new position,
and exponentially decreasing otherwise up to a specified cut-off. The values (cut-offs, parameters,
etc.) used in these equations are mostly the same as those used by Seitz and Köster (2012), aside
from the pedestrian torso diameter (set to match MassMotion) and time step, and can be found in
Table 5.4.
$$P_\alpha(x) = P_g(x) + \sum_{\beta \neq \alpha} P_{\alpha\beta}(x) + \sum_i P_{\alpha i}(x) \tag{5.12}$$

$$P_{\alpha\beta}(x) = \begin{cases} \mu_\beta & \text{if } \delta_{\alpha\beta}(x) \le g_\beta \\ \nu_\beta \cdot \exp\!\left( -a_\beta \cdot \delta_{\alpha\beta}(x)^{b_\beta} \right) & \text{if } g_\beta < \delta_{\alpha\beta}(x) \le g_\beta + h_\beta \\ 0 & \text{otherwise} \end{cases} \tag{5.13}$$

$$P_{\alpha i}(x) = \begin{cases} \mu_i & \text{if } \delta_{\alpha i}(x) \le g_\beta / 2 \\ \nu_i \cdot \exp\!\left( -a_i \cdot \delta_{\alpha i}(x)^{b_i} \right) & \text{if } g_\beta / 2 < \delta_{\alpha i}(x) \le h_i \\ 0 & \text{otherwise} \end{cases} \tag{5.14}$$
Table 5.4: Values of key parameters for Optimal Steps model
Parameter Value Description
∆𝑡 1.0 s Model time step
𝑞 18 Number of potential positions around agent
𝜇𝛽 1,000.0 Potential for agent-occupied location
𝜈𝛽 0.4 Neighbour potential scale
𝑎𝛽 1.0 Neighbour potential range
𝑏𝛽 0.2 Neighbour potential exponent
𝑔𝛽 0.5 m Pedestrian torso diameter (2 x 0.25 m)
ℎ𝛽 1.0 m Neighbour potential cut-off distance
𝜇𝑖 10,000.0 Potential for obstacle-occupied location
𝜈𝑖 0.2 Obstacle potential scale
𝑎𝑖 3.0 Obstacle potential range
𝑏𝑖 2.0 Obstacle potential exponent
ℎ𝑖 6.0 m Obstacle potential cut-off distance
Once all the component potential values for each position have been calculated and summed, the
lowest potential value is determined and the agent is moved to that position (or kept in the same
position, as appropriate). This is repeated for every agent once per time step, which was chosen as
one second – although Seitz and Köster recommend updating agent positions independently of a
time step, their event-driven model was not compatible with the MassMotion framework, so a
time-driven implementation was tested here. In addition to this change, some other modifications
were made to the Optimal Steps concept to ensure compatibility with MassMotion. First, to avoid
the dangers of parallel updating, such as two agents moving to the same open space, a sequential
agent position updating procedure has been implemented. The stepwise nature of this model also
makes it possible for agents to test points beyond their current waypoint, which leads to inaccurate
floor field and neighbour results from the MassMotion SDK. Accordingly, agents who are within
one step length of their waypoint do not calculate any potential values and are instead sent directly
to that waypoint. Similarly, Optimal Steps was not designed with three-dimensional geometry in
mind, so agents on stairs or escalators are moved directly towards their waypoint (either the top or
bottom landing of the element) one step at a time, ignoring neighbours. Finally, a few rules have
been implemented to move agents slightly forward if they do not have a current waypoint – this
slight repositioning helps MassMotion reassign the agent a goal if they happen to be without one.
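The candidate-generation and selection procedure described in this section can be sketched as below. The `potential` callable stands in for the summed goal, neighbour, and obstacle potentials of Equation 5.12, which would be evaluated through the MassMotion SDK in the actual implementation:

```python
# Sketch of one Optimal Steps update: q candidate points on a jittered
# circle of radius step_len (Equation 5.11), plus the current position,
# with the lowest-potential point chosen.
import math
import random

def optimal_step(pos, step_len, potential, q=18, rng=random):
    u = rng.random()                          # shared jitter, u ~ U(0, 1)
    candidates = [pos]                        # staying put is allowed
    for k in range(q):
        phi = 2.0 * math.pi / q * (k + u)     # Equation 5.11
        candidates.append((pos[0] + step_len * math.cos(phi),
                           pos[1] + step_len * math.sin(phi)))
    return min(candidates, key=potential)
```

With a potential equal to the distance to the goal, the chosen point always lies closer to the goal than the current position, since the 18 candidates are at most 10 degrees from the goal direction.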
5.4 MassMotion
The MassMotion Social Forces model is used by default when running a MassMotion pedestrian
simulation, and is included here to represent a state-of-the-art commercial simulation method. It is
based on the same Social Forces concept proposed by Helbing and Molnár (1995) and similarly
operates in continuous space. However, unlike the Social Forces approach detailed in Section 5.2,
MassMotion uses a finer time step of 0.2 seconds, adding to the computational burden of this
model but yielding smooth, lifelike animations. In addition to the goal and neighbour forces used
in basic Social Forces models, many others have been added to better represent pedestrian
movement and behaviours, which are detailed in Table 5.5. One area in which MassMotion
deviates from other Social Force models is that obstacles do not have a repulsive effect on agents
– instead, they constrain other forces to avoid collisions. Since MassMotion is closed-source
commercial software, few additional details on its calculation methods can be provided, although
some parameters are available in (Arup, 2015) and the MassMotion help documentation. To ensure
the performance of this model could be fairly compared against those implemented in the SDK, it
was also run through the SDK and was limited to use a single system thread.
Table 5.5: Component forces in MassMotion Social Forces model (Arup, 2015)

Component Force                   Colour*        Description
Goal                              Bright Green   Attractive force moving the agent towards its goal / target at the desired travel speed.
Neighbour                         Bright Yellow  Repulsive force from each neighbouring agent (to maintain adequate separation between agents).
Drift                             Purple         Repulsive force moving the agent in the direction of the preferred bias when faced with oncoming agents.
Collision Veer Force              Turquoise      Repulsive force to prevent anticipated collisions with a neighbouring agent.
Collision Yield Force             Orange         Repulsive force (and / or torque) causing the agent to slow down and avoid a collision with a neighbouring agent.
Cohesion                          White          Attractive force moving the agent towards the centroid of neighbouring agents with similar goals / targets.
Marshal / Orderly Queueing        Grey           Attractive force pushing the agent towards the middle of a goal / target when approaching.
Corner                            Brown          Repulsive force enabling the agent to navigate a corner.
Panic                             Pink           Strong force pulling the agent back to a walk-able surface (when the agent attempts to move outside the boundaries of the walk-able surface).
Obstacle (Constrained Net Force)  Blue           Resulting net force.
Obstacle (Constrained Velocity)   Black          Resulting velocity.

* Colours are shown when visualizing all component forces in MassMotion debug mode
5.5 Cellular Automata Model
In addition to the four models that were successfully implemented within the MassMotion SDK
framework, attempts were made to configure a Cellular Automata model; however, significant
challenges were encountered in creating the ‘cells’ this model would use. Unlike
the other modelling approaches, CA does not operate in continuous space, but instead relies on a
discrete grid of cells between which agents can move in a stepwise manner. For contemporary CA
models, these cells are usually square, allowing for easy tessellation, and are generally the size of
one pedestrian/agent (between 40 and 50 cm per side) (Burstedde, Klauck, Schadschneider, &
Zittartz, 2001) (Blue & Adler, 2001). Although some recent models have attempted to refine
pedestrian locations by having each agent occupy multiple smaller cells (Lubás, Wąs, & Porzycki,
2016), a more traditional one-agent-per-cell approach was chosen for simplicity and clarity.
Instead of square cells, hexagonal cells with a centroid-to-centroid distance of 50 cm were used
such that agents would have six equidistant locations in their near neighbourhood, as opposed to
four equidistant locations (Von Neumann neighbourhood) or eight varying distance locations
(Moore neighbourhood) (Rogsch & Klingsch, 2011). This type of grid is preferred by some
researchers due to the two additional ‘natural’ directions of motion provided over a Von Neumann
approach (Davidich & Köster, 2012). A simple comparison of these three neighbourhood options
is presented in Figure 5.4.
Figure 5.4: Comparison of hexagonal, Von Neumann, and Moore neighbourhoods
To create this cellular grid from each MassMotion station model, an algorithm was developed to
automatically discretize each walkable object in the model. The algorithm first randomly placed a
‘test agent’ on each walkable element, then recursively checked each of the six potential positions
around that agent to see if they were walkable (not off the floor and not obstructed by an obstacle).
For each position that was walkable, it was added to a master list along with a reference to each
adjacent point that was also walkable – for example, the central point shown in Figure 5.4 would
refer to each of the six surrounding points, and each of those points would refer to that central
point, as well as any others that were adjacent and walkable. The process was repeated on each
walkable object in the station until all walkable points were covered and added to the master list,
producing a set of centroids like that shown in Figure 5.5. Unfortunately, due to MassMotion’s
discrete handling of walkable objects, this meant that points on edges of connected links and floors
would not refer to each other in the master list, and due to the random start points of each agent in
the mapping algorithm, these points would also not be uniformly spaced throughout each model.
An example of this issue is shown in Figure 5.6, where the irregular spacing of centroids at a link-
floor junction is shown, as well as the reduced number of connections per centroid in this area.
Without an appropriate discretized space map featuring complete connections and uniform
spacing, it was not possible to properly run a CA model on these station scenarios.
Figure 5.5: Discretized corridor with centroids in Osgoode Station
Figure 5.6: Disconnected and irregularly spaced centroids at junction between elements
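The discretization algorithm described above can be sketched over a simple rectangular region, rather than MassMotion geometry; the `walkable` predicate stands in for the SDK floor and obstacle checks, and an iterative flood fill replaces the recursion:

```python
# Sketch of the hexagonal discretization: from each centroid, the six
# neighbours at 60-degree intervals are tested for walkability and
# linked into a master adjacency list. The start point is assumed
# walkable. Coordinates are rounded so lattice points match exactly.
import math

def build_grid(start, walkable, spacing=0.5):
    adjacency, stack = {}, [start]
    while stack:                              # iterative flood fill
        p = stack.pop()
        if p in adjacency:
            continue
        adjacency[p] = []
        for k in range(6):                    # six equidistant neighbours
            a = math.pi / 3 * k
            q = (round(p[0] + spacing * math.cos(a), 3),
                 round(p[1] + spacing * math.sin(a), 3))
            if walkable(q):
                adjacency[p].append(q)
                stack.append(q)
    return adjacency
```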
If issues with the creation of a cellular grid had not been encountered, a floor-field CA rule set
would have been used to govern agent movement. Such rule sets have been proposed by Dijkstra,
Jessurun, and Timmermans (2001), Blue and Adler (2001), and Lämmel and Flötteröd (2015), but
the model used by Schadschneider, Eilhardt, Nowak, and Will (2011) provides a traditional, yet
recently validated implementation. In this model, two floor fields are created using the complete
map of discretized space, with each field serving a different purpose. One static floor field (𝑆𝑖𝑗)
maps out the locations of walkable space and obstacles, while a dynamic floor field (𝐷𝑖𝑗) contains
traces of which cells were occupied in previous time steps. Using these two fields, transition
probabilities for each cell in the agent’s near neighbourhood can be calculated using Equation 5.15.
Here, 𝑁 is a normalization factor to ensure all probabilities sum to 1, 𝑛𝑖𝑗 represents whether a cell
is occupied (1) or free (0), 𝜁𝑖𝑗 represents whether a cell is walkable (1) or not (0), and 𝑘𝑆 and 𝑘𝐷
are sensitivity parameters for each floor field. The CA model operates by performing this
calculation for each agent at each time step, moving agents to the cells with the highest
probabilities as they proceed to their destinations.
$$p_{ij} = N \cdot \exp\!\left( k_S S_{ij} \right) \cdot \exp\!\left( k_D D_{ij} \right) \cdot \left( 1 - n_{ij} \right) \cdot \zeta_{ij} \tag{5.15}$$
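Equation 5.15 can be sketched for a single neighbourhood as below; the sensitivity values k_S and k_D are illustrative placeholders, not calibrated parameters:

```python
# Sketch of Equation 5.15: each candidate cell's weight combines the
# static field S, dynamic field D, occupancy n, and walkability zeta,
# then weights are normalized so probabilities sum to 1.
import math

def transition_probs(cells, kS=2.0, kD=0.5):
    """cells: list of (S, D, n, zeta) tuples for the neighbourhood."""
    w = [math.exp(kS * S) * math.exp(kD * D) * (1 - n) * zeta
         for S, D, n, zeta in cells]
    total = sum(w)
    return [x / total for x in w] if total > 0 else [0.0] * len(w)
```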
Chapter 6
6 Model Calibration
Prior to comparing any of the pedestrian movement models or performing sensitivity analyses, it
was necessary to calibrate the chosen models to a subset of collected transit station data, ensuring
that no model had a distinct advantage or disadvantage. In the following sections, details of this
calibration process are provided, beginning with the selection of a Genetic Algorithm calibration
method to use with each of these models. The process and results of an initial calibration of the
Social Forces model are also described, highlighting the problems with the calibration of
microscopic agent interaction parameters against macroscopic flow data. Finally, the process of
calibrating model speed adjustment factors is described, and the calibrated speed factors are
provided and justified. The code used to calibrate these models is provided in Appendix C.
6.1 Calibration Method
As described in Section 2.2.1, there are many calibration methods that have been employed to
adjust parameters in vehicular and pedestrian simulation models, but one in particular stands out
due to its recurring use and success in calibrating a variety of models – the Genetic Algorithm
(GA). Practically speaking, the GA is an optimization method modelled after natural selection and
genetic inheritance, seeking an optimal solution to a given function by testing and manipulating
potential solutions (Abdulhai, 2015). It consists of a few components, namely:
• A ‘population’ of potential solutions, each of which is represented by a ‘chromosome’
(parameter set) consisting of ‘genes’ (individual parameters);
• A function to evaluate each potential solution and rank it based on its fitness; and
• Genetic operators to select ‘parents’ between which genetic information can be exchanged,
as well as operators to exchange and randomize that information.
Often, GAs will use binary strings as chromosomes so individual genes can be strung together
with ease, and any changes to the chromosomes or genes can be made by bit-flipping (0 to 1, or
vice-versa) or exchanging parts of two binary strings. Binary strings also ensure that each
chromosome is the same length regardless of its value, and allow for numerical or categorical
variables to be tested with minimal conversion.
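The binary encoding described above can be sketched as follows; the parameter range and bit length are illustrative, not those used in the calibration:

```python
# Sketch of binary-string chromosomes: a real-valued parameter is mapped
# onto a fixed-length bit string over its allowed range, and mutation
# flips a single bit.

def encode(value, lo, hi, bits=8):
    level = round((value - lo) / (hi - lo) * (2 ** bits - 1))
    return format(level, f"0{bits}b")

def decode(chrom, lo, hi):
    return lo + int(chrom, 2) / (2 ** len(chrom) - 1) * (hi - lo)

def mutate(chrom, i):
    flipped = "1" if chrom[i] == "0" else "0"
    return chrom[:i] + flipped + chrom[i + 1:]
```

Note that every chromosome has the same length regardless of its value, and that decode(encode(x)) recovers x only to the quantization resolution of the bit string.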
Genetic Algorithms start their search for an optimal solution with a population of random potential
chromosomes. In this case, each chromosome represents a parameter or set of parameters that are
used in a pedestrian movement model (e.g. speed adjustment, neighbour force scale). Each
chromosome’s parameters are then implemented in the pedestrian movement model being tested,
the model is run for a specified scenario, and certain results (agent paths, flow rates, etc.) are
compared against a desired result or base case data set using a fitness function. This function
assigns a value between 0 and 1, where higher values are more desirable, based on how well the
model using that chromosome could match the provided data. The selection of a fitness function
often depends on the data being compared, but some common ones used in transportation
modelling are average absolute error (Cheu, Jin, Ng, Ng, & Srinivasan, 1998), point mean absolute
error, global relative error (Ma & Abdulhai, 2002), R-squared, and inverse sum-of-squares
(Davidich & Köster, 2012).
Following the calculation of a fitness value for each chromosome, the chromosomes are ranked
and proportionally assigned a selection probability – the fitter the chromosome, the more likely it
is to be chosen when ‘mating’ with another chromosome. This mating process mimics biological
reproduction in that parts of each chromosome are exchanged between two ‘parents,’ yielding a
new chromosome that ideally combines the best of both parents. In addition, the chromosome
resulting from this process can be modified via mutation, where one or two of its binary bits are
randomly changed. Finally, a small percentage of chromosomes are deemed ‘elite’ due to a high
fitness, and are accordingly copied into the next population unchanged. In this configuration, the
new population consists of these elite chromosomes, a specified percentage of ‘child’ chromosomes
resulting from the crossover procedure, and a few new, random chromosomes. At this point, the
fitness evaluation and mating cycle repeats until a specified number of generations have
been created.
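The generation cycle described above can be sketched as follows. This is an illustrative Python outline of the general technique, not the GA Framework for .Net code actually used, and all function names are hypothetical:

```python
import random

def next_generation(pop, fitness, elite_frac=0.05, cx_prob=0.85,
                    mut_prob=0.10, n_random=2):
    """Build one new generation: elitism, fitness-proportional selection,
    single-point crossover, bit-flip mutation, plus fresh random chromosomes."""
    n = len(pop)
    ranked = sorted(pop, key=fitness, reverse=True)
    n_elite = max(1, int(elite_frac * n))
    new_pop = ranked[:n_elite]                # elites pass on unchanged

    total = sum(fitness(c) for c in pop)
    weights = [fitness(c) / total for c in pop] if total > 0 else None

    while len(new_pop) < n - n_random:
        p1, p2 = random.choices(pop, weights=weights, k=2)
        child = p1
        if random.random() < cx_prob:         # single-point crossover
            point = random.randrange(1, len(p1))
            child = p1[:point] + p2[point:]
        if random.random() < mut_prob:        # binary (bit-flip) mutation
            i = random.randrange(len(child))
            child = child[:i] + ('1' if child[i] == '0' else '0') + child[i + 1:]
        new_pop.append(child)

    while len(new_pop) < n:                   # a few new, random chromosomes
        new_pop.append(''.join(random.choice('01') for _ in range(len(pop[0]))))
    return new_pop
```

Fitter parents are more likely to be chosen for mating, while elitism guarantees that the best chromosome found so far is never lost between generations.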
As noted by Cheu, Jin, Ng, Ng, and Srinivasan, Genetic Algorithms excel because “[the] use of
multiple points increases the robustness of search in a complex space. As the points are scattered
in the solution space, it reduces the likelihood of reaching a local maxima/minima and increases
the probability of finding the global optimum solution” (1998, p. 529). Considering the complex
behaviours simulated in pedestrian movement models, this attribute makes a GA very appealing,
as there is no guarantee of a smooth or continuous response of any model to a change in one or
more parameters. Using a GA also allows the search for optimal parameters to start with a few
established values or sets, such as those found in the literature, in addition to randomly selected
values. This ensures that an optimal result from the algorithm is no worse than an existing
parameter set. In practice, a GA has been implemented within the calibration code using the
Genetic Algorithm Framework for .Net (Newcombe, 2016), a freely available library of GA
functions that can be used in C# programs. In most cases, values and functions used (as specified
in Table 6.1) are based on recommended ranges provided in the GA Framework documentation
(Newcombe, 2017). The population size and number of generations were chosen based on values
used by Cheu, Jin, Ng, Ng, and Srinivasan (1998), but with fewer overall chromosomes tested due
to the initial inclusion of ‘good’ values. This number was also reduced due to the need to test each
chromosome multiple times to reduce the random effects of each MassMotion run, which
increased the total number of simulations run to 1,200 per pedestrian model.
Table 6.1: Key parameters and functions used in Genetic Algorithm

Parameter/Function      Value            Description
Population Size         20               Number of chromosomes per generation
Number of Generations   10               Number of additional generations to create before
                                         declaring an optimal solution
Crossover Type          Single Point     Selects one point in both binary strings and
                                         switches the remaining parts
Crossover Probability   0.85             -
Mutation Type           Binary Mutation  Switches a bit in the binary string from 0 to 1,
                                         or vice versa, allowing duplicates to be formed
Mutation Probability    0.10             -
Elitism Percentage      5                Percent of chromosomes passed directly to the
                                         next population
6.2 Preliminary Calibration and Revision
Initially, the Genetic Algorithm calibration method was tested by attempting to calibrate agent
interaction parameters in the Social Forces model to observed data from Osgoode Station.
However, it was quickly discovered that the macroscopic scale of the collected data did not provide
the type of information required to adjust these parameters, and microscopic agent behaviours
became unrealistic in the search for a better fit against macroscopic flow data.
6.2.1 Calibration of Social Forces Parameters
These first calibration tests used the GA method as described in Section 6.1, with each binary
string chromosome consisting of four genes – neighbour force scale, neighbour force range,
obstacle force scale, and obstacle force range. These parameters were represented by either a five-
or seven-bit string, which was decoded to represent a value between 0.15 and 10 for the scales and
0.10 and 6.0 for the ranges. These parameter ranges were selected based on existing parameter sets
found in (Helbing & Molnár, 1995) and (Seer, Rudloff, Matyus, & Brändle, 2014). Model fitness
was tested using the R-squared metric (also known as the coefficient of determination, shown in
Equation 6.1) as a fitness function. This function was applied to compare second-by-second
observed and simulated data summed from three screenlines in Osgoode Station (A through C) for
agents exiting the concourse between 4:55 and 5:10 PM.
R^2 = 1 - \frac{\sum_i (y_{\mathrm{obs},i} - y_{\mathrm{sim},i})^2}{\sum_i (y_{\mathrm{obs},i} - \bar{y}_{\mathrm{obs}})^2}    (6.1)
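A direct implementation of this fitness function, shown here as an illustrative Python sketch rather than the C# calibration code:

```python
def r_squared(y_obs, y_sim):
    """Coefficient of determination between observed and simulated flows."""
    mean_obs = sum(y_obs) / len(y_obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(y_obs, y_sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in y_obs)
    return 1.0 - ss_res / ss_tot
```

A perfect match between observed and simulated series returns 1.0, with the value falling as residuals grow relative to the variance of the observations.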
Upon iterating through the 10 generations of chromosomes and inspecting the fittest values, it was
found that the R-squared fit of the model had been raised to almost 99%. This seemed to indicate
a successful calibration, and compared to some default parameter sets found in the literature, as
shown in Table 6.2, this fitness value was about 1.5% higher. However, when plotting a half-hour
of cumulative outbound flow from this model for screenlines A-C, it was clear that many of the
defining details of this scenario had been lost. While the curve itself was closer to that of the
observations, as shown in Figure 6.1, many of the peaks in agent flow were flattened and spread,
and the overall profile of the curve was different. Worse still, when comparing all cumulative
outflows (A-E) in Figure 6.2, it was shown that the new parameter set caused the flow rate of
agents exiting the concourse to fall, reducing the exiting volume over 30 minutes by nearly 200
agents. Although the proven parameters from Helbing and Molnár did not score as well on the
R-squared metric, it seemed that these more established parameter sets
better represented pedestrian movement and interactions, and more importantly held true to the
general trend of the observed data.
Table 6.2: Social Forces parameters from literature and calibration

Source                   Neighbour     Obstacle     Neighbour     Obstacle     Fitness
                         Force Scale   Force Scale  Force Range   Force Range  (R2)
Helbing and Molnár       2.1           10           0.3           0.2          0.9748
(1995)
Seer, Rudloff, Matyus,   0.1845        1.9534       5.9334        0.1366       0.9732
and Brändle (2014)
GA Chromosome 1          2.787         1.313        1.448         0.3452       0.9889
GA Chromosome 2          3.097         2.399        1.265         0.4065       0.9865
GA Chromosome 3          2.244         4.338        1.571         0.2258       0.9863
An observation of visualized agent behaviours using these parameter sets further confirmed the
inappropriateness of the values found using the GA. While agents using the Helbing and Molnár
parameters would move smoothly through space, adjusting their trajectories slightly to avoid
conflict with others, agents using the GA parameters often took drastic steps to avoid other agents.
Worse still, when faced with even slight congestion, GA agents would bounce around, seemingly
wasting time until the congestion cleared and they had a wide-open path to their destination. Based
on the elevated neighbour force scale and range parameters chosen, these behaviours make some
sense, and such instabilities have also been noted by Wolinski, et al. (2014) in cases when agent
densities exceed 2 pedestrians/m2. Unfortunately, these findings indicated a fatal flaw with this
calibration procedure, and reinforced suspicions that calibrating microscopic parameters against
macroscopic data is generally infeasible.
Figure 6.1: Comparison of Social Forces parameter sets for street exit flows
Figure 6.2: Comparison of Social Forces parameter sets for all concourse exit flows
6.2.2 Developing a New Calibration Procedure
Based on these findings, it was decided that a new fitness function was required, and a longer
sample of Osgoode data would be used to calibrate each model to better capture the long-term
effects of any change. The idea of calibrating microscopic parameters, which varied between
models, was also abandoned. Instead, a single parameter for each model was calibrated, which
would be multiplied by each agent’s desired unconstrained speed to compensate for times when
agents had to wait for new waypoint assignments from the MassMotion SDK.
For this second calibration, individual screenline flows from Osgoode Station were used, lasting
for 30 minutes between 4:55 and 5:25 PM. This data was binned into 60-second individual flows
at each screenline, allowing for a memoryless comparison of pointwise deviations as opposed to a
comparison of cumulative flows. This method was chosen to help capture and compare more of
the short-term deviations in pedestrian flow that define this scenario. To assess fitness against this
binned data, the Global Relative Error (GRE) function was chosen, as shown in Equation 6.2, and
the GA was configured to minimize this term. This function was traditionally used by Ma and
Abdulhai (2002) for calibrating traffic simulation models, but here it offers a good method to
compare absolute deviations over the course of the simulation while compensating for varying
volumes at each screenline. To get a net fitness for each run, the GRE values from each exiting
screenline (A-E) were averaged over the half-hour time frame. Since the GRE term considers the
sum of pointwise deviations from observed values, as opposed to the match between cumulative
flow curves, higher error terms are expected due to data binning.
GRE = \frac{\sum_{i=1}^{n} \lvert y_{\mathrm{obs},i} - y_{\mathrm{sim},i} \rvert}{\sum_{i=1}^{n} y_{\mathrm{obs},i}}    (6.2)
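The GRE term and its averaging across the exiting screenlines (A-E) can be sketched as follows; again this is an illustrative Python outline with hypothetical names, not the C# code used:

```python
def global_relative_error(y_obs, y_sim):
    """Global Relative Error over n binned screenline counts."""
    return sum(abs(o - s) for o, s in zip(y_obs, y_sim)) / sum(y_obs)

def net_fitness(obs_by_screenline, sim_by_screenline):
    """Average the GRE across screenlines to get one run's net error."""
    errors = [global_relative_error(o, s)
              for o, s in zip(obs_by_screenline, sim_by_screenline)]
    return sum(errors) / len(errors)
```

Because each 60-second bin contributes its absolute deviation to the numerator, short-term mismatches accumulate even when cumulative flows agree, which is why higher error values are expected under this metric.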
When setting up the GA for this calibration, the same population size and parameters presented in
Table 6.1 were maintained, but the chromosomes were changed to consist of a single 7-bit gene
representing the speed adjustment parameter. Although adjusting a single parameter may not
necessitate using a GA, the algorithm’s resistance to the trap of local maxima and minima was still
seen as beneficial, especially considering that each model’s response to a new parameter was
generally unknown. The speed parameters tested ranged between 0.5 and 1.5, meaning speeds
could be halved, multiplied by 150%, or adjusted anywhere in between. The initial population
representing 20 potential parameters consisted of one manually selected value chosen as 1.0
(uncalibrated default, no change to speed), while the remaining 19 values were chosen randomly
within the range. Random effects from MassMotion were mitigated by performing five runs for
each parameter, each using a different random seed, and averaging the fitness values from each
run.
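The seed-averaging step might look like the following sketch, where `run_model` is an entirely hypothetical stand-in for a call into the MassMotion SDK that returns binned screenline flows, and `gre` is a fitness function such as that in Equation 6.2:

```python
def averaged_fitness(parameter, run_model, gre, y_obs, seeds=(1, 2, 3, 4, 5)):
    """Average the error for one candidate speed parameter across several
    random seeds to damp run-to-run stochasticity in the simulator."""
    errors = [gre(y_obs, run_model(parameter, seed)) for seed in seeds]
    return sum(errors) / len(errors)
```

Each chromosome is thus evaluated five times, which multiplies the simulation count but prevents a lucky random seed from inflating a poor parameter's fitness.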
6.3 Final Calibration Results
Unlike the initial calibration of agent interaction parameters in the Social Forces model, which
yielded unrealistic agent behaviours, the calibration of speed adjustment parameters for each
pedestrian movement model was successful. With the Graph model, the GA calibration was able
to reduce the Global Relative Error by nearly 5%, bringing it from 44% (uncalibrated, parameter
= 1.00) down to nearly 39% (calibrated, parameter = 1.40), as shown in Table 6.3. This rather large
speed increase can be justified by a few factors, namely the Graph model’s long, one-second
time step, and the need for agents to wait one or two time steps between reaching a waypoint and
receiving their next goal assignment. Combining these two attributes causes agents to frequently
wait an additional second or two as they move through the model, practically reducing their net
speed, and this increase corrects for that unrealistic reduction.
Table 6.3: Uncalibrated and calibrated model speed parameters

Model            Speed Parameter Value   Global Relative Error
Graph            1.00 (uncalibrated)     44.05%
                 1.40 (calibrated)       39.08%
Social Forces    1.00 (uncalibrated)     40.52%
                 1.10 (calibrated)       39.72%
Optimal Steps    1.00 (uncalibrated)     45.27%
                 1.34 (calibrated)       39.76%
MassMotion       -                       40.57%
Figure 6.3: Observations vs. calibrated Graph model, 60 second flows
While a model error of 39% seems high, it is not unreasonable for this scenario since the GRE
term considers the cumulative impact of pointwise deviations. Since 30 minutes of data has been
divided into 60-second bins, the GRE term is sampling 30 differences per screenline, as opposed
to one net difference at the end of the time frame, capturing more micro-level inconsistencies and
increasing the error value. The differences between observed and simulated 60-second exiting
volumes at all screenlines are presented in Figure 6.3 for the Graph model, showing the general
match achieved by the calibrated model. It is also important to note that some of the deviations
between simulated and observed flows have to do with MassMotion’s random assignment of agent
destinations – although the origin-destination matrix associated with each model is adhered to,
individual trip (e.g. A to D) distributions over time vary. Accordingly, larger deviations, such as
that seen at 1680 seconds, can be expected and do not necessarily indicate inappropriate or
inaccurate model behaviours.
Figure 6.4: Observations vs. calibrated Social Forces model, 60 second flows
Similarly, calibration of the Social Forces model yielded a slight improvement in the GRE,
reducing it by nearly 1% to 39.72% (calibrated, parameter = 1.10). The speed for this model is
again increased, but by a much smaller amount than the Graph model – since the Social Forces
model uses a time step of 0.5 seconds, its agents wait a shorter time to receive their next goal
assignment and do not lose as much net speed. Additionally, Social Force-driven agents often step
beyond their waypoint line, which allows them to proceed further before stopping, as opposed to
Graph agents who go exactly to a waypoint and stop each time. Combined, these factors justify
the slight increase in agent speed for this model. The resulting calibration plot in Figure 6.4 shows
another reasonable match to the observed data, with some points (e.g. 1680 seconds) deviating less
than the Graph model.
Figure 6.5: Observations vs. calibrated Optimal Steps model, 60 second flows
Compared to the other two models tested and calibrated, selecting a parameter for Optimal Steps
was a more difficult task. While the top 10 fittest parameters for the Graph and Social Forces
models were clustered around the same values, the fittest parameters from the Optimal Steps
calibration varied. However, averaging the top 5 and top 10 parameters yielded the same mean
value of 1.34. Although it was not the parameter with the single highest fitness, it was the most
representative of the set of fittest parameters, reducing the GRE by roughly 5.5% to 39.76%. This
value was also a reasonable choice for the Optimal Steps model due to its one-second time step
and similar waiting behaviours to the Graph model. Surprisingly, the calibration plot from the
more advanced Optimal Steps model mimics that from the Graph model in a few places, namely
the large deviation at 1680 seconds (shown in Figure 6.5). However, it does show a better match
than the Graph model at 2040 s and 2520 s, demonstrating its more detailed representation of
pedestrian movement.
Figure 6.6: Observations vs. MassMotion, 60 second flows
Although it was not calibrated, the same plot was generated for the MassMotion pedestrian
movement model and presented in Figure 6.6, demonstrating its good fit with the data (GRE of
40.57%). Since the movement model was provided calibrated, validated, and integrated with the
MassMotion modelling framework (Arup, 2015), it was felt that any further adjustment would
change the model too much from its default state, making comparative efforts unfair. Accordingly,
the only change made when simulating scenarios with the MassMotion pedestrian model was
running the model via the MassMotion SDK, ensuring that it ran within the same computational
framework as the other models.
Overall, these calibrated models produce good fits to the observed data, matching many of the
trends in high and low pedestrian flows at the screenlines on a 60-second scale. In the following
chapter, the calibrated models are used to produce more detailed data in both the Osgoode and
Queen’s Park scenarios, allowing for a thorough validation of their performance and a comparison
of the differences between them.
Chapter 7
7 Model Comparison
With a set of pedestrian movement data from two local subway stations and a set of customized
and calibrated pedestrian movement models, all key pieces were in place to perform a comparison
of these models, considering both their accuracy and computational speed. First, the models were
used to simulate the observed base cases in each station, focusing on complete observed concourse
areas as well as some smaller station elements. Based on these results, increased volume scenarios
were devised and tested to get a better idea of how each model handled higher density pedestrian
flows and congestion, and to discover where the macroscopic results of each model began to
deviate from one another. Finally, these results were compiled to make a set of recommendations
for the conditions under which each model should be used, taking into account both its accuracy
and potential for time savings.
All the following runs were performed using identical computer hardware, specifically a Lenovo
T450s equipped with an Intel i7-5600U processor (2.60GHz, boost to 3.20GHz), 12.0GB of
DDR3L 1600MHz RAM, and a SATA III solid-state drive. To ensure that no preference was given
to parallelized code, all model operations were software-limited to use a single processor thread.
Additionally, to minimize the impact of random effects in MassMotion, each of the following
simulations was performed 10 times using 10 different random seeds, and the results of all runs
were averaged. The code used to perform these tests and time each model run is provided in
Appendix D.
7.1 Base Case Performance
The initial assessment of model performance consisted of running a slightly longer test using the
Osgoode Station scenario, as well as performing a validation of the calibrated models using the
Queen’s Park Station scenario. In all base case scenarios tested, agent volumes put into the station
were set to match the inbound volumes observed when collecting pedestrian data, with the profile
of agents entering at each screenline configured to closely resemble the observations as well. For
each model run, second-by-second cumulative outflow curves at each screenline were recorded,
providing a much finer level of detail than the 60-second volumes used at the calibration stage.
These outflow curves were then averaged across all 10 runs, with the results compared to those
recorded in each station individually using the R-squared metric and the sum of squared errors
(SSE) metric. While the R-squared metric was dismissed for calibration, here it provided a
reasonable method for identifying how well each model matched the long-term trend at each
screenline and helped to quantify deviations between models.
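The comparison described here reduces to averaging the second-by-second cumulative curves over the 10 runs and scoring them against the observations; a minimal sketch (SSE shown explicitly, with R-squared computed as in Equation 6.1), using hypothetical helper names:

```python
def average_runs(runs):
    """Average second-by-second cumulative outflow curves across runs."""
    return [sum(vals) / len(vals) for vals in zip(*runs)]

def sse(y_obs, y_sim):
    """Sum of squared errors between cumulative flow curves."""
    return sum((o - s) ** 2 for o, s in zip(y_obs, y_sim))
```

Averaging first and scoring once keeps the metrics comparable across models, since each model's score reflects its mean behaviour rather than any single random seed.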
7.1.1 Osgoode Station Concourse
To begin comparing each pedestrian movement model’s accuracy, runs of the Osgoode scenario
were examined, extending an additional 5 minutes from 4:55 PM to 5:30 PM. Although this was
not a major change from the data used to calibrate these models, and thus high accuracy was
expected, it was important to examine the individual screenline flows. Here, the simulated flows
at each screenline were compared to the observed flows with respect to cumulative volumes at the
second-by-second level, starting with flows exiting the station to street level at screenlines A, B,
and C, presented in Figure 7.1. Before examining the match between the models and observations,
it is interesting to note that for this scenario, all four tested models (Graph, Social Forces, Optimal
Steps, and MassMotion) produced extremely similar results, and thus overlap one another in the
plots. Arguably, this is due to the relatively low volumes encountered in the station, as noted in
Section 4.2.1, which indicates that these models all produce similar macroscopic results when there
is no congestion to complicate pedestrian behaviours.
Comparing the simulated cumulative flows to the observed flows, however, shows that there are
some notable differences between how pedestrian movements in the station were modelled (by all
models) and what was observed. While the curves at screenlines B and C match relatively well
with some deviations near the beginnings and ends of the model runs, results from screenline A
indicate less desirable behaviour. The plot from this screenline shows that the models match the
high and low periods of flow observed, but overestimate volumes until the final five minutes, when
the modelled flow rate slows down and the observed flow rate increases to reach similar final
volumes. The relative weakness of model fits at screenline A is also confirmed when comparing
fitness values in Table 7.1, where this screenline has the lowest R-squared (~96%) and highest
SSE (~400,000) compared to all other screenlines. Although this does not indicate ideal behaviour,
the fact that all four models exhibit such a trend suggests that it may have more to do with the
timetable configured for this scenario, including distributions of agents to and from screenlines
throughout the simulation. Since only agent origins were captured in the data collection process,
the MassMotion core proportionally, but randomly, assigns each agent a destination. This leads to
some agents taking shorter paths than others (e.g. screenline A to C) earlier in the simulation than
in reality, shifting the interim volumes at the screenlines while maintaining final volumes.
Figure 7.1: Comparison of observations and all model flows at screenlines A-C
Table 7.1: R-squared and SSE fits for all models, Osgoode scenario

R-Squared (%) by screenline
Model            A       B       C       D       E       Average
Graph            96.16   98.94   97.55   99.75   99.93   98.46
Social Forces    96.24   98.89   99.77   99.89   99.89   98.48
Optimal Steps    96.09   99.05   97.63   99.76   99.92   98.49
MassMotion       96.37   98.94   97.38   99.75   99.91   98.47

SSE (1000s) by screenline
Model            A     B    C     D    E     Total
Graph            412   35   126   28   159   760
Social Forces    404   37   122   25   250   838
Optimal Steps    421   31   122   26   184   784
MassMotion       386   34   132   27   212   791
Inspecting the results of screenlines D and E, which capture agents exiting the concourse to the
platform level, it appears that the models all match the observed data quite well, as shown in Figure
7.2. Although there are some spots where a simulated cumulative flow varies from the
observations, such as between 1620 and 1800 seconds for Social Forces, the overall fits are better
for screenlines D and E (to platform) than A, B, and C (to street). This is again confirmed when
looking at fitness values in Table 7.1, where screenlines D and E have R-squared values well above
99%, as opposed to values between 96 and 99%. Further, the deviation in the Social Forces model
is confirmed to negatively impact its fit with the observed data, as its SSE for screenline E is
around 250,000, compared to values between 160,000 and 210,000 for the other models.
Since the cumulative flow curves for these four models are, in many cases, visually
indistinguishable from one another, the R-squared and SSE metrics help indicate which model
produced the best match against the observed data. Comparing total SSE across the models tested,
it appears that the Graph model has the least overall error, mostly due to its very low error at
screenline E. However, the Graph model did not have the best fit at all screenlines – in fact, each
model tested had the best fit at one screenline and the worst fit at another, suggesting that no model
is truly better than another at simulating this scenario. Comparing the R-squared values tells a
similar story, as the average fits for each model are within 0.02 percent of one another.
Figure 7.2: Comparison of observations and all model flows at screenlines D & E
Finally, comparing the project setup, simulation setup, and model run times for each of the four
simulation models, as shown in Table 7.2, helps differentiate the models based on their
computational performance, given that their accuracies are similar. First, comparing the project
and simulation setup times (means and standard deviations) for the models shows little to no
difference, as these steps are largely dominated by the MassMotion core and do not differ for any
model. Contrarily, the differences between model run times are distinct, with some models taking
nearly six times as long to complete as others (Optimal Steps versus Graph Model). As such, it is
hard to recommend either the Optimal Steps or MassMotion movement models, since a very
similar set of flow results can be produced using the Graph model within a much shorter time. If a
user wishes to visualize the agent paths, then the simple Social Forces model can be used, taking
twice as long as the Graph model but generating detailed pedestrian movements.
Table 7.2: Setup and run times for all pedestrian models, Osgoode scenario

                 Project Setup (s)   Simulation Setup (s)   Model Run (s)
Model            µ       σ           µ       σ              µ       σ
Graph            0.09    0.02        13.64   0.53           9.96    0.27
Social Forces    0.09    0.02        13.37   0.09           20.63   0.45
Optimal Steps    0.10    0.02        13.35   0.11           53.93   0.57
MassMotion       0.09    0.01        13.44   0.23           35.00   0.39
7.1.2 Queen’s Park Station Concourse
Unlike the Osgoode scenario, which was almost identical to the scenario used to calibrate each of
the models, Queen’s Park presented a new validation challenge for the models featuring both new
geometry and higher pedestrian volumes. Again, the tested models yielded very similar cumulative
flow curves, especially for the four screenlines exiting to the street (A, B, C, and D), presented in
Figure 7.3. Compared to the observed data, these models tended to provide a better match in this
scenario than for the Osgoode scenario. Although screenlines A and B show the models
underestimating the agent volumes, the differences are smaller than those for screenline A in the
Osgoode scenario, with R-squared fits around 97.2% and 98.0%, as shown in Table 7.3.
Screenlines C and D provided even better fits, both visually and in terms of the two fitness metrics,
and the models all appeared to replicate the plateaus and peaks in agent flow. However, there are
some spots in the plot for screenline A that do not include as much detail as the observations,
instead ‘smoothing’ the agent flows – as with the Osgoode scenario, this is likely due to the random
assignment of agent destinations by the MassMotion core.
Figure 7.3: Comparison of observations and all model flows at screenlines A-D
Inspecting screenlines E and F shows some much higher SSE values, indicating mathematically
worse fits, but this is mainly due to the much higher volumes of agents moving past these
screenlines. Since the R-squared values are still quite high for all models (99.9% and 99.8%,
respectively), the high SSE values can be overlooked and the model fits deemed acceptable. Figure
7.4 shows these screenline flows visually, and although all lines nearly overlap for screenline E,
screenline F shows some interesting deviations for the Social Forces model between 2100 and
3060 seconds. Comparing the SSE values for this screenline, it is clear that the deviation seen in
the Social Forces model helps it better match the observed data, contributing to its overall lowest
error value. Unlike with the Osgoode scenario, where the Graph model had the lowest total error,
here the more advanced Social Forces model is by far the best choice, while the Graph model is in
fact the worst performing of the four.
Table 7.3: R-squared and SSE fits for all models, Queen’s Park scenario

R-Squared (%) by screenline
Model            A       B       C       D       E       F       Average
Graph            97.22   98.97   99.23   99.52   99.89   99.77   99.10
Social Forces    97.21   99.00   99.36   99.41   99.89   99.84   99.12
Optimal Steps    97.39   98.99   99.32   99.48   99.89   99.81   99.15
MassMotion       97.24   99.02   99.27   99.51   99.90   99.81   99.12

SSE (1000s) by screenline
Model            A     B     C    D    E     F       Total
Graph            228   166   71   27   653   4,706   5,851
Social Forces    228   161   59   33   660   3,242   4,382
Optimal Steps    215   165   63   29   665   4,001   5,137
MassMotion       226   159   67   27   633   4,044   5,157
Figure 7.4: Comparison of observations and all model flows at screenlines E & F
Finally, the timed components of each simulation run are compared, telling a similar story to the
Osgoode scenario. Again, the Graph model is by far the fastest to run, followed by Social Forces,
MassMotion, and Optimal Steps, and the project and simulation setup times were very similar for
all four models, as shown in Table 7.4. An interesting point to note is the comparatively high
standard deviation of run time for the Social Forces model, suggesting that random variations in
agent routing and arrival times have a larger impact on this model’s run time than others.
Table 7.4: Setup and run times for all models, Queen’s Park scenario

                 Project Setup (s)   Simulation Setup (s)   Model Run (s)
Model            µ       σ           µ       σ              µ        σ
Graph            0.08    0.02        8.59    0.16           24.87    0.50
Social Forces    0.09    0.01        8.67    0.11           75.13    4.25
Optimal Steps    0.08    0.01        8.59    0.10           132.87   1.45
MassMotion       0.09    0.02        8.60    0.18           101.43   0.93
7.1.3 Station Elements
Since macroscopic results only tell part of the story when comparing pedestrian movement models,
it is important to investigate how agents move through various geometric elements in both
scenarios. For the following comparisons, results from only three microscopic models (Social
Forces, Optimal Steps, and MassMotion) are shown, as the mesoscopic Graph model does not
produce agent paths or densities. It is also important to note that much like the MassMotion and
Legion comparative work presented in Section 3.2, there is no observed data against which these
results can be compared as only flow counts were collected from each station. Accordingly, these
Level of Service maps can provide a better understanding of how each model’s agents behave, but
not a confirmation of which model is closest to reality.
The simplest element agents can traverse is a straight, featureless corridor, which can be found in
both Osgoode and Queen’s Park stations leading from the main concourse to street level. In Figure
7.5, experienced density maps from each of the three microscopic models are presented for the
Osgoode Station corridor between screenlines A and F. In all cases, this corridor supports
bidirectional flows, leading to some higher densities where the flows interact. However, much of
the increased density in the corridor is due to how the agents spread out and move through the
space – in the Social Forces model, higher densities occur more to one side of the corridor, whereas
the Optimal Steps and MassMotion models show more balanced flows. It is also worth noting that
agents in the MassMotion model spread all the way to the edges of the corridor, while agents in
the other, simpler models do not use all the available space. This is likely caused by two factors:
the larger steps these agents take between moves (a function of each model’s time step), which
leave less of the available space in use, and MassMotion’s use of obstacles and floor edges to
constrain other forces, as opposed to having obstacles directly repel agents.
Figure 7.5: Experienced density maps of Osgoode corridor
Another interesting element present in Queen’s Park station is a T-junction, where agents come
from the trunk and proceed to one of the two legs, or vice-versa, and it is located between
screenlines A, B, and G. As shown in Figure 7.6, the three models each move agents through this
element slightly differently, with the most obvious difference coming from the Optimal Steps
model where agents reach a higher experienced density at the link leading into the T. Here, agents
in this model must pause and wait one or two time steps for their next goal to be updated, leading
to some congestion around this area. The Social Forces and MassMotion models, on the other
hand, allow agents to pass more smoothly across this boundary, resulting in lower densities. As with the simple corridor geometry, the ability of MassMotion agents to use more of the available space (to avoid other agents, etc.) is again evident, as the light blue and green areas, signifying higher agent traffic, extend further towards the top of the T. In fact, the paths made by Social Forces agents in this
element are very tight to the corners of the junction, demonstrating that this movement model
prioritizes reducing the distance travelled by each agent.
Figure 7.6: Experienced density maps of Queen’s Park T-junction
On a larger scale, maximum densities reached in the concourses of both Osgoode and Queen’s
Park stations can be compared, showing differences both in the peak densities reached and the
paths agents take through the spaces. In Osgoode station, shown in Figure 7.7, the Social Forces
model’s tendency to prioritize paths with shorter distances is again confirmed, leading to higher
densities on a select number of straight line paths through the concourse. On the other hand,
Optimal Steps agents spread out more, leading to some more congested areas near fare gates and
links, but otherwise a lower density in the space. Finally, MassMotion agents are funnelled through
a few spots in the concourse; namely at the entrance to the left-hand escalator and stairs and near
one wide fare gate. Agents in the Queen’s Park scenario produce similar patterns in the concourse,
as shown in Figure 7.8, but here the higher density paths produced by the Social Forces model
echo those from MassMotion, whereas Optimal Steps agents are again spread out to the point of
lowering their peak density.
Overall, these maps provide useful insight into the behaviour of agents in the Social Forces,
Optimal Steps, and MassMotion models. One of the main conclusions is that agents in the Social
Forces model prioritize minimizing the distance to their destination, leading to distinct linear paths
through space and tight paths around corners, where the other two models allow agents to spread
apart more. It was also shown that MassMotion agents tend to use more available space than those
simulated in either simple model, leading to somewhat lower, but more uniform, densities in small
elements such as corridors. Comparisons of experienced agent densities in the T-junction also
shows the micro-level impact of the Optimal Steps model’s long time step, leading to some
congestion in locations that would otherwise produce free-flow conditions.
Figure 7.7: Maximum density maps of Osgoode concourse
Figure 7.8: Maximum density maps for Queen’s Park concourse
7.2 Increased Volume Scenarios
Since all base case scenarios produced acceptable, and surprisingly equivalent, model fits, higher
volume and flow scenarios were required to identify the limits where these models begin to deviate
from one another. Unfortunately, since neither set of observational data contained especially high
volumes or dense congestion, new scenarios were created using these station geometries and
observed data sets by retaining the origin-destination ratios while increasing the total inflow
volumes in 10% increments. While this produced usable input data for the models, there was no
corresponding outflow data available, and as such a different ‘ground truth’ had to be used for
comparison purposes. Instead of measuring errors against observations, outflows produced by
MassMotion simulations were used as a trusted base, since the software has successfully been used to simulate other high-volume TTC stations (King, Srikukenthiran, & Shalaby, 2014; Hoy, Morrow, & Shalaby, 2016). Again, to ensure that observed trends were not a product of varying
random seeds, a series of 10 runs were performed for each movement model in each scenario, all
using the same computer setup, and the results were averaged for each screenline. In all cases,
model fits were assessed using the R-squared and SSE metrics, comparing each of the Graph,
Social Forces, and Optimal Steps models to MassMotion, as well as via visual comparison of
second-by-second cumulative flow plots.
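The scaling and comparison procedure described above can be sketched in a few lines. The function names and data layout here are illustrative assumptions, not the thesis’s actual implementation; R-squared is taken as the coefficient of determination between second-by-second cumulative exit counts, with the MassMotion output serving as the reference.

```python
def scale_demand(od_matrix, factor):
    """Scale every origin-destination cell uniformly, so total inflow
    grows while the OD ratios are preserved (e.g. factor=1.6 for 160%)."""
    return [[flow * factor for flow in row] for row in od_matrix]

def average_runs(run_curves):
    """Average cumulative flow curves over repeated runs (e.g. 10 seeds)
    so results are not an artifact of a single random seed."""
    n = len(run_curves)
    return [sum(values) / n for values in zip(*run_curves)]

def fit_metrics(test_curve, reference_curve):
    """SSE and R-squared between two second-by-second cumulative
    exit-count curves; the reference is the trusted MassMotion output."""
    residuals = [t - r for t, r in zip(test_curve, reference_curve)]
    sse = sum(e * e for e in residuals)
    mean_ref = sum(reference_curve) / len(reference_curve)
    ss_total = sum((r - mean_ref) ** 2 for r in reference_curve)
    return 1.0 - sse / ss_total, sse

# Illustrative example: a hypothetical 2x2 OD matrix scaled to 160%,
# and a model curve that under-predicts a steady exit rate by 1%.
base_od = [[100, 50], [30, 20]]
scaled = scale_demand(base_od, 1.6)
reference = [1.2 * t for t in range(1, 3601)]    # steady 1.2 agents/s
test = [1.2 * t * 0.99 for t in range(1, 3601)]  # slight under-prediction
r2, sse = fit_metrics(test, reference)
```

Uniform scaling is what preserves the observed OD shares: every cell grows by the same factor, so relative demand between any pair of entrances and exits is unchanged.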
7.2.1 Osgoode Station Concourse
To establish a baseline for R-squared and SSE values obtained when comparing Graph, Social
Forces, or Optimal Steps model results to those from MassMotion, these values were first obtained
from a base (100% volume) run of the Osgoode scenario. As demonstrated in Section 7.1.1 and
reaffirmed in Table 7.5, each of these three models agrees very well with MassMotion, with R-
squared values nearing 100% at all screenlines and low total SSE values. Accordingly, the inbound
volumes for this scenario were increased in 10% increments, still yielding good fits up to 160% of
base volume, where more noticeable deviations in the R-squared values for the Graph and Optimal Steps models were found. Comparing SSE values for the three models, it is clear that Social Forces
is matching MassMotion much more closely than either of the other models. This is also confirmed
in cumulative volume plots of any exiting screenlines in the model – a plot from screenline A is
shown in Figure 7.9, while a plot from screenline E is shown in Figure 7.10. In both plots, it is
clear that the Graph and Optimal Steps models, which tend to overlap one another, are not
especially sensitive to congestion in the station that is reducing the outflow of agents, while the
Social Forces model is responding to this congestion and reducing agent flow rates appropriately.
Unfortunately, the sizeable increase in SSE in all cases suggests that no simple model is closely
matching the flows modelled by MassMotion in these increased volume scenarios.
Table 7.5: Model R-squared and SSE against MassMotion, Osgoode scenarios

Scenario             Model           Average R² (%)   Total SSE (1000s)   Worst R² (%)   Worst SSE (1000s)
Osgoode 100% (base)  Graph                99.97               34               99.95               26
                     Social Forces        99.98               60               99.96               55
                     Optimal Steps        99.97               18               99.93                9
Osgoode 160%         Graph                99.47            6,351               99.00            6,098
                     Social Forces        99.91            1,369               99.76            1,341
                     Optimal Steps        99.52            5,684               99.10            5,465
Osgoode 170%         Graph                99.14           15,173               97.85           14,785
                     Social Forces        99.87            3,150               99.49            3,123
                     Optimal Steps        99.24           13,071               98.15           12,657
Figure 7.9: Cumulative exit plot at Osgoode screenline A, 160% volume
Figure 7.10: Cumulative exit plot at Osgoode screenline E, 160% volume
Increasing the volume one step further, to 170% of observed values, it becomes clear that the Graph and Optimal Steps models are not picking up on the congestion slowing down flows in the Osgoode
model, especially at screenline E as shown in Figure 7.11. Where the MassMotion model shows a
general trend of decreasing flows, ending with a very low rate of agents exiting the model, the
simpler models overlook these effects. It is worth noting that the Optimal Steps model has a
marginally better fit than the Graph model, as indicated by the R-squared and SSE metrics.
Unfortunately, both are noticeably worse than the Social Forces model, which maintains a high
average R-squared value and a total SSE that is less than a quarter of the other models’ values.
The Social Forces model also better matches the reduced flow rate simulated by MassMotion until
2700 seconds, after which its reduced sensitivity to severe flow obstruction begins to show. While
the poor performance of the Graph model was expected, such similar results from the Optimal
Steps model are surprising – although it is a more advanced model, it seems that the one-second
time step used may be a limiting factor on its performance.
Figure 7.11: Cumulative exit plot at Osgoode screenline E, 170% volume
Finally, comparing run times for each model, which are presented in Table 7.6, the Graph model
is shown to hold a distinct speed advantage over all other movement models. This model
consistently runs at least five times faster than MassMotion and Optimal Steps and four times
faster than Social Forces, which gives it a major advantage for lower- and medium-volume
scenarios where it can produce accurate results. While still slower than Social Forces, the Optimal
Steps model also begins to show a slight speed advantage over MassMotion in the 170% volume
scenario, but it is beaten in terms of both speed and accuracy by Social Forces. As in the Osgoode
base case simulations, the project and simulation setup times for all models are close enough that
any differences may be ignored.
Table 7.6: Set-up and run times for all models in higher-volume Osgoode scenarios

                               Project Setup (s)   Simulation Setup (s)   Model Run (s)
Scenario       Model              µ       σ            µ        σ            µ        σ
Osgoode 160%   MassMotion       0.08    0.01         13.19     0.12        94.33    14.01
               Graph            0.09    0.02         13.37     0.16        15.70     0.45
               Social Forces    0.09    0.02         13.61     0.26        64.92     4.03
               Optimal Steps    0.11    0.03         13.15     0.14        91.82     1.25
Osgoode 170%   MassMotion       0.08    0.01         13.29     0.09       125.56    20.40
               Graph            0.09    0.02         13.34     0.10        15.91     0.42
               Social Forces    0.09    0.02         13.50     0.11        76.42     3.10
               Optimal Steps    0.10    0.02         13.17     0.05        99.75     1.98
7.2.2 Queen’s Park Station Concourse
As with the Osgoode concourse scenarios, initial comparisons of base volumes (100%) in the
Queen’s Park scenario showed excellent fits of each model against MassMotion, with very high
R-squared values and low SSE totals as presented in Table 7.7. When operating at or below 170%
of base volumes, the Graph and Optimal Steps models performed well, continuing to have very
high R-squared values and suitably low SSE totals, accounting for differences in total volumes
simulated. However, the Social Forces model steadily declined in terms of accuracy, mainly due
to an oversensitivity to congestion in this station model. In the 170% volume scenario, some
screenlines were matched well by the Social Forces model, such as C and E shown in Figure 7.12,
but others such as screenline F (Figure 7.13) demonstrated more distinct deviations over the
majority of the simulation run. Unlike the Osgoode scenario, where this simulated reduction in
flow matched MassMotion, here the Graph and Optimal Steps models produced more reasonable
results.
Table 7.7: Model R-squared and SSE against MassMotion, Queen’s Park scenarios

Scenario                  Model           Average R² (%)   Total SSE (1000s)   Worst R² (%)   Worst SSE (1000s)
Queen’s Park 100% (base)  Graph                99.99               53               99.99               41
                          Social Forces        99.98               54               99.97               53
                          Optimal Steps        99.99               21               99.99                9
Queen’s Park 170%         Graph                99.98              309               99.97              172
                          Social Forces        99.75           31,143               99.48           30,100
                          Optimal Steps        99.98              303               99.95              191
Queen’s Park 180%         Graph                94.77          429,887               90.97          234,084
                          Social Forces        97.04          240,607               91.85           70,770
                          Optimal Steps        94.96          401,001               91.45          217,073
Figure 7.12: Cumulative exit plot at Queen’s Park screenlines C and E, 170% volume
Figure 7.13: Cumulative exit plot at Queen’s Park screenline F, 170% volume
Once volumes in Queen’s Park station were increased to 180% of base values, some MassMotion
simulation runs began to demonstrate a complete breakdown of flow around screenline E, while
others predicted a less severe reduction in flow. At screenline B, as shown in Figure 7.14, the
Graph and Optimal Steps models remain insensitive to the reduction in flow rate, and while the
Social Forces model comes closer to the MassMotion results (with an average R-squared of 97%
as opposed to 94.9%), it still does not match the overall trend. Comparing each model’s results at
screenline E further confirms that these models were not behaving adequately in this high-volume
test, as shown in Figure 7.15, since none of the models came close to showing the reduced flow
rate. It is also worth noting that increasing the volume by just 10% caused each model’s worst R-squared value to fall by nearly 9%, while SSE values increased enormously, indicating that none of the simple models is suitable for simulating breakdown conditions in a station or facility.
Figure 7.14: Cumulative exit plot at Queen’s Park screenline B, 180% volume
Figure 7.15: Cumulative exit plot at Queen’s Park screenline E, 180% volume
As with the Osgoode scenarios, project and simulation setup times for each model were very close,
as presented in Table 7.8, while the model run times showed the largest differences between each
model’s performance. At 170% of base volumes, the Graph model was again the fastest of all
models tested, simulating the Queen’s Park scenario more than four times faster than MassMotion
and producing very similar macroscopic results. While the Optimal Steps model also produced
results that aligned well with MassMotion, it was unfortunately much slower than MassMotion
and thus less desirable. At 180% of base volumes, none of the models produced acceptable fits
against MassMotion, so regardless of their faster average run times, they cannot be recommended.
Table 7.8: Set-up and run times for all models in higher-volume Queen’s Park scenarios

                                    Project Setup (s)   Simulation Setup (s)   Model Run (s)
Scenario            Model              µ       σ            µ        σ             µ        σ
Queen’s Park 170%   MassMotion       0.09    0.02          8.18     0.04        200.76     2.36
                    Graph            0.09    0.02          8.50     0.05         44.48     0.26
                    Social Forces    0.09    0.02          8.97     0.09        266.64    14.22
                    Optimal Steps    0.10    0.01          8.26     0.02        286.29     3.39
Queen’s Park 180%   MassMotion*      0.09    0.02          8.21     0.06       1,056.1        -
                    Graph            0.09    0.02          8.53     0.03         47.39     0.56
                    Social Forces    0.10    0.02          9.11     0.05        358.23    33.64
                    Optimal Steps    0.10    0.01          8.36     0.07        319.47     4.88
* Some MassMotion runs at 180% volume led to complete blockages, significantly increasing some run times and producing a bimodal distribution. Accordingly, a standard deviation would be misleading and is not provided.
Chapter 8
8 Conclusions and Future Work
After completing a series of base- and increased-volume comparisons of the Graph, Social Forces,
Optimal Steps, and MassMotion pedestrian movement models, considering both their accuracy
and computational demands, a number of conclusions about when and how to appropriately use
these models can be made. Additionally, lessons learned from the process of setting up these
comparisons, including data collection efforts, model development, and model calibration, are
presented. Suggestions for how future research may take advantage of the work done and lessons
learned in this study are also described.
8.1 Conclusions
Overall, this research has provided substantial insight into the operation and behaviours of the Graph,
Social Forces, Optimal Steps, and MassMotion pedestrian movement models. Based on the
comparison of both Osgoode and Queen’s Park scenario simulations, these models have all
demonstrated their ability to accurately reproduce flows observed in low- to medium-volume
subway stations. While results from Osgoode Station would be expected to show excellent fits, as
the calibration data was drawn from this scenario, simulations of Queen’s Park station also yielded
very good fits from all the models, validating their calibration and operation. Although there were
some times when none of the models aligned well with observed flow curves, these did not
necessarily indicate a bad pedestrian movement model. Instead, these instances demonstrated the
impact of random agent destination assignment and scenario data that was not sufficiently specific.
These low volume tests also demonstrated that the Social Forces model can simulate a scenario in
60-75% of the time required for a MassMotion simulation, while the Graph model operates even
more quickly, completing simulations three to four times faster than the default MassMotion
model.
Additionally, the comparison of agent paths and levels of service in geometric elements of these
scenarios has shown that the three microscopic models all represent pedestrian movement
differently. Even when comparing agent movements in a simple corridor, Social Forces agents
reach higher average densities than their Optimal Steps or MassMotion counterparts, as agents in the latter two models are more willing to spread out into the available space. Comparing agent paths
through station concourses again demonstrated this behaviour, as paths formed by Social Forces
agents were surprisingly linear when compared to those from the other models, suggesting that the
Social Forces model prioritizes reducing each agent’s distance travelled. On the other hand,
Optimal Steps agents had a tendency to spread even further apart than MassMotion agents, in part
due to their long time step leading to a sizeable distance between ‘steps’. While maximum densities
achieved by these agents in all elements were similar, some idiosyncratic congestion tended to
occur around links and fare gates, where Optimal Steps agents were forced to wait for a second or
two to update their next goal.
Higher-volume sensitivity testing of these models demonstrated two main things, namely that the
observed scenarios did not come close to ‘pushing the limits’ of any model, and that all of the
tested models have limits to their reasonable and accurate operation. Increasing Osgoode volumes
to 160% of observed values showed that the Social Forces model is much more sensitive to
congestion and its effect on agent flow rates than either the Graph or Optimal Steps models,
providing a better match with results from the MassMotion movement model. In this scenario, the
Graph and Optimal Steps models continued to simulate free-flow conditions when both
MassMotion and Social Forces models identified flow-reducing congestion in the station. At 170%
of observed volumes, the flow breakdowns simulated by MassMotion and the Social Forces model
were not echoed by the Graph or Optimal Steps models, discouraging their use regardless of their
potential for time savings. Queen’s Park scenarios, on the other hand, suggested that only the
MassMotion model was capable of accurately simulating congestion effects – the Social Forces
model began showing flow reductions much earlier than MassMotion, leading to much higher
errors than the Graph model at 170% volumes. At 180% volumes, MassMotion began showing
complete flow breakdowns in parts of the station, which no other model came close to simulating.
Based on these findings, it seems that for any higher-volume scenarios, MassMotion’s default
pedestrian model is the best choice given its ability to simulate flow reductions and congestion
where other models cannot. However, if a scenario with some congestion and flow reductions is
being tested, then the Social Forces model may be used – although it has its limitations, this model
can still provide some time savings compared to MassMotion and produce reasonable macroscopic
results. Finally, if simulating low- to medium-volume scenarios, where significant congestion or
flow breakdowns are not present and microscopic results are not required, the Graph model is a
good choice. In such cases, the model has been found to run faster than any other model
implemented through the MassMotion SDK, and much faster than the built-in MassMotion
pedestrian model running on a single CPU thread. Given the Graph model’s relative simplicity,
the words of renowned psychologist Daniel Kahneman seem to summarise these findings quite
well: “The important conclusion from this research is that an algorithm that is constructed on the
back of an envelope is often good enough to compete with an optimally weighted formula” (2013,
p. 226).
8.2 Future Work
Although this research project successfully met a number of its initial objectives, there are a few
ways in which this work could be changed or extended to provide even more detailed insight into
the four models examined and others.
First, the calibration of these models was a challenging process, mainly due to the lack of
microscopic pedestrian movement data. Although relying on macroscopic measures and flow plots
yielded usable results, a proper microscopic calibration of paths generated by the Social Forces
and Optimal Steps models would have been preferred. Simply comparing Social Forces parameters
from the literature, such as neighbour force scale and range, shows that these values vary
depending on the scenario against which they are calibrated, so having a specific set of parameters
for these Toronto subway station scenarios could improve the accuracy of the models. To obtain
the data required for such a calibration, video data collection would be necessary, and could be
supplemented with Bluetooth or Wi-Fi pedestrian tracking. By using these technologies together,
an agent-by-agent origin-destination matrix could be assembled in real time while pedestrian
movements and behaviours in specific areas could be monitored. Combined with some manual
counts for validation, this technique has the potential to supply all the necessary data for higher-
fidelity model calibration and scenario building.
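As a concrete illustration of this idea, a minimal sketch of assembling origin-destination counts from matched sensor detections might look as follows. The sensor names, data layout, and matching rule (first and last detection per anonymized device) are all hypothetical assumptions, not the data pipeline used in this work.

```python
from collections import defaultdict

def build_od_matrix(detections):
    """Assemble origin-destination counts from anonymized device
    detections, e.g. Bluetooth/Wi-Fi sightings at fixed sensors.

    Each detection is a (device_id, timestamp, sensor) tuple; the first
    and last sensors seen for a device define its O-D pair. Devices seen
    at only one sensor are discarded, as no trip can be inferred.
    """
    sightings_by_device = defaultdict(list)
    for device_id, timestamp, sensor in detections:
        sightings_by_device[device_id].append((timestamp, sensor))

    od_counts = defaultdict(int)
    for sightings in sightings_by_device.values():
        sightings.sort()                      # chronological order
        origin = sightings[0][1]
        destination = sightings[-1][1]
        if origin != destination:             # skip single-sensor devices
            od_counts[(origin, destination)] += 1
    return dict(od_counts)

# Hypothetical detections from three sensors in a station.
detections = [
    ("aa:01", 10.0, "street_entrance"),
    ("aa:01", 55.0, "platform_stairs"),
    ("bb:02", 12.0, "platform_stairs"),
    ("bb:02", 70.0, "fare_gates"),
    ("bb:02", 95.0, "street_entrance"),
]
od = build_od_matrix(detections)
```

In practice such inferred counts would need the manual counts mentioned above for validation, since detection rates vary by device and sensor placement.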
In addition to using new data collection techniques in the same station scenarios, data should also
be collected from these and other stations under higher volume and density conditions. Not only
would this provide a ‘ground truth’ against which to compare simulated results in sensitivity tests,
higher volume transit station data would help to calibrate the pedestrian models under more
challenging conditions. This data could also provide a more interesting set of element-level model
behaviours to compare, as many differences between models only become apparent in high-density
scenarios.
Another set of challenges, which may be overcome in the future, concerned the integration of externally-coded models with the MassMotion core and Software Development Kit. In its current
state, the MassMotion SDK can only supply single pieces of data at a time to external code, which
limits the models that can be implemented and hinders the performance of others that generally
rely on bulk data. By allowing complete maps and other large-scale pieces of data to be accessible
in their entirety through the SDK, models such as Cellular Automata and fluid flow could be
implemented, and others such as Optimal Steps could run with a reduced computational penalty.
Some idiosyncratic issues caused by MassMotion’s discretized model geometry, such as the need
for Optimal Steps agents to pause their movement on waypoints, could also be resolved through
changes or additions of functions to the SDK’s library, allowing externally-modelled agents to
treat the space as a continuous environment. Finally, the MassMotion core and pedestrian
movement model are both optimized for multi-threaded processing – while all models were limited
to a single thread for fair comparisons in this work, future research should take advantage of a
multi-threaded approach. By configuring externally-coded models to run on more than one
computational thread, speeds could be increased and a better comparison against MassMotion and
other advanced models could be made.
References
Abdulhai, B. (2015, November 2). Topics in artificial intelligence: Genetic algorithms. Toronto,
Ontario, Canada.
Alexandersson, S., & Johansson, E. (2013). Pedestrians in microscopic traffic simulation:
Comparison between software Viswalk and Legion for Aimsun. Chalmers University of
Technology, Department of Civil and Environmental Engineering, Division of
GeoEngineering, Road and Traffic Research Group. Gothenburg: Chalmers
Reproservice. Retrieved from
http://publications.lib.chalmers.se/records/fulltext/184582/184582.pdf
AlGhadi, S. A., & Mahmassani, H. S. (1991). Simulation of crowd behavior and movement:
Fundamental relations and application. Transportation Research Record: Journal of the
Transportation Research Board(1320), 260-268.
Aliari, Y., & Haghani, A. (2012). Bluetooth sensor data and ground truth testing of reported
travel times. Transportation Research Record: Journal of the Transportation Research
Board(2308), 167-172. doi:10.3141/2308-18
Arup. (2015, August 10). MassMotion: The verification and validation of MassMotion for
evacuation modelling. Retrieved from Oasys: http://www.oasys-
software.com/media/2015-08-10_MassMotion_Evacuation_V&V_Issue_01a.pdf
Auberlet, J.-M., Bhaskar, A., Ciuffo, B., Farah, H., Hoogendoorn, R., & Leonhardt, A. (2015).
Data collection techniques. In W. Daamen, C. Buisson, & S. P. Hoogendoorn (Eds.),
Traffic simulation and data: Validation methods and applications (pp. 5-32). Boca
Raton: CRC Press.
Ball, P. (2007, January 19). Crowd researchers make pilgrimage safer.
doi:10.1038/news070115-13
Batty, M. (2001). Editorial: Agent-based pedestrian modeling. Environment and Planning B:
Planning and Design, 28, 321-326. doi:10.1068/b2803ed
Bauer, D. (2011). Comparing pedestrian movement simulation models for a crossing area based
on real world data. In R. D. Peacock, E. D. Kuligowski, & J. D. Averill (Ed.), Pedestrian
and Evacuation Dynamics (pp. 547-556). Springer Science+Business Media.
doi:10.1007/978-1-4419-9725-8_49
Beaulieu, A., & Farooq, B. (2016, July 31). Large scale multi-sensor monitoring of pedestrian
dynamics in public spaces: Preliminary results. TRB 95th Annual Meeting Compendium
of Papers (pp. 1-19). Washington, D.C.: Transportation Research Board.
Bera, A., Galoppo, N., Sharlet, D., Lake, A., & Manocha, D. (2014). AdaPT: Real-time adaptive
pedestrian tracking for crowded scenes. Proceedings of IEEE International Conference
on Robotics and Automation 2014 (pp. 1-8). Hong Kong: IEEE.
doi:10.1109/ICRA.2014.6907095
Berrou, J. L., Beecham, J., Quaglia, P., Kagarlis, M. A., & Gerodimos, A. (2007). Calibration
and validation of the Legion simulation model using empirical data. In N. Waldau, P.
Gattermann, H. Knoflacher, & M. Schreckenberg (Eds.), Pedestrian and Evacuation
Dynamics (pp. 167-181). New York: Springer Berlin Heidelberg. doi:10.1007/978-3-540-
47064-9_15
Blue, V. J., & Adler, J. L. (2001, March). Cellular automata microsimulation for modeling bi-
directional pedestrian walkways. Transportation Research Part B: Methodological,
35(3), 293-312. doi:10.1016/S0191-2615(99)00052-1
Borgers, A., & Timmermans, H. (1986, April). A model of pedestrian route choice and demand
for retail facilities within inner-city shopping areas. Geographical Analysis, 18(2), 115-
128. doi:10.1111/j.1538-4632.1986.tb00086.x
Box, G. E., & Draper, N. R. (1987). Empirical model-building and response surfaces. New
York: John Wiley & Sons.
Bullock, D., Haseman, R., Wasson, J., & Spitler, R. (2010). Automated measurement of wait
times at airport security: Deployment at Indianapolis International Airport, Indiana.
Transportation Research Record: Journal of the Transportation Research Board(2177),
60-68. doi:10.3141/2177-08
Burstedde, C., Klauck, K., Schadschneider, A., & Zittartz, J. (2001, June 15). Simulation of
pedestrian dynamics using a two-dimensional cellular automaton. Physica A: Statistical
Mechanics and its Applications, 295(3-4), 507-525. doi:10.1016/S0378-4371(01)00141-8
Campanella, M. C., Hoogendoorn, S. P., & Daamen, W. (2011). A methodology to calibrate
pedestrian walker models using multiple-objectives. In R. D. Peacock, E. D. Kuligowski,
& J. D. Averill (Ed.), Pedestrian and Evacuation Dynamics (pp. 755-759). Boston:
Springer Science+Business Media. doi:10.1007/978-1-4419-9725-8_69
Casas, J., Ferrer, J. L., Garcia, D., Perarnau, J., & Torday, A. (2010). Traffic simulation with
Aimsun. In J. Barceló (Ed.), Fundamentals of Traffic Simulation (pp. 173-232). New
York: Springer Science+Business Media. doi:10.1007/978-1-4419-6142-6_5
Castle, C. J., Waterson, N. P., Pellissier, E., & Le Bail, S. (2011). A comparison of grid-based
and continuous space pedestrian modelling software: Analysis of two UK train stations.
In R. D. Peacock, E. D. Kuligowski, & J. D. Averill (Ed.), Pedestrian and Evacuation
Dynamics (pp. 433-446). Boston: Springer Science+Business Media. doi:10.1007/978-1-
4419-9725-8_39
Cheu, R., Tan, Y., & Lee, D.-H. (2003). Comparison of PARAMICS and GETRAM/AIMSUN
microscopic traffic simulation tools. 83rd Meeting of the Transportation Research Board.
Cheu, R.-L., Jin, X., Ng, K.-C., Ng, Y.-L., & Srinivasan, D. (1998). Calibration of FRESIM for
Singapore expressway using genetic algorithm. Journal of Transportation Engineering,
124(6), 526-535. doi:10.1061/(ASCE)0733-947X(1998)124:6(526)
Cristiani, E., Piccoli, B., & Tosin, A. (2011). Multiscale modeling of granular flows with
application to crowd dynamics. Multiscale Modeling & Simulation, 9(1), 155-182.
doi:10.1137/100797515
Davidich, M., & Köster, G. (2012). Towards automatic and robust adjustment of human
behavioural parameters in a pedestrian stream model to measured data. Safety Science,
50, 1253-1260. doi:10.1016/j.ssci.2011.12.024
Dietrich, F., & Köster, G. (2014). Gradient navigation model for pedestrian dynamics. Physical
Review E, 89(6), 1-8. doi:10.1103/PhysRevE.89.062801
Dijkstra, J., Jessurun, J., & Timmermans, H. (2001). A multi-agent cellular automata model of
pedestrian movement. In M. Schreckenberg, & S. Sharma (Eds.), Pedestrian and
evacuation dynamics (pp. 173-181). Berlin: Springer-Verlag.
Duives, D. C., Daamen, W., & Hoogendoorn, S. P. (2013, December). State-of-the-art crowd
motion simulation models. Transportation Research Part C: Emerging Technologies, 37,
193-209. doi:10.1016/j.trc.2013.02.005
Fellendorf, M., & Vortisch, P. (2010). Microscopic Traffic Simulator VISSIM. In J. Barceló
(Ed.), Fundamentals of Traffic Simulation (Vol. 145, pp. 63-93). New York: Springer.
doi:10.1007/978-1-4419-6142-6_2
Forschungszentrum Jülich. (2017, April 26). Database. Retrieved from Institute for Advanced
Simulation (IAS): Jülich Supercomputing Centre (JSC): http://www.fz-
juelich.de/ias/jsc/EN/Research/ModellingSimulation/CivilSecurityTraffic/PedestrianDyn
amics/Activities/database/databaseNode.html
Friesen, M. R., & McLeod, R. D. (2015, September). Bluetooth in intelligent transportation
systems: A survey. International Journal of Intelligent Transportation Systems Research,
13(3), 143-153. doi:10.1007/s13177-014-0092-1
Fruin, J. J. (1971). Pedestrian planning and design. Mobile, AL: Elevator World.
Haghani, A., Hamedi, M., Sadabadi, K. F., Young, S., & Tarnoff, P. (2009). Data collection of
freeway travel time with Bluetooth sensors. Transportation Research Record: Journal of
the Transportation Research Board(2160), 60-68. doi:10.3141/2160-07
Helbing, D. (1992). A fluid dynamic model for the movement of pedestrians. Complex Systems,
6, 391-415.
Helbing, D., & Johansson, A. (2009). Pedestrian, crowd, and evacuation dynamics. In R. A.
Meyers (Ed.), Encyclopedia of Complexity and Systems Science (pp. 6476-6495). New
York: Springer-Verlag.
Helbing, D., & Molnár, P. (1995, May). Social force model for pedestrian dynamics. Physical
Review E, 51(5), 4282-4286. doi:10.1103/PhysRevE.51.4282
Helbing, D., Farkas, I. J., Molnár, P., & Vicsek, T. (2002). Simulation of pedestrian crowds in
normal and evacuation situations. In M. Schreckenberg, & S. Sharma (Ed.), Pedestrian
and Evacuation Dynamics (pp. 21-58). Springer-Verlag Berlin Heidelberg.
Hoogendoorn, S. P., & Daamen, W. (2007). Microscopic calibration and validation of pedestrian
models: Cross-comparison of models using experimental data. Traffic and Granular
Flow '05 (pp. 329-340). Berlin: Springer Berlin Heidelberg. doi:10.1007/978-3-540-
47641-2_29
Hoy, G., Morrow, E., & Shalaby, A. (2016). Use of agent-based crowd simulation to investigate
the performance of large-scale intermodal facilities: Case study of Union Station in
Toronto, Ontario, Canada. Transportation Research Record: Journal of the
Transportation Research Board(2540), 20-29. doi:10.3141/2540-03
Ji, X., Zhang, J., Hu, Y., & Ran, B. (2016, May 15). Pedestrian movement analysis in transfer
station corridor: Velocity-based and acceleration-based. Physica A: Statistical Mechanics
and its Applications, 450, 416-434. doi:10.1016/j.physa.2015.12.139
Johansson, A., Helbing, D., & Shukla, P. K. (2007, December). Specification of the social force
pedestrian model by evolutionary adjustment to video tracking data. Advances in
Complex Systems, 10(supp02), 271-288. doi:10.1142/S0219525907001355
Kahneman, D. (2013). Thinking, fast and slow. Anchor Canada.
Karamouzas, I., & Overmars, M. (2012, March). Simulating and evaluating the local behavior of
small pedestrian groups. IEEE Transactions on Visualization and Computer Graphics,
18(3), 394-406. doi:10.1109/TVCG.2011.133
King, D., Srikukenthiran, S., & Shalaby, A. (2014). Using simulation to analyze crowd
congestion and mitigation at Canadian subway interchanges: Case of Bloor-Yonge
Station, Toronto, Ontario. Transportation Research Record(2417), 27-36.
doi:10.3141/2417-04
Lämmel, G., & Flötteröd, G. (2015). A CA model for bidirectional pedestrian streams. Procedia
Computer Science, 52, 950-955. doi:10.1016/j.procs.2015.05.171
Legion Limited. (n.d.). Dusseldorf Arena. Retrieved from Legion: Science in Motion:
http://www.legion.com/case-studies/dusseldorf-arena
Leung, Y. (2017, June 27). Counter +. Retrieved from iTunes Preview:
https://itunes.apple.com/us/app/counter/id478557426?mt=8
Løvås, G. G. (1994, December). Modeling and simulation of pedestrian traffic flow.
Transportation Research Part B: Methodological, 28B(6), 429-443. doi:10.1016/0191-
2615(94)90013-2
Lubaś, R., Wąs, J., & Porzycki, J. (2016, June). Cellular Automata as the basis of effective and
realistic agent-based models of crowd behaviour. The Journal of Supercomputing, 72(6),
2170-2196. doi:10.1007/s11227-016-1718-7
Ma, T., & Abdulhai, B. (2002). Genetic algorithm-based optimization approach and generic tool
for calibrating traffic microscopic simulation parameters. Transportation Research
Record(1800), 6-15. doi:10.3141/1800-02
Maciejewski, M. (2010). A comparison of microscopic traffic flow simulation systems for an
urban area. Transport Problems (Problemy Transportu), 5(4), 27-38.
National Cooperative Highway Research Program. (2014). NCHRP report 797: Guidebook on
pedestrian and bicycle volume data collection. Washington, D.C.: Transportation
Research Board of the National Academies. doi:10.17226/22223
Newcombe, J. (2016, December 24). The Genetic Algorithm Framework for .Net. Retrieved May
30, 2017, from The Genetic Algorithm Framework for .Net:
https://gaframework.org/wiki/index.php/The_Genetic_Algorithm_Framework_for_.Net
Newcombe, J. (2017, May 2). Genetic operators. Retrieved May 30, 2017, from The Genetic
Algorithm Framework for .Net:
https://gaframework.org/wiki/index.php/Genetic_Operators
Oasys Limited. (n.d.). Stadium fire evacuation planning. Retrieved from Oasys Software:
http://www.oasys-software.com/casestudies/casestudy/stadium_fire_evacuation_planning
Oasys Ltd. (2017). Project Workflow. MassMotion Help.
Oasys Ltd. (2017). Stair. MassMotion Help.
Papadimitriou, E., Yannis, G., & Golias, J. (2009). A critical assessment of pedestrian behaviour
models. Transportation Research Part F: Traffic Psychology and Behaviour, 12(3), 242-
255. doi:10.1016/j.trf.2008.12.004
Poucin, G., Farooq, B., & Patterson, Z. (2016). Pedestrian activity pattern mining in WiFi-
network connection data. TRB 95th Annual Meeting Compendium of Papers (pp. 1-20).
Washington, D.C.: Transportation Research Board.
PTV Group. (2013). Final assessment of PTV Viswalk. Retrieved from PTV Group: http://vision-
traffic.ptvgroup.com/fileadmin/files_ptvvision/Downloads/1_Products/1_VISION_SUIT
E/3_PTV_Viswalk/HB_PTV_Viswalk_en.pdf
Remias, S. M., Hainen, A. M., & Bullock, D. M. (2013). Leveraging probe data to assess
security checkpoint wait times. Transportation Research Record: Journal of the
Transportation Research Board(2325), 63-75. doi:10.3141/2325-07
Rogsch, C., & Klingsch, W. (2011). To see behind the curtain - A methodical approach to
identify calculation methods of closed-source evacuation software tools. In R. D.
Peacock, E. D. Kuligowski, & J. D. Averill (Ed.), Pedestrian and Evacuation Dynamics
(pp. 567-576). Boston: Springer Science+Business Media. doi:10.1007/978-1-4419-
9725-8_51
Ronald, N., Sterling, L., & Kirley, M. (2007). An agent-based approach to modelling pedestrian
behaviour. International Journal of Simulation, 8(1), 25-38. Retrieved from
http://ijssst.info/Vol-08/No-1/Paper-4.pdf
Rudloff, C., Matyus, T., Seer, S., & Bauer, D. (2011). Can walking behavior be predicted?
Analysis of calibration and fit of pedestrian models. Transportation Research Record:
Journal of the Transportation Research Board(2264), 101-109. doi:10.3141/2264-12
Schadschneider, A., Chowdhury, D., & Nishinari, K. (2011). Pedestrian dynamics. In A.
Schadschneider, D. Chowdhury, & K. Nishinari, Stochastic Transport in Complex
Systems (pp. 407-460). Amsterdam: Elsevier. doi:10.1016/B978-0-444-52853-7.00011-7
Schadschneider, A., Eilhardt, C., Nowak, S., & Will, R. (2011). Towards a calibration of the
floor field Cellular Automaton. In R. D. Peacock, E. D. Kuligowski, & J. D. Averill
(Ed.), Pedestrian and Evacuation Dynamics (pp. 557-566). Boston: Springer
Science+Business Media. doi:10.1007/978-1-4419-9725-8_50
Seer, S., Rudloff, C., Matyus, T., & Brändle, N. (2014). Validating social force based models
with comprehensive real world motion data. Transportation Research Procedia, 2, 724-
732. doi:10.1016/j.trpro.2014.09.080
Seitz, M. J., & Köster, G. (2012). Natural discretization of pedestrian movement in continuous
space. Physical Review E, 86(4), 1-8. doi:10.1103/PhysRevE.86.046108
Seitz, M. J., Dietrich, F., Köster, G., & Bungartz, H.-J. (2016). The superposition principle: A
conceptual perspective on pedestrian stream simulations. Collective Dynamics, 1(A2), 1-
19. doi:10.17815/CD.2016.2
Srikukenthiran, S., & Shalaby, A. (2017). Enabling large-scale transit microsimulation for
disruption response support using the Nexus platform: Proof-of-concept case study of the
Greater Toronto Area transit network. Public Transport, 9, 411-435. doi:10.1007/s12469-
017-0158-y
Srikukenthiran, S., Fisher, D., Shalaby, A., & King, D. (2013). Pedestrian route choice of vertical
facilities in subway stations. Transportation Research Record: Journal of the
Transportation Research Board(2351), 115-123. doi:10.3141/2351-13
Steffen, B., & Seyfried, A. (2009). Modelling of pedestrian movement around 90° and 180° bends.
Proceedings of the Advanced Research Workshop "Fire Protection and Life Safety in
Buildings and Transportation Systems", (pp. 243-253). Santander. Retrieved from
https://www.researchgate.net/profile/Bernhard_Steffen2/publication/269153061_Modelli
ng_of_Pedestrian_Movement_Around_90_and_180_Bends/links/5703b0fb08aeade57a25
ab89.pdf
Toronto Transit Commission. (2015, December 31). TTC ridership - Subway/Scarborough RT
station usage. Retrieved June 15, 2016, from Open Data Toronto:
https://www1.toronto.ca/City%20Of%20Toronto/Information%20&%20Technology/Ope
n%20Data/Data%20Sets/Assets/Files/2015%20Subway%20Platform%20Usage%20Open
%20Data%20Toronto.xlsx
Toronto Transit Commission. (2017). Osgoode Station: Station description. Retrieved from
TTC: http://www.ttc.ca/Subway/Stations/Osgoode/station.jsp#StationDescription_
Toronto Transit Commission. (2017). Queen's Park Station: Station description. Retrieved from
TTC: http://www.ttc.ca/Subway/Stations/Queens_Park/station.jsp#StationDescription_
Toronto Transit Commission. (2017, May). TTC Subway Map. Retrieved August 13, 2017, from
TTC: https://ttc.ca/PDF/Maps/Subway_Map.pdf
Viswanathan, V., Lee, C., Lees, M. H., Cheong, S., & Sloot, P. M. (2014, February).
Quantitative comparison between crowd models for evacuation planning and evaluation.
The European Physical Journal B, 87(2), 1-11. doi:10.1140/epjb/e2014-40699-x
von Sivers, I., & Köster, G. (2013). Realistic stride length adaptation in the Optimal Steps model.
In M. Chraibi, M. Boltes, A. Schadschneider, & A. Seyfried (Ed.), Traffic and Granular
Flow ‘13 (pp. 171-178). Springer, Cham. doi:10.1007/978-3-319-10629-8_20
Williamson, J. R., & Williamson, J. (2014). Guidelines for pedestrian tracking video. Retrieved
October 18, 2016, from Pedestrian Tracking 1.2 documentation:
http://juliericowilliamson.com/PedestrianTracking/videoguideline.html#vantage-point
Wolinski, D., Guy, S. J., Oliver, A.-H., Lin, M., Manocha, D., & Pettré, J. (2014). Parameter
estimation and comparative evaluation of crowd simulations. Computer Graphics Forum,
33(2), 303-312. doi:10.1111/cgf.12328
Xi, H., Son, Y.-J., & Lee, S. (2010). An integrated pedestrian behavior model based on extended
decision field theory and social force model. Proceedings of the 2010 Winter Simulation
Conference (pp. 824-836). Baltimore: IEEE. doi:10.1109/WSC.2010.5679108
Ying Wen Technologies. (2015, September 16). Advanced tally counter. Retrieved from Google
Play: https://play.google.com/store/apps/details?id=com.yingwen.counter&hl=en
Zheng, X., Zhong, T., & Liu, M. (2009). Modeling crowd evacuation of a building based on
seven methodological approaches. Building and Environment, 44(3), 437-445.
doi:10.1016/j.buildenv.2008.04.002
Appendix A – Osgoode Station Calibration Data
Values in the table below represent 60-second flows crossing screenlines as observed in Osgoode
Station on August 25th, 2016. Columns without colour are 60-second sums of second-by-second
data used as input to the Osgoode simulation model, while columns highlighted in green were
used to calibrate each pedestrian movement model.
Time (s)   Screenline A   Screenline B   Screenline C   Screenline D   Screenline E
           In    Out      In    Out      In    Out      In    Out      In    Out
900 0 0 0 0 0 0 0 0 0 0
960 12 0 7 1 6 1 4 5 23 5
1020 15 1 12 3 6 6 1 19 26 11
1080 21 19 15 13 10 6 3 14 24 22
1140 11 14 13 11 7 8 8 7 40 5
1200 18 8 14 0 7 5 6 5 25 7
1260 9 5 9 4 10 3 4 9 34 5
1320 12 7 12 5 9 7 5 5 25 3
1380 16 2 12 2 6 2 12 1 19 1
1440 20 2 9 0 15 0 6 0 21 1
1500 14 0 15 2 15 0 10 1 40 0
1560 28 3 16 1 12 1 10 4 28 4
1620 26 2 17 1 17 1 7 12 41 10
1680 26 4 7 5 18 16 9 2 46 21
1740 22 12 15 4 8 3 8 0 37 0
1800 25 3 8 0 15 1 6 0 44 0
1860 13 1 17 0 8 0 9 0 37 0
1920 23 0 9 2 16 1 11 8 31 13
1980 18 9 6 6 20 22 4 22 39 24
2040 17 17 7 13 13 10 9 6 23 8
2100 24 7 16 2 4 1 2 0 46 0
2160 22 1 5 3 11 1 7 5 38 11
2220 22 2 10 4 8 9 10 0 25 0
2280 26 2 11 0 7 0 6 0 27 0
2340 17 0 5 0 8 2 6 4 38 16
2400 22 15 12 2 11 9 8 22 30 11
2460 13 11 10 14 10 16 11 11 26 23
2520 9 15 7 7 10 7 6 2 33 7
2580 34 1 11 10 12 1 6 5 23 12
2640 31 5 12 3 6 8 6 10 34 21
2700 26 21 13 4 9 16 5 10 35 9
2760 15 8 11 6 20 5 10 0 31 2
2820 34 1 8 1 6 1 8 6 39 33
2880 14 10 4 9 8 7 6 1 39 34
2940 18 15 14 1 11 4 7 6 24 14
3000 11 45 4 5 12 5 9 0 30 1
Total 684 268 373 144 371 185 245 202 1121 334
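As described above, the second-by-second screenline counts were summed into 60-second bins before being used as simulation inputs and calibration targets. The sketch below (in Python, purely for illustration; the function name and the convention that each row is labelled by the start of its 60 s interval are assumptions, not taken from the thesis code) shows the aggregation step:

```python
def sixty_second_flows(counts, start, end):
    """Sum per-second counts into consecutive 60 s bins from start to end.

    counts: dict mapping elapsed second -> pedestrians crossing in that second.
    Returns a dict mapping the start of each 60 s bin -> total crossings.
    """
    flows = {}
    for t in range(start, end, 60):
        # Missing seconds are treated as zero crossings
        flows[t] = sum(counts.get(s, 0) for s in range(t, t + 60))
    return flows

# Example: 2 pedestrians per second between t = 960 s and t = 1020 s
counts = {s: 2 for s in range(960, 1020)}
print(sixty_second_flows(counts, 900, 1020))  # {900: 0, 960: 120}
```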
110
Appendix B – Pedestrian Model Code
// Pedestrian Model Code - Integrates pedestrian movement models (graph, Social // Forces, and Optimal Steps) with the MassMotion SDK // Copyright (C) 2017 Oasys Ltd. and University of Toronto // // This program is free software: you can redistribute it and/or modify // it under the terms of the GNU General Public License as published by // the Free Software Foundation, either version 3 of the License, or // (at your option) any later version. // // This program is distributed in the hope that it will be useful, // but WITHOUT ANY WARRANTY; without even the implied warranty of // MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the // GNU General Public License for more details. // // You should have received a copy of the GNU General Public License // along with this program. If not, see <http://www.gnu.org/licenses/>. // The following code uses functions from the Math.NET Numerics library, which is // licenced as follows: // // Copyright (c) 2002-2015 Math.NET // // Permission is hereby granted, free of charge, to any person obtaining a copy of // this software and associated documentation files (the "Software"), to deal in the // Software without restriction, including without limitation the rights to use, // copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the // Software, and to permit persons to whom the Software is furnished to do so, // subject to the following conditions: // // The above copyright notice and this permission notice shall be included in all // copies or substantial portions of the Software. // // THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR // IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS // FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
// IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, // DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, // ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER // DEALINGS IN THE SOFTWARE. using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Diagnostics; using MathNet.Numerics; using MathNet.Numerics.Distributions; using MathNet.Numerics.Random; using Oasys.MassMotion;
111
namespace PedModels { public class PedModels { // Parameters shared between model types public class ModelParams { public double frameLength; public int currentFrame; public double agentRadius; public double meanSpeed; public double stdevSpeed; public double maxSpeed; public double minSpeed; public double ascendStairFactor; public double descendStairFactor; public double escalatorSpeed; public MersenneTwister randomGen; public ModelParams() { agentRadius = 0.25; meanSpeed = 1.35; stdevSpeed = 0.25; maxSpeed = 2.05; minSpeed = 0.65; ascendStairFactor = 0.402; descendStairFactor = 0.536; escalatorSpeed = 0.8; randomGen = new MersenneTwister(4); } } public ModelParams modelParameters; // Parameters specific to the Simple Network model (also inherits from ModelParams) public class GraphParams : ModelParams { public Dictionary<Agent, AgentDataN> agentDict; public double speedAdjustment; public GraphParams() { agentDict = new Dictionary<Agent, AgentDataN>(); speedAdjustment = 1.398; } }
112
// Performs initialization actions for Graph Model public void InitGraph(Project sdkProject, Simulation sdkSimulation) { modelParameters = new GraphParams(); GraphParams gParams = (GraphParams)modelParameters; gParams.frameLength = sdkSimulation.GetFrameLength(); gParams.currentFrame = sdkSimulation.GetCurrentFrame(); } // Performs one frame of agent movement for Graph Model public void AdvanceGraph(Project sdkProject, Simulation sdkSimulation) { GraphParams gParams = (GraphParams)modelParameters; // Assume control of new agents and add them to the dictionary foreach (Agent newAgent in sdkSimulation.GetCreatedAgents()) { newAgent.AssumeControl(); newAgent.DisallowAdjustment(); double desiredSpeed = Normal.Sample(gParams.randomGen, gParams.meanSpeed, gParams.stdevSpeed); while (desiredSpeed > gParams.maxSpeed || desiredSpeed < gParams.minSpeed) { desiredSpeed = Normal.Sample(gParams.randomGen, gParams.meanSpeed, gParams.stdevSpeed); } AgentDataG dictValue = new AgentDataG(false, desiredSpeed, null, 0.0); gParams.agentDict.Add(newAgent, dictValue); } // Remove any deleted agents from the dictionary foreach (Agent delAgent in sdkSimulation.GetDeletedAgents()) { gParams.agentDict.Remove(delAgent); } // Check agents in the dictionary foreach (Agent thisAgent in gParams.agentDict.Keys.ToList()) { if (thisAgent.Exists()) { AgentDataG dictData = new AgentDataG(false, 0.0, null, 0.0); gParams.agentDict.TryGetValue(thisAgent, out dictData); // If the agent does not have a waypoint in the dictionary, see if they now have one in MassMotion if (dictData.HasWaypoint == false) { // If the agent has a MassMotion waypoint, update their dictionary data with the waypoint location and arrival time if (thisAgent.HasTargetWaypoint()) {
113
SimObject currentObject = sdkProject.GetObject (thisAgent.GetCurrentFloorId()); Vec3d waypointLocation = thisAgent.GetTargetWaypointGoalLine ().GetMidpoint(); TupleVec waypointVec = TVConvert(waypointLocation) - TVConvert(thisAgent.GetPosition()); double waypointDist = waypointVec.Magnitude(); TupleVec waypointDirection = waypointVec.Normalized(); double agentSpeed = dictData.AgentSpeed * snParams.speedAdjustment; // If agent is on escalator or stair, adjust their desired speed if (currentObject is Escalator) { agentSpeed = gParams.escalatorSpeed * Math.Sqrt(1 - Math.Pow(waypointDirection.Yvalue, 2)); } else if (currentObject is Stair) { if (waypointDirection.Yvalue > 0) { agentSpeed = agentSpeed * gParams.ascendStairFactor; } else { agentSpeed = agentSpeed * gParams.descendStairFactor; } } // Calculate the agent's move time and update their dictionary data double moveTime = gParams.currentFrame + ((1 / gParams.frameLength) * (waypointDist / agentSpeed)); AgentDataG newDictData = new AgentDataG(true, dictData.AgentSpeed, waypointLocation, moveTime); gParams.agentDict[thisAgent] = newDictData; } } // If the agent has a target waypoint, check to see if their arrival time has passed else { // If the agent's arrival time has passed, move them to the waypoint and reset their dictionary data if (gParams.currentFrame >= dictData.ArrivalTime) { thisAgent.MoveTo(dictData.WaypointLocation); AgentDataG newDictData = new AgentDataG(false, dictData.AgentSpeed, null, 0.0); gParams.agentDict[thisAgent] = newDictData; } } }
114
} // Step the simulation forward by one frame sdkSimulation.Step(); gParams.currentFrame++; } // Built on Helbing and Johansson (2009) SF Model, using elliptical II specification with Seer (2012) parameters // Parameters specific to the Social Forces model (also inherits from ModelParams) public class SocialForcesParams : ModelParams { public double maxAcceleration; public double maxNeighbourRange; public double maxObstacleRange; public double nForceStrength; public double nForceRange; public double oForceStrength; public double oForceRange; public double relaxation; public double lambda; public double speedAdjustment; public Dictionary<GlobalId, AgentDataSF> agentPositions; public SocialForcesParams() { maxAcceleration = 3.0; maxNeighbourRange = 3.0; maxObstacleRange = 1.0; nForceStrength = 0.1845; nForceRange = 5.9334; oForceStrength = 1.9534; oForceRange = 0.1366; relaxation = 0.75; lambda = 0.1; speedAdjustment = 1.098; agentPositions = new Dictionary<GlobalId, AgentDataSF>(); } } // Performs initialization actions for Social Forces public void InitSocialForces(Project sdkProject, Simulation sdkSimulation) { modelParameters = new SocialForcesParams(); SocialForcesParams sfParams = (SocialForcesParams)modelParameters; sfParams.frameLength = sdkSimulation.GetFrameLength(); sfParams.currentFrame = sdkSimulation.GetCurrentFrame(); } // Performs one frame of agent movement for Social Forces
115
public void AdvanceSocialForces(Project sdkProject, Simulation sdkSimulation) { SocialForcesParams sfParams = (SocialForcesParams)modelParameters; // Assume control of new agents and add them to the dictionary foreach (Agent newAgent in sdkSimulation.GetCreatedAgents()) { newAgent.AssumeControl(); newAgent.DisallowAdjustment(); Vec3d direction = newAgent.GetVelocity().GetNormalized(); double desiredSpeed = Normal.Sample(sfParams.randomGen, sfParams.meanSpeed, sfParams.stdevSpeed); while (desiredSpeed > sfParams.maxSpeed || desiredSpeed < sfParams.minSpeed) { desiredSpeed = Normal.Sample(sfParams.randomGen, sfParams.meanSpeed, sfParams.stdevSpeed); } AgentDataSF dictData = new AgentDataSF(newAgent.GetPosition(), (desiredSpeed * direction), desiredSpeed); sfParams.agentPositions.Add(newAgent.GetId(), dictData); } // Remove any deleted agents from the dictionary foreach (Agent delAgent in sdkSimulation.GetDeletedAgents()) { sfParams.agentPositions.Remove(delAgent.GetId()); } // Look up each agent and move them one by one foreach (Agent thisAgent in sdkSimulation.GetAllAgents()) { // Locate the agent and determine their current velocity GlobalId agentId = thisAgent.GetId(); string agentName = thisAgent.GetName(); AgentDataSF dictData = new AgentDataSF(null, null, 0.0); sfParams.agentPositions.TryGetValue(agentId, out dictData); TupleVec currentPosition = TVConvert(thisAgent.GetPosition()); TupleVec currentVelocity = TVConvert(thisAgent.GetVelocity()); // Check if the agent currently has a target waypoint if (thisAgent.HasTargetWaypoint()) { // Check if the agent has a path to their target waypoint if (thisAgent.HasPathToTargetWaypoint()) { TupleVec goalDirection = TVConvert(thisAgent. GetDirectionToTargetWaypoint()); double desiredSpeed = dictData.AgentSpeed * sfParams.speedAdjustment; SimObject currentObject = sdkProject.GetObject( thisAgent.GetCurrentFloorId());
116
bool escalatorFlag = false; // If the agent's current object is an escalator or stair, adjust their desired speed if (currentObject is Escalator) { desiredSpeed = sfParams.escalatorSpeed * Math.Sqrt(1 - Math.Pow(goalDirection.Yvalue, 2)); escalatorFlag = true; } else if (currentObject is Stair) { if (goalDirection.Yvalue > 0) { desiredSpeed = desiredSpeed * sfParams.ascendStairFactor; } else { desiredSpeed = desiredSpeed * sfParams.descendStairFactor; } } // Calculate the base acceleration TupleVec baseAcceleration = ((desiredSpeed * goalDirection) - currentVelocity) / sfParams.relaxation; // Get neighbour forces TupleVec netNeighbourForce = new TupleVec(0, 0, 0); // Calculate the neighbour force applied to this agent using elliptical specification II foreach (Agent thisNeighbour in sdkSimulation.GetAgentsNearPoint( currentPosition.Revert(), sfParams.maxNeighbourRange, 1.0)) { GlobalId neighbourId = thisNeighbour.GetId(); string neighbourName = thisNeighbour.GetName(); if (neighbourId != agentId) { // Find the neighbour's location and velocity AgentDataSF neighbourData = new AgentDataSF(null, null, 0.0); sfParams.agentPositions.TryGetValue(neighbourId, out neighbourData); TupleVec neighbourLocation = TVConvert( neighbourData.AgentLocation); TupleVec neighbourVelocity = TVConvert( neighbourData.AgentVelocity); // Calculate the distance between agents, difference in velocities, and other terms TupleVec distBetweenAgents = currentPosition - neighbourLocation; TupleVec neighbourVelocityDelta = (neighbourVelocity - currentVelocity) * sfParams.frameLength;
117
TupleVec neighbourDelta = distBetweenAgents - neighbourVelocityDelta; double neighbourSpeedSqr = Math.Pow(neighbourVelocityDelta .Magnitude(), 2); double pedEllipse = Math.Sqrt(Math.Pow((distBetweenAgents. Magnitude() + neighbourDelta.Magnitude()), 2) - neighbourSpeedSqr); // Calculate the impact of the agent's visual field on the neighbour force double cosPhi = currentVelocity.Normalized().Dot((-1) * distBetweenAgents.Normalized()); double visionFactor = sfParams.lambda + ((1 - sfParams. lambda) * ((1 + cosPhi) / 2)); // Calculate the neighbour force components double nForceTerm1 = sfParams.nForceStrength * Math.Exp( pedEllipse / (-2 * sfParams.nForceRange)); double nForceTerm2 = (distBetweenAgents.Magnitude() + neighbourDelta.Magnitude()) / pedEllipse; TupleVec nForceTerm3 = (0.5) * (distBetweenAgents. Normalized() + neighbourDelta.Normalized()); TupleVec nForce = visionFactor * nForceTerm1 * nForceTerm2 * nForceTerm3; // Add this neighbour force to the net neighbour force netNeighbourForce = netNeighbourForce + nForce; } } // Get available space around the agent TupleVec netObsForce = new TupleVec(0, 0, 0); AvailableSpace agentSpace = new AvailableSpace(); agentSpace = thisAgent.GetAvailableSpace( sfParams.maxObstacleRange); // Calculate force from each nearby obstacle using elliptical specification II for (int segment = 0; segment < 32; segment++) { double obsDist = agentSpace.GetSegmentRadius(segment); // If obstacle is present for this segment, calculate force if (obsDist < sfParams.maxObstacleRange) { TupleVec obstacleLocation = obsDist * TVConvert( agentSpace.GetSegmentDirection(segment)); // Calculate distance between agent and obstacle, and other factors TupleVec distBetweenObs = currentPosition - obstacleLocation; TupleVec obsVelocityDelta = ((-1) * currentVelocity) * sfParams.frameLength;
118
TupleVec obsDelta = distBetweenObs - obsVelocityDelta; double obsSpeedSqr = Math.Pow(obsVelocityDelta. Magnitude(), 2); double obsEllipse = Math.Sqrt(Math.Pow((distBetweenObs. Magnitude() + obsDelta.Magnitude()), 2) - obsSpeedSqr); // Calculate terms of obstacle force double oForceTerm1 = sfParams.oForceStrength * Math.Exp( obsEllipse / (-2 * sfParams.oForceRange)); double oForceTerm2 = (distBetweenObs.Magnitude() + obsDelta.Magnitude()) / obsEllipse; TupleVec oForceTerm3 = (0.5) * (distBetweenObs. Normalized() + obsDelta.Normalized()); TupleVec oForce = oForceTerm1 * oForceTerm2 * oForceTerm3; // Add the current force to the net obstacle force netObsForce = netObsForce + oForce; } } // Sum all agent forces and calculate the agent's next velocity TupleVec netAgentForce = baseAcceleration + netNeighbourForce + netObsForce; double magnitude = netAgentForce.Magnitude(); TupleVec nextVelocity = currentVelocity + (netAgentForce * sfParams.frameLength); // If the agent's next velocity is higher than their maximum acceptable velocity, scale it down if (nextVelocity.Magnitude() > (1.3 * desiredSpeed)) { if (escalatorFlag) { nextVelocity = nextVelocity.Normalized() * desiredSpeed; escalatorFlag = false; } else { nextVelocity = nextVelocity.Normalized() * desiredSpeed * 1.3; } } Vec3d nextVelocityV = nextVelocity.Revert(); // Calculate the agent's next position, and move them there with their next velocity TupleVec nextPosition = currentPosition + (nextVelocity * sfParams.frameLength); Vec3d nextPositionV = nextPosition.Revert(); thisAgent.MoveTo(nextPositionV, nextVelocityV); sfParams.agentPositions[agentId] = new AgentDataSF(nextPositionV, nextVelocityV, dictData.AgentSpeed);
119
} // If the agent does not have a path to their target waypoint, find the nearest open space and move them there else { Vec3d mmPosition = thisAgent.GetOpenPointClosestTo( currentPosition.Revert(), 1.0); thisAgent.MoveTo(mmPosition, dictData.AgentVelocity); sfParams.agentPositions[thisAgent.GetId()] = new AgentDataSF( mmPosition, dictData.AgentVelocity, dictData.AgentSpeed); } } } // Step the simulation forward by one frame sdkSimulation.Step(); } // Built on Seitz and Koster (2012) Optimal Steps Model, with modifications // Parameters specific to the Optimal Steps model (also inherits from ModelParams) public class OptimalStepsParams : ModelParams { public int q; public double mu_p; public double nu_p; public double a_p; public double b_p; public double g_p; public double h_p; public double mu_o; public double nu_o; public double a_o; public double b_o; public double h_o; public double beta_0; public double beta_1; public double speedAdjustment; public Dictionary<GlobalId, AgentDataOS> agentPositions; public OptimalStepsParams() { q = 18; mu_p = 1000.0; nu_p = 0.4; a_p = 1.0; b_p = 0.2; g_p = 0.5; h_p = 1.0;
120
mu_o = 10000.0; nu_o = 0.2; a_o = 3.0; b_o = 2.0; h_o = 6.0; beta_0 = 0.462; beta_1 = 0.235; speedAdjustment = 1.343; agentPositions = new Dictionary<GlobalId, AgentDataOS>(); } } // Performs initialization actions for Optimal Steps public void InitOptimalSteps(Project sdkProject, Simulation sdkSimulation) { modelParameters = new OptimalStepsParams(); OptimalStepsParams osParams = (OptimalStepsParams)modelParameters; osParams.frameLength = sdkSimulation.GetFrameLength(); osParams.currentFrame = sdkSimulation.GetCurrentFrame(); } // Performs one frame of agent movement for Optimal Steps public void AdvanceOptimalSteps(Project sdkProject, Simulation sdkSimulation) { OptimalStepsParams osParams = (OptimalStepsParams)modelParameters; // Add any new agents to the dictionary foreach (Agent newAgent in sdkSimulation.GetCreatedAgents()) { newAgent.AssumeControl(); newAgent.DisallowAdjustment(); GlobalId agentId = newAgent.GetId(); TupleVec newPosition = TVConvert(newAgent.GetPosition()); double desiredSpeed = Normal.Sample(osParams.randomGen, osParams. meanSpeed, osParams.stdevSpeed); while (desiredSpeed > osParams.maxSpeed || desiredSpeed < osParams. minSpeed) { desiredSpeed = Normal.Sample(osParams.randomGen, osParams.meanSpeed, osParams.stdevSpeed); } AgentDataOS dictData = new AgentDataOS(desiredSpeed, newPosition, null); osParams.agentPositions.Add(agentId, dictData); } // Remove any deleted agents from the dictionary foreach (Agent delAgent in sdkSimulation.GetDeletedAgents()) { GlobalId agentId = delAgent.GetId(); osParams.agentPositions.Remove(agentId);
121
} // Look at each agent in the dictionary and update their position foreach (GlobalId thisAgentId in osParams.agentPositions.Keys.ToList()) { Agent thisAgent = sdkSimulation.GetAgent(thisAgentId); AgentDataOS dictData = new AgentDataOS(0.0, null, null); osParams.agentPositions.TryGetValue(thisAgentId, out dictData); double desiredSpeed = dictData.AgentSpeed * osParams.speedAdjustment; SimObject currentObject = sdkProject.GetObject( thisAgent.GetCurrentFloorId()); // Locate the agent (both as TupleVec and Vec3d) TupleVec cPosition = dictData.AgentLocation; Vec3d cPositionV = cPosition.Revert(); TupleVec nextPosition = new TupleVec(0, 0, 0); Vec3d nextPositionV = new Vec3d(); bool moveSetFlag = false; bool shiftWPcheck = false; int numOfPositions = osParams.q + 1; // Calculate step length based on time step double mu_r = osParams.frameLength * desiredSpeed; // Ensures agent actually visited their last waypoint (instead of bouncing arbitrarily between waypoints) if (dictData.LastGoal != null) { if (thisAgent.HasPreviousWaypoint()) { if (thisAgent.GetPreviousWaypointId() == dictData.LastGoal) { dictData = new AgentDataOS(dictData.AgentSpeed, dictData. AgentLocation, null); osParams.agentPositions[thisAgentId] = dictData; } else { shiftWPcheck = true; } } else { dictData = new AgentDataOS(dictData.AgentSpeed, dictData. AgentLocation, null); osParams.agentPositions[thisAgentId] = dictData; } } // Check if agent has a target waypoint - if so, check if they have a path to that waypoint if (thisAgent.HasTargetWaypoint()) {
122
if (thisAgent.HasPathToTargetWaypoint()) { // Just here for values in order to pick out agents and times int frameNum = sdkSimulation.GetCurrentFrame(); GlobalId targetWaypoint = thisAgent.GetTargetWaypointId(); string wpName = sdkProject.GetObject(targetWaypoint).GetName(); string agentName = thisAgent.GetName(); double distToWaypoint = thisAgent.GetDistanceToTargetWaypoint(); TupleVec dirToWaypoint = TVConvert(thisAgent. GetDirectionToTargetWaypoint()); TupleVec waypointYOffset = new TupleVec(0, (TVConvert(thisAgent. GetTargetWaypointGoalLine().GetMidpoint()) - cPosition). Normalized().Yvalue, 0); dirToWaypoint = dirToWaypoint + waypointYOffset; if (shiftWPcheck) { nextPosition = cPosition + (dirToWaypoint * 0.1); moveSetFlag = true; } // If agent is targeting a portal and is less than one step away, move them directly to their target if (!moveSetFlag && sdkProject.GetObject(thisAgent. GetTargetWaypointId()) is Portal && distToWaypoint < mu_r) { nextPosition = cPosition + (dirToWaypoint * distToWaypoint); moveSetFlag = true; } // If agent is on stair or escalator, move them one scaled step length towards their waypoint if (!moveSetFlag && (currentObject is Stair || currentObject is Escalator || currentObject is Ramp)) { double verticalTransportSpeed = mu_r; // Scaling for vertical transport speed if (currentObject is Escalator) { verticalTransportSpeed = osParams.frameLength * osParams. escalatorSpeed * Math.Sqrt(1 - Math.Pow(dirToWaypoint. Yvalue, 2)); } if (currentObject is Stair) { if (dirToWaypoint.Yvalue > 0) { verticalTransportSpeed = mu_r * osParams. ascendStairFactor; } else { verticalTransportSpeed = mu_r * osParams. descendStairFactor;
123
} } nextPosition = cPosition + (dirToWaypoint * verticalTransportSpeed); moveSetFlag = true; } // If agent has not been moved and has less than a step to their target, move them to their target directly if (!moveSetFlag && distToWaypoint < mu_r) { dictData = new AgentDataOS(dictData.AgentSpeed, dictData. AgentLocation, targetWaypoint); osParams.agentPositions[thisAgentId] = dictData; nextPosition = cPosition + (dirToWaypoint * distToWaypoint); moveSetFlag = true; } // If agent hasn't been directly moved, set and evaluate positions for them if (!moveSetFlag) { double[,] potentials = new double[osParams.q + 1, 6]; potentials[0, 0] = cPosition.Xvalue; potentials[0, 1] = cPosition.Zvalue; for (int k = 0; k < osParams.q; k++) { double phi = (2 * Math.PI / osParams.q) * (k + osParams. randomGen.NextDouble()); potentials[(k + 1), 0] = cPosition.Xvalue + (mu_r * Math.Cos(phi)); potentials[(k + 1), 1] = cPosition.Zvalue + (mu_r * Math.Sin(phi)); } double minimumPotential = 1000000000; int minLocation = osParams.q + 1; double directionScale = Math.PI / (2 * mu_r); AgentList neighbours = sdkSimulation.GetAgentsNearPoint( cPositionV, (osParams.g_p + osParams.h_p + mu_r + mu_r), 1.0); List<Tuple<GlobalId, TupleVec>> neighbourPositions = new List<Tuple<GlobalId, TupleVec>>(); foreach (Agent neighbour in neighbours) { GlobalId neighbourId = neighbour.GetId(); AgentDataOS neighbourData = new AgentDataOS(0.0, null, null);
                            osParams.agentPositions.TryGetValue(neighbourId, out neighbourData);
                            neighbourPositions.Add(new Tuple<GlobalId, TupleVec>(neighbourId,
                                neighbourData.AgentLocation));
                        }
                        for (int k = 0; k <= osParams.q; k++) {
                            TupleVec newPosition = new TupleVec(potentials[k, 0], cPosition.Yvalue,
                                potentials[k, 1]);
                            Vec3d newPositionV = newPosition.Revert();
                            double newDist = thisAgent.GetDistanceToTargetWaypointFrom(newPositionV);
                            if (Math.Abs(distToWaypoint - newDist) <= (1.2 * mu_r)) {
                                potentials[k, 2] = newDist;
                            } else {
                                potentials[k, 2] = osParams.mu_o;
                            }
                            double neighbourPotential = 0.0;
                            foreach (Tuple<GlobalId, TupleVec> neighbourData in neighbourPositions.ToList()) {
                                if (neighbourData.Item1 != thisAgentId) {
                                    double distance = (neighbourData.Item2 - newPosition).Magnitude();
                                    if (distance <= (osParams.g_p + osParams.h_p)) {
                                        if (distance <= osParams.g_p) {
                                            neighbourPotential = neighbourPotential + osParams.mu_p;
                                        } else {
                                            neighbourPotential = neighbourPotential
                                                + (osParams.nu_p + (Math.Exp(-1 * osParams.a_p
                                                * (Math.Pow(distance, osParams.b_p)))));
                                        }
                                    }
                                }
                            }
                            potentials[k, 3] = neighbourPotential;
                            // Find the nearest obstacles of interest
                            double obstaclePotential = 0.0;
                            AvailableSpace agentSpace = new AvailableSpace();
                            agentSpace = thisAgent.GetAvailableSpaceAround(newPositionV, osParams.h_o);
                            for (int segment = 0; segment < 32; segment++) {
                                double obsDist = agentSpace.GetSegmentRadius(segment);
                                if (obsDist < osParams.h_o) {
                                    if (obsDist <= (osParams.g_p / 2)) {
                                        obstaclePotential = obstaclePotential + osParams.mu_o;
                                    } else {
                                        obstaclePotential = obstaclePotential
                                            + (osParams.nu_o * (Math.Exp(-1 * osParams.a_o
                                            * Math.Pow(obsDist, osParams.b_o))));
                                    }
                                }
                            }
                            potentials[k, 4] = obstaclePotential;
                            potentials[k, 5] = potentials[k, 2] + potentials[k, 3] + potentials[k, 4];
                            minimumPotential = Math.Min(minimumPotential, potentials[k, 5]);
                            if (minimumPotential == potentials[k, 5]) {
                                minLocation = k;
                            }
                        }
                        nextPosition = new TupleVec(potentials[minLocation, 0], cPosition.Yvalue,
                            potentials[minLocation, 1]);
                    }
                } else {
                    // No path to target waypoint? Step slightly forward
                    if (thisAgent.HasCurrentFloor()) {
                        double heading = thisAgent.GetHeading();
                        nextPosition = cPosition + (new TupleVec(Math.Sin(heading), cPosition.Yvalue,
                            Math.Cos(heading)).Normalized() * 0.1);
                    }
                }
                nextPositionV = nextPosition.Revert();
                // If next point doesn't have a path to the target waypoint, reposition the agent
                if (!thisAgent.HasPathToTargetWaypoint(nextPositionV) && !moveSetFlag) {
                    nextPositionV = thisAgent.GetPointWithPathToTargetWaypointClosestTo(
                        nextPositionV, mu_r);
                    if (nextPositionV.IsValid()) {
                        nextPosition = TVConvert(nextPositionV);
                    } else {
                        nextPositionV = nextPosition.Revert();
                    }
                }
                // Move agent to their next position (minimum potential on the circle)
                osParams.agentPositions[thisAgentId] = new AgentDataOS(dictData.AgentSpeed,
                    nextPosition, dictData.LastGoal);
                thisAgent.MoveTo(nextPositionV);
            } // No target waypoint? No movement.
        } // End of agent movement procedure

        // Step the simulation forward by one frame
        sdkSimulation.Step();
    }

    // Parameters specific to MassMotion
    public class MassMotionParams : ModelParams {
        // No specific parameters
    }

    // Performs initialization actions for MassMotion
    public void InitMassMotion(Project sdkProject, Simulation sdkSimulation) {
        // Nothing to initialize
    }

    // Performs one frame of agent movement for MassMotion
    public void AdvanceMassMotion(Project sdkProject, Simulation sdkSimulation) {
        sdkSimulation.Step();
    }

    public TupleVec TVConvert(Vec3d input) {
        if (input.IsValid()) {
            return new TupleVec(input.GetX(), input.GetY(), input.GetZ());
        } else {
            return null;
        }
    }
}

// Class to manage dictionary data for Graph Model
public class AgentDataG {
    public bool HasWaypoint { get; set; }
    public double AgentSpeed { get; set; }
    public Vec3d WaypointLocation { get; set; }
    public double ArrivalTime { get; set; }

    public AgentDataG(bool hasWP, double speed, Vec3d waypoint, double moveTime) {
        HasWaypoint = hasWP;
        AgentSpeed = speed;
        WaypointLocation = waypoint;
        ArrivalTime = moveTime;
    }
}

// Class to manage dictionary data for Social Forces model
public class AgentDataSF {
    public Vec3d AgentLocation { get; set; }
    public Vec3d AgentVelocity { get; set; }
    public double AgentSpeed { get; set; }

    public AgentDataSF(Vec3d location, Vec3d velocity, double speed) {
        AgentLocation = location;
        AgentVelocity = velocity;
        AgentSpeed = speed;
    }
}

// Class to manage dictionary data for Optimal Steps model
public class AgentDataOS {
    public double AgentSpeed { get; set; }
    public TupleVec AgentLocation { get; set; }
    public GlobalId LastGoal { get; set; }

    public AgentDataOS(double speed, TupleVec location, GlobalId lastGoal) {
        AgentSpeed = speed;
        AgentLocation = location;
        LastGoal = lastGoal;
    }
}
// Class to manage vectors outside of MassMotion SDK (bypasses Vec3d, except for Revert() command)
public class TupleVec {
    public double Xvalue { get; set; }
    public double Yvalue { get; set; }
    public double Zvalue { get; set; }

    public TupleVec(double xval, double yval, double zval) {
        Xvalue = xval;
        Yvalue = yval;
        Zvalue = zval;
    }

    public Vec3d Revert() {
        return new Vec3d(Xvalue, Yvalue, Zvalue);
    }

    public static TupleVec operator +(TupleVec value1, TupleVec value2) {
        return new TupleVec((value1.Xvalue + value2.Xvalue), (value1.Yvalue + value2.Yvalue),
            (value1.Zvalue + value2.Zvalue));
    }

    public static TupleVec operator -(TupleVec value1, TupleVec value2) {
        return new TupleVec((value1.Xvalue - value2.Xvalue), (value1.Yvalue - value2.Yvalue),
            (value1.Zvalue - value2.Zvalue));
    }

    public static TupleVec operator *(double value1, TupleVec value2) {
        return new TupleVec((value1 * value2.Xvalue), (value1 * value2.Yvalue),
            (value1 * value2.Zvalue));
    }

    public static TupleVec operator *(TupleVec value1, double value2) {
        return new TupleVec((value1.Xvalue * value2), (value1.Yvalue * value2),
            (value1.Zvalue * value2));
    }

    public static TupleVec operator /(TupleVec value1, double value2) {
        if (value2 == 0) return null;
        else return new TupleVec((value1.Xvalue / value2), (value1.Yvalue / value2),
            (value1.Zvalue / value2));
    }
    public double Magnitude() {
        return Math.Sqrt((Xvalue * Xvalue) + (Yvalue * Yvalue) + (Zvalue * Zvalue));
    }

    public TupleVec Normalized() {
        double length = Magnitude();
        if (length == 0) return new TupleVec(0.0, 0.0, 0.0);
        else return new TupleVec((Xvalue / length), (Yvalue / length), (Zvalue / length));
    }

    // Rotates the vector about the Y axis; the sign of the sine term in the Z component
    // is corrected here so that the matrix is a true rotation (the original listing
    // used + for both terms, which is not a rotation matrix)
    public TupleVec Rotate(double angle) {
        double cosAngle = Math.Cos(angle);
        double sinAngle = Math.Sin(angle);
        return new TupleVec((cosAngle * Xvalue) + (sinAngle * Zvalue), Yvalue,
            (cosAngle * Zvalue) - (sinAngle * Xvalue));
    }

    public double Dot(TupleVec value2) {
        return (Xvalue * value2.Xvalue) + (Yvalue * value2.Yvalue) + (Zvalue * value2.Zvalue);
    }

    public TupleVec Cross(TupleVec value2) {
        return new TupleVec(
            (Yvalue * value2.Zvalue) - (Zvalue * value2.Yvalue),
            (Zvalue * value2.Xvalue) - (Xvalue * value2.Zvalue),
            (Xvalue * value2.Yvalue) - (Yvalue * value2.Xvalue)
        );
    }

    public TupleVec Round() {
        return new TupleVec(Math.Round(Xvalue, 3), Math.Round(Yvalue, 3), Math.Round(Zvalue, 3));
    }
}
}
Appendix C – Model Calibration Code
// Pedestrian Model Calibration Code - Uses Genetic Algorithm method to determine
// optimal calibration parameters for pedestrian models
// Copyright (C) 2017 Oasys Ltd. and University of Toronto
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.

// The following code uses functions from the Genetic Algorithm Framework for .Net
// library.
//
// Copyright (C) John Newcombe and licensed under the GNU Lesser General Public
// License (Version 3, June 29, 2007).
//
// The full text of the GNU LGPL follows this code.
//
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using System.IO;
using Oasys.MassMotion;
using PedModels;
using GAF;
using GAF.Operators;

namespace PedModelDebugging {
public class PedModelDebugging {
    // Enumerator for different ped models
    public enum PedModelType {
        graphModel,
        socialForces,
        optimalSteps,
        massMotion
    }

    // Values that can be accessed across the entire class
    public class ModelGlobals {
        public int threadCount;
        public int seedCount;
        public int[] randomSeeds;
        public PedModelType modelType;
        public string projectFilepath;
        public string simFilepath;
        public string simName;
        public string sourceData;
        public string chromosomeData;
        public static int numOfChromosomes;
        public static int numOfColumns;
        public static int numOfRows;
        public static string[,] header;
        public static int[,] recordedData;
        public static int runSet;
        public int cutoffVolume;
        public int cutoffRuntime;
        public bool stopFlag;
        public List<FitnessVal> modelFitness;

        public ModelGlobals() {
            // Set the number of threads and number of seeds to test
            threadCount = 1;
            seedCount = 5;
            randomSeeds = new int[10] { 7248, 3247, 5775, 0297, 6408,
                4402, 2296, 7895, 0092, 1481 };
            // Corrected to graphModel: the PedModelType enum has no simpleNetwork member
            modelType = PedModelType.graphModel;
            // Provide the filepaths for project (don't include .mm), simulation results
            // (directory only), source data (full file), and chromosomes (full file)
            projectFilepath = @"D:\Greg\Modelling\Calibration\Osgoode_Calibration";
            simFilepath = @"D:\Greg\Modelling\Calibration\Social_Forces\";
            simName = @"Osgoode_SF";
            sourceData = @"D:\Greg\Modelling\Calibration\Osgoode_Data_60.csv";
            chromosomeData = @"D:\Greg\Modelling\MM_Skeleton\SF_Chromosomes.txt";
            cutoffVolume = 200;
            cutoffRuntime = 50000;
            stopFlag = false;
            modelFitness = new List<FitnessVal>();
        }
    }
    public ModelGlobals modelVars = new ModelGlobals();

    // Class to manage list data for fitness results
    public class FitnessVal {
        public int Generation { get; set; }
        public int Chromosome { get; set; }
        public string RunType { get; set; }
        public string ChromosomeString { get; set; }
        public double Fitness { get; set; }

        public FitnessVal(int gen, int chrom, string type, string chromStr, double fit) {
            Generation = gen;
            Chromosome = chrom;
            RunType = type;
            ChromosomeString = chromStr;
            Fitness = fit;
        }
    }

    static void Main(string[] args) {
        PedModels.PedModels pedModels = new PedModels.PedModels();
        PedModelDebugging modelRun = new PedModelDebugging();

        // Launch the SDK and set up the project
        Sdk.Init();
        modelRun.ReadFileData(modelRun.modelVars.sourceData);
        ModelGlobals.runSet = 0;

        // Set up Genetic Algorithm parameters
        const double crossoverProbability = 0.85;
        const double mutationProbability = 0.10;
        const int elitePercent = 5;
        Population population = new Population();
        List<Chromosome> chromosomes = new List<Chromosome>();

        // Read chromosomes into list and population
        var openFile = File.OpenRead(modelRun.modelVars.chromosomeData);
        var fileReader = new StreamReader(openFile);
        ModelGlobals.numOfChromosomes = 0;
        while (!fileReader.EndOfStream) {
            string line = fileReader.ReadLine();
            chromosomes.Add(new Chromosome(line));
            population.Solutions.Add(new Chromosome(line));
            ModelGlobals.numOfChromosomes++;
        }
        fileReader.Close();
        openFile.Close();

        Elite elite = new Elite(elitePercent);
        Crossover crossover = new Crossover(crossoverProbability, true) {
            CrossoverType = CrossoverType.SinglePoint
        };
        BinaryMutate mutation = new BinaryMutate(mutationProbability, true);
        GeneticAlgorithm ga = new GeneticAlgorithm(population, modelRun.RunSimulation);
        ga.Operators.Add(elite);
        ga.Operators.Add(crossover);
        ga.Operators.Add(mutation);
        ga.Run(Terminate);

        foreach (FitnessVal value in modelRun.modelVars.modelFitness) {
            Console.WriteLine("G{0}, C{1} - {2} Run: Binary Chromosome {3} returns fitness {4}",
                value.Generation, value.Chromosome, value.RunType,
                value.ChromosomeString, value.Fitness);
        }
        Console.WriteLine("Press any key to continue . . .");
        Console.ReadKey();

        // Close the SDK
        Sdk.Fini();
    }

    // Termination condition for Genetic Algorithm
    public static bool Terminate(Population population, int currentGen, long currentEval) {
        return currentGen > 10;
    }

    // Reads a given .csv file, setting model variables based on the dimensions of the
    // data provided and returning the main data table
    public void ReadFileData(string sourceFilepath) {
        ModelGlobals modelVars = new ModelGlobals();
        var openFile = File.OpenRead(sourceFilepath);
        var fileReader = new StreamReader(openFile);
        // Read in the header lines, providing the names of link-floor transition elements
        string[] line1 = fileReader.ReadLine().Split(',');
        string[] line2 = fileReader.ReadLine().Split(',');
        ModelGlobals.numOfColumns = line1.Length;
        ModelGlobals.header = new string[2, ModelGlobals.numOfColumns];
        ModelGlobals.recordedData = new int[100, ModelGlobals.numOfColumns];
        for (int column = 0; column < ModelGlobals.numOfColumns; column++) {
            ModelGlobals.header[0, column] = line1[column];
            ModelGlobals.header[1, column] = line2[column];
        }
        // Read in the rest of the data
        int numOfRows = 0;
        while (!fileReader.EndOfStream) {
            string[] line = fileReader.ReadLine().Split(',');
            for (int column = 0; column < ModelGlobals.numOfColumns; column++) {
                ModelGlobals.recordedData[numOfRows, column] = Int32.Parse(line[column]);
            }
            numOfRows++;
        }
        fileReader.Close();
        openFile.Close();
        ModelGlobals.numOfRows = numOfRows;
    }

    public double RunSimulation(Chromosome testChromosome) {
        PedModels.PedModels pedModels = new PedModels.PedModels();
        string chromosomeStr = testChromosome.ToBinaryString();
        int generation = ModelGlobals.runSet / ModelGlobals.numOfChromosomes;
        int chromosomeNum = ModelGlobals.runSet - (generation * ModelGlobals.numOfChromosomes);
        double[] fitnessValues = new double[modelVars.seedCount];
        for (int run = 0; run < modelVars.seedCount; run++) {
            fitnessValues[run] = 0.0;
        }
        for (int run = 0; run < modelVars.seedCount; run++) {
            // Create arrays to hold simulated data
            int[,] simulatedData = new int[(ModelGlobals.numOfRows + 25), ModelGlobals.numOfColumns];
            int[] screenlineCounts = new int[ModelGlobals.numOfColumns];
            int lastFrame = 0;
            modelVars.stopFlag = false;
            for (int column = 0; column < ModelGlobals.numOfColumns; column++) {
                simulatedData[0, column] = 0;
                screenlineCounts[column] = 0;
            }
            Stopwatch runTimer = new Stopwatch();
            // Open the project and set the project and simulation options
            Project project = Project.Open(modelVars.projectFilepath + @".mm");
            project.SetFrameRate(PedModelFramerate(modelVars.modelType));
            SimulationOptions simOptions = new SimulationOptions();
            simOptions.SetRandomSeed(modelVars.randomSeeds[run]);
            simOptions.SetThreadCount(modelVars.threadCount);
            string simName = modelVars.simName + ModelGlobals.runSet.ToString() + run.ToString();
            string simFullPath = modelVars.simFilepath + simName + @".mmdb";
            Simulation simulation = Simulation.Create(project, simName, simFullPath, simOptions);
            int startClockTime = simulation.GetCurrentFrame();
            InitPedModel(modelVars.modelType, pedModels, project, simulation, chromosomeStr);
            while (!simulation.IsDone()) {
                runTimer.Start();
                AdvanceSimulation(modelVars.modelType, pedModels, project, simulation);
                runTimer.Stop();
                RecordData60(project, simulation, startClockTime, simulatedData, screenlineCounts);
                if (simulation.GetAllAgents().ToList().Count > modelVars.cutoffVolume
                        || runTimer.ElapsedMilliseconds > modelVars.cutoffRuntime) {
                    simulation.Stop();
                    modelVars.stopFlag = true;
                }
                lastFrame++;
            }
            double thisFitness = EvaluateFitness60(ModelGlobals.recordedData, simulatedData,
                (lastFrame / PedModelFramerate(modelVars.modelType)));
            if (thisFitness > 0) {
                fitnessValues[run] += thisFitness;
            }
            FitnessVal fitnessValue = new FitnessVal(generation, chromosomeNum, "Single",
                chromosomeStr, fitnessValues[run]);
            modelVars.modelFitness.Add(fitnessValue);
            project.Close();
        }
        Thread.Sleep(1000);
        double averageFitness = fitnessValues.Sum() / modelVars.seedCount;
        FitnessVal avgValue = new FitnessVal(generation, chromosomeNum, "Average",
            chromosomeStr, averageFitness);
        modelVars.modelFitness.Add(avgValue);
        string resultsFile = modelVars.simFilepath + @"Fitness_Results.csv";
        File.Delete(resultsFile);
        StreamWriter dataWriter = new StreamWriter(resultsFile);
        foreach (FitnessVal item in modelVars.modelFitness) {
            dataWriter.WriteLine(item.Generation.ToString() + "," + item.Chromosome.ToString()
                + "," + item.RunType + "," + item.ChromosomeString + ","
                + item.Fitness.ToString());
        }
        dataWriter.Close();
        ModelGlobals.runSet++;
        Console.WriteLine("Chromosome tested {0} times", modelVars.seedCount);
        return averageFitness;
    }

    // Records a timestep of screenline data from the current simulation into the specified array
    public void RecordData60(Project project, Simulation simulation, int startTime,
        int[,] simulatedData, int[] screenlineCounts) {
        PedModels.PedModels pedModels = new PedModels.PedModels();
        ModelGlobals modelVars = new ModelGlobals();
        // Gets lists of agents that exited and entered each specified object in the last frame,
        // compares the lists, and increments the screenline count by 1 for each agent on both lists
        for (int location = 1; location < ModelGlobals.numOfColumns; location++) {
            AgentList cordonExit = simulation.GetAgentsThatExited(
                project.GetObject(ModelGlobals.header[0, location]).GetId());
            AgentList cordonEnter = simulation.GetAgentsThatEntered(
                project.GetObject(ModelGlobals.header[1, location]).GetId());
            foreach (Agent thisAgent in cordonExit) {
                if (thisAgent != null) {
                    foreach (Agent otherAgent in cordonEnter) {
                        if (otherAgent != null) {
                            if (otherAgent.GetId() == thisAgent.GetId()) {
                                screenlineCounts[location]++;
                            }
                        }
                    }
                }
            }
        }
        // Checks if the current frame is divisible by 60 seconds - if so, writes the cordon
        // count totals to the main array and resets the cordon totals
        int currentFrame = simulation.GetCurrentFrame();
        if (currentFrame % (PedModelFramerate(modelVars.modelType) * 60) == 0) {
            int secondsFromInception = (currentFrame - startTime)
                / PedModelFramerate(modelVars.modelType);
            int rowNum = secondsFromInception / 60;
            simulatedData[rowNum, 0] = secondsFromInception;
            for (int location = 1; location < ModelGlobals.numOfColumns; location++) {
                simulatedData[rowNum, location] = screenlineCounts[location];
                screenlineCounts[location] = 0;
            }
        }
    }

    // Calculates fitness for the model run
    public double EvaluateFitness60(int[,] recordedData, int[,] simulatedData, int lastFrame) {
        if (modelVars.stopFlag) {
            return 0;
        }
        double fitnessValue = 0;
        double[] fitnessValues = new double[5] { 0, 0, 0, 0, 0 };
        int startIndex = 960 / 60;
        int endIndex = 2700 / 60;
        if ((lastFrame / 60) > startIndex) {
            if ((lastFrame / 60) > endIndex) {
                lastFrame = endIndex;
            }
            double[] GREtop = new double[5] { 0, 0, 0, 0, 0 };
            double[] GREbot = new double[5] { 0, 0, 0, 0, 0 };
            for (int row = startIndex; row < lastFrame; row++) {
                double[] recordedPoint = new double[5] { recordedData[row, 2],
                    recordedData[row, 4], recordedData[row, 6], recordedData[row, 7],
                    recordedData[row, 9] };
                double[] simulatedPoint = new double[5] { simulatedData[row, 2],
                    simulatedData[row, 4], simulatedData[row, 6], simulatedData[row, 7],
                    simulatedData[row, 9] };
                for (int point = 0; point < 5; point++) {
                    GREtop[point] += System.Math.Abs(recordedPoint[point] - simulatedPoint[point]);
                    GREbot[point] += recordedPoint[point];
                }
            }
            for (int i = 0; i < 5; i++) {
                fitnessValues[i] = 1 - (GREtop[i] / GREbot[i]);
                if (fitnessValues[i] > 0) {
                    fitnessValue += fitnessValues[i];
                }
            }
        }
        return fitnessValue / 5;
    }

    // Writes the given header and array to the specified filepath as a .csv file
    public void WriteToFile(int[,] inputMatrix, int runNum) {
        ModelGlobals modelVars = new ModelGlobals();
        string filepath = modelVars.simFilepath + @"Osgoode_GA" + runNum.ToString("0") + @".csv";
        var fileWriter = new StreamWriter(filepath);
        // Writes the two header lines to file
        for (int row = 0; row < 2; row++) {
            string fileLine = "";
            for (int column = 0; column < ModelGlobals.numOfColumns; column++) {
                fileLine += ModelGlobals.header[row, column] + ",";
            }
            fileWriter.WriteLine(fileLine);
        }
        // Writes the remaining data lines to file
        for (int row = 0; row < ModelGlobals.numOfRows; row++) {
            string fileLine = "";
            for (int column = 0; column < ModelGlobals.numOfColumns; column++) {
                fileLine += inputMatrix[row, column].ToString("0") + ",";
            }
            fileWriter.WriteLine(fileLine);
        }
        fileWriter.Close();
    }

    // Returns the framerate according to the model type
    public int PedModelFramerate(PedModelType mType) {
        switch (mType) {
            case PedModelType.graphModel: return 1;
            case PedModelType.socialForces: return 2;
            case PedModelType.optimalSteps: return 1;
            case PedModelType.massMotion: return 5;
            default: return 5;
        }
    }

    // Initializes the simulation model according to the model type
    public void InitPedModel(PedModelType mType, PedModels.PedModels pedModels,
        Project project, Simulation simulation, string chromosomeStr = null) {
        switch (mType) {
            case PedModelType.graphModel:
                pedModels.InitGraph(project, simulation, chromosomeStr);
                break;
            case PedModelType.socialForces:
                pedModels.InitSocialForces(project, simulation, chromosomeStr);
                break;
            case PedModelType.optimalSteps:
                pedModels.InitOptimalSteps(project, simulation, chromosomeStr);
                break;
            case PedModelType.massMotion:
                pedModels.InitMassMotion(project, simulation);
                break;
            default:
                break;
        }
    }

    // Advances the simulation one step according to the model type
    public void AdvanceSimulation(PedModelType mType, PedModels.PedModels pedModels,
        Project project, Simulation simulation) {
        switch (mType) {
            case PedModelType.graphModel:
                pedModels.AdvanceGraph(project, simulation);
                break;
            case PedModelType.socialForces:
                pedModels.AdvanceSocialForces(project, simulation);
                break;
            case PedModelType.optimalSteps:
                pedModels.AdvanceOptimalSteps(project, simulation);
                break;
            case PedModelType.massMotion:
                pedModels.AdvanceMassMotion(project, simulation);
                break;
            default:
                break;
        }
    }
}
}
GNU LESSER GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright (C) 2007 Free Software Foundation, Inc. http://fsf.org/ Everyone is permitted to copy and distribute
verbatim copies of this license document, but changing it is not allowed.
This version of the GNU Lesser General Public License incorporates the terms and conditions of version 3 of the
GNU General Public License, supplemented by the additional permissions listed below.
0. Additional Definitions.
As used herein, "this License" refers to version 3 of the GNU Lesser General Public License, and the "GNU GPL"
refers to version 3 of the GNU General Public License.
"The Library" refers to a covered work governed by this License, other than an Application or a Combined Work as
defined below.
An "Application" is any work that makes use of an interface provided by the Library, but which is not otherwise
based on the Library. Defining a subclass of a class defined by the Library is deemed a mode of using an interface
provided by the Library.
A "Combined Work" is a work produced by combining or linking an Application with the Library. The particular
version of the Library with which the Combined Work was made is also called the "Linked Version".
The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined
Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the
Application, and not on the Linked Version.
The "Corresponding Application Code" for a Combined Work means the object code and/or source code for the
Application, including any data and utility programs needed for reproducing the Combined Work from the
Application, but excluding the System Libraries of the Combined Work.
1. Exception to Section 3 of the GNU GPL.
You may convey a covered work under sections 3 and 4 of this License without being bound by section 3 of the
GNU GPL.
2. Conveying Modified Versions.
If you modify a copy of the Library, and, in your modifications, a facility refers to a function or data to be supplied
by an Application that uses the facility (other than as an argument passed when the facility is invoked), then you
may convey a copy of the modified version:
a) under this License, provided that you make a good faith effort to ensure that, in the event an Application does not
supply the function or data, the facility still operates, and performs whatever part of its purpose remains meaningful,
or
b) under the GNU GPL, with none of the additional permissions of this License applicable to that copy.
3. Object Code Incorporating Material from Library Header Files.
The object code form of an Application may incorporate material from a header file that is part of the Library. You
may convey such object code under terms of your choice, provided that, if the incorporated material is not limited to
numerical parameters, data structure layouts and accessors, or small macros, inline functions and templates (ten or
fewer lines in length), you do both of the following:
a) Give prominent notice with each copy of the object code that the Library is used in it and that the Library and its
use are covered by this License.
b) Accompany the object code with a copy of the GNU GPL and this license document.
4. Combined Works.
You may convey a Combined Work under terms of your choice that, taken together, effectively do not restrict
modification of the portions of the Library contained in the Combined Work and reverse engineering for debugging
such modifications, if you also do each of the following:
a) Give prominent notice with each copy of the Combined Work that the Library is used in it and that the Library
and its use are covered by this License.
b) Accompany the Combined Work with a copy of the GNU GPL and this license document.
c) For a Combined Work that displays copyright notices during execution, include the copyright notice for the
Library among these notices, as well as a reference directing the user to the copies of the GNU GPL and this license
document.
d) Do one of the following:
0) Convey the Minimal Corresponding Source under the terms of this License, and the Corresponding
Application Code in a form suitable for, and under terms that permit, the user to recombine or relink the Application
with a modified version of the Linked Version to produce a modified Combined Work, in the manner specified by
section 6 of the GNU GPL for conveying Corresponding Source.
1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a)
uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly
with a modified version of the Library that is interface-compatible with the Linked Version.
e) Provide Installation Information, but only if you would otherwise be required to provide such information under
section 6 of the GNU GPL, and only to the extent that such information is necessary to install and execute a
modified version of the Combined Work produced by recombining or relinking the Application with a modified
version of the Linked Version. (If you use option 4d0, the Installation Information must accompany the Minimal
Corresponding Source and Corresponding Application Code. If you use option 4d1, you must provide the
Installation Information in the manner specified by section 6 of the GNU GPL for conveying Corresponding
Source.)
5. Combined Libraries.
You may place library facilities that are a work based on the Library side by side in a single library together with
other library facilities that are not Applications and are not covered by this License, and convey such a combined
library under terms of your choice, if you do both of the following:
a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other
library facilities, conveyed under the terms of this License.
b) Give prominent notice with the combined library that part of it is a work based on the Library, and explaining
where to find the accompanying uncombined form of the same work.
6. Revised Versions of the GNU Lesser General Public License.
The Free Software Foundation may publish revised and/or new versions of the GNU Lesser General Public License
from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the Library as you received it specifies that a certain
numbered version of the GNU Lesser General Public License "or any later version" applies to it, you have the option
of following the terms and conditions either of that published version or of any later version published by the Free
Software Foundation. If the Library as you received it does not specify a version number of the GNU Lesser General
Public License, you may choose any version of the GNU Lesser General Public License ever published by the Free
Software Foundation.
If the Library as you received it specifies that a proxy can decide whether future versions of the GNU Lesser
General Public License shall apply, that proxy's public statement of acceptance of any version is permanent
authorization for you to choose that version for the Library.
Appendix D – Model Evaluation Code
// Pedestrian Model Timing Code - Performs multiple runs of pedestrian models and
// times parts of the process
// Copyright (C) 2017 Oasys Ltd. and University of Toronto
//
// This program is free software: you can redistribute it and/or modify
// it under the terms of the GNU General Public License as published by
// the Free Software Foundation, either version 3 of the License, or
// (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program. If not, see <http://www.gnu.org/licenses/>.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Threading.Tasks;
using System.Diagnostics;
using System.IO;
using Oasys.MassMotion;
using PedModels;

namespace PedModelTiming {
public class PedModelTiming {
    // Enumerator for different ped models
    public enum PedModelType {
        graphModel,
        socialForces,
        optimalSteps,
        massMotion
    }

    static void Main(string[] args) {
        // Create stopwatches to time components of model runs
        Stopwatch projectSetup = new Stopwatch();
        Stopwatch simulationSetup = new Stopwatch();
        Stopwatch modelInit = new Stopwatch();
        Stopwatch modelRun = new Stopwatch();

        // Launch the SDK and set up the project
        Sdk.Init();
        PedModels.PedModels pedModels = new PedModels.PedModels();
        // Provide the filepaths for project (don't include .mm), simulation results
        // (directory only), source data (full file), and chromosomes (full file)
        string projectFilepath = @"C:\Users\Greg\Documents\U of T - Transport MASc\Research\Modelling\Timing\";
        string projectName = @"Queens_Park_Testing_2017-07-25_100";
        string simFilepath = @"C:\Users\Greg\Documents\U of T - Transport MASc\Research\Modelling\Timing\Simple_NetworkQP\";

        // Set the number of threads, provide the random seeds to test, and initialize the timing array
        int threadCount = 1;
        int numOfRuns = 10;
        int[] randomSeeds = new int[10] { 7248, 3247, 5775, 0297, 6408,
            4402, 2296, 7895, 0092, 1481 };
        double[,] timings = new double[numOfRuns, 5];

        // Declare the pedestrian model to test
        // (corrected to graphModel: the PedModelType enum has no simpleNetwork member)
        PedModelType modelType = PedModelType.graphModel;
        int frameRate = PedModelFramerate(modelType);

        // Perform multiple runs of the model, each with its own random seed, name, and filepath
        for (int runNumber = 0; runNumber < numOfRuns; runNumber++) {
            // Begin timing the project/model setup (open the project and set the frame rate)
            projectSetup.Start();
            Project project = Project.Open(projectFilepath + projectName + @".mm");
            project.SetFrameRate(frameRate);
            projectSetup.Stop();
            timings[runNumber, 1] = projectSetup.ElapsedMilliseconds;
            projectSetup.Reset();
            Console.WriteLine("Running simulation number {0}", runNumber + 1);

            // Begin timing the simulation setup (create the simulation with the
            // specified random seed and thread count)
            simulationSetup.Start();
            int randomSeed = randomSeeds[runNumber];
            timings[runNumber, 0] = randomSeed;
            SimulationOptions simOptions = new SimulationOptions();
            simOptions.SetRandomSeed(randomSeed);
            simOptions.SetThreadCount(threadCount);
            string simName = PedModelName(modelType) + "_" + runNumber.ToString() + "_" + projectName;
            string simPath = simFilepath + simName + @".mmdb";
145
Simulation simulation = Simulation.Create(project, simName, simPath, simOptions); simulationSetup.Stop(); timings[runNumber, 2] = simulationSetup.ElapsedMilliseconds; simulationSetup.Reset(); // Begin timing model initialization (initialize as per PedModels) modelInit.Start(); InitPedModel(modelType, pedModels, project, simulation); modelInit.Stop(); timings[runNumber, 3] = modelInit.ElapsedMilliseconds; modelInit.Reset(); // Begin timing model run modelRun.Start(); // Run the simulation step by step until it is complete while (!simulation.IsDone()) { AdvanceSimulation(modelType, pedModels, project, simulation); } modelRun.Stop(); timings[runNumber, 4] = modelRun.ElapsedMilliseconds; modelRun.Reset(); // Close the project and wait 5 seconds project.Close(); Thread.Sleep(5000); } // Write timing data to file string timingFilepath = simFilepath + PedModelName(modelType) + @"_ModelTimings.csv"; WriteToFile(timingFilepath, timings, numOfRuns); Console.WriteLine("{0} runs of {1} model completed", numOfRuns.ToString(), PedModelName(modelType)); Console.WriteLine("Press any key to continue . . ."); Console.ReadKey(); // Close the SDK Sdk.Fini(); } // Writes the given header and array to the specified filepath as a .csv file public static void WriteToFile(string filepath, double[,] inputMatrix, int numOfRows) { // Initialize a new stream writer and write the header to file StreamWriter fileWriter = new StreamWriter(filepath); fileWriter.WriteLine("Run Number,Seed,Project Setup,Simulation Setup,Model Initialization,Model Run");
146
// Go through the provided matrix and write each row to file as comma- separated values for (int row = 0; row < numOfRows; row++) { string fileLine = row.ToString() + ","; for (int column = 0; column < 5; column++) { fileLine += inputMatrix[row, column].ToString() + ","; } fileWriter.WriteLine(fileLine); } // Close the stream writer fileWriter.Close(); } // Returns the framerate according to the model type public static int PedModelFramerate(PedModelType mType) { switch (mType) { case PedModelType.graphModel: return 1; case PedModelType.socialForces: return 2; case PedModelType.optimalSteps: return 2; case PedModelType.massMotion: return 5; default: return 5; } } // Returns the two character model name according to the model type public static string PedModelName(PedModelType mType) { switch (mType) { case PedModelType.graphModel: return "GR"; case PedModelType.socialForces: return "SF"; case PedModelType.optimalSteps: return "OS"; case PedModelType.massMotion: return "MM"; default: return "??"; } } // Initializes the simulation model according to the model type
147
public static void InitPedModel(PedModelType mType, PedModels.PedModels pedModels, Project project, Simulation simulation) { switch (mType) { case PedModelType.graphModel: pedModels.InitGraph(project, simulation); break; case PedModelType.socialForces: pedModels.InitSocialForces(project, simulation); break; case PedModelType.optimalSteps: pedModels.InitOptimalSteps(project, simulation); break; case PedModelType.massMotion: pedModels.InitMassMotion(project, simulation); break; default: break; } } // Advances the simulation one step according to the model type public static void AdvanceSimulation(PedModelType mType, PedModels.PedModels pedModels, Project project, Simulation simulation) { switch (mType) { case PedModelType.graphModel: pedModels.AdvanceGraph(project, simulation); break; case PedModelType.socialForces: pedModels.AdvanceSocialForces(project, simulation); break; case PedModelType.optimalSteps: pedModels.AdvanceOptimalSteps(project, simulation); break; case PedModelType.massMotion: pedModels.AdvanceMassMotion(project, simulation); break; default: break; } } } }
148
Appendix E – GNU General Public License
GNU GENERAL PUBLIC LICENSE
Version 3, 29 June 2007
Copyright © 2007 Free Software Foundation, Inc. <http://fsf.org/>
Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not
allowed.
Preamble
The GNU General Public License is a free, copyleft license for software and other kinds of works.
The licenses for most software and other practical works are designed to take away your freedom to share and
change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and
change all versions of a program--to make sure it remains free software for all its users. We, the Free Software
Foundation, use the GNU General Public License for most of our software; it applies also to any other work released
this way by its authors. You can apply it to your programs, too.
When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to
make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you
receive source code or can get it if you want it, that you can change the software or use pieces of it in new free
programs, and that you know you can do these things.
To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights.
Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it:
responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the
recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source
code. And you must show them these terms so they know their rights.
Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2)
offer you this License giving you legal permission to copy, distribute and/or modify it.
For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software.
For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their
problems will not be attributed erroneously to authors of previous versions.
Some devices are designed to deny users access to install or run modified versions of the software inside them,
although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom
to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use,
which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit
the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this
provision to those domains in future versions of the GPL, as needed to protect the freedom of users.
Finally, every program is threatened constantly by software patents. States should not allow patents to restrict
development and use of software on general-purpose computers, but in those that do, we wish to avoid the special
danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures
that patents cannot be used to render the program non-free.
The precise terms and conditions for copying, distribution and modification follow.
TERMS AND CONDITIONS
0. Definitions.
“This License” refers to version 3 of the GNU General Public License.
“Copyright” also means copyright-like laws that apply to other kinds of works, such as semiconductor masks.
“The Program” refers to any copyrightable work licensed under this License. Each licensee is addressed as “you”.
“Licensees” and “recipients” may be individuals or organizations.
To “modify” a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission,
other than the making of an exact copy. The resulting work is called a “modified version” of the earlier work or a
work “based on” the earlier work.
A “covered work” means either the unmodified Program or a work based on the Program.
To “propagate” a work means to do anything with it that, without permission, would make you directly or
secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a
private copy. Propagation includes copying, distribution (with or without modification), making available to the
public, and in some countries other activities as well.
To “convey” a work means any kind of propagation that enables other parties to make or receive copies. Mere
interaction with a user through a computer network, with no transfer of a copy, is not conveying.
An interactive user interface displays “Appropriate Legal Notices” to the extent that it includes a convenient and
prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no
warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under
this License, and how to view a copy of this License. If the interface presents a list of user commands or options,
such as a menu, a prominent item in the list meets this criterion.
1. Source Code.
The “source code” for a work means the preferred form of the work for making modifications to it. “Object code”
means any non-source form of a work.
A “Standard Interface” means an interface that either is an official standard defined by a recognized standards body,
or, in the case of interfaces specified for a particular programming language, one that is widely used among
developers working in that language.
The “System Libraries” of an executable work include anything, other than the work as a whole, that (a) is included
in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves
only to enable use of the work with that Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A “Major Component”, in this context, means a major
essential component (kernel, window system, and so on) of the specific operating system (if any) on which the
executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it.
The “Corresponding Source” for a work in object code form means all the source code needed to generate, install,
and (for an executable work) run the object code and to modify the work, including scripts to control those activities.
However, it does not include the work's System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but which are not part of the work. For example,
Corresponding Source includes interface definition files associated with source files for the work, and the source
code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such
as by intimate data communication or control flow between those subprograms and other parts of the work.
The Corresponding Source need not include anything that users can regenerate automatically from other parts of the
Corresponding Source.
The Corresponding Source for a work in source code form is that same work.
2. Basic Permissions.
All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable
provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the
unmodified Program. The output from running a covered work is covered by this License only if the output, given
its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as
provided by copyright law.
You may make, run and propagate covered works that you do not convey, without conditions so long as your license
otherwise remains in force. You may convey covered works to others for the sole purpose of having them make
modifications exclusively for you, or provide you with facilities for running those works, provided that you comply
with the terms of this License in conveying all material for which you do not control copyright. Those thus making
or running the covered works for you must do so exclusively on your behalf, under your direction and control, on
terms that prohibit them from making any copies of your copyrighted material outside their relationship with you.
Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not
allowed; section 10 makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.
No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling
obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting
or restricting circumvention of such measures.
When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to
the extent such circumvention is effected by exercising rights under this License with respect to the covered work,
and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the
work's users, your or third parties' legal rights to forbid circumvention of technological measures.
4. Conveying Verbatim Copies.
You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you
conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating
that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all
notices of the absence of any warranty; and give all recipients a copy of this License along with the Program.
You may charge any price or no price for each copy that you convey, and you may offer support or warranty
protection for a fee.
5. Conveying Modified Source Versions.
You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of
source code under the terms of section 4, provided that you also meet all of these conditions:
a) The work must carry prominent notices stating that you modified it, and giving a relevant date.
b) The work must carry prominent notices stating that it is released under this License and any conditions added
under section 7. This requirement modifies the requirement in section 4 to “keep intact all notices”.
c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy.
This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work,
and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any
other way, but it does not invalidate such permission if you have separately received it.
d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program
has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so.
A compilation of a covered work with other separate and independent works, which are not by their nature
extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a
volume of a storage or distribution medium, is called an “aggregate” if the compilation and its resulting copyright
are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit.
Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate.
6. Conveying Non-Source Forms.
You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also
convey the machine-readable Corresponding Source under the terms of this License, in one of these ways:
a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium),
accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software
interchange.
b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium),
accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or
customer support for that product model, to give anyone who possesses the object code either (1) a copy of the
Corresponding Source for all the software in the product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no more than your reasonable cost of physically
performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no
charge.
c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source.
This alternative is allowed only occasionally and noncommercially, and only if you received the object code with
such an offer, in accord with subsection 6b.
d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent
access to the Corresponding Source in the same way through the same place at no further charge. You need not
require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code
is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that
supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to
find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to
ensure that it is available for as long as needed to satisfy these requirements.
e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code
and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d.
A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System
Library, need not be included in conveying the object code work.
A “User Product” is either (1) a “consumer product”, which means any tangible personal property which is normally
used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling.
In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. For a
particular product received by a particular user, “normally used” refers to a typical or common use of that class of
product, regardless of the status of the particular user or of the way in which the particular user actually uses, or
expects or is expected to use, the product. A product is a consumer product regardless of whether the product has
substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use
of the product.
“Installation Information” for a User Product means any methods, procedures, authorization keys, or other
information required to install and execute modified versions of a covered work in that User Product from a
modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning
of the modified object code is in no case prevented or interfered with solely because modification has been made.
If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the
conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred
to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this
requirement does not apply if neither you nor any third party retains the ability to install modified object code on the
User Product (for example, the work has been installed in ROM).
The requirement to provide Installation Information does not include a requirement to continue to provide support
service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product
in which it has been modified or installed. Access to a network may be denied when the modification itself
materially and adversely affects the operation of the network or violates the rules and protocols for communication
across the network.
Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a
format that is publicly documented (and with an implementation available to the public in source code form), and
must require no special password or key for unpacking, reading or copying.
7. Additional Terms.
“Additional permissions” are terms that supplement the terms of this License by making exceptions from one or
more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though
they were included in this License, to the extent that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately under those permissions, but the entire Program
remains governed by this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option remove any additional permissions from that
copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases
when you modify the work.) You may place additional permissions on material, added by you to a covered work, for
which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized
by the copyright holders of that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the
Appropriate Legal Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be
marked in reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or authors of the material; or
e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or
modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these
contractual assumptions directly impose on those licensors and authors.
All other non-permissive additional terms are considered “further restrictions” within the meaning of section 10. If
the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along
with a term that is a further restriction, you may remove that term. If a license document contains a further
restriction but permits relicensing or conveying under this License, you may add to a covered work material
governed by the terms of that license document, provided that the further restriction does not survive such
relicensing or conveying.
If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a
statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated
as exceptions; the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly provided under this License. Any attempt
otherwise to propagate or modify it is void, and will automatically terminate your rights under this License
(including any patent licenses granted under the third paragraph of section 11).
However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated
(a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b)
permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days
after the cessation.
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies
you of the violation by some reasonable means, this is the first time you have received notice of violation of this
License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of
the notice.
Termination of your rights under this section does not terminate the licenses of parties who have received copies or
rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not
qualify to receive new licenses for the same material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation
of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise
does not require acceptance. However, nothing other than this License grants you permission to propagate or modify
any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or
propagating a covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to
run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by
third parties with this License.
An “entity transaction” is a transaction transferring control of an organization, or substantially all assets of one, or
subdividing an organization, or merging organizations. If propagation of a covered work results from an entity
transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the
work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of
the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with
reasonable efforts.
You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For
example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License,
and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent
claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it.
11. Patents.
A “contributor” is a copyright holder who authorizes use under this License of the Program or a work on which the
Program is based. The work thus licensed is called the contributor's “contributor version”.
A contributor's “essential patent claims” are all patent claims owned or controlled by the contributor, whether
already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of
making, using, or selling its contributor version, but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For purposes of this definition, “control” includes
the right to grant patent sublicenses in a manner consistent with the requirements of this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential
patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its
contributor version.
In the following three paragraphs, a “patent license” is any express agreement or commitment, however
denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for
patent infringement). To “grant” such a patent license to a party means to make such an agreement or commitment
not to enforce a patent against the party.
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is
not available for anyone to copy, free of charge and under the terms of this License, through a publicly available
network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3)
arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream
recipients. “Knowingly relying” means you have actual knowledge that, but for the patent license, your conveying
the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more
identifiable patents in that country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring
conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work
authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered work and works based on it.
A patent license is “discriminatory” if it does not include within the scope of its coverage, prohibits the exercise of,
or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License.
You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of
distributing software, under which you make payment to the third party based on the extent of your activity of
conveying the work, and under which the third party grants, to any of the parties who would receive the covered
work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you
(or copies made from those copies), or (b) primarily for and in connection with specific products or compilations
that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to
28 March 2007.
Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to
infringement that may otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of
this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as
to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a
consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty
for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms
and this License would be to refrain entirely from conveying the Program.
13. Use with the GNU Affero General Public License.
Notwithstanding any other provision of this License, you have permission to link or combine any covered work with
a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to
convey the resulting work. The terms of this License will continue to apply to the part which is the covered work,
but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through
a network will apply to the combination as such.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from
time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address
new problems or concerns.
Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of
the GNU General Public License “or any later version” applies to it, you have the option of following the terms and
conditions either of that numbered version or of any later version published by the Free Software Foundation. If the
Program does not specify a version number of the GNU General Public License, you may choose any version ever
published by the Free Software Foundation.
If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be
used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for
the Program.
Later license versions may give you additional or different permissions. However, no additional obligations are
imposed on any author or copyright holder as a result of your choosing to follow a later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER
PARTIES PROVIDE THE PROGRAM “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER
EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE
QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE
DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY
COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM
AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL,
INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE
THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED
INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE
PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY
HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according
to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil
liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the
Program in return for a fee.
END OF TERMS AND CONDITIONS