
Exploiting Human Steering Models for Path Prediction

Bulent Tastan
Computer Science
University of Central Florida
Orlando, FL 32816-2362, U.S.A.
[email protected]

Gita Sukthankar
Computer Science
University of Central Florida
Orlando, FL 32816-2362, U.S.A.
[email protected]

Abstract – The ability to predict the path of a moving human is a crucial element in a wide range of applications, including video surveillance, assisted living environments (smart homes), and simulation environments. Two tasks, tracking (finding the user's current location) and goal prediction (identifying the final destination), are particularly relevant to many problems. Although standard path planning approaches can be used to predict human behavior at a macroscopic level, they do not accurately model human path preferences. In this paper, we demonstrate an approach for path prediction based on a model of visually-guided steering that has been validated on human obstacle avoidance data. By basing our path prediction on egocentric features that are known to affect human steering preferences, we can improve on strictly geometric models such as Voronoi diagrams. Our approach outperforms standard motion models in a particle-filter tracker and can also be used to discriminate between multiple user destinations.

1 Introduction

To track humans with sensor networks, detect behavior anomalies, and offer effective navigational assistance, we need to be able to predict the trajectory that a human will follow in an environment. Although human paths can be approximated by a minimal-distance metric, humans often exhibit counter-intuitive behaviors; for instance, human paths can be non-symmetric and depend on the direction of path traversal (e.g., humans walking one route and returning via a different one). Obviously, tracking and goal prediction algorithms that assume distance-minimizing behavior will generate errors in environments where the humans' behavior diverges from this model.

To address this problem, we sought a psychologically-grounded model of human steering and obstacle avoidance to incorporate into our tracking and goal prediction system. Our selected model, originally proposed by Fajen et al. [3], incorporates environmental features that are accessible to the human vision system into a second-order dynamical model; all calculations are based on local perceptual information and do not require global knowledge of the environment. In this paper, we demonstrate that a particle-filter tracking system based on this human steering model outperforms other commonly used motion models.

Although there are obvious applications for human path prediction in video surveillance systems, we examine the usage of path prediction in a virtual environment. Being able to predict the future path of a human navigating a virtual environment facilitates 1) rapid rendering of future game destinations, 2) opponent modeling in tactical games, and 3) more accurate tracking of subjects during periods of network latency. In this paper, we use the virtual environment, Second Life, for our path prediction experiments and retrain the Fajen et al. human steering model (originally designed for human locomotion) to handle cases in which the subject is directing an avatar with keyboard and mouse. We show that the modified model more accurately predicts subjects' paths than other plausible models.

The remainder of the paper is organized as follows. Section 2 gives an overview of related work on path prediction, tracking, and modeling human transportation routines. Section 3 describes the original human steering model and our approach for retuning its parameters for navigation in virtual environments. Sections 4 and 5 describe our Second Life data collection procedure and the parameter fitting process. Section 6 presents results for a variety of path prediction models in Second Life. Section 7 outlines our particle filter tracking approach, which we evaluate against other commonly used particle filter variants. We conclude the paper by discussing other potential extensions and applications for our work.


2 Related Work

Tracking is often formulated as a state estimation problem and addressed with standard filtering techniques such as Kalman [10] or particle filters [9]. In this case, the problem is to determine the true state of a set of hidden location variables from a sequence of observable sensor data. These state estimation methods rely on having a reasonable motion model for predicting the human's movement; examples of commonly used models include random (Brownian) motion and constant velocity. In more complex state estimation schemes for human transportation routines, the type of motion model can itself be treated as a hidden state variable; Liao et al. [5] demonstrated a method for tracking and predicting human movements from GPS sensor data in which the mode of transportation (walking, bus, car) was inferred using a hierarchical dynamic Bayes network. Non-state-estimation approaches to predicting GPS transportation routines have also been demonstrated; for instance, Ziebart et al. [11] used inverse reinforcement learning to learn the reward function that human subjects used when selecting driving routes. Learning the reward function from data is the inverse of standard reinforcement learning, in which the reward is known and the policy is learned.

In cases where simple velocity and acceleration profiles are not adequate, motion planning methods have been used. Bruce and Gordon [1] introduced a particle filter variant that used motion planning to plan paths to common destinations in the environment; they show this approach to be superior to the standard Brownian motion model for tracking people in cases where the sensor trace is occluded. Traversing a Voronoi diagram has also emerged as a popular path planning method for safely avoiding obstacles. A Voronoi segment consists of all the points in a region that are equidistant to two obstacles; nodes are points equidistant to three or more obstacles. By following a Voronoi path, a robot maximizes the distance between itself and the obstacles in the environment. A couple of state estimation models have leveraged Voronoi diagrams, as a replacement for road networks, for tracking people in indoor environments [4]. Both path planning and Voronoi motion models are superior to the simpler velocity profiles because they incorporate knowledge of the geometry of the environment (modeled as goals and obstacles) into the path planning. However, these methods are not based on actual human motion profiles, nor are they easily adapted to new subjects. In the next section, we present a steering model that is based on human motion profiles and show how it can be tuned for new subjects. In this paper, we only present tracking results from using a particle filter in combination with the steering model, but we believe that this approach can be generalized to other state estimation techniques.

Figure 1: A human subject avoiding stationary obstacles in our Second Life obstacle course. Position data is acquired using the halo object worn by the user's avatar. The goal location is marked in red and the obstacles are represented as low cylindrical columns.
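To make the Voronoi-graph baseline used later in Sections 6 and 7 concrete, the sketch below is our illustrative reconstruction of such a planner, not the authors' implementation. It assumes point obstacles, uses scipy.spatial.Voronoi for the diagram, projects the start and goal onto the nearest Voronoi vertex rather than the nearest segment for brevity, and assumes enough obstacles for a non-degenerate diagram.

```python
import heapq
import numpy as np
from scipy.spatial import Voronoi

def voronoi_path(obstacles, start, goal):
    """Plan a path along the Voronoi graph of point obstacles (illustrative baseline sketch)."""
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    vor = Voronoi(np.asarray(obstacles, float))
    verts = vor.vertices
    # Build an adjacency list over finite Voronoi ridges (edges equidistant to two obstacles).
    adj = {i: [] for i in range(len(verts))}
    for a, b in vor.ridge_vertices:
        if a >= 0 and b >= 0:                      # skip ridges that extend to infinity
            w = float(np.linalg.norm(verts[a] - verts[b]))
            adj[a].append((b, w))
            adj[b].append((a, w))
    # Project start and goal onto their nearest Voronoi vertices (a simplification of
    # projecting onto the nearest Voronoi segment).
    s = int(np.argmin(np.linalg.norm(verts - start, axis=1)))
    g = int(np.argmin(np.linalg.norm(verts - goal, axis=1)))
    # Dijkstra search for the shortest route over the roadmap.
    dist, prev, pq = {s: 0.0}, {}, [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == g:
            break
        if d > dist.get(u, np.inf):
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, np.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the vertex sequence and attach the true endpoints.
    node, chain = g, [verts[g]]
    while node in prev:
        node = prev[node]
        chain.append(verts[node])
    return [start] + chain[::-1] + [goal]
```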

3 Human Steering Model

An alternate approach to the global motion planning methods discussed in the previous section is to assume that the walker makes decisions solely based on local features, without constructing a complete path to the goal. A popular local planning technique, potential-field-based control, models the influence of the environment as an imaginary set of attractive and repulsive forces generated from visible goals and obstacles. The walker's direction and speed are determined by the combined influence of the forces. Since potential fields are computationally inexpensive and straightforward to implement, they are a useful steering model for guiding autonomous characters in games and simulations [7]. However, the trajectories generated by a potential field planner can fail to match human locomotion data, particularly when the planner generates abrupt changes in angular velocity, which are rarely observed in human walkers.
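For concreteness, one step of such a potential-field controller might look like the sketch below; the gain constants and the repulsion law are illustrative assumptions rather than values from any of the cited work. Note that the heading is recomputed from scratch at each step, which is what permits the abrupt angular-velocity changes criticized above.

```python
import numpy as np

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=2.0, speed=1.0):
    """One steering step under an illustrative attractive/repulsive force model."""
    pos, goal = np.asarray(pos, float), np.asarray(goal, float)
    force = k_att * (goal - pos)                   # attraction toward the visible goal
    for obs in np.asarray(obstacles, float).reshape(-1, 2):
        diff = pos - obs
        d = np.linalg.norm(diff) + 1e-9
        force += k_rep * diff / d**3               # repulsion that decays with distance
    heading = force / (np.linalg.norm(force) + 1e-9)
    return pos + speed * heading                   # step at constant speed along the net force
```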

For our study, we selected a human steering model based on 1) local perceptual features that are easily calculated by the human vision system and 2) a second-order dynamical control system that guarantees a smooth angular velocity profile. This model was introduced by Fajen et al. [3] to model visually-guided human locomotion. In their studies, human subjects had to avoid virtual stationary obstacles while wearing a head-mounted display. The model has been generalized to handle other steering problems, such as target interception; here, we focus on the steering model for stationary obstacles and goals.

In this steering model, human locomotion is assumed to have a constant translational speed s; goals and obstacles affect the subject's angular acceleration φ̈ according to a set of differential equations. Objects in the environment (goals and obstacles) are represented in an egocentric coordinate frame using features that are easily extracted by the human visual system. The subject's heading direction φ is defined with respect to a fixed reference axis. The agent's movement is governed by three components: damping, goal, and obstacle functions. The first component, damping, is a function of the angular velocity φ̇ but is independent of the agent's present heading φ; it prevents angular oscillations and models humans' preference for straight movement. The goal component serves as an attractor and takes as input the goal angle (φ − ψ_g) and the goal distance d_g. The third and final component, the obstacle function, repels the agent from obstacles; its inputs are the obstacle angles (φ − ψ_o) and obstacle distances d_o. The overall formula for obstacle avoidance is

$$\ddot{\phi} = -f_d(\dot{\phi}) - f_g(\phi - \psi_g, d_g) + \sum_{i=1}^{\#\text{obst}} f_o(\phi - \psi_{o_i}, d_{o_i}),$$

where f_d, f_g, and f_o are the damping, goal, and obstacle components, respectively. The component functions are combined linearly, resulting in the following expression for the total angular acceleration:

$$\ddot{\phi} = -b\,\dot{\phi} - k_g(\phi - \psi_g)\left(e^{-c_1 d_g} + c_2\right) + \sum_{i=1}^{\#\text{obst}} k_o(\phi - \psi_{o_i})\, e^{-c_3 |\phi - \psi_{o_i}|}\, e^{-c_4 d_{o_i}}.$$

The parameters for the components are fitted using the procedure described in Section 5. Although we felt that the form of the model would generalize from the actual human locomotion task to the related task of human subjects steering avatars using keyboard and mouse, the parameters needed to be refitted to accommodate the task differences.
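A minimal Euler-integration sketch of this control law is shown below, using the parameter values refit in Section 5. The function name, the unit translational speed, the time step, and the stopping radius are our illustrative choices, and angle wrapping is ignored for brevity; this is a sketch of the model, not the authors' Matlab implementation.

```python
import numpy as np

# Parameters refit for Second Life avatar steering (Section 5):
# b, k_g, c1, c2 govern damping and goal attraction; k_o, c3, c4 govern obstacle repulsion.
DEFAULT_PARAMS = dict(b=3.45, kg=10.0, c1=1.0, c2=1.0, ko=300.0, c3=4.5, c4=0.6)

def simulate_hsm(start, goal, obstacles, params=None, speed=1.0, dt=0.05, steps=400):
    """Euler integration of the Fajen et al. angular-acceleration law (illustrative sketch)."""
    p = {**DEFAULT_PARAMS, **(params or {})}
    pos, goal = np.asarray(start, float), np.asarray(goal, float)
    phi, phi_dot = 0.0, 0.0        # heading w.r.t. the reference axis and angular velocity
    path = [pos.copy()]
    for _ in range(steps):
        psi_g = np.arctan2(goal[1] - pos[1], goal[0] - pos[0])    # goal angle
        d_g = np.linalg.norm(goal - pos)                          # goal distance
        phi_ddot = (-p["b"] * phi_dot
                    - p["kg"] * (phi - psi_g) * (np.exp(-p["c1"] * d_g) + p["c2"]))
        for obs in np.asarray(obstacles, float).reshape(-1, 2):   # obstacle repulsion terms
            psi_o = np.arctan2(obs[1] - pos[1], obs[0] - pos[0])
            d_o = np.linalg.norm(obs - pos)
            phi_ddot += (p["ko"] * (phi - psi_o)
                         * np.exp(-p["c3"] * abs(phi - psi_o))
                         * np.exp(-p["c4"] * d_o))
        phi_dot += phi_ddot * dt                  # second-order update of the heading
        phi += phi_dot * dt
        pos = pos + speed * dt * np.array([np.cos(phi), np.sin(phi)])  # constant-speed step
        path.append(pos.copy())
        if d_g < 0.5:                             # stop once the goal is (approximately) reached
            break
    return np.array(path)
```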

4 Data Collection

Data for this study was collected from subjects steering avatars in Second Life. Second Life is a user-constructed virtual environment that allows users to construct their own 3D world. The game supports a number of personal modes of travel (walking, flying, teleporting) in addition to allowing users to create their own vehicles. Second Life is laid out in a 2D space, the SLGrid, which is composed of regions, each hosted on its own server; it offers a fully-featured 3D environment that is built and shaped by the users who own the land.

Figure 2: The heading φ is calculated with respect to the reference axis (dotted line). The goal component (goal marked in red) is calculated using the distance to the goal, d_g, and the goal angle ψ_g. An obstacle (marked in grey) affects the walker (shown from a bird's-eye view) through the distance to the obstacle, d_o, and the angle ψ_o. Figure adapted from Fajen et al. [3].

To collect human trajectory data from Second Life, we built a monitoring object using the Linden Scripting Language (LSL). The object appears as a halo that is worn on the avatar and monitors the user's current x, y, z location. The script commences operation when the user clicks on the object and records points on the user's path along with timestamp information. At periodic intervals, all of the collected information is emailed in XML format to an external account. Due to network latency, it can take several minutes for the mail to be delivered.

The virtual obstacle course to be navigated by the subject was created using a set of scripted objects to mark the start, finish, and obstacles (shown in Figure 1). These scripted objects send their current locations via email. A multi-threaded Java email parser reads the email using the POP Gmail interface and stores it in a database. The particle-filter tracker and human steering model (implemented in Matlab) make their path predictions based on the database.


5 Parameter Fitting

To apply the model to the problem of predicting virtual avatar movement, we retrained the parameters with a set of short (5–6 second) movement traces collected from one subject in Second Life using the procedure described in Section 4. Goal parameters and obstacle parameters were trained separately. For the goal parameters (b, k_g, c1, and c2) we collected five traces with the same distance but different angles and four traces with the same angle but different distances. For the obstacle parameters (k_o, c3, and c4) we used four traces in which the goal was placed directly ahead and a single obstacle was offset slightly to the left at different distances. This parameter fitting procedure follows Fajen et al., as described in [3].

Initially, we used the original model parameters as a starting point to determine reasonable values and ranges, and performed a grid search to identify the set of parameters that minimized the sum-squared error between the trace set and the predictions. The resulting parameters are: b = 3.45, k_g = 10, c1 = 1, c2 = 1, k_o = 300, c3 = 4.5, c4 = 0.6. This process took 7 hours on a Core-Quad 2.4 GHz machine. Later, using a nonlinear optimization technique for least-squares error minimization [6], we confirmed that the parameters were close to the grid search results.
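In outline, an equivalent fit can be obtained with an off-the-shelf nonlinear least-squares routine. The sketch below reuses the hypothetical simulate_hsm function from Section 3, adds a nearest_distances helper, fits only the goal parameters on the goal-only trials, and seeds the optimizer with the grid-search values; it illustrates the general procedure rather than reproducing the authors' setup.

```python
import numpy as np
from scipy.optimize import least_squares

def nearest_distances(trace, path):
    """Distance from each recorded trace point to its nearest point on a predicted path."""
    trace, path = np.asarray(trace, float), np.asarray(path, float)
    return np.linalg.norm(trace[:, None, :] - path[None, :, :], axis=2).min(axis=1)

def fit_goal_parameters(traces, goals, start=(0.0, 0.0)):
    """Fit the goal parameters (b, k_g, c1, c2) on goal-only trials by nonlinear least squares."""
    def residuals(theta):
        b, kg, c1, c2 = theta
        res = []
        for trace, goal in zip(traces, goals):
            path = simulate_hsm(start, goal, obstacles=[],        # no obstacles in these trials
                                params=dict(b=b, kg=kg, c1=c1, c2=c2))
            res.append(nearest_distances(trace, path))
        return np.concatenate(res)
    theta0 = np.array([3.45, 10.0, 1.0, 1.0])   # grid-search result used as the initial guess
    return least_squares(residuals, theta0, bounds=(0.0, np.inf)).x
```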

6 Path Prediction

Before implementing the tracker, we evaluated the performance of the human steering model for predicting the complete paths taken by the human subjects in a set of Second Life obstacle courses. We compared the results of the human steering model against two other commonly used path prediction models: shortcut and Voronoi graph.

Shortcut: calculates the most direct path to the goal, ignoring obstacles.

Voronoi: searches for the shortest path to the goal along the Voronoi graph (assuming that the start and goal are projected onto the closest Voronoi segments).

Human Steering Model (HSM): uses the paths predicted by our retrained version of the visually-guided obstacle model.

The data used for testing the motion models was gathered from three different randomly generated layouts with varying numbers of obstacles in a Second Life environment. The paths planned by these three models on obstacle courses with varying numbers of obstacles are shown in Figure 3. Table 1 shows the error results, averaged across all subject traces, for different numbers of obstacles.

Table 1: Comparison of error (SSD) between the ground truth and the paths predicted by different motion models.

  Obstacles             1          3          9
  HSM (our method)    19.4115    15.5329    23.1993
  Shortcut            56.3101    49.7871    59.3962
  Voronoi            131.5580   197.4965   124.3457

Table 2: Number of subject traces (out of ten per layout) matched in the x and y dimensions according to the Kolmogorov-Smirnov test.

  Obstacles                    1     3     9
  Human steering model (x)     5     6     6
  Human steering model (y)    10    10    10
  Shortcut (x)                 0     3     3
  Shortcut (y)                 0     5    10
  Voronoi (x)                  0     1     3
  Voronoi (y)                  0     4     9

All error results are reported in Second Life meters. The error is computed by summing the squared distances between each point on a subject trace and its closest corresponding point on the predicted path. The results indicate that the human steering model outperforms the other two models for every number of obstacles.
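In code, this metric reduces to summing the squared nearest-point distances along the trace, reusing the hypothetical nearest_distances helper sketched in Section 5:

```python
def path_ssd(trace, predicted_path):
    """Sum of squared distances from each trace point to its closest point on the prediction."""
    return float((nearest_distances(trace, predicted_path) ** 2).sum())
```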

We statistically tested the goodness of fit between the different models and the original traces using a two-sample Kolmogorov-Smirnov test [2], which evaluates the equality of one-dimensional probability distributions. The null hypothesis is that both paths (the human subject trace and the model prediction) are drawn from the same distribution. Since the Kolmogorov-Smirnov test applies only to one-dimensional data, the test was applied to the x and y axes separately for the different obstacle layouts, using ten subject trials for each layout.
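A sketch of this per-axis test using scipy is shown below; the traces and predicted paths are assumed to be arrays of (x, y) points, and the 0.05 significance level is our assumption, since the paper does not state the level used.

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_axis_match(trace, predicted_path, alpha=0.05):
    """Two-sample Kolmogorov-Smirnov test applied to the x and y coordinates separately."""
    trace = np.asarray(trace, float)
    pred = np.asarray(predicted_path, float)
    matches = {}
    for axis, name in enumerate(("x", "y")):
        result = ks_2samp(trace[:, axis], pred[:, axis])
        # Failing to reject the null hypothesis counts as a "match" on this axis.
        matches[name] = result.pvalue > alpha
    return matches
```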

The results of the test are presented in Table 2. The human steering model matches all ten subject traces in the y dimension and at least five traces in the x dimension for every layout. Note that for these obstacle layouts, prediction in the x dimension is harder than prediction in the y dimension, since most of the obstacle avoidance involves maneuvering in the x dimension, whereas prediction in the y dimension mainly requires a good estimate of the subject's translational velocity. The fit of the other two models is quite poor, especially for the single-obstacle case.


7 Particle Filter Tracker

To track the movement of players in Second Life, we implemented a particle filter tracking system. Unlike in vision applications, the position data exported from Second Life is highly accurate, but the latency of email delivery has an effect analogous to camera occlusion, creating long periods of time during which no sensor data is received. By using our trained human steering model as a motion model for the particle filter, we can robustly track the human subjects during these network-induced "occlusions".

A particle filter is a non-parametric Bayesian technique for estimating posterior distributions; each particle represents a hypothesis of the true world state at time t. The particle filter algorithm approximates the belief bel(x_t) using a set of particles X_t; the particle set X_t is constructed recursively from the previous particle set X_{t-1}, the control trajectory u_t, and the observation z_t. The main steps of the algorithm are as follows [9]:

1. A hypothetical state x_t is generated for time t from the previous particle set X_{t-1} and the control trajectory u_t by sampling $x_t \sim p(x_t \mid u_t, x_{t-1})$.

2. The importance factor of each particle is calculated by incorporating the observation z_t into the particle set X_t, such that $w_t = p(z_t \mid x_t)$.

3. The particle set is resampled using the importance weights. After resampling, the particles are distributed approximately according to the posterior belief $bel(x_t) = \eta\, p(z_t \mid x_t)\, \overline{bel}(x_t)$, where $\overline{bel}(x_t)$ is the predicted belief produced in step 1 and $\eta$ is a normalizing constant.

7.1 Method

Assuming that the locations of the goal and obstacles are known, we can construct a path using our trained human steering model to serve as the control trajectory for the particle filter. The particle filter is then used to track the person's location between the sparse position updates, when there is no sensor information; the state description in our particle filter is simply the (x, y) coordinates of the tracked user.

We evaluated the tracking performance of four different motion models during the sensor blackout period:

Shortcut: generates the most direct path to the goal, ignoring obstacles.

Voronoi: searches for the shortest path to the goal along the Voronoi graph (projecting the start and goal onto their nearest segments). Such a path moves towards the goal while staying away from obstacles.

Constant Velocity: maintains the last observed velocity, thus extrapolating during the sensor blackout using a linear model.

Human Steering Model (HSM): generates a path using our retrained version of the visually-guided obstacle model.

Table 3: Tracking error (SSD) of the particle filter using different motion models.

  Motion Model          Error (SSD)
  HSM                       3.0116
  Voronoi                  10.9907
  Shortcut                 24.2757
  Constant Velocity       134.5305

Figure 3 shows the paths generated for three worlds (with varying numbers of obstacles) by each of the four motion models, compared to the ground truth obtained from Second Life after the subject finished traversing the obstacle course. Circles in the figure represent the obstacles and the goal is marked by a plus sign. Each path starts at (0, 0) and proceeds in an upward direction. At the point of sensor blackout, the paths predicted by the different motion models diverge. In particular, one can see that the constant velocity model is adequate only in the absence of obstacles or when the duration of the sensor blackout is short. Particle filters employing the different motion models therefore perform very differently; we detail their operation below.

At each time step, for each particle, the closest segment on the path generated by the motion model is found, and the particle's position is updated so that it moves parallel to this segment at the predicted speed. The position of the particle is then perturbed using zero-mean Gaussian noise. When observations are available, the weight (likelihood) of each particle is determined using a Gaussian sensor model; in the absence of observations, all of the particles receive an equal weight.

In the resampling step, the weights are first sorted in ascending order and normalized to sum to one. The cumulative sum of the sorted weights is then calculated. A uniform random number is drawn, and the particle whose position on the cumulative sum is closest to that number is added to the posterior particle set. This step is repeated once per particle to construct the new particle set. The process is summarized in Algorithm 1.


Figure 3: Paths predicted by the four motion models for a variety of obstacle courses. The standard motion model (constant velocity) is adequate only for short durations or in areas of low obstacle density. The proposed model (HSM) generates paths that better approximate the user's ground truth trajectory.

Algorithm 1: ParticleFilter(X_{t-1}, u_t, z_t)
  X_{t-1}: previous particle set (particle locations)
  u_t: control trajectory (path generated by the motion model)
  z_t: observed location of the avatar, when available

  Q <- empty set
  for each particle x in X_{t-1} do
      [u1, u2] <- closest path segment to x on u_t
      theta <- angle from u1 to u2
      v ~ N(|u2 - u1|, |u2 - u1|^2)
      x <- x + (v cos theta, v sin theta) + N(0, sigma^2)
      add x to Q
      if z_t exists then
          weights[x] <- 1 / ||z_t - x||
      else
          weights[x] <- 1
      end if
  end for

  weights <- weights / sum(weights)
  [s, si] <- sort(weights)        // ascending order; si holds the original particle indices
  cumdf <- cumsum(s)

  X_t <- empty set
  for m = 1 to |X_{t-1}| do
      r <- rand(0, 1)
      i <- index such that cumdf(i) is closest to r
      add Q(si(i)) to X_t
  end for

  return X_t
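A compact runnable rendering of Algorithm 1 is sketched below. It assumes the particles are stored as a float (n, 2) NumPy array and that the control path u_t is an (N, 2) array of waypoints (for example, the output of the simulate_hsm sketch in Section 3); the positional noise scale is illustrative, and the sorted cumulative-sum resampling of Algorithm 1 is replaced by numpy's equivalent weighted draw.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, path, z=None, sigma=0.1):
    """One step of Algorithm 1: propagate particles along the predicted path, weight, resample."""
    n = len(particles)
    propagated = np.empty_like(particles)
    weights = np.ones(n)
    for i, x in enumerate(particles):
        # Closest segment of the motion-model path to this particle, and its direction.
        j = int(np.argmin(np.linalg.norm(path[:-1] - x, axis=1)))
        seg = path[j + 1] - path[j]
        step_len = np.linalg.norm(seg)
        theta = np.arctan2(seg[1], seg[0])
        # Speed sampled around the segment length (v ~ N(|u2-u1|, |u2-u1|^2)), plus position noise.
        v = rng.normal(step_len, step_len)
        propagated[i] = (x + v * np.array([np.cos(theta), np.sin(theta)])
                         + rng.normal(0.0, sigma, size=2))
        # Inverse-distance weight when an observation is available, uniform weight otherwise.
        if z is not None:
            weights[i] = 1.0 / (np.linalg.norm(np.asarray(z) - propagated[i]) + 1e-9)
    weights /= weights.sum()
    # Weighted resampling (equivalent to Algorithm 1's sorted cumulative-sum draw).
    idx = rng.choice(n, size=n, p=weights)
    return propagated[idx]
```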

7.2 Results

Figure 4 shows sample traces from the particle filter using the constant velocity and HSM motion models. Clearly, the popular linear model is poorly suited to predicting the user's position during a sensor blackout when the motion is affected by the presence of obstacles. As a result, the predicted distribution of particles drifts significantly from the ground truth during these periods. In contrast, the path generated by the proposed method (HSM) captures the motion of the user, enabling the particle filter to track the user robustly even through extended periods of sensor blackout.

Table 3 compares the tracking results (the mean of the posterior distribution) of the four motion models against ground truth over the duration of the obstacle course. We confirm that the constant velocity model performs poorly because linear extrapolation cannot model paths in obstacle-rich settings. Steering directly towards the goal (shortcut) is also a poor choice because it fails to account for the effect of obstacles. Following the Voronoi path is overly conservative at obstacle avoidance. The human steering model, trained on Second Life data, is able to accurately predict the user's path through the obstacle course during periods of sensor blackout.

Figure 4: Particle filter using the constant velocity model (top row) and the human steering model (bottom row); snapshots shown at t = 1.4 s, 2.4 s, 3.3 s, 4.0 s, and 5.0 s.

To further study how the quality of the paths predicted by the different motion models varies with the duration of sensor blackouts, we performed a series of experiments with increasingly long sensor blackouts (0.5–3.5 s). Table 4 summarizes the results. As suggested by our hypothesis, the constant velocity model performs well for short (t ≤ 1.0 s) durations but rapidly deviates from the user's path after that point. The Voronoi model is also a poor predictor because the user does not typically start near a Voronoi edge. The heuristic of simply heading towards the goal performs surprisingly well in these experiments, but this should be attributed to the low density of obstacles between the user and the goal during the sensor blackout. Our method (HSM) is initially worse than constant velocity but quickly (t > 1.0 s) becomes the best choice. Furthermore, it accumulates error at a slower rate than the other methods, confirming that it can predict user trajectories during long sensor blackouts.

8 Discussion

One straightforward extension for improving the tracking accuracy of the human steering model is to augment the state of each particle to include the parameters of the HSM. Each particle would then effectively maintain a different hypothesis about the current state of the HSM model. In future work, we plan to evaluate the performance of different state-space representations for our particle filter tracker.

A general issue with applying the HSM to new problems is that the geometry of the environment (obstacles and goals) has to be known in advance; this makes our method well suited for tracking pedestrians from surveillance cameras in a known area or for monitoring users in a virtual world. In cases where the camera is moving (e.g., mounted on a robot) or the world geometry is unknown, environment-dependent models such as the HSM or Voronoi cannot be applied. For Second Life scenarios where the geometry is known, we have demonstrated a supervised approach for learning destination preferences by leveraging user map annotations in situations where it is possible to gather user-supplied activity data [8].

We suggest that it is also possible to use the human steering model for goal prediction in situations where there are multiple possible goals and the algorithm must predict the user's ultimate destination. Our approach to this problem would be to use the HSM to calculate the difference between the user's current trajectory and the set of HSM-generated trajectories to each candidate goal in the environment. This metric could be used as a measure of the relative likelihood of each candidate destination. An interesting future application of this technology would be parking lot surveillance systems that match cars to pedestrians, since it would be possible to identify the set of goals. In such a domain, cars would serve both as obstacles (to people heading for other cars) and as goals (for the car owner); this obstacle/goal duality would serve to accentuate the difference between candidate hypotheses.
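A minimal sketch of this scoring scheme, reusing the hypothetical simulate_hsm and nearest_distances helpers from the earlier sketches, is shown below; the conversion from trajectory error to a relative likelihood via a softmax is our illustrative choice and is not something evaluated in the paper.

```python
import numpy as np

def goal_likelihoods(observed_trace, start, candidate_goals, obstacles):
    """Rank candidate destinations by how well HSM-generated paths explain the observed trace."""
    errors = []
    for goal in candidate_goals:
        predicted = simulate_hsm(start, goal, obstacles)
        errors.append(float((nearest_distances(observed_trace, predicted) ** 2).sum()))
    errors = np.asarray(errors)
    # Lower trajectory error => higher relative likelihood; the softmax scaling is arbitrary.
    scores = np.exp(-errors / (errors.mean() + 1e-9))
    return scores / scores.sum()
```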

Table 4: Error (SSD) of the different motion models when extrapolating paths over sensor blackouts of increasing duration.

  Time (s)              0.5       1.0       1.5       2.0       2.5       3.0       3.5
  HSM                1.1794    4.1339    7.2778   11.3609   15.3684   18.7960   21.6839
  Shortcut           0.8094    4.5494   11.2217   22.2225   32.8891   41.6204   49.0982
  Voronoi            4.9633   15.4349   25.3729   34.7028   57.6315   77.1578   90.2858
  Constant Velocity  0.2814    3.3736   10.9680   26.6316   50.3948   81.2956  121.3487

9 Conclusion

The ability to predict the path of a moving human is a crucial element in a wide range of applications, including video surveillance, assisted living environments (smart homes), and simulation systems. In this paper, we demonstrate an approach for path prediction and user tracking in the Second Life virtual world; our method is applicable to any environment (real or virtual) with known geometry. Although standard path planning approaches can be used to predict human behavior at a macroscopic level, they do not accurately model human angular acceleration preferences. We propose an alternate approach for path prediction based on a second-order dynamical human steering model originally introduced by Fajen et al. [3] and confirm the applicability of the HSM for our task by showing statistically significant path fit improvements over standard planning models. By basing the tracker's motion model on human steering preferences, we improve on strictly geometric models such as Voronoi diagrams. Additionally, our model is robust to network-induced "occlusions" and outperforms the commonly used constant velocity motion model during sensor blackouts.

References

[1] A. Bruce and G. Gordon. Better motion prediction for people-tracking. In Proceedings of the International Conference on Robotics and Automation, 2004.

[2] W. J. Conover. Practical Nonparametric Statistics, chapter 6: Statistics of the Kolmogorov-Smirnov Type, pages 428-467. John Wiley & Sons, 3rd edition, 1999.

[3] B. R. Fajen, W. H. Warren, S. Temizer, and L. P. Kaelbling. A dynamical model of visually-guided steering, obstacle avoidance, and route selection. International Journal of Computer Vision, 54(1-3):13-34, 2003.

[4] L. Liao, D. Fox, J. Hightower, H. Kautz, and D. Schulz. Voronoi tracking: Location estimation using sparse and noisy sensor data. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2003.

[5] L. Liao, D. Fox, and H. Kautz. Learning and inferring transportation routines. In Proceedings of the National Conference on Artificial Intelligence, 2004.

[6] S. G. Nash and A. Sofer. Linear and Nonlinear Programming, chapter 13: Nonlinear Least-Squares Data Fitting. McGraw-Hill, 1996.

[7] C. W. Reynolds. Steering behaviors for autonomous characters. In Proceedings of the Game Developers Conference, 1999.

[8] S. F. A. Shah, P. Bell, and G. Sukthankar. Identifying user destinations in virtual worlds. In Proceedings of the Florida Artificial Intelligence Research Society Conference, 2009.

[9] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, 2005.

[10] G. Welch and G. Bishop. An introduction to the Kalman filter. Technical report, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA, 1995.

[11] B. Ziebart, A. Maas, J. A. Bagnell, and A. Dey. Maximum entropy inverse reinforcement learning. In Proceedings of the National Conference on Artificial Intelligence, 2008.
