
Agent-Based Gesture Tracking

Robert Bryll and Francis Quek
Vision Interfaces and Systems Laboratory (VISLab)

CSE Department, Wright State University, 303 Russ Engineering Center, 3640 Colonel Glenn Hwy, Dayton, OH 45435-0001

Correspondence: [email protected]

January 23, 2004

Abstract

We describe an agent-based approach to the visual tracking of human hands and head that represents a very useful “middle ground” between the simple model-free tracking and the highly constrained model-based solutions. It combines the simplicity, speed and flexibility of tracking without using explicit shape models with the ability to utilize domain knowledge and to apply various constraints characteristic of more elaborate model-based tracking approaches.

One of the key contributions of our system, called AgenTrac, is that it unifies the power of data fusion (cue integration) methodologies with a well-organized extended path coherence resolution approach designed to handle crossing trajectories of multiple objects. Both approaches are combined in an easily configurable framework. We are not aware of any path coherence or data fusion solution in the computer vision literature that equals the breadth, generality and flexibility of our approach.

The AgenTrac system is not limited to tracking only human motion; in fact, one of its main strengths is that it can be easily reconfigured to track many types of objects in video sequences. The multiagent paradigm simplifies the application of basic domain-specific constraints and makes the entire system flexible. The knowledge necessary for effective tracking can be easily encoded in agent hierarchies and agent interactions.

Index Terms: Computer vision, object tracking, agent-based systems, data fusion, path coherence, gesture tracking, gesture analysis.

This research has been funded by the U.S. National Science Foundation STIMULATE program, Grant No. IRI-9618887, “Gesture, Speech, and Gaze in Discourse Segmentation”, and the National Science Foundation KDI program, Grant No. BCS-9980054, “Cross-Modal Analysis of Signal and Sense: Multimedia Corpora and Tools for Gesture, Speech, and Gaze Research”.

1 Introduction

This article presents our work on developing a multiagent framework for vision-based tracking of

conversational gestures. As described in [1], visual analysis of human motion, including areas such

as hand gesture and face recognition, whole body tracking and activity recognition, has many pos-

sible uses, such as advanced user interfaces (e.g. gesture driven control), motion analysis in sports

and medicine (e.g. content-based indexing of video footage, clinical studies of orthopedic patients),

psycholinguistic research, smart surveillance systems, virtual reality and entertainment (e.g. games,

character animation, special effects in movies) and very low bit-rate video compression. The two

additional applications being studied in our research are improving speech recognition algorithms by incorporating gesture information, and vision-based assessment of the effect that a speech therapy used with Parkinson's disease patients has on their general motor performance.

Our gesture tracking framework is a part of a multidisciplinary effort to improve understanding

of human gestures, speech and gaze in natural conversation. Our research, encompassing multiple

institutions, has already resulted in numerous publications, such as [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12].

Although our project is focused on psycholinguistic aspects of human gesticulation and discourse,

the implications of this research are important for the human-computer interaction, machine vision

and computer science research. From the engineering point of view, better understanding of how

gestures function in natural human discourse and designing automated methods of gesture tracking

and analysis are crucial for the general progress of man-machine interaction and user interface

design research.

In this article we focus on our gesture tracking framework, with special emphasis on the inno-

vative agent-based gesture tracking system that allows us to extract hand motion traces from our

experimental gesture elicitation videos. The system, called AgenTrac, is symbolized by the shaded

rectangle in Figure 1.

As shown in Figure 1, our gesture tracking pipeline starts with gesture elicitation experiments

performed at various institutions with pairs of subjects involved in natural conversation, such as

describing an action plan or a cartoon seen by one of the participants. Figure 2 shows our typical

experimental setup.

A typical experiment is about 10 to 15 minutes long involving five video streams: two stereo

calibrated camera-pairs, one focused on the subject and one on the interlocutor, to capture head and hand movements,

[Figure 1: Our gesture tracking framework. Pipeline blocks: Gesture Elicitation Experiment (calibrated camera pair); VCM Motion Vector Computation (2D), comprising Interest Point Detection and Vector Computation; AgenTrac (2D), comprising Object Appearance Blob Detection (based on color, brightness, size), Trajectory Segment Extraction (First Stage), Trajectory Ambiguity Resolution (Second Stage), and Tracking Quality Control and Error Correction; 3D Triangulation of 2D Stereo Pair Data; 3D Motion Trajectories of Hands.]

[Figure 2: Typical experimental setup for gesture elicitation, comprising a detailed monocular main-speaker gaze view, calibrated interlocutor stereo camera views, and calibrated main-speaker stereo camera views.]

and one close-up on the subject's (the main speaker's) face to capture eye gaze and head orien-

tation. At 30fps, this results in about 22,000 frames per camera. Therefore our algorithms have to

be sufficiently robust, and accurate enough to allow us to correlate gestural motions (e.g. prepara-

tions, strokes, and retractions [13]) with detailed speech signal analysis and manually transcribed

and tagged speech elements. The algorithms must be sufficiently sensitive to minor hand motions

to capture the necessary psycholinguistically significant movements. The hand tracking in each of

the stereo cameras focused on each speaker must be accurate enough to permit adequate extraction

of three-dimensional movement.

Processing the recorded videos in order to extract the three-dimensional hand trajectories con-

sists of three main phases (see Figure 1):

First, we employ Vector Coherence Mapping (VCM), a parallel approach for the computation of

an optical flow field from a video image sequence [14, 7, 2]. This approach incorporates the various

local smoothness, spatial and temporal coherence constraints transparently by the application of

fuzzy image processing techniques. VCM accomplishes this by a weighted voting process in “local

vector space” with the constraints providing high level guidance. Our experimental results show

that VCM is capable of extracting flow fields for video streams with global dominant fields (e.g. due

to camera pan or translation), moving camera and moving object(s), and multiple moving objects. It

is also able to operate under both strong image noise and motion blur, and is resistant to boundary

oversmoothing. While VCM is a general coherent vector field extraction approach that may be

applied to other domains, we specialized it to the task of tracking hand movements.

Second, we perform the actual tracking of the hands using a multiagent-based approach, the

AgenTrac system, which is the key contribution of this article. The method fuses the motion infor-

mation in VCM vectors with the positional information from the skin-colored blobs to form hand

trajectories. One of the most significant innovations of the AgenTrac framework is that it addresses

the problem of crossing trajectories by using a multiagent approach. The trajectory segments are

represented by agents that are subsequently “recruited” by higher-level agents to form consolidated

trajectories of both hands. The similarity measures used to build the complete hand trajectories

are affected by the agent interactions and their hierarchy (agent coalitions which will be discussed

later), providing a powerful and straightforward way of influencing the trajectory choices during

tracking. This facilitates easy application of higher level domain-specific motion constraints that

improve tracking reliability, making the entire framework flexible.


Another key contribution of the AgenTrac system is that it unifies the power of data fusion

(cue integration) methodologies with a well-organized extended path coherence resolution approach

designed to handle crossing trajectories of multiple objects. Both approaches are combined in an

easily configurable framework. We are not aware of any path coherence or data fusion solution in

the computer vision literature that equals the breadth, generality and flexibility of our approach.

The two tracking phases described above deal with the 2-D video data. The third and final

phase is combining the computed image-plane hand traces using stereo calibration and triangulation

algorithm [15] to obtain 3-D hand trajectories from each stereo camera pair.

2 Tracking, Agents, Computer Vision, Path Coherence and Data Fusion

To provide a foundation for the discussion of the motivation and contributions of our agent-based

tracking framework, we present a very brief overview of several topics closely related to our re-

search.

2.1 Model-Free vs Model-Based Tracking

Based on the well-known reviews of human body tracking research, such as [16, 17, 1], the tracking

methods can be roughly divided into model-free and model-based approaches.

In general, the model-free approaches rely on establishing correspondence between consecutive

video frames based on similarity of features such as position, velocity, shape, texture and color.

The features can vary in complexity from low level (points) to higher level (lines, blobs, polygons).

There is a trade-off between feature complexity and tracking efficiency. Lower-level features are

easier to extract but relatively more difficult to track than higher-level features. The advantages

of model-free tracking include simplicity and speed, whereas its main disadvantage is inability to

encode and utilize a priori knowledge about the appearance and dynamic behavior of the tracked

body parts.

Model-based approaches may use stick figures, 2-D contours (ribbons) and volumetric models.

The basic task is to recover (e.g. using heuristic approaches) the configuration of the model that corre-

sponds to the video/image data. After a fit of the model to the 2-D view(s) is achieved, the model

pose can be analyzed. The main advantage of these methods is the fact that they can use the a pri-

ori knowledge about the appearance and dynamics of the human body in 2-D projections; however,


their high complexity and difficulty in encoding and modifying shape and motion constraints are

their disadvantages.

2.2 Agent-Based Systems

Research on agent and multiagent systems is a large and quickly growing subfield of Distributed

AI. Excellent introductions to the agent- and multiagent-based systems can be found in [18, 19, 20].

More in-depth discussions of multiagent systems issues are in [21, 22].

Wooldridge and Jennings [19] cite Carl Hewitt's remark that the question “what is an agent?” is

as embarrassing for the agent-based computing community as the question “what is intelligence?”

is for the mainstream AI community. This statement clearly indicates the fuzziness of the agent

definition and the relative youth of the entire research field.

However, following [23], a tentative definition of an agent can be formulated as follows:

“An agent is a (computer) system that is situated in some environment and that is capa-

ble of autonomous action in this environment in order to meet its design objectives”.

The environment in which an agent operates can be either real or simulated, e.g. an autonomous

robot works in a real environment, whereas a software agent’s environment is entirely virtual.

The slightly more restricted definition [23] states that an intelligent (also called autonomous)

agent has to be “[. . . ] capable of flexible autonomous action in order to meet its design objectives,

where flexibility means three things”: 1) Reactivity: An agent is able to perceive its environment

and respond in a timely fashion to changes that occur in order to satisfy its design objectives. 2) Pro-

Activeness: An agent exhibits goal-directed behavior by taking the initiative in order to satisfy its

design objectives. 3) Social Ability: An agent is capable of interaction (cooperation, competition)

with other agents and possibly humans in order to satisfy its design objectives.

The classical examples of agents include control systems (e.g. a thermostat, nuclear reactor

control system), autonomous space probes, software daemons (e.g. all processes running in the

background in the Unix operating system; these agents inhabit the non-physical software environ-

ment), robots, and travel agents (this is the ultimate example of intelligent autonomous agents:

humans).

Agents can be classified according to many taxonomies. One of the most popular divides agents

into deliberative (complex) and reactive (simpler) agents.


Agents are typically used in multiagent systems. An excellent survey of multiagent systems was

written by Stone and Veloso [18]. The authors present a very useful taxonomy of the multiagent

systems, classifying them according to two major features: degree of communication and degree of

heterogeneity of agents. The third important classification feature that can be applied to multiagent

systems is benevolence (degree of cooperation or competition) of agents.

Due to the space constraints and the breadth of the field, we have to refer the reader to the above

mentioned literature for a more adequate discussion of agents and multiagent systems. However,

we would like to emphasize a feature of these systems that makes them significant and appealing in

various research areas:

In our view, the key advantage of agent-based systems is the fact that agents form a new and use-

ful abstraction tool for analyzing and solving complex problems. Thinking about existing problems

in terms of agents and their interactions very often leads to innovative and interesting approaches

and new solutions. Moreover, multiagent systems enforce well-organized design and program-

ming, resulting in flexible, conceptually simple and extendable implementations. We hope that our

AgenTrac system is a good example of such an innovative approach and implementation.

2.3 Agent-Based Computer Vision

The phrase “Agent-Based Computer Vision” can be used in two contexts that are not necessarily

closely related. One usage of that phrase is to describe computer vision systems of autonomous

agents, usually autonomous robots [24]. The second usage is to refer to computer vision systems

consisting of software agents. The former systems deal with all aspects of autonomous robot vision,

such as active vision, mobile platform issues and stereo vision, whereas the latter systems employ

an agent-based approach to analyze/retrieve/process images or video sequences and do not have to be

owned by autonomous agents. Our AgenTrac system belongs to this second class of agent-based

computer vision systems. We call them design-level agent-based computer vision systems.

In design-level agent-based computer vision the most common usage of multiagent-based sys-

tems is to perform effective task decomposition by employing multiple agents with different pro-

cessing competencies (visual capabilities) at varying levels of abstraction to solve specific subtasks

and then to synthesize a solution. The agents are therefore usually used as a means to effectively mod-

ularize and/or parallelize the design of a computer vision system [25, 26, 27, 28, 29, 30, 31, 32, 33].

In contrast to this typical solution, where different agents are assigned to different visual tasks,


on the most general level our AgenTrac solution can be classified as assigning different agents to

different tracked visual objects. Related approaches have been proposed earlier [34, 27, 35], but

our approach involves extended flexibility and the ability to encode domain knowledge in agent

organizational structures (hierarchies and coalitions).

2.4 Path Coherence

One of the first attempts to solve the problem of resolving (crossing) trajectories of multiple objects

was the Multiple Hypothesis Tracking (MHT) proposed by Reid [36]. The method was later effi-

ciently implemented by Cox and Hingorani [37, 38]. More recently, Polat et al [39] improved the

method by combining it with the path coherence constraints proposed by Sethi and Jain [40, 41].

As described and implemented in [37, 38], Multiple Hypothesis Tracking involves generating

and evaluating a set of hypotheses about the positions of a set of tracked objects, given a set of position measurements (potential object positions) at each time step. The most likely hypothesis is

selected based on the previous hypotheses and current measurements using Bayesian evaluation.

An approach that achieves results similar to Multiple Hypothesis Tracking is the Greedy Ex-

change algorithm proposed by Sethi and Jain [40] and later improved by Salari and Sethi [42, 41]

to handle occlusions. The algorithm is designed to find coherent, smooth trajectories of a set of

features (objects, tokens, entities) tracked over a sequence of video frames. It relies on path coherence as-

sumptions and essentially involves examining the tree of all possible trajectories formed by the

features in all the analyzed frames, greedily maximizing the smoothness of motion of all tracked

features.

All the path coherence approaches listed above share the following disadvantages:

First, they assume that the location, scalar velocity and direction of motion of tracked objects

are relatively unchanged from one frame to the next. This assumption is easily violated in natural

human gesticulation captured at 30fps. Second, they do not offer an organized framework to in-

corporate additional cues (except motion cues) into the trajectory resolution process. As a result,

these approaches consider only a subset of situations that can be handled by the AgenTrac system.

The subset consists essentially of situations in which the overlaps between the tracked objects are

relatively brief and their trajectories smooth.
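To make the path coherence assumption concrete, the following sketch (with illustrative weights, and not the exact formulation of [40, 41] or of our system) shows one common form of smoothness cost over three consecutive positions of a single tracked feature, combining a direction-change term and a speed-change term:

import math

def path_deviation(p_prev, p_curr, p_next, w_dir=0.5, w_speed=0.5):
    """Illustrative path coherence (smoothness) cost for three consecutive
    positions of one tracked feature. Low values mean a smooth trajectory."""
    # Displacement vectors between consecutive frames.
    v1 = (p_curr[0] - p_prev[0], p_curr[1] - p_prev[1])
    v2 = (p_next[0] - p_curr[0], p_next[1] - p_curr[1])
    d1 = math.hypot(*v1)
    d2 = math.hypot(*v2)
    if d1 == 0 or d2 == 0:
        return 0.0  # no motion: treat as perfectly coherent
    # Direction change: 0 when the feature keeps moving the same way.
    cos_angle = (v1[0] * v2[0] + v1[1] * v2[1]) / (d1 * d2)
    dir_term = 1.0 - cos_angle
    # Speed change: 0 when consecutive displacements have equal magnitude.
    speed_term = 1.0 - 2.0 * math.sqrt(d1 * d2) / (d1 + d2)
    return w_dir * dir_term + w_speed * speed_term

# Smooth motion gives a low cost, an abrupt turn a noticeably higher one.
print(path_deviation((0, 0), (1, 0), (2, 0)))   # close to 0.0
print(path_deviation((0, 0), (1, 0), (1, 3)))   # much larger

Greedy Exchange-style methods minimize the total of such costs over all tracked features and frames; natural gesticulation at 30 fps routinely produces frame-to-frame changes that make this cost large even for correct trajectories.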


2.5 Data Fusion

Combining various sources of information is known to improve accuracy and robustness of vision-

based object tracking. In computer science, the technique is known under various names, such as

“cue integration”, “data association” or “data fusion” and due to its intuitive obviousness it is often

used without being explicitly named. Its basic idea is combining data from multiple and diverse

sources of information (“sensors”) in order to perform inferences that would not be possible with

only a single source. Data fusion has been a very important topic in military research, where it is

known as “multisensor data fusion” or “distributed sensing” [43] and it is used in automatic identi-

fication of targets in air defense, analysis of battlefield situations and threat assessment. The non-

military applications include remote sensing problems (e.g. location of mineral resources), control

of complex machinery (e.g. nuclear power plants), law enforcement (e.g. detection of drugs, airport

security), automated manufacturing, robotics, and medical diagnosis, where the problem is often

referred to as “registration” of various sensor modalities. Data fusion relies on methods developed

in various areas of science, including signal processing, statistics, artificial intelligence, pattern

recognition, cognitive psychology and information theory [43]. Statistical methods of estimation

(recursive estimation in particular) and time-series analysis form a theoretical backbone of many

data fusion techniques [44].

In computer vision one can find examples of data fusion research that apply strict mathematical

rigor similar to that presented in [43], and also numerous examples of implicit or explicit data

fusion performed in a more intuitive way. Usually, the presented methods rely on extracting and

combining various cues from a single sensor (camera) by applying algorithms that derive different

types of information from the same modality (e.g. edge information and color information extracted

from the same video image). This contrasts with multisensor data fusion common in robotics or

military applications. Recent examples of data fusion applied in computer vision include [45, 46,

47, 48, 49, 50].

The main disadvantage of the data fusion approaches existing in computer vision is that most of

them do not even consider resolving crossing trajectories of multiple objects. Only Rasmussen and

Hager [48] attempt to model crossing trajectories, occlusions and simple kinematic constraints by

using the Constrained Joint Likelihood Filter. Their framework is well organized and in many

ways equivalent to the AgenTrac system. However, our agent-based approach provides more

built-in flexibility and conceptual clarity, resulting in better usability of our system. In particu-


lar, the AgenTrac system offers a flexible and straightforward way of encoding common sense in

an intermediate-level representation; the framework presented by Rasmussen and Hager does not

have such an advantage.

2.5.1 Active Fusion

The active fusion approach was proposed by Pinz, Prantl, Ganster and Kopp-Borotschnig in 1996

[51] in the context of image interpretation in remote sensing (satellite images). Since then the

authors produced a series of publications on the subject: [52, 53, 54, 55].

Active fusion, in contrast to the standard data fusion, is concerned not only with how to combine

information, but also how to select it in order to achieve best results. As stated in [52], active fusion

“not only combines information, but also actively selects the sources to be analyzed and controls

the processes to be performed on these data following the paradigm of active perception”. Put

differently, the active fusion paradigm attempts to actively control the acquisition of information in

addition to combining the available cues as in traditional data fusion.

The active fusion concept transforms the traditionally sequential image understanding task into

an iterative and interactive process, so that the standard preprocessing - segmentation - representation

- recognition pipeline used in computer vision is no longer fixed and no longer unidirectional [26].

Shang and Shi [26] observed that the active fusion paradigm fits very well into the multiagent

framework. In [26] they propose an agent-based system implementing this framework.

In the following discussion we will show that active fusion is the next logical step in the

development and improvement of our agent-based tracking system.

3 Vector Coherence Mapping

Our tracking process starts with the extraction of relevant motion vectors from the video sequence.

This comparatively low-level step is achieved by Vector Coherence Mapping (VCM), a correlation-

based algorithm that tracks iconic structures (regions, templates) in video while implicitly applying

local smoothness, motion and other constraints [14, 7]. As such, VCM can be broadly classified as

an optical flow computation algorithm.

Barron et al’s review of optical flow techniques [56] contains a good classification of optical

flow approaches. According to this taxonomy, our VCM approach falls under the region-based


matching category. VCM employs a local piecewise constant parametric motion model in that it

applies smoothness constraints in localized image regions instead of the entire image.

The following is an intuitive explanation of the basic principles of the VCM algorithm. A more

detailed and formalized presentation can be found in our earlier publications [14, 7, 2, 57].

Figure 3 illustrates how VCM applies a spatial coherence constraint (minimizing the directional

variance). Assume 3 feature points $p^t_1$, $p^t_2$, $p^t_3$ at time $t$ (represented by the squares at the top of the figure) move to their new locations (represented by circles) in the next frame. If all three feature points correspond equally to each other, correlation-based matching (e.g. by ADC) from each $p^t_i$ would yield correlation maps with 3 hotspots (shown as $N(p^t_1)$, $N(p^t_2)$, $N(p^t_3)$ in the middle of Figure 3).

If all three correlation maps were summed, we would obtain the vector coherence map (vcm) at

the bottom of the figure. The ‘correct’ correlations would reinforce each other, while the chance

correlations would not. Therefore a simple weighted summation of neighboring correlation maps

yields a vector that minimizes the local variance in the computed vector field. We can adjust the

degree of coherence enforced by adjusting the contributing weights of the neighboring correlation

maps as a function of distance of these maps from the point of interest.
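As an illustration, the following sketch shows this weighted summation under simplifying assumptions (pre-computed per-point correlation maps expressed in a common local vector space, Gaussian distance weighting); the function and parameter names are illustrative, not our actual VCM implementation:

import numpy as np

def vector_coherence_map(corr_maps, positions, point_idx, sigma=20.0):
    """Combine the correlation maps of neighboring interest points into a
    vcm for one point, weighting each neighbor by its distance to the point.
    corr_maps: list of HxW arrays, one ADC/correlation map per interest point,
               expressed in the local vector space of the point of interest.
    positions: list of (x, y) interest point coordinates."""
    x0, y0 = positions[point_idx]
    vcm = np.zeros_like(corr_maps[0], dtype=np.float64)
    for corr, (x, y) in zip(corr_maps, positions):
        dist2 = (x - x0) ** 2 + (y - y0) ** 2
        weight = np.exp(-dist2 / (2.0 * sigma ** 2))  # closer neighbors count more
        vcm += weight * corr
    return vcm

def best_vector(vcm, search_radius):
    """Pick the displacement whose coherence response is strongest,
    assuming the map covers displacements in [-search_radius, search_radius]."""
    dy, dx = np.unravel_index(np.argmax(vcm), vcm.shape)
    return dx - search_radius, dy - search_radius

Because the correct correspondences reinforce each other in the summed map while chance correlations do not, the argmax of the combined map tends to select the locally coherent vector.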

VCM can be related to local differential methods [58, 59] which were found to be good perform-

ers [56]. More precisely, in terms of smoothness constraint application, VCM is similar to [58, 60],

with their weighted least-squares local smoothness criterion where higher weights are assigned to

the pixels close to the center of the considered neighborhood. The advantage of VCM is that it

combines the correlation and constraint-based smoothing processes into a set of fuzzy image pro-

cessing operations. Its flexible fuzzy model facilitates the enforcement of a variety of constraints,

such as momentum, color and neighborhood similarity by application of various types of likelihood

masks. The method is also easily parallelizable and does not require any iterative post-process.

More details about VCM are available in [14, 7, 2, 57].

3.1 Interest Point Detection

VCM computes the motion vectors for a set of interest points detected in each frame by combining

a modified moving edge detector with skin color likelihood information, which results in track-

ing moving edges of skin-colored regions. A detailed description of our interest point selection

algorithm can be found in [57].

[Figure 3: Spatial Coherence Constraint in VCM. Feature points P1, P2, P3 with their ADC areas; the correlation maps $N(p^t_1)$, $N(p^t_2)$, $N(p^t_3)$ with their correlation 'hot spots'; and the summed map $vcm(p^t_1)$, in which the correct correspondence is reinforced.]

[Figure 4: Overview of the AgenTrac system. A VideoReader (movie file) and pre-computed VCM motion vectors feed the FeatureDetectors (BlobDetector, MotionBlobDetector, VCMVectorDetector) and BlobCreators managed by the FeatureManager. The AgentManager hosts the BlobAgents of the First Stage (segment extraction) and the LabelingAgents of the Second Stage (ambiguity resolution), under a SystemDirector and a Supervising Agent (human or software).]

4 The AgenTrac System

4.1 Motivation

The main reason for developing the AgenTrac system is to create a system that provides an or-

ganized and flexible framework for handling crossing trajectories and occlusions of multiple ob-

jects, since these aspects of tracking are addressed by very few systems in computer vision. Other

reasons include straightforward and quick encoding of basic domain-specific motion and position

constraints into the tracking process without making the entire system overly complex, offering

a framework for future extensions and providing ability to combine vector-based and blob-based

tracking to improve tracking accuracy and robustness.

4.2 System Overview

Figure 4 shows a simplified block diagram of our AgenTrac system. All the major components

of the system and their relationships are presented. The AgentManager is the main system com-


ponent, encapsulating all agents as well as the FeatureManager that manages FeatureDetectors

and provides the agents with features extracted from video. The system contains two types of

agents - the BlobAgents and the LabelingAgents - responsible for two distinct phases (stages) of

the tracking process that will be described in Section 4.3. The BlobAgents use features supplied

by FeatureDetectors that can detect various image features such as color blobs, image motion ar-

eas and also use inter-frame motion vectors pre-computed by our VCM algorithm as an additional

source of motion information. More FeatureDetectors can be added to the system as required.

In the current implementation the tracking is essentially blob-based, where blobs represent image

regions sharing some characteristics (e.g. skin color regions and/or regions of coherent image mo-

tion). However, the proposed agent architecture can work with different types of tracking evidence.

4.3 Processing Stages

Figure 5 illustrates the two processing stages used by the AgenTrac system. In Figure 5a the

horizontal axis represents time (quantized by the video frames) and the vertical axis represents the

x position of each agent (assumed to be a moving “blob” for simplicity). The y dimension is omitted

for clarity, although the real system tracks objects in two dimensions.

The two processing stages are:

• First Stage: Blob Agent (Segment) Extraction Stage (Figure 5a), in which the BlobAgents

play an active role and the LabelingAgents are only passively providing BlobAgents with

object (blob) detection parameters.

• Second Stage: Ambiguity (Trajectory) Resolution Stage (Figure 5b), in which the Labelin-

gAgents are active and the BlobAgents are (mostly but not fully) passive and used by the

LabelingAgents as trajectory segments to be “chained” together.

The two stages of processing, together with the agents active in each stage, can be related to

the cognitive model used in [29]. The First Stage, with its active BlobAgents, corresponds to the

perceptive stage of the model, and the Second Stage, with the LabelingAgents and the Supervising

Agent playing central roles, corresponds to the cognitive stage. The First Stage extracts relatively

low-level information, permitting the LabelingAgents in the Second Stage to operate at a higher

level of abstraction and “reason” about the extracted entities.

[Figure 5: Two processing stages employed by the AgenTrac system. a) First Stage (segment extraction): the tracked blobs are plotted as x position versus frame number (frames 1-20); BlobAgents A and B share a single blob in a 4-frame overlap, and BlobAgents C and D in a 1-frame overlap; an ambiguity ends the active life of A and B and spawns C and D, and a later ambiguity ends the active life of C and D and spawns E and F. b) Second Stage (trajectory resolution): LabelingAgent L selects the BlobAgent chain A-C-F with the highest Total Similarity Measure S.]

Below we discuss Figure 5 in order to give the reader an intuitive understanding of how the

system works. Although the illustration shows only an example behavior in a particular trajectory

intersection case, we hope it gives a good idea about the general principles of the system’s behavior.

In Figure 5a, six BlobAgents denoted as A, B, C, D, E and F are extracted in the First Stage.

The BlobAgents represent segments of object trajectories that can be tracked with high reliability

(this generally means “clean” trajectory segments between object overlaps, trajectory splits and oc-

clusions). The BlobAgents can be thought of as “smart” object trajectory segments that possess

some additional information and are aware of overlaps with one another. In the current implementa-

tion, the BlobAgents fuse the object blob information with the pre-computed VCM motion vectors

and motion region information to achieve higher tracking accuracy.

The BlobAgent trajectories are drawn with thick curves, ending with filled circles symbolizing

beginnings and endings of agent lifetimes. Agents A and B do not have the beginnings of their lives

marked, since their tracking intervals extend beyond the left edge of the drawing. Similarly, agents

E and F continue their tracking beyond the right edge of the figure.

As shown, agents A and B continue their tracking despite overlapping (sharing the same object

appearance blob) from frame 6 to frame 9. The BlobAgents accept overlaps with one another, i.e.

they can continue tracking despite being in overlap, and they know the identity of the agent(s) they

overlap with.

Each BlobAgent stops tracking when a trajectory ambiguity is reached (e.g. two hand blobs

merge and then separate) or further tracking is impossible (e.g. the tracked object disappears).

After stopping active tracking, a BlobAgent enters a dormant state in which it looks for possible

continuations of its trajectory. The criteria used to detect trajectory ambiguities will be touched

upon in the following discussion.

After stopping tracking due to trajectory ambiguities (splits), which occur at frame 9 for agents

A and B and frame 15 for agents C and D, the stopped agents continue gathering links to potential

continuations of their trajectories. The potential trajectory continuations are embodied by newly

spawned BlobAgents, in this case C, D, E and F. Both agents A and B store links to agents C

and D as possible trajectory extensions, and both C and D store links to E and F. No trajectory

disambiguation is performed in the First Stage; all decisions are postponed until the Second Stage.

The BlobAgents and their links to potential trajectory continuations (other BlobAgents) form a

tree of possible object trajectories. We would like to emphasize that if we limit the number of links


that can be collected by a single BlobAgent, the BlobAgents will effectively prune the search tree

of all possible (crossing) blob trajectories. The trajectory links are schematically visualized in the

center of Figure 5.

In the Second Stage, shown in Figure 5b, the BlobAgents extracted in the First Stage are used

by LabelingAgents to construct full trajectories of the tracked objects. There is one LabelingA-

gent assigned to each tracked object (e.g. one agent for the right hand, one for the left hand and

one for the head). A LabelingAgent L (see Figure 5b) recursively searches the tree of possible

trajectories and selects a trajectory - equivalent to a chain of BlobAgents - that maximizes a Total

Trajectory Similarity Measure $S$ that will be described in the following sections. All the tracking

information gathered by the BlobAgents in the First Stage, including the overlap information, af-

fects the similarity measure $S$. Since the search of the trajectory tree is exhaustive, the depth of

recursive search is limited to maintain acceptable performance.
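The following sketch illustrates this depth-limited recursive search over the tree of BlobAgent links; the similarity function and the agent fields are placeholders for the measures defined in Section 5, not the actual implementation:

def best_chain(agent, depth_limit, chain_similarity, chain=None):
    """Recursively extend a chain of BlobAgents along the stored follower
    links and return the chain maximizing the Total Trajectory Similarity.
    agent.followers is assumed to hold the links to possible continuations."""
    chain = (chain or []) + [agent]
    best = (chain_similarity(chain), chain)
    if len(chain) >= depth_limit:
        return best                      # recursion depth limit keeps the search tractable
    for follower in agent.followers:
        candidate = best_chain(follower, depth_limit, chain_similarity, chain)
        if candidate[0] > best[0]:
            best = candidate
    return best

# A LabelingAgent would call best_chain(current_agent, depth_limit, S) and
# adopt the returned chain as the next piece of its object trajectory.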

In the existing implementation of the system, the First Stage is automatic and requires only

manual initialization of certain blob detection parameters, such as skin color samples, brightness

range, blob size range, and color detection threshold. The system then traverses the entire video se-

quence automatically, creating thousands of BlobAgents. Since the BlobAgents are implemented

as persistent objects, they can be stored in a file to be later used in the Second Stage.

The Second Stage requires active participation of a Supervising Agent (see Figure 4) that guides

the LabelingAgents in creating full and correct object trajectories. The iterative nature of this stage

is denoted by a processing loop in Figure 1.

4.4 Required BlobAgent Behavior

To demonstrate the conceptual simplicity of our system at the agent control level, we list the behav-

iors that are required of the BlobAgents in the First Stage of processing. The BlobAgents have to

be able to:

• track object evidence (blobs of features) according to parameters specified by the LabelingA-

gents;

• tolerate overlaps with other BlobAgents and to be aware of them;

• detect ambiguities in tracking (trajectory splits);

• collect links to potential trajectory extensions (followers; newly spawned BlobAgents).

Each BlobAgent detects tracking ambiguities by performing motion prediction, finding ob-

ject evidence (object color, motion blob) in the neighborhood of its predicted next frame position,

ranking the evidence according to position and appearance similarity and deciding if there is a well-

defined single best evidence (object blob). If no well-defined single best evidence is present, an

ambiguity is flagged and the agent stops its active tracking, entering a dormant state.
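A simplified sketch of this per-frame decision is given below; the method names, thresholds and the evidence-scoring call are illustrative placeholders for the combined blob and motion evidence described above:

def step(agent, candidate_blobs, margin=0.2, min_score=0.3):
    """One tracking step of a BlobAgent: rank candidate evidence near the
    predicted position and either continue, or flag an ambiguity and stop."""
    predicted = agent.predict_next_position()               # simple motion prediction
    scored = sorted(
        ((agent.evidence_score(blob, predicted), blob)      # position + appearance match
         for blob in candidate_blobs),
        key=lambda pair: pair[0],
    )
    if not scored or scored[-1][0] < min_score:
        agent.stop(reason="object lost")                    # no usable evidence at all
        return
    if len(scored) > 1 and scored[-1][0] - scored[-2][0] < margin:
        agent.stop(reason="ambiguity")                      # no single best blob: go dormant
        return
    agent.track(scored[-1][1])                              # well-defined single best evidence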

4.5 Supervising Agents

The Supervising Agent shown in the upper left corner of Figure 4 provides a high-level guidance for

the tracking agents in the Second Stage. In the current implementation the role of the Supervising

Agent can be played by a human operator or by a software agent called AutoTester.

The AutoTester Supervising Agent provides an effective way of evaluating the system perfor-

mance for a broad range of tracking parameters and it enables the system to learn the values of

tracking parameters that result in most effective tracking, as will be shown in Section 6. The agent

uses the motion traces reviewed and corrected by a human as a reference to perform multiple runs

of the tracking system while varying selected tracking parameters. It is able to collect correction

statistics for each run and select the parameter values resulting in the smallest number of tracking

errors.

A Supervising Agent can affect the tracking agents in the following ways:

• By correcting the tracking errors and incorrect trajectory resolutions. Corrections affect the

further tracking by changing the relative importance of the tracking agents, giving priority to

agents that make fewer tracking errors.

• By adjusting certain tracking parameters and templates, such as an agent's adaptability and/or

position template, that affect the Total Trajectory Similarity Measure $S$ used in trajectory

disambiguation.

The human operator supervising the AgenTrac system in the Second Stage of processing in-

teracts with the agents using an interface/protocol identical to that of the AutoTester. From the

point of view of the AgentManager, there is no functional distinction between the two types of

supervising agents.


4.6 Advantages of Trajectory Resolution (Second Stage)

The fact that all decisions at the trajectory ambiguity points are deferred until the Second Stage of

processing allows the system to consider larger trajectory segments, giving it a more global view

of the objects’ motions. In effect, the LabelingAgents “wait” until more evidence is collected

before making trajectory decisions. As a result, a LabelingAgent is able to select some trajectory

sub-segments with very low similarity measure as long as they result in the high overall similarity

of the larger segment. In other words, the Second Stage of processing offers an organized protocol

for complex data fusion.

4.7 Agent Coalitions

The LabelingAgents instantiated in the AgenTrac system can form coalitions. Each coalition

has a Coalition Master: a LabelingAgent or a coalition assigned to an object that can be tracked

with higher confidence than the remaining coalition members. As an example, a coalition can be

formed by the LabelingAgents tracking the left hand, the right hand and the head, where head is

the Coalition Master.

The purpose of the Coalition Master is to guide (constrain) the coalition members in order to

improve their tracking reliability. Guiding is achieved by using Position Templates which will be

described in the following sections.

The Coalition Master can be an object with a special marker that can be tracked very reliably.

Thanks to the concept of Agent Coalitions, this reliability can be partially propagated to coalition

members, improving the tracking performance.

The AgenTrac system allows multiple levels of Agent Coalitions, i.e. coalitions of coalitions

are possible, with coalitions acting as Coalition Masters in other coalitions. The conceptual power

of such a coalition hierarchy in object tracking lies in the fact that it offers a flexible way of encoding

domain-specific spatial relationships between the tracked objects. For example, a single coalition

representing the tracked human body could consist of sub-coalitions representing the torso and

limbs, and each sub-coalition could in turn be composed of even smaller tracked sub-units, down

to the fingertips. One can imagine that the coalition representing - for example - left palm could be

guided in its tracking by the arm coalition, and the arm could take guidance from the torso coalition.

As a result, the structure of the human body could be represented hierarchically by the agents and

their coalitions and the higher-level coalitions would be assumed to possess increasing levels of


knowledge about the way the human body is built and the way it moves. However, to transfer such

knowledge between levels of tracking agents, the system needs more elaborate inter-agent commu-

nication mechanisms than the currently implemented Position Template-based protocol (see Section

5.2). The necessary increase in inter-agent communication complexity is discussed in Section 8.2.
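A minimal sketch of such a hierarchy (with hypothetical class and field names) captures the structural idea that a coalition has a master, either an agent or another coalition, whose trajectory anchors the members' Position Templates:

class Coalition:
    """A coalition: a master guiding one or more less reliably tracked members."""
    def __init__(self, master, members):
        self.master = master      # a LabelingAgent or a nested Coalition, tracked more reliably
        self.members = members    # LabelingAgents or sub-coalitions guided by the master

    def master_position(self, frame_index):
        # A nested coalition delegates to its own master, so tracking
        # reliability propagates down the hierarchy.
        if isinstance(self.master, Coalition):
            return self.master.master_position(frame_index)
        return self.master.position(frame_index)

# Example hierarchy from the discussion above: the head guides both hands;
# a body coalition could in turn guide an arm coalition, down to the fingertips.
# hands = Coalition(master=head_agent, members=[left_hand_agent, right_hand_agent])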

5 Encoding Agent Behavior and Communication

5.1 Total Trajectory Similarity Measure

The BlobAgents created in the First Stage of processing accumulate various statistics for the object

trajectory segments they represent. In the Second Stage, these statistics, together with a few rela-

tively simple heuristics, are used to compute the Total Trajectory Similarity Measures for various

possible “chains” of BlobAgents.

At the highest level of abstraction, the Total Trajectory Similarity Measure $S$ offers a powerful,

straightforward and flexible way of encoding agent behavior as well as encoding the domain con-

straints in the AgenTrac system. From this point of view it can be therefore treated as an internal

representation of the behavioral heuristics and knowledge used by all the agents.

Moreover, the Total Trajectory Similarity Measure is used as a means of combined inter-agent

communication in the system, resulting in high-level fusion of the following types of informa-

tion about the tracked objects: appearance, instantaneous (current) motion, accumulated motion

(motion history, “typical position”), overlaps between objects and their durations, a priori spatial

(positional) relationships and domain-specific constraints with respect to the Coalition Master or to

the video frame.

The Total Trajectory Similarity Measure $S$ of each examined BlobAgent chain of length (recursion depth) $n$ is expressed as the normalized weighted sum:

$$S = \frac{\left( W_P\, S_P + W_{AM}\, S_{AM} + W_I\, S_I \right) R^u}{n^v} \qquad (1)$$

where:

• $S_P$ is the Positional Compatibility of the BlobAgent chain trajectory to the trajectory of the Coalition Master, computed using a Position Template. It is the sum of the positional compatibilities of all BlobAgents participating in the trajectory chain.

• $S_{AM}$ is the sum of similarities of all the BlobAgents in the trajectory chain to the object appearance model stored in the LabelingAgent that performs the trajectory resolution. The components of this sum are computed by the participating BlobAgents, and we call the sum the Appearance Model Similarity.

• $S_I$ is the sum of inter-agent similarities between all adjacent BlobAgents (trajectory segments) in the considered trajectory chain. These similarities are computed by the participating BlobAgents using heuristics described in the following sections.

• $W_P$, $W_{AM}$ and $W_I$ are the weights describing the relative importance of the three similarity components. The weights are learned by the AutoTester Supervising Agent.

• $R$ is the Active Track Ratio, computed as $R = N_a / N_t$, where $N_a$ is the number of video frames in the BlobAgent chain under consideration in which the BlobAgents are actively tracking (not just accumulating links to possible followers after stopping) and $N_t$ is the total number of frames in the agent chain. The ratio $R$ lies in the interval $[0, 1]$ and is raised to the power $u$, which fine-tunes the penalty for trajectories where little active tracking occurs.

• $n$ is the number of BlobAgents in the chain (recursion depth). It is raised to the power $v$, which fine-tunes the bias of the similarity measure towards or against longer agent chains.

• $u$ and $v$ are exponents learned by the AutoTester Supervising Agent. According to our experiments (see Section 6), the best tracking results are obtained with positive values of both exponents, meaning that favoring active trajectories (an increased penalty for inactive trajectories) and “normalization” of the similarity by the recursion depth are beneficial for tracking reliability.

The motivation behind Equation 1 is to combine various, potentially opposing influences

that affect object tracking into a unified similarity measure. In short, the Total Trajectory Simi-

larity measure fuses the information about object appearance, its instantaneous and accumulated

motion history, its interactions (overlaps) with other objects and its domain-related position/motion

constraints into a single adjustable expression reflecting common sense observations about how the

tracked object typically moves in a specific domain.
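Under the symbol reconstruction used in Equation 1, the measure can be sketched as follows; the component functions stand for the quantities defined above, and the field names on the chain elements are illustrative:

def total_trajectory_similarity(chain, weights, u, v,
                                positional, appearance, inter_agent):
    """Total Trajectory Similarity S of a BlobAgent chain (Equation 1 sketch).
    positional, appearance, inter_agent are callables returning S_P, S_AM, S_I
    for the chain; weights = (W_P, W_AM, W_I)."""
    w_p, w_am, w_i = weights
    weighted = (w_p * positional(chain)
                + w_am * appearance(chain)
                + w_i * inter_agent(chain))
    active = sum(a.active_frames for a in chain)   # frames with active tracking
    total = sum(a.total_frames for a in chain)     # all frames covered by the chain
    r = active / total if total else 0.0           # Active Track Ratio, in [0, 1]
    n = len(chain)                                 # recursion depth
    return weighted * (r ** u) / (n ** v)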


5.2 Position Templates

In Equation 1, the Positional Compatibility with Master of the entire BlobAgent chain, $S_P$, is

computed using Position Templates. A schematic layout of Position Templates is shown in Figure

6. There can be two types of Position Templates in the current version of the AgenTrac system:

[Figure 6: Two types of Position Templates used in the AgenTrac system: a Global Position Template anchored at the upper left corner of the video frame, and a Relative Position Template anchored at the current position of the Coalition Master (the template moves with the Master along the Coalition Master trajectory).]

1. Relative Position Template, in which the Coalition Master is assumed to be in the center of

the template and the template follows the motion of the Coalition Master (it is anchored at

the Master).

2. Global Position Template where the template is simply a stationary likelihood map covering

the entire video frame and establishing the expected position of a LabelingAgent in the

video frame. A Global Position Template is not connected to any Coalition Master (it does

not require any Agent Coalitions to exist at all).

Position Templates are 2D likelihood maps that map the expected (average, typical) position of a

coalition member with respect to the Coalition Master or with respect to the video frame (station-

ary master or no master). They are a way in which the Coalition Master (or the Supervising Agent)

communicates spatial constraints to coalition members.

Despite their simplicity, the Position Templates can encode relatively complex spatial relation-

ships: e.g. if we imagine the Sun as the Coalition Master and Earth as its member, the Earth’s

Position Template would contain a donut-shaped likelihood map centered at the Sun and represent-

ing Earth’s orbit.


$S_P$ is easy to compute: by adding the Position Template cells corresponding to the object positions, we obtain a high value if the object's position conforms to the map and a low value if it does not.
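A sketch of this lookup for a Relative Position Template, assuming the template is stored as a 2D likelihood array whose center corresponds to the Coalition Master (array layout and names are illustrative):

import numpy as np

def positional_compatibility(template, member_positions, master_positions):
    """Sum the Position Template cells visited by a coalition member, with the
    template anchored at (moving with) the Coalition Master in every frame.
    template: 2D likelihood map whose center corresponds to the Master."""
    h, w = template.shape
    cy, cx = h // 2, w // 2
    total = 0.0
    for (mx, my), (gx, gy) in zip(member_positions, master_positions):
        # Member position expressed relative to the Master, mapped into the template.
        row = int(round(my - gy)) + cy
        col = int(round(mx - gx)) + cx
        if 0 <= row < h and 0 <= col < w:
            total += template[row, col]   # high where the member "usually" is
    return total

For a Global Position Template the same lookup applies, with the anchor fixed at the frame origin instead of the Master's position.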

5.3 Appearance Similarity

Currently the Appearance Model Similarity $S_{AM}$ is computed as a weighted mean of normalized object color and size similarities. The object blob's color distribution is modeled as a 2D Gaussian in Normalized RGB space. Color similarity is measured as the reciprocal of the distance between color distributions in the Normalized RGB space. The object size is obtained from its connected

components (color blobs). Median color models and sizes are computed for all BlobAgents.

$S_{AM}$ can be thought of as a way in which the LabelingAgents communicate the tracked object's

appearance constraints to recruited BlobAgents.
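A sketch of this weighted mean is shown below; the distance between the mean colors of the two Normalized RGB models stands in for the full distribution distance, and the bounded reciprocal and weights are illustrative choices:

import math

def appearance_similarity(agent_model, reference_model, w_color=0.7, w_size=0.3):
    """Weighted mean of normalized color and size similarities between a
    BlobAgent's median blob model and the LabelingAgent's object model.
    Each model is assumed to carry mean normalized (r, g) and a blob size."""
    # Color: bounded reciprocal of the distance between the color models in
    # Normalized RGB (only r and g are independent after normalization).
    dr = agent_model["r"] - reference_model["r"]
    dg = agent_model["g"] - reference_model["g"]
    color_sim = 1.0 / (1.0 + math.hypot(dr, dg))
    # Size: ratio of the smaller to the larger blob size, in (0, 1].
    s1, s2 = agent_model["size"], reference_model["size"]
    size_sim = min(s1, s2) / max(s1, s2)
    return w_color * color_sim + w_size * size_sim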

5.4 Inter-Agent Similarity

The Inter-Agent Similarity measure $S_I$ is set up in a way that allows straightforward encoding of

common sense agent motion heuristics. In this discussion we demonstrate the encoding of one of

these heuristics. The heuristics used by BlobAgents to compute similarities between one another

are based on the fundamental observation, summarized in Figure 7, and can be stated as follows:

If a tracking agent (a BlobAgent and consequently a LabelingAgent that uses it to build its

trajectory) is in overlap with another agent for a long time (Figure 7b)) and then separates from it,

its most recent trajectory and object appearance characteristics lose their discriminative (predictive)

power and - for lack of anything better - it has to rely more on accumulated (old) object appearance

(size, color) and trajectory (position) information to compute similarity to its potential followers

(further BlobAgents in the considered trajectory chain). Therefore the influence of its most recent

trajectory and object blob appearance characteristics on the inter-agent similarity measure $S_I$ has to decrease (since its predictive power is low) and the influence of the accumulated trajectory and

blob appearance parameters has to increase. If an agent is in very short overlap (or no overlap at

all; Figure 7a) and an ambiguity - trajectory split - occurs, its most recent trajectory and object

appearance characteristics have high discriminative (predictive) power and their influence on the

total inter-agent similarity measure $S_I$ should increase, i.e. the agent should use the most recent

instantaneous trajectory prediction and most recent local object appearance characteristics to find

its most likely followers. By symmetry, the influence of the accumulated (historical) trajectory and

[Figure 7: Basic observation used to compute inter-agent similarities in the AgenTrac system. a) Short overlap: the instantaneous trajectory and object appearance of agents A and B have high predictive power and can be used to establish their future trajectories with high confidence. b) Long overlap (shared object blob and shared trajectory): the instantaneous trajectory and object appearance of the merged agents A and B have low predictive power and cannot be reliably used to establish their future trajectories; an alternative source of information has to be employed (e.g. the “expected” or “typical” position of A and B and their “old” appearances).]

blob appearance parameters should be diminished in the short- or no-overlap situation.

The Inter-Agent Similarity of a chain of $n$ BlobAgents can be computed as:

$$S_I = \sum_{j=2}^{n} s(j-1, j) \qquad (2)$$

where $s(j-1, j)$ is the similarity between the $(j-1)$th and $j$th BlobAgent in the trajectory chain under consideration.

$s(j-1, j)$ is computed as the weighted sum: $s(j-1, j) = w(j-1, j)\, s_L(j-1, j) + \left(1 - w(j-1, j)\right) s_G(j-1, j)$

where

• $s_L(j-1, j)$ is the local similarity between the $(j-1)$th BlobAgent and the $j$th BlobAgent, that is, a similarity that only considers the most recent trajectory and object blob appearance properties of the two agents under consideration, disregarding all the motion history preceding the appearance of these two BlobAgents.

• $s_G(j-1, j)$ is the accumulated (global) similarity between the $(j-1)$th BlobAgent and the $j$th BlobAgent, i.e. a similarity that also considers all the motion/object blob appearance history preceding the $(j-1)$th BlobAgent in the trajectory of the LabelingAgent performing the optimal trajectory search. The entire trajectory and object appearance leading to the $(j-1)$th BlobAgent (including the $(j-1)$th agent itself) is considered in the similarity measure.

• $w(j-1, j)$ is a normalized weight inversely proportional to the overlap time of the $(j-1)$th agent and proportional to its adaptability, which controls how much importance the agent usually assigns to its most recent (instantaneous) trajectory and appearance characteristics and how much it relies on the accumulated trajectory and appearance information.

Due to the fact that the BlobAgents are aware of their mutual overlaps and their motion and

appearance histories, the equations above allow a straightforward and easily tunable encoding of

the fundamental observation presented in Figure 7: when the overlap time between agents is short,

$w(j-1, j)$ is high and the most recent trajectory/object characteristics play the most important role in the similarity measure. Conversely, if the overlap time is long, the weight $w(j-1, j)$ is low and the accumulated information about the trajectories and appearances of both agents (such as their “typical” positions expressed as position histograms) is most important in the similarity measure.

Although the implementation of the trajectory and appearance similarity measures involves many

details that cannot be presented here due to the space constraints, encoding of the basic heuristics

is conceptually simple, as demonstrated above.
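Under the same symbol reconstruction, Equation 2 and the overlap-dependent weight can be sketched as follows; the particular weight formula and field names are illustrative assumptions:

def inter_agent_similarity(chain, local_sim, global_sim, adaptability=1.0):
    """S_I of a BlobAgent chain: sum of pairwise similarities between adjacent
    agents, each blending local (recent) and accumulated (historical) terms."""
    s_total = 0.0
    for prev, curr in zip(chain, chain[1:]):
        # The weight drops with the previous agent's overlap time and grows
        # with its adaptability, per the observation in Figure 7.
        w = adaptability / (1.0 + prev.overlap_frames)
        w = max(0.0, min(1.0, w))                  # keep the weight normalized
        s_total += w * local_sim(prev, curr) + (1.0 - w) * global_sim(prev, curr)
    return s_total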

We would like to emphasize that the straightforward implementation of the heuristics presented

in Figure 7 as the Inter-Agent Similarity $S_I$ of Equation 2 is possible only due to the fact that

the BlobAgents are aware of their mutual overlaps. This clearly demonstrates one of the benefits

of using an agent-based approach in solving a complex problem.

6 Experimental Results

The majority of our experiments were performed using the AutoTester supervising agent and sets

of video sequences corrected manually in order to obtain reference traces for the tracked hands. The

AutoTester allowed us to learn the optimal values of various weights, exponents and coefficients

used throughout the system and also establish that application of the Position Templates gives a

statistically significant reduction in the number of manual corrections in the tracking process. A

small sample of the experimental results is presented below. The details of other experiments and

tests performed on the AgenTrac system can be found in [57].


6.1 Learning Optimal Exponents in Similarity Equation

We used the AutoTester Supervising Agent to learn optimal values of the exponents $u$ (Active Track Ratio Exponent) and $v$ (Recursion Depth Exponent) appearing in Equation 1. In the experiment we

used a set of 12 video sequences totaling 157,901 video frames, equivalent to nearly 88 minutes of

video.

The optimal values of both exponents should minimize the number of trajectory corrections

performed by the Supervising Agent and maximize the percentage of correct trajectory decisions at

trajectory ambiguities in all considered video sequences.

Since performing multiple AutoTester runs while changing more than one tracking parameter

at a time would be infeasible due to excessive computation times, we chose to use a greedy approach

in which only one parameter is modified and all the remaining parameters remain fixed during the

experimental run. The underlying assumption is that the error hypersurfaces are “well behaved”

in the multidimensional parameter space and that their shapes can be inferred correctly based on

one-dimensional cross-sections.
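A minimal sketch of this greedy, one-parameter-at-a-time procedure is shown below (illustrative Python; run_autotester stands in for a full AutoTester run returning a cost such as the total number of trajectory corrections, and the parameter names are ours, not the system's configuration keys):

def greedy_parameter_sweep(params, sweep_values, run_autotester):
    # Tune one parameter at a time while all the others remain fixed.
    for name, candidates in sweep_values.items():
        best_value, best_cost = params[name], float("inf")
        for value in candidates:
            trial = dict(params, **{name: value})
            cost = run_autotester(trial)   # e.g. number of manual corrections
            if cost < best_cost:
                best_value, best_cost = value, cost
        params[name] = best_value          # freeze the best value found
    return params

# The two exponent sweeps used in the experiments below.
sweep_values = {
    "active_track_ratio_exponent": [round(0.2 * k, 1) for k in range(1, 16)],  # 0.2 .. 3.0
    "recursion_depth_exponent":    [round(0.1 * k, 1) for k in range(1, 16)],  # 0.1 .. 1.5
}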

Figures 8 and 9 show the results of our tests. The “trajectory corrections” appearing in both fig-

ures are manual interventions necessary to correct tracking errors in the Second Stage of processing

(the “Tracking Quality Control and Error Correction” step in Figure 1). Since typically the tracking

errors are clustered in time, the measure of “Correction groups” was introduced to count the num-

ber of correction episodes, where an episode involves one or more corrections in consecutive video

frames.
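For illustration, the number of correction groups can be computed from the list of frame numbers at which manual corrections were issued by merging corrections that fall in consecutive frames (a small illustrative Python sketch, not part of the published system):

def count_correction_groups(correction_frames):
    # Corrections in consecutive video frames belong to the same episode.
    frames = sorted(set(correction_frames))
    groups, previous = 0, None
    for f in frames:
        if previous is None or f != previous + 1:
            groups += 1        # a gap in frame numbers starts a new episode
        previous = f
    return groups

# Five corrections but only two correction groups (frames 100-102 and 250-251).
print(count_correction_groups([100, 101, 102, 250, 251]))   # prints 2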

Figure 8 summarizes the results obtained in the AutoTester runs for the 12 experimental se-

quences, performed for the Active Track Ratio Exponent u changing from 0.2 to 3 in 15 steps. The minima in the number of corrections and the number of correction groups occur at one value of u, whereas the maximum percentage of correct trajectory decisions occurs at a different value. Since the percentage of correct trajectory decisions at the former value is only insignificantly smaller than the maximum, that value of u was accepted as the optimal Active Track Ratio Exponent.

Figure 9 summarizes the results obtained in the AutoTester runs for the 12 experimental se-

quences, performed for the Recursion Depth Exponent v changing from 0.1 to 1.5 in 15 steps. The minima in the number of corrections and the number of correction groups occur at one value of v, whereas the maximum percentage of correct trajectory decisions occurs at a different value. Since there is a dramatic increase in the percentage of correct trajectory decisions at the latter value in comparison to


Figure 8: Total number of trajectory corrections (a.), total number of trajectory correction groups (b.) and average percentage of correct trajectory decisions (c.) as functions of the Active Track Ratio Exponent u in 12 experimental sequences.


Figure 9: Total number of trajectory corrections (a.), total number of trajectory correction groups (b.) and average percentage of correct trajectory decisions (c.) as functions of the Recursion Depth Exponent v in 12 experimental sequences.


Parameter                              Templates    No Templates
Corrections per 1000 frames            42.2         46.3
Correction groups per 1000 frames      19.7         22.5
Correct Trajectory Decisions (%)       77.70        73.66

Table 1: Comparisons of the AgenTrac system performance with and without Position Templates in 15 experimental sequences.

the former, whereas the numbers of corrections and correction groups at the latter value are only slightly higher than those at the former, the value of v maximizing the percentage of correct trajectory decisions was accepted as the optimal Recursion Depth Exponent.

6.2 Establishing Usefulness of Position Templates

This set of tests was performed in order to establish if the Position Templates used in the Second

Stage of processing by Labeling Agents have a statistically significant effect on the performance of

the AgenTrac system.

The tests were done using the AutoTester and a set of 15 experimental sequences totaling

251,317 frames (about 140 minutes).

The Second Stage of tracking was run twice by the AutoTester, first with and then without

applying the Position Templates to the Total Trajectory Similarity Measure. To achieve this, the weight of the Positional Compatibility to Coalition Master term appearing in Equation 1 was set to

0 in the second run. The results of these experimental runs are summarized in Table 1.

It is apparent that the results obtained with the Position Templates applied (the “Templates” column of Table 1) are better than those achieved without them (the “No Templates” column). Specifically, the numbers of trajectory corrections and correction groups are lower when the Position Templates are used, whereas the percentage of correct trajectory decisions (row 4) is higher. Significance tests show that all the differences in results obtained with and without Position Templates are statistically significant.

Therefore, the Position Templates are beneficial for the overall tracking performance of the Agen-

Trac system, resulting in an 8.9% reduction in the number of trajectory corrections, a 12.4% reduction in the number of correction groups, and an increase of 4.04 percentage points in the percentage of correct trajectory decisions made at trajectory ambiguities in the Second Stage of processing.
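These relative figures follow directly from Table 1; a simple check, assuming the per-1000-frame values quoted there:

# Relative improvements computed from Table 1 (per 1000 frames).
corrections_reduction = (46.3 - 42.2) / 46.3 * 100        # about 8.9 %
groups_reduction      = (22.5 - 19.7) / 22.5 * 100        # about 12.4 %
decision_gain         = round(77.70 - 73.66, 2)           # 4.04 percentage points
print(round(corrections_reduction, 1), round(groups_reduction, 1), decision_gain)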


7 Demonstration Movies

The demonstration movies showcase the system performance in a range of tracking situations,

highlighting the behavioral heuristics described in Section 5.4. All tracking examples were obtained

without any manual interventions. The tracked hands are marked as R and L in the movies and

denoted by RH and LH in the descriptions below. The movies demonstrate:

• 01-fast-head-cross.mp4: The most recent RH trajectory and appearance are used to cor-

rectly resolve the trajectory ambiguity after the fast head crossing. The most recent trajectory

characteristics dominate the inter-agent similarity measure of Section 5.4 due to the very brief

overlap between the RH and the head.

• 02-multiple-hand-splits.mp4: Due to relatively long periods of overlaps (mergers) between

hands, accumulated position information for RH and LH (“typical” positions) as well as

Position Templates of both hands dominate the similarity measure used to resolve the multiple

split-merge events.

• 03-hand-cross-fast.mp4: Another example of the most recent hand trajectory characteris-

tics used to correctly resolve intersecting trajectories of both hands.

• 04-hand-cross-fast-2.mp4: An extreme example of very fast-moving hands tracked correctly

despite trajectory intersection. Again the most recent hand trajectory characteristics dominate

the similarity measure used to resolve the trajectory ambiguity (due to the very brief overlap

between hands).

• 05-hand-cross-multiple.mp4: An example of a situation in which the mixed influence of

Position Templates and instantaneous (most recent) trajectory information leads to correct

tracking during multiple quick overlaps between hands.

• 06-head-cross-slow.mp4: The long overlap between the subject’s LH and his head results

in Position Templates and accumulated left hand position information dominating the Total

Trajectory Similarity measure (see Section 5.1). As a result, the LH is tracked correctly after

its merger with the head ends at about 34s. The trajectory interpolation performed by the LH

LabelingAgent between trajectory segments is apparent from 33s to 34s.


• 07-interference-short.mp4: The system is able to recover from the brief interference of the

interlocutor’s LH into the path of the main subject’s RH.

• 08-interference-long.mp4: Much longer interference of the interlocutor’s hands into the

path of the main speaker’s RH is also handled correctly. There are some tracking errors

evident between 11s and 14s, but ultimately the system is able to recover the correct position

of the RH (at 14s).

• 09-occlusion-short.mp4: The main speaker’s RH disappears behind the interlocutor’s head

for a few seconds, after which the system is able to pick up its trace. Again trajectory in-

terpolation performed by the RH LabelingAgent is evident between 10s and 13s (the agent

interpolates between the last known position of the RH before occlusion at 4s and its reap-

pearance position at 13s).

• 10-occlusion-long.mp4: Both hands disappear from view for an extended period of time

and the system picks up their tracks correctly at 1min 26s. The ability to resume tracking

even after long occlusion is the result of the BlobAgents being able to “wait” indefinitely for

the links to potential trajectory continuations (see Section 4.3).

The demonstration movies can be downloaded in two sets at:

http://vislab.cs.wright.edu/~rbryll/AgenTracDemo1.zip and
http://vislab.cs.wright.edu/~rbryll/AgenTracDemo2.zip. They are in the MPEG-4 format and can be played by the QuickTime player available at www.quicktime.com.

8 Conclusions and Future Directions

In this article we presented an innovative agent-based object tracking system which is tuned to

the task of tracking human gestures in video sequences. The AgenTrac system is a working im-

plementation used to generate real research data to support our broader psycholinguistic research

mentioned in Section 1.

8.1 Key Contributions

The key contributions of the AgenTrac system can be summarized as follows:


• It unifies the power of complex data fusion and path coherence approaches. Unlike most

of the data fusion methods used in computer vision and discussed briefly in Section 2.5,

our approach offers an organized framework for handling crossing trajectories of multiple

objects. Unlike the pure path coherence methods described in Section 2.4, our system is able

to incorporate additional cues into the trajectory resolution process, enabling it to handle long object overlaps and occlusions and to incorporate more types of information

and constraints in the tracking process.

• It offers a useful, fast and conceptually simple “middle ground” solution between model-free and model-based tracking, combining the simplicity, speed and flexibility of model-free approaches with the ability to utilize domain knowledge and to apply positional and motion constraints characteristic of model-based systems (see Section 2.1).

• Unlike most of the agent-based approaches in computer vision (see Section 2.3), it addresses both the crossing-trajectory problem and the encoding of motion/position constraints (via agent coalitions, two processing stages and similarity measures).

• It is one of the very few systems in computer vision that offer an organized framework for

tracking more than one person in video.

• The system is easily reconfigurable and extendable.

• It demonstrates the power and usefulness of agent-based systems as a new abstraction

tool used to analyze and solve complex problems.

• It is a working production system used to generate data (motion traces) indispensable in

further gesture, speech and gaze research.

8.2 Future Directions

The current version of the AgenTrac system was designed to work well with our typical gesture

elicitation experiments which provide clean segmentation of skin-colored blobs (hands and head)

from the background (see Figure 10). Therefore, the BlobAgents rely on skin-colored blobs as

their basic “object evidence” units. This design decision allowed us to simplify the initial “proof

of concept” version of the system. Unfortunately, as evidenced by tests described in [57], it also


Figure 10: Video frame from a typical gesture elicitation experiment captured according to our experimental guidelines.

Figure 11: Video frame from a “noisy” gesture elicitation experiment violating our experimental guidelines.


makes the system sensitive to “suboptimal” video sequences where clean segmentation of hand and

face object blobs is impossible due to low light levels, high image noise, improper or cluttered

background and skin-colored clothing or short sleeves/shorts worn by the subjects (see Figure 11).

In other words, the current implementation of the AgenTrac system is sensitive to the quality of

analyzed video sequences due to its reliance on clean segmentation of color blobs in tracking.

There are two ways of improving the performance of the AgenTrac system in noisy sequences:

First, one can focus on improving the segmentation of foreground blobs in noisy sequences,

e.g. by statistical modeling of the background (background subtraction) or multiresolution frame differencing and averaging, which we have started to investigate. We can also introduce a sophisticated logical combination of motion blobs with color blobs to enable better use of motion cues when the

color segmentation is unreliable. This is a relatively straightforward system extension that can be

achieved quickly and may result in significant improvements in tracking reliability.
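As a rough illustration of the kind of logical cue combination we have in mind, the sketch below (hedged Python/NumPy; the frame-differencing step, the threshold and the require_motion switch are illustrative assumptions rather than the planned AgenTrac extension) combines a possibly unreliable skin-color mask with a crude motion mask:

import numpy as np

def combined_foreground(skin_mask, prev_gray, curr_gray,
                        motion_thresh=15, require_motion=True):
    # skin_mask: boolean array, True where skin color was detected.
    # prev_gray, curr_gray: consecutive grayscale frames (uint8 arrays).
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    motion_mask = diff > motion_thresh
    if require_motion:
        # When color segmentation is unreliable, keep only skin pixels
        # that are also supported by motion evidence.
        return skin_mask & motion_mask
    # Otherwise accept either cue (useful when hands pause between strokes).
    return skin_mask | motion_mask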

Second, the range of data fusion performed by the agents can be extended by implementing

the Active Fusion paradigm discussed briefly in Section 2.5.1. This would require much more

conceptual work and designing clear rules and protocols to be used in inter-agent communica-

tion and decision making. The goal would be to obtain a system in which a higher-level agent,

say a LabelingAgent or a Supervising Agent, could actively influence the choice of cues used by

the lower-level agents (BlobAgents) in their tracking and in cases of “suspected problems” could

request re-processing of certain image regions or frame intervals with modified detection and seg-

mentation parameters. The system would work similarly to the recognition system presented in

[26], making the tracking task an iterative and interactive rather than a sequential process. For

example, if a LabelingAgent, assumed to contain more intelligence than in the present version of

the system, decided that the color blob information was not reliable enough for tracking, it could

ask the BlobAgents to use the motion information instead and rely on it until further notice. Ob-

viously, assessing the relative reliability of cues is non-trivial and some rigorous statistical and

information-theoretic approaches from the data fusion literature would most likely have to be ap-

plied. All higher-level agents would have to base their decisions on available object evidence and

current situation and would have to be able to evaluate the reliabilities of various kinds of evidence.

This solution would require the system to substitute the currently used “blobs” of object properties

with a more general concept of object evidence that could be embodied in abstract agents capable

of analyzing the image properties and extracting/ranking the evidence for the tracked entities based


on the encountered image/motion features.
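A very small sketch of the kind of inter-agent protocol this would require is given below (hypothetical Python classes; the message types, the cue names and the reliability threshold are our assumptions, not a specification of the planned extension):

class BlobAgent:
    # Lower-level agent tracking one kind of object evidence (e.g. color or motion blobs).
    def __init__(self, cue="color"):
        self.cue = cue

    def handle_message(self, message):
        if message["type"] == "switch_cue":
            self.cue = message["cue"]      # rely on another cue until further notice
        elif message["type"] == "reprocess":
            self.reprocess(message["frame_range"], message["params"])

    def reprocess(self, frame_range, params):
        pass  # placeholder: re-run detection/segmentation on the given frames


class LabelingAgent:
    # Higher-level agent that evaluates cue reliability and issues requests.
    def __init__(self, blob_agents):
        self.blob_agents = blob_agents

    def supervise(self, color_reliability, min_reliability=0.5):
        # If the color cue looks unreliable, ask the BlobAgents to use motion evidence.
        if color_reliability < min_reliability:
            for agent in self.blob_agents:
                agent.handle_message({"type": "switch_cue", "cue": "motion"})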

Introducing the active fusion processing in the AgenTrac would increase the amount of nec-

essary inter-agent communication, significantly raising the complexity of the entire system. For

one, the current clean conceptual distinction between the two stages of processing (segment extrac-

tion and trajectory resolution) would have to be blurred, since the LabelingAgents in the Second

Stage would have to be able to request re-processing from the existing BlobAgents or spawn new

BlobAgents to track different types of object evidence. In principle the agents can do this even

in the existing system, but without strict behavioral and coordination protocols, this could lead to

computational explosion and saturation of all resources.

To conclude, the work on improving the robustness of the AgenTrac system should start from

improvements in cue segmentation. Further work on active fusion still requires additional conceptual development, and important design decisions have to be made before the system can be

upgraded.

References

[1] D. Gavrila, “The visual analysis of human movement: A survey,” CVIP, vol. 73, no. 1, pp.

82–98, 1999.

[2] R. Bryll and F. Quek, “Accurate tracking by vector coherence mapping and vector-

centroid fusion,” Vision Interfaces and Systems Laboratory, CSE Department, Wright

State University, Dayton, OH, Tech. Rep. VISLab-02-10, June 2002, available at

http://vislab.cs.wright.edu/Publications/2002/BryQ02.html.

[3] R. Ansari, Y. Dai, J. Lou, D. McNeill, and F. Quek, “Representation of prosodic structure in

speech using nonlinear methods,” in Workshop on Nonlinear Signal and Image Processing,

Antalya, Turkey, 1999.

[4] D. McNeill, F. Quek, K.-E. McCullough, S. Duncan, N. Furuyama, R. Bryll, and R. Ansari,

“Catchments, prosody and discourse,” in Oralité et Gestualité, ORAGE (Speech and Gesture 2001), C. Cavé, I. Guaïtella, and S. Santi, Eds., Aix-en-Provence, France, 2001, pp. 474–481.


[5] F. Quek, R. Bryll, H. Arslan, C. Kirbas, and D. McNeill, “A multimedia database system for

temporally situated perceptual psycholinguistic analysis,”Multimedia Tools and Applications,

vol. 18, no. 2, pp. 91–113, 2002.

[6] F. Quek, D. McNeill, R. Ansari, X. Ma, R. Bryll, S. Duncan, and K.-E. McCullough, “Gesture

cues for conversational interaction in monocular video,” in ICCV’99 International Workshop

on Recognition, Analysis, and Tracking of Faces and Gestures in Real-Time Systems, Corfu,

Greece, Sept. 26–27 1999, pp. 64–69.

[7] F. Quek, X. Ma, and R. Bryll, “A parallel algorithm for dynamic gesture tracking,” in ICCV’99

International Workshop on Recognition, Analysis, and Tracking of Faces and Gestures in

Real-Time Systems, Corfu, Greece, Sept. 26–27 1999, pp. 119–126.

[8] F. Quek, D. McNeill, R. Bryll, C. Kirbas, H. Arslan, K.-E. McCullough, N. Furuyama, and

R. Ansari, “Gesture, speech, and gaze cues for discourse segmentation,” in Proceedings of the

IEEE Conference on Computer Vision and Pattern Recognition, vol. 2, Hilton Head Island,

South Carolina, June 13-15 2000, pp. 247–254.

[9] F. Quek, D. McNeill, R. Bryll, S. Duncan, X. Ma, C. Kirbas, K.-E. McCullough, and

R. Ansari, “Multimodal human discourse: Gesture and speech,” ACM Transactions on

Computer-Human Interaction, vol. 9, no. 3, pp. 1–23, Sept. 2002.

[10] D. McNeill and F. Quek, “Gesture and speech multimodal conversational interaction in

monocular video,” in Proceedings of the 3rd International Conference on Methods and Tech-

niques in Behavioral Research, Measuring Behavior 2000, Nijmegen, The Netherlands, Aug.

15–18 2000, p. 215.

[11] F. Quek and D. McNeill, “A multimedia system for temporally situated perceptual psycholin-

guistic analysis,” in Proceedings of the 3rd International Conference on Methods and Tech-

niques in Behavioral Research, Measuring Behavior 2000, Nijmegen, The Netherlands, Aug.

15–18 2000, p. 257.

[12] F. Quek and Y. Xiong, “Oscillatory gestures and discourse,” in International Conference on

Automated Speech and Signal Processing, 2002.


[13] D. McNeill, Hand and Mind: What Gestures Reveal about Thought. Chicago: University of

Chicago Press, 1992.

[14] F. K. H. Quek and R. K. Bryll, “Vector coherence mapping: A parallelizable approach to

image flow computation,” in Proceedings of the Asian Conference on Computer Vision, vol. II,

Hong Kong, China, 8 - 10 Jan. 1998, pp. 591–598.

[15] R. Tsai, “A versatile camera calibration technique for high accuracy 3d machine vision metrol-

ogy using off-the-shelf TV cameras and lenses,” IEEE Journal of Robotics and Automation,

vol. RA-3, no. 4, pp. 323–344, 1987.

[16] J. Aggarwal and Q. Cai, “Human motion analysis: A review,” CVIP, vol. 73, no. 3, pp. 428–

440, Mar. 1999.

[17] J. Aggarwal, Q. Cai, W. Liao, and B. Sabata, “Nonrigid motion analysis: Articulated and

elastic motion,” CVIP, vol. 70, no. 2, pp. 142–156, May 1998.

[18] P. Stone and M. Veloso, “Multiagent systems: A survey from a machine learning perspective,”

Autonomous Robots, vol. 8, pp. 345–383, 2000.

[19] M. Woolridge and N. R. Jennings, “Intelligent agents: Theory and practice,” Knowledge En-

gineering Review, vol. 10, no. 2, 1995.

[20] N. R. Jennings, K. Sycara, and M. Woolridge, “A roadmap of agent research and develop-

ment,” Autonomous Agents and Multi-Agent Systems, vol. 1, pp. 7–38, 1998.

[21] G. Weiss, Multiagent Systems. A Modern Approach to Distributed Artificial Intelligence. The

MIT Press, 1999.

[22] W. Brenner, R. Zarnekow, and H. Wittig, Intelligent Software Agents. Foundations and Appli-

cations. Springer-Verlag, 1998.

[23] M. Woolridge, “Intelligent agents,” in Multiagent Systems. A Modern Approach to Distributed

Artificial Intelligence, G. Weiss, Ed. The MIT Press, 1999, pp. 27–77.

[24] R. Klette, S. Peleg, and G. Sommer, Eds., Proceedings of the International Workshop on Robot

Vision. Auckland, New Zealand: Springer, Berlin, New York, Feb. 2001.


[25] T. K. Cheng, L. Kitchen, Z.-Q. Liu, and J. Cooper, “An agent-based approach for robot vision

system,” in Proceedings of ICSC 95, 1995, pp. 489–490.

[26] Y. Shang and H. Shi, “A web-based multi-agent system for interpreting medical images,”

World Wide Web, vol. 2, pp. 209–218, 1999.

[27] M. Saptharishi, J. B. Hampshire II, and P. K. Khosla, “Agent-based moving object correspon-

dence using differential discriminative diagnosis,” in Proceedings of CVPR, vol. 2, 2000, pp.

2652–2658.

[28] S. Stavroulakis, V. Callaghan, and L. Spacek, “A multiagent approach to machine vision,” in

EXPO 2000: Shaping the Future, Hanover, Germany, July 11-13, 2000.

[29] I. Infantino, M. Cossentino, and A. Chella, “An agent based multilevel architecture for robotics vision systems,” in Proceedings of AIIA, Siena, Italy, Sept. 10–13, 2002.

[30] M. Luckenhaus and W. Eckstein, “A multi-agent based system for parallel image process-

ing,” in Proceedings of SPIE, Parallel and Distributed Methods for Image Processing (SPIE’s

Annual Meeting 1997), vol. 3166, San Diego, CA, Aug. 28-29, 1997.

[31] M. Luckenhaus, “A multi-agent system for parallelizing image analysis tasks,” in Fifth Inter-

national Conference on Intelligent Autonomous Systems (IAS-5). Sapporo, Japan: AAAI

Press, June 1-4, 1998.

[32] F. Arcelli, M. De Santo, and S. Di Salvo, “Software agents for computer vision: a prelimi-

nary discussion,” in Proceedings of the 31st Annual Hawaii International Conference on System

Sciences, Hawaii, Jan. 1998, pp. 9–17.

[33] A. Soto and P. Khosla, “Probabilistic adaptive agent-based system for dynamic state estima-

tion using multiple visual cues,” in Proceedings of 10th International Symposium of Robotics

Research (ISSR 2001), Lorne, Victoria, Australia, Nov. 9-12, 2001.

[34] P. Remagnino, T. Tan, and K. Baker, “Multi-agent visual surveillance of dynamic scenes,”

Image and Vision Computing, vol. 16, pp. 529–532, 1998.

[35] L. C. Tan, C. M. Pang, and W. N. Martin, “Transputer implementation of a multiple agent

model for object tracking,” Pattern Recognition Letters, vol. 16, pp. 1197–1203, 1995.


[36] D. Reid, “An algorithm for tracking multiple targets,” IEEE Transactions on Automatic Con-

trol, vol. 24, no. 6, pp. 843–854, Dec. 1979.

[37] I. J. Cox and S. L. Hingorani, “An efficient implementation and evaluation of Reid’s multiple

hypothesis tracking algorithm for visual tracking,” in Proceedings of the International Con-

ference on Pattern Recognition (ICPR’94), vol. A, 1994, pp. 437–442.

[38] ——, “An efficient implementation of Reid’s multiple hypothesis tracking algorithm and its evaluation for the purpose of visual tracking,” IEEE Transactions on Pattern Analysis and

Machine Intelligence, vol. 18, no. 2, pp. 138–150, Feb. 1996.

[39] E. Polat, M. Yeasin, and R. Sharma, “A tracking framework for collaborative human computer

interaction,” in Proceedings of the Fourth IEEE International Conference on Multimodal In-

terfaces (ICMI’02), 2002.

[40] I. Sethi and R. Jain, “Finding trajectories of feature points in a monocular image sequence,”

IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 9, no. 1, pp. 56–73,

Jan. 1987.

[41] R. Jain, R. Kasturi, and B. G. Schunck, Machine Vision. New York: McGraw-Hill Inc, 1995.

[42] V. Salari and I. Sethi, “Feature point correspondence in the presence of occlusion,” IEEE

Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 1, pp. 87–91, Jan.

1990.

[43] D. L. Hall, Mathematical Techniques in Multisensor Data Fusion. Artech House, Inc., 1992.

[44] P. Young, Recursive Estimation and Time-Series Analysis. Springer-Verlag, 1984.

[45] Y. Shirai, R. Okada, and T. Yamane, “Robust visual tracking by integrating various cues,” in

Robust Vision for Vision-Based Control of Motion, M. Vinche and G. D. Hager, Eds. IEEE

Press, 2000.

[46] D. Kragic and H. I. Christensen, “Cue integration for manipulation,” in Robust Vision for

Vision-Based Control of Motion, M. Vinche and G. D. Hager, Eds. IEEE Press, 2000.


[47] M.-H. Yang and N. Ahuja, “Extraction and classification of visual motion patterns for hand

gesture recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern

Recognition, 1998, pp. 892–897.

[48] C. Rasmussen and G. D. Hager, “Probabilistic data association methods for tracking complex

visual objects,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23,

no. 6, June 2001.

[49] J. Sherrah and S. Gong, “Fusion of perceptual cues for robust tracking of head pose and

position,” Pattern Recognition, vol. 34, pp. 1565–1572, 2001.

[50] K. Toyama and G. D. Hager, “Incremental focus of attention for robust vision-based tracking,”

IJCV, vol. 35, no. 1, pp. 45–63, 1999.

[51] A. Pinz, M. Prantl, H. Ganster, and H. Kopp-Borotschnik, “Active fusion - a new method

applied to remote sensing image interpretation,” Pattern Recognition Letters, vol. 17, no. 13,

pp. 1349–1359, 1996.

[52] H. Kopp-Borotschnik and A. Pinz, “A new concept for active fusion in image understanding

applying fuzzy set theory,” in Proceedings of the Fifth IEEE International Conference on

Fuzzy Systems (FUZZ-IEEE’96), New Orleans, USA, 1996.

[53] M. Prantl, H. Borotschnik, H. Ganster, D. Sinclair, and A. Pinz, “Object recognition by active

fusion,” in Proceedings of SPIE, vol. 2904, Boston, USA, 1996.

[54] H. Borotschnik, L. Paletta, M. Prantl, and A. Pinz, “A comparison of probabilistic, possi-

bilistic and evidence theoretic schemes for active object recognition,” Computing, vol. 62, pp.

293–319, 1999.

[55] H. Borotschnik, “Uncertain information fusion in active object recognition,” Ph.D. disserta-

tion, Institute for Computer Graphics and Vision, Technical University Graz, Austria, 1999.

[56] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, “Performance of optical flow techniques,”

International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.


[57] R. Bryll, “A robust agent-based gesture tracking system,” Ph.D. dissertation, Computer Sci-

ence and Engineering Department, Wright State University, Dayton, OH, 2003, available at

http://vislab.cs.wright.edu/~rbryll/PAPERS/Bryll-PhD-Dissert.pdf.

[58] B. Lucas and T. Kanade, “An iterative image registration technique with an application to

stereo vision,” in Proceedings of DARPA IU Workshop, 1981, pp. 121–130.

[59] D. J. Fleet and A. D. Jepson, “Computation of component image velocity from local phase

information,” IJCV, vol. 5, pp. 77–104, 1990.

[60] B. Lucas, “Generalized image matching by the method of differences,” Ph.D. dissertation,

Dept. of Computer Science, Carnegie Mellon University, 1984.
