
Probabilistic framework for multi-target tracking

using multi-camera: applied to fall detection

Master thesis presentation

Presented by: Victoria Rudakova

Supervisor: Prof. Faouzi Alaya Cheikh

Color in Informatics and MEdia Technology

Gjøvik University College

June 4, 2010

Introduction Previous work Proposed solution Conclusions

Outline

1 Introduction

2 Previous work

3 Proposed solution

4 Conclusions


Motivation

The elderly population is growing → a need for new technologies to ensure their safety

Falling is one of the greatest dangers for the elderly

The main question: how to detect a fall, or perhaps prevent it?

Classical methods: wearable sensors

But: these are sometimes not very effective

Possible solution?

A video-based approach.


Problem statement

The main objective

Build a robust multi-camera multi-target tracking system as a basis for high-level analysis: fall detection

Requirements

tracking and identification of multiple targets

handling mutual occlusions

coping with background clutter, illumination changes, shadows, etc.


General block diagram of the system


Research questions

Multi-target tracking

target detection

target tracking

resolving occlusions

background clutter, illumination changes, shadows, etc.

Multi-camera tracking

avoid camera calibration

multi-view data fusion

Activity recognition

distinguish falls from other everyday activities


Illustration of the problem


At HIG

People detection and tracking

CAMSHIFT combined with optical flow, using a single camera

A single camera DOES NOT

cover the whole monitored area

provide robust tracking of multiple targets (occlusions)


Multi-view setup

Second camera

helps to resolve occlusions

extends the FOV


Data fusion

Multiple cameras

Build a correspondence between different views

Most popular methods

homography

epipolar geometry

Drawback

Requires camera calibration or other initial configuration

Conclusions

view correspondence must be based on some probability / confidence function

this constrains the tracking algorithm to also be probability-based



System overview


Multi-target tracking
Bayesian sequential estimation: notations

consider target evolution as a STATE transition process that can be described by some model

a state:

x_t ∈ R^{n_x}, where t ∈ N is the current time step

represented by a vector: coordinates, velocities, scale, etc.

The objective

Evaluate the current state of the target given the observations (data)

An observation: z_t ∈ R^{n_z}, where t ∈ N is the current time step

graphical modeling helps to represent the relationship between these two variables


Multi-target tracking
Bayesian sequential estimation: graphical models

The relationship between an observation z_t and a hidden state x_t:

An HMM serves well for describing sequential data:


Multi-target tracking
Bayesian sequential estimation: posterior distribution

Evolution

system state dynamics

x_t = f_t(x_{t-1}, v_{t-1})  (1)

observation dynamics

z_t = h_t(x_t, u_t)  (2)

Tracking problem in the Bayesian context

recursively calculate the degree of belief p(x_t | z_{1:t})

the prior p(x_0 | z_0) ≡ p(x_0) is given

the Markov assumption holds (first-order Markov chain: x_t ⊥ z_{0:t-1} | x_{t-1} and z_t ⊥ z_{0:t-1} | x_t)


Multi-target tracking
Bayesian sequential estimation: posterior distribution (cont.)

Inducing the posterior distribution

1 prediction:

p(x_t | z_{1:t-1}) = ∫ p(x_t | x_{t-1}) p(x_{t-1} | z_{1:t-1}) dx_{t-1}  (3)

2 update: use z_t to update through Bayes' rule

p(x_t | z_{1:t}) = p(z_t | x_t) p(x_t | z_{1:t-1}) / α_t  (4)

How to use it? What do we need to know?

motion model p(x_t | x_{t-1}): described by (1)

perceptual model p(z_t | x_t): described by (2)

start from: p(x_0 | z_0) = p(z_0 | x_0) p(x_0) / p(z_0)
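As an illustration (not part of the original slides), the prediction and update steps (3) and (4) can be sketched on a discretized one-dimensional state space; the Gaussian motion and likelihood models below are illustrative assumptions:

```python
import numpy as np

def bayes_filter_step(prior, transition, likelihood):
    """One recursion of Eqs. (3)-(4) on a discretized 1-D state space.

    prior:      p(x_{t-1} | z_{1:t-1}), shape (N,)
    transition: p(x_t | x_{t-1}),       shape (N, N), rows indexed by x_{t-1}
    likelihood: p(z_t | x_t),           shape (N,)
    """
    predicted = transition.T @ prior      # Eq. (3): marginalize over x_{t-1}
    posterior = likelihood * predicted    # Eq. (4): numerator of Bayes' rule
    return posterior / posterior.sum()    # alpha_t is the normalizing constant

# Toy example: 50 grid cells, target drifts one cell to the right per step.
N = 50
cells = np.arange(N)
transition = np.array([np.exp(-0.5 * (cells - (i + 1)) ** 2) for i in range(N)])
transition /= transition.sum(axis=1, keepdims=True)

posterior = np.full(N, 1.0 / N)           # flat prior p(x_0)
for z in [10, 11, 12, 13]:                # noisy position measurements
    likelihood = np.exp(-0.5 * ((cells - z) / 2.0) ** 2)
    posterior = bayes_filter_step(posterior, transition, likelihood)
```

After a few steps, the posterior mode tracks the drifting measurements.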


Multi-target tracking
Bayesian sequential estimation: one-dimensional illustration


Multi-target tracking
Bayesian framework for multi-target multi-camera tracking

The graphical model covers two consecutive frames t − 1 and t, and two cameras A and B

two layers: hidden states (circles) and observable measurements (rectangles)

state dynamics p(x_t | x_{t-1})

local observation likelihood p(z_t | x_t)

target 'interaction'

camera 'collaboration'

Multi-target tracking
Bayesian framework for multi-target multi-camera tracking (cont.)

Generic statistical framework for one target

p(x_{0:t}^{A,i} | z_{1:t}^{A,i}, z_{1:t}^{A,J_{1:t}}, z_{1:t}^{B,i}) = k_t p(z_t^{A,i} | x_t^{A,i}) p(x_t^{A,i} | x_{0:t-1}^{A,i})
  × p(z_t^{A,J_t} | x_t^{A,i}, z_t^{A,i}) p(z_t^{B,i} | x_t^{A,i})
  × p(x_{0:t-1}^{A,i} | z_{1:t-1}^{A,i}, z_{1:t-1}^{A,J_{1:t-1}}, z_{1:t-1}^{B,i}),  (5)

where

p(z_t^{A,i} | x_t^{A,i}) is the local observation likelihood

p(x_t^{A,i} | x_{0:t-1}^{A,i}) is the state dynamics

p(z_t^{A,J_t} | x_t^{A,i}, z_t^{A,i}) is the target interaction function

p(z_t^{B,i} | x_t^{A,i}) is the camera collaboration function



Multi-target tracking
Sequential Monte-Carlo implementation

SMC methods are also known as particle filters, condensation, or bootstrap filters

Main idea

represent the posterior as a sample set with appropriate weights

p(x_t | z_{0:t}) ≈ {x_t^{(i)}, ω_t^{(i)}}_{i=1}^{N_s},

where x_t^{(i)} is one particle and ω_t^{(i)} its associated weight


Multi-target tracking
Sequential Monte-Carlo implementation (cont.)

Belief propagation

1 predict

p(x_t | z_{0:t-1}) ≈ Σ_{i=1}^{N_s} p(x_t | x_{t-1}^{(i)}) ω_{t-1}^{(i)}

2 update

p(x_t | z_{0:t}) ≈ Σ_{i=1}^{N_s} ω_t^{(i)} δ(x_t − x_t^{(i)}),

where ω_t^{(i)} ∝ [p(z_t | x_t^{(i)}) p(x_t^{(i)} | x_{t-1}^{(i)}) / q(x_t^{(i)} | x_{t-1}^{(i)}, z_{0:t})] ω_{t-1}^{(i)}.

What do we need to know?

dynamics model p(x_t | x_{t-1}^{(i)})

likelihood model p(z_t | x_t^{(i)})
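A minimal bootstrap particle filter can sketch this recursion: with the proposal q taken equal to the dynamics model, the weight update reduces to multiplying by the likelihood. The scalar random-walk model below is an illustrative assumption, not the thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
Ns = 1000                                   # number of particles

def pf_step(particles, weights, z, motion_std=1.0, obs_std=1.0):
    """One bootstrap predict/update step for a 1-D random-walk target."""
    # predict: sample from the dynamics model p(x_t | x_{t-1})
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # update: with q = dynamics, weights multiply by the likelihood p(z_t | x_t)
    weights = weights * np.exp(-0.5 * ((z - particles) / obs_std) ** 2)
    weights /= weights.sum()
    return particles, weights

particles = rng.normal(0.0, 5.0, Ns)        # samples from the prior p(x_0)
weights = np.full(Ns, 1.0 / Ns)
for z in [0.5, 1.0, 1.6, 2.1]:              # observations of a slowly moving target
    particles, weights = pf_step(particles, weights, z)

estimate = np.sum(weights * particles)      # weighted-mean state estimate
```

The weighted mean concentrates near the recent observations as evidence accumulates.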


Multi-target tracking
Sequential Monte-Carlo implementation: example


Multi-target tracking
Sequential Monte-Carlo implementation: particle filter demo

One particle is represented as an ellipse (demo clip: particle filtering)


Multi-target tracking
Sequential Monte-Carlo implementation: resampling

The degeneracy phenomenon

definition: all but one particle have weights close to zero

→ most of the computation is wasted on particles with negligible weights

Solution

Use a resampling technique!

ignore particles with very low weights

concentrate attention on the more promising particles


Multi-target tracking
Sequential Monte-Carlo implementation: SIR scheme

{x_{t-1}^{(i)}, N^{-1}}_{i=1}^{N} approximates p(x_{t-1} | z_{0:t-2})

update to {x_{t-1}^{(i)}, ω_{t-1}^{(i)}}_{i=1}^{N} to represent p(x_{t-1} | z_{0:t-1})

resample to obtain {x̃_{t-1}^{(i)}, N^{-1}}_{i=1}^{N}

propagate to {x_t^{(i)}, N^{-1}}_{i=1}^{N} to represent p(x_t | z_{0:t-1})
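One common way to implement the resampling step is systematic resampling, sketched below; this is a standard technique chosen for illustration, as the slides do not specify which resampling scheme the thesis used:

```python
import numpy as np

def systematic_resample(particles, weights, rng=np.random.default_rng(0)):
    """Draw N equally weighted particles, favoring those with large weights."""
    N = len(weights)
    # one random offset, then N evenly spaced positions in [0, 1)
    positions = (rng.random() + np.arange(N)) / N
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                      # guard against round-off
    indices = np.searchsorted(cumulative, positions)
    return particles[indices], np.full(N, 1.0 / N)

# A degenerate cloud: one particle carries almost all of the weight.
particles = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.01, 0.01, 0.97, 0.01])
resampled, new_weights = systematic_resample(particles, weights)
```

The dominant particle is duplicated while the negligible ones are dropped, and all weights are reset to 1/N.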


Multi-target tracking
Sequential Monte-Carlo implementation: resampling demos

One particle = an ellipse (centers are displayed)

demo clips: no resampling vs. high-threshold resampling


Multi-target tracking
Target representation

5-dimensional parametric ellipse model

x_t = (c_{x,t}, c_{y,t}, a_t, b_t, ρ_t),

where (c_x, c_y) are the coordinates of the ellipse center, (a, b) the major and minor axes, and ρ the orientation angle


Multi-target tracking
Modeling of densities

Local observation model p(z_t | x_t)

Single cue: color histogram model

State dynamics model p(x_t | x_{t-1})

Motion-based proposal (Lucas-Kanade optical flow algorithm):

motion vector: ΔV_t = (c_{x,t} − c_{x,t-1}, c_{y,t} − c_{y,t-1}, 0, 0, 0)

sampling scheme: x_t = x_{t-1} + ΔV_t + ω_t,

where ω_t is Gaussian noise
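The sampling scheme above can be written down directly; the noise standard deviations and the sample count below are illustrative assumptions, since the slides do not give the values used in the thesis:

```python
import numpy as np

rng = np.random.default_rng(1)

def propose(x_prev, flow_dxdy, noise_std=(2.0, 2.0, 0.5, 0.5, 0.02), n=100):
    """Sample particles x_t = x_{t-1} + dV_t + w_t for the 5-D ellipse state
    (cx, cy, a, b, rho); only the center is shifted by the optical-flow vector."""
    dV = np.array([flow_dxdy[0], flow_dxdy[1], 0.0, 0.0, 0.0])
    noise = rng.normal(0.0, noise_std, size=(n, 5))   # Gaussian w_t per component
    return x_prev + dV + noise

x_prev = np.array([120.0, 80.0, 30.0, 15.0, 0.1])     # previous ellipse state
samples = propose(x_prev, flow_dxdy=(3.0, -1.0))      # flow measured by LK
```

Each particle is the previous state shifted by the measured motion plus per-component Gaussian noise.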


Multi-target tracking
State estimate

Given: a cloud of particles {x_t^{(i)}, ω_t^{(i)}}_{i=1}^{N_s}

Want to know: what is the current state estimate? (Where is our target located?)

Solution

A weighted sum of the particles!

mean shape
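In code, the weighted-sum estimate is a single line; the particle values below are illustrative:

```python
import numpy as np

particles = np.array([[100.0, 50.0, 30.0, 15.0, 0.0],
                      [104.0, 52.0, 32.0, 16.0, 0.1],
                      [96.0, 48.0, 28.0, 14.0, -0.1]])   # (Ns, 5) ellipse states
weights = np.array([0.5, 0.3, 0.2])                      # normalized importance weights

mean_shape = weights @ particles   # weighted sum of particles: the "mean shape"
```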


Multi-target tracking
Interaction model

when targets are occluding each other, we cannot rely on motion-based propagation anymore

use random-based prediction instead

inertia information can be of use for further data association

motion-based: x_t = x_{t-1} + ΔV_t + Ω_t^−    random-based: x_t = A x_{t-1} + Ω_t^+
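A sketch of the switch between the two proposals; the occlusion flag, the identity choice A = I, and the wider noise under occlusion are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.eye(5)                                   # assumed random-walk dynamics matrix

def propose(x_prev, dV, occluded, n=100):
    """Motion-based proposal normally; broader random-based proposal under occlusion."""
    if occluded:
        # random-based: x_t = A x_{t-1} + Omega_t^+ (no flow term, larger noise)
        return x_prev @ A.T + rng.normal(0.0, 5.0, size=(n, 5))
    # motion-based: x_t = x_{t-1} + dV_t + Omega_t^-
    return x_prev + dV + rng.normal(0.0, 1.0, size=(n, 5))

x_prev = np.array([120.0, 80.0, 30.0, 15.0, 0.1])
free_samples = propose(x_prev, dV=np.array([3.0, -1.0, 0.0, 0.0, 0.0]), occluded=False)
occ_samples = propose(x_prev, dV=None, occluded=True)
```

Under occlusion the cloud spreads out, trading precision for the chance of reacquiring the target.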


Multi-target tracking
Interaction model: demo

demo clip: random-based prediction


Multi-camera tracking
Multi-camera data fusion: a problem illustration

The problem

How do we associate targets in different views so that they have the same identities? No calibration information is given!

Rely on appearance?


Multi-camera tracking
Multi-camera data fusion: the Gale-Shapley (1962) algorithm

General idea

given a preference list for each target

helps to find a stable matching

Modifications

each preference has a probability (likelihood)

the preference list must be built beforehand

Drawbacks

2 cameras only

equal number of targets in both camera views

proposer optimality and acceptor pessimality
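The classic algorithm can be sketched as follows; this is a textbook implementation, not the thesis code. In the tracking setting, proposers and acceptors would be targets in views A and B, with preference lists ordered by appearance likelihood:

```python
from collections import deque

def gale_shapley(proposer_prefs, acceptor_prefs):
    """Stable matching: proposers propose in preference order; an acceptor
    keeps the best proposal seen so far according to its own ranking."""
    rank = {a: {p: i for i, p in enumerate(prefs)}
            for a, prefs in acceptor_prefs.items()}
    next_idx = {p: 0 for p in proposer_prefs}     # next acceptor to propose to
    match = {}                                    # acceptor -> proposer
    free = deque(proposer_prefs)
    while free:
        p = free.popleft()
        a = proposer_prefs[p][next_idx[p]]
        next_idx[p] += 1
        if a not in match:
            match[a] = p
        elif rank[a][p] < rank[a][match[a]]:      # a prefers the new proposer
            free.append(match[a])
            match[a] = p
        else:
            free.append(p)                        # rejected, try the next choice
    return {p: a for a, p in match.items()}

# Targets X, Y in view A propose to targets A, B in view B.
pairing = gale_shapley({'X': ['A', 'B'], 'Y': ['A', 'B']},
                       {'A': ['Y', 'X'], 'B': ['X', 'Y']})
```

Here both proposers prefer acceptor A, but A prefers Y, so X ends up matched with B; the result is stable, and proposer-optimal as noted in the drawbacks above.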


Fall detection
Feature extraction

The extracted features will be used later for activity recognition

Silhouette-based features

basic: aspect ratio, height of the center of mass, orientation, major and minor axes, height of the bounding box

advanced: edge histogram

Motion-based features

motion direction, speed value, motion change gradient, etc.

Output

Feature vector
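Packing the basic silhouette features from the tracked ellipse state into a single vector might look like this; the exact feature set, ordering, and the sample numbers are assumptions for illustration:

```python
import numpy as np

def silhouette_features(cx, cy, a, b, rho, bbox_height):
    """Assemble basic silhouette features into one vector for the classifier."""
    aspect_ratio = a / b                # elongation of the fitted ellipse
    center_height = cy                  # center of mass in image coordinates
    return np.array([aspect_ratio, center_height, rho, a, b, bbox_height])

# Standing person: upright ellipse, center of mass high in the frame
# (image y grows downward, so a smaller cy means higher in the frame).
standing = silhouette_features(cx=160, cy=90, a=60, b=20, rho=1.55, bbox_height=130)
# Fallen person: the ellipse lies roughly horizontally, center near the floor.
fallen = silhouette_features(cx=160, cy=200, a=60, b=20, rho=0.05, bbox_height=45)
```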


Fall detection
Classification

The algorithm

Support Vector Machine (SVM)

helps to classify data into classes

2 classes: 'fall' and 'no fall'

supervised learning method → needs some training

an OpenCV implementation is available
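For illustration only, a minimal linear SVM trained with the Pegasos sub-gradient method on toy 'fall' / 'no fall' feature pairs; this from-scratch sketch stands in for the OpenCV implementation mentioned above, and the toy features (aspect ratio, normalized center height) and labels are assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=2000):
    """Pegasos: stochastic sub-gradient descent on the regularized hinge loss."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)
            if yi * (w @ xi) < 1:             # margin violated: hinge sub-gradient
                w = (1 - eta * lam) * w + eta * yi * xi
            else:
                w = (1 - eta * lam) * w       # regularization shrinkage only
    return w

# Toy features: (aspect ratio, normalized center height; larger = lower in frame).
X = np.array([[3.0, 0.9], [2.5, 0.8], [2.8, 0.95],    # lying low: 'fall' (+1)
              [0.4, 0.3], [0.5, 0.2], [0.3, 0.35]])   # upright: 'no fall' (-1)
y = np.array([1, 1, 1, -1, -1, -1])
X_aug = np.hstack([X, np.ones((len(X), 1))])          # append a bias term

w = train_linear_svm(X_aug, y)
predictions = np.sign(X_aug @ w)
```

On such clearly separable toy data the learned hyperplane recovers the training labels; in practice the full feature vector from the previous slide would be used, with a held-out test set.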



Conclusions

Multi-target tracking

Status: the Bayesian framework was implemented and adapted

Analysis: tests for two people only

Extensions: use additional cues for robustness (e.g. a PCA-based appearance model)

Multi-camera data fusion

Status: the Gale-Shapley algorithm was implemented

Analysis: simulation of a system with 2 cameras

Extensions: extend from 2 to more cameras

Fall detection

Status: the procedure is described theoretically

Extensions: embed it into the system and apply it on databases for tracking and fall detection
