
Page 1:

Visual Object Tracking Based on Local Steering Kernels and Color Histograms
IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, VOL. 23, NO. 5, MAY 2013

Olga Zoidi, Anastasios Tefas, Member, IEEE, Ioannis Pitas, Fellow, IEEE

Page 2:

Overview

• Introduction

• Proposed method

• Experimental Results

• Conclusion

Page 3:

Overview

• Introduction

Visual tracking

Object representation

Object position prediction

• Proposed method

• Experimental Results

• Conclusion

Page 4:

Visual Tracking

• Visual tracking is difficult to accomplish for several reasons:

Changing illumination conditions

The object may be nonrigid or articulated

The object may be occluded

Rapid and complicated object movements

Page 5:

Overview

• Introduction

Visual tracking

Object representation

Object position prediction

• Proposed method

• Experimental Results

• Conclusion

Page 6:

Object Representation

• Model-based

• Appearance-based

• Contour-based

• Feature-based

• Hybrid

Page 7:

Object Representation: Model-Based

• Exploits a priori information about the object shape to create a model [7].

• Deals with the problem of object tracking under illumination variations, viewing angle changes, and partial occlusion.

• Heavy computational cost.

[7] D. Koller et al., “Model-based object tracking in monocular image sequences of road traffic scenes,” Int. J. Comput. Vision, vol. 10, pp. 257-281, Mar. 1993.

Page 8:

Object Representation: Appearance-Based

• Uses the visual information of the object projection on the image plane, i.e., color, texture, and shape.

• Handles simple object transformations.

• Sensitive to illumination changes.

Page 9:

Object Representation: Contour-Based

• Employs shape matching or contour-evolution techniques [9]. The contour can be represented by active models, such as snakes or B-splines [10].

• Handles both rigid and nonrigid objects.

• Can be combined with occlusion detection and estimation techniques.

[9] A. Yilmaz, X. Li, and M. Shah, “Contour-based object tracking with occlusion handling in video acquired using mobile cameras,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 11, pp. 1531-1536, Nov. 2004.
[10] Y. Wang and O. Lee, “Active mesh – a feature seeking and tracking image sequence representation scheme,” IEEE Trans. Image Process., vol. 3, no. 5, pp. 610-624, Sep. 1994.

Page 10:

Object Representation: Feature-Based

• Tracks a set of feature points, which are then grouped into objects.

• The main problem is correctly distinguishing the target object features from background features.

Page 11:

Overview

• Introduction

Visual tracking

Object representation

Object position prediction

• Proposed method

• Experimental Results

• Conclusion

Page 12:

Object Position Prediction

• The position of the object in the following frame is usually predicted using a linear Kalman filter [32].

[32] G. Welch and G. Bishop, “An introduction to the Kalman filter,” Univ. North Carolina, Chapel Hill, NC, Tech. Rep. TR95041, 2000.

Page 13:

Overview

• Introduction

• Proposed method

• Experimental Results

• Conclusion

Page 14:

Proposed Method

• Common techniques in object tracking algorithms:

Using a color histogram (CH) to handle severe changes in the object view.

Decomposing the target into fragments, which are tracked separately, to handle partial occlusion.

Using a local steering kernel (LSK) object texture descriptor to represent the region of interest (ROI).

• The proposed tracking approach is an appearance-based method using both CHs and the LSK descriptor.

Page 15:

Proposed Method

• First, image regions of the video frame with high color similarity to the object CH are found; these form the candidate regions.

• Next, LSK descriptors of both the target object and the candidate search regions are extracted.

• Image regions with small CH similarity to the object CH are discarded; the new object position is the image region whose LSK representation has the maximum similarity to that of the target object.

• As tracking evolves, the target object appearance changes, so different object instances are kept in a stack. The stack is updated with the representation of the most recently detected object.

Page 16:

LSK Object Tracking Framework

• Steps

A. Initialization of the object ROI in the first video frame. Initialization can be done manually.

B. Use of CH information for a color similarity search in the current search region. CH information leads to background subtraction and a reduction of the number of candidate regions.

C. Representation of both the object and search region with LSK features.

D. Decision on the object ROI in the new frame, based on the similarities between candidate and: a) ROI in the previous frame, and b) top object instance in the stack.

E. Update the object instance in the stack.

F. Prediction of the object position in the following video frame and initialization of an object search region.
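
A minimal sketch of how steps A-F might fit together in a per-frame loop is given below. All helper callables (predict_search, color_candidates, lsk_describe, cosine_sim) are hypothetical placeholders standing in for the components described on the following slides, not the authors' code.

```python
from collections import deque

def track(frames, init_roi, predict_search, color_candidates, lsk_describe,
          cosine_sim, stack_size=5, lam=0.5):
    """Skeleton of steps A-F; the callables are hypothetical placeholders."""
    roi = init_roi                                   # A. manual ROI initialization
    prev_feat = lsk_describe(frames[0], roi)
    stack = deque([prev_feat], maxlen=stack_size)    # object instance stack
    for frame in frames[1:]:
        search = predict_search(roi)                 # F. Kalman-predicted search region
        candidates = color_candidates(frame, search, roi)      # B. CH-based pruning
        feats = [lsk_describe(frame, c) for c in candidates]   # C. LSK representation
        # D. similarity to the previous ROI and to the top object instance in the stack
        scores = [lam * cosine_sim(f, prev_feat) + (1 - lam) * cosine_sim(f, stack[-1])
                  for f in feats]
        best = max(range(len(scores)), key=scores.__getitem__)
        roi, prev_feat = candidates[best], feats[best]
        stack.append(prev_feat)                      # E. update the object instance stack
    return roi
```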

Page 17:

Overview

• Introduction

• Proposed method

Color Similarity

Object Texture Description

Object Localization and Model Update

Search Region Extraction in The Next Frame

• Experimental Results

• Conclusion

Page 18:

Color Similarity

• After object position prediction and search region selection, the search region of size R1×R2 is divided into candidate ROIs of size Q1×Q2.

• Parameter d determines a uniform sampling of the candidate object ROIs every d pixels in the search region.

• At frame t, the Bt% of the search region patches with the minimal histogram similarity to the object histogram are considered to belong to the background.

• Cosine similarity between the object histogram h_o and a candidate patch histogram h_c: s = h_o^T h_c / (||h_o|| ||h_c||)

• Normalized form, following the resemblance measure of [26]: S = s^2 / (1 - s^2)
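
The per-patch color similarity computation could look like the following sketch, a simplified single-channel version in which the function name, bin count, and histogram range are assumptions rather than the authors' implementation.

```python
import numpy as np

def patch_color_similarity(search_region, obj_hist, q1, q2, d, bins=16):
    """Cosine similarity between the object color histogram and the histogram of
    every candidate patch, sampled every d pixels, as on the slide above."""
    r1, r2 = search_region.shape[:2]
    sims = []
    for y in range(0, r1 - q1 + 1, d):
        row = []
        for x in range(0, r2 - q2 + 1, d):
            patch = search_region[y:y + q1, x:x + q2]
            h, _ = np.histogram(patch, bins=bins, range=(0, 256))
            h = h.astype(float)
            s = h @ obj_hist / (np.linalg.norm(h) * np.linalg.norm(obj_hist) + 1e-12)
            row.append(s)
        sims.append(row)
    return np.array(sims)   # low-similarity patches are treated as background
```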

Page 19:

Color Similarity

• The S values of all patches in the three color channels form a matrix MCH.

• A threshold derived from the distribution of the MCH values decides whether each patch is a valid candidate ROI.

• Finally, a binary matrix BCH is formed, whose entries are set to 1 if the corresponding entry of MCH is ≥ the threshold and to 0 otherwise. BCH is used for tracking in the Object Localization section.

• The threshold is computed from the mean, maximal, and minimal values of the MCH entries.
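
A hedged sketch of the thresholding step follows. The slide only states that the threshold is derived from the mean, maximal, and minimal entries of MCH, so the convex-combination rule below is an assumed placeholder, not the paper's formula.

```python
import numpy as np

def binarize_color_similarity(m_ch, alpha=0.5):
    """Turn the color-similarity matrix MCH into the binary candidate mask BCH.
    The threshold rule here is an assumption standing in for the paper's rule."""
    threshold = m_ch.min() + alpha * (m_ch.max() - m_ch.min())
    return (m_ch >= threshold).astype(np.uint8)   # 1 = valid candidate ROI, 0 = background
```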

Page 20:

Overview

• Introduction

• Proposed method

Color Similarity

Object Texture Description

Object Localization and Model Update

Search Region Extraction in The Next Frame

• Experimental Results

• Conclusion

Page 21:

Object Texture Description

• Introduction of LSK descriptors

LSKs are descriptors of the image salient features.

They have been shown to be robust to small scale and orientation changes and to deformations.

They result in successful tracking of slowly deformable objects.

• LSK descriptors are a nonlinear combination of weighted spatial distances between a pixel p of an image of size N1×N2 and its surrounding M×M pixels pi (M is equal to 3 pixels in this paper).

• The distance K is measured using a weighted Euclidean distance, which uses as weights the covariance matrix Ci of the image gradients.
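
For reference, the standard local steering kernel form from the locally adaptive regression kernel literature that [26] builds on is shown below; the smoothing parameter h and the exact normalization are assumptions and may differ from the slides.

```latex
K_i(\mathbf{p}) \;=\; \frac{\sqrt{\det \mathbf{C}_i}}{h^2}\,
\exp\!\left( -\frac{(\mathbf{p}-\mathbf{p}_i)^{\top}\,\mathbf{C}_i\,(\mathbf{p}-\mathbf{p}_i)}{2h^2} \right),
\qquad i = 1,\dots,M^2
```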

Page 22:

Object Texture Description

• In order to obtain the matrix Ci in Ki(p), the gradient vectors gi are computed and collected into a matrix Gi of size M^2×2.

• Ci is then calculated via the singular value decomposition (SVD) of Gi.

• Gi = Ui Si Vi^T, and Ci is formed from the singular values and the right singular vectors of Gi.
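
A sketch of this covariance estimation in the spirit of [26] is given below; the regularization constant and scaling exponent are assumptions, so the paper's exact values may differ.

```python
import numpy as np

def steering_covariance(grad_x, grad_y, lam=1.0, alpha=0.5):
    """Gradient covariance matrix C_i via SVD of the stacked local gradients G_i."""
    # Stack the M*M local gradients into G_i of shape (M^2, 2).
    G = np.column_stack([grad_x.ravel(), grad_y.ravel()])
    _, s, vt = np.linalg.svd(G, full_matrices=False)      # singular values s[0] >= s[1]
    a1 = (s[0] + lam) / (s[1] + lam)                       # elongation along v1
    a2 = (s[1] + lam) / (s[0] + lam)
    gamma = ((s[0] * s[1] + 1e-8) / G.shape[0]) ** alpha   # overall scaling
    C = gamma * (a1 * np.outer(vt[0], vt[0]) + a2 * np.outer(vt[1], vt[1]))
    return C   # 2x2 symmetric positive semi-definite matrix
```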

Page 23:

Object Texture Description

• For each neighboring pixel pi, i = 1, …, M^2, the kernel values Ki(p) are extracted and normalized by their L1-norm into a feature vector.

• To apply the above concepts, the ROI and the search region are first converted from the RGB to the CIE L*a*b* color space, and the LSKs are computed for each channel separately through the steps above.

• The final representation of the ROI is obtained by applying PCA [26].

• Finally, the search region is divided into patches and the LSK similarity matrix, which will be used in the next section, is estimated (as with the color similarity) by applying the cosine similarity measure.

[26] H. Seo and P. Milanfar, “Training-free, generic object detection using locally adaptive regression kernels,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 32, no. 9, pp. 1688–1704, Sep. 2010.
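
A minimal sketch of the normalization and PCA projection steps; the helper names and the number of retained components are assumptions for illustration.

```python
import numpy as np

def lsk_feature(kernels):
    """L1-normalize the M^2 local steering kernel values of a pixel into a feature vector."""
    k = np.asarray(kernels, dtype=float).ravel()
    return k / (np.abs(k).sum() + 1e-12)

def project_pca(features, n_components=4):
    """Reduce the per-pixel LSK features of an ROI with PCA, in the spirit of [26]."""
    X = np.asarray(features, dtype=float)
    Xc = X - X.mean(axis=0)                         # center the feature matrix
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T                 # projection onto principal components
```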

Page 24:

Overview

• Introduction

• Proposed method

Color Similarity

Object Texture Description

Object Localization and Model Update

Search Region Extraction in The Next Frame

• Experimental Results

• Conclusion

Page 25:

Object Localization and Model Update

• Object localization in the search region is performed by taking into account the CH and LSK similarity of each patch to (1) the ROI in the previous frame and (2) the object instance in the stack.

• First, the search region is divided into overlapping patches of size equal to the detected object, and CH and LSK features are extracted for each patch.

• Then, for each patch, three cosine similarity matrices are constructed:

LSK similarity between the patch and the object detected in the previous frame.

LSK similarity between the patch and the last updated object instance in the stack.

CH similarity between the patch and the last updated object instance in the stack.

Page 26:

Object Localization and Model Update

• The new ROI is decided with a final decision matrix, computed by combining the three similarity matrices through element-wise matrix multiplication (*) and a weighting parameter λ, which usually takes the value 0.5.

• The new candidate object position is at the patch with the maximal value max i,j (Mij).

• This maximal value is compared with the maximal value of the previous frame. If the value drops under a threshold, it indicates a possible change in the object appearance.

• Four additional decision matrices, for rotated and scaled versions of the object, are calculated. The final decision for the new object position is the one that corresponds to the maximal value over the five decision matrices.

• The newly localized object is stored in a stack of constant size.
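
A hedged sketch of the localization step: the slides only state element-wise multiplication and λ = 0.5, so the grouping below (binary color mask times a λ-weighted blend of the two LSK similarity matrices) is an assumption, not the paper's exact formula.

```python
import numpy as np

def decision_matrix(m_lsk_prev, m_lsk_stack, b_ch, lam=0.5):
    """Combine the three per-patch similarity matrices into one decision matrix."""
    return b_ch * (lam * m_lsk_prev + (1.0 - lam) * m_lsk_stack)

def localize(m):
    """New candidate object position: the patch with the maximal decision value."""
    i, j = np.unravel_index(np.argmax(m), m.shape)
    return (i, j), m[i, j]
```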

Page 27:

Overview

• Introduction

• Proposed method

Color Similarity

Object Texture Description

Object Localization and Model Update

Search Region Extraction in The Next Frame

• Experimental Results

• Conclusion

Page 28:

Search Region Extraction in The Next Frame

• The position of the object in the following frame is predicted using a linear Kalman filter.

• The object motion state evolves according to a linear state transition equation: the new state is the previous state propagated by the state transition matrix, plus process noise with a Gaussian probability distribution.

• From the predicted state, the state estimation error covariance is propagated through the standard Kalman prediction equations.

• The process noise covariance follows a stochastic model, which is adjusted adaptively.

• The predicted object position gives the center of the search region in the following frame.
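
A minimal constant-velocity Kalman filter sketch for predicting the search-region center; the state layout, time step, and noise levels are assumptions rather than the paper's adaptive stochastic model.

```python
import numpy as np

class PositionKalman:
    """Linear Kalman filter with a constant-velocity motion model."""
    def __init__(self, x0, y0, dt=1.0, q=1e-2, r=1.0):
        self.s = np.array([x0, y0, 0.0, 0.0])             # state [x, y, vx, vy]
        self.P = np.eye(4)                                 # state estimation error covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)    # only the position is observed
        self.Q = q * np.eye(4)                             # process noise covariance
        self.R = r * np.eye(2)                             # measurement noise covariance

    def predict(self):
        self.s = self.F @ self.s
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.s[:2]                                  # predicted search-region center

    def update(self, zx, zy):
        z = np.array([zx, zy], dtype=float)                # measured object position
        y = z - self.H @ self.s
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.s = self.s + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
```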

Page 29:

Overview

• Introduction

• Proposed method

• Experimental Results

• Conclusion

Page 30:

Experimental Results

• Quantitative evaluation is performed through the frame detection accuracy (FDA) measure.

• FDA measures the overlap between the ground truth object G and the detected object D at a given frame t.
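
A sketch of an overlap-based FDA computation for a single tracked object; normalizing by the mean of the two box areas follows a common FDA definition and is an assumption about the exact formula used here.

```python
def frame_detection_accuracy(gt_box, det_box):
    """Overlap ratio between a ground-truth box G and a detected box D at one frame.
    Boxes are (x1, y1, x2, y2)."""
    ix1, iy1 = max(gt_box[0], det_box[0]), max(gt_box[1], det_box[1])
    ix2, iy2 = min(gt_box[2], det_box[2]), min(gt_box[3], det_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)          # intersection area
    area_g = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    area_d = (det_box[2] - det_box[0]) * (det_box[3] - det_box[1])
    return inter / ((area_g + area_d) / 2.0 + 1e-12)
```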

Page 31:

Experimental Results

• The performance of the proposed tracker is compared with two other trackers: the PF tracker and the FT tracker.

• Test case:

Page 32:

Experimental Results

Page 33:

Experimental Results

• Test Case 1: test for variation of object scale

Page 34:

Experimental Results

• Test Case 2: test for variation of object scale

Page 35:

Experimental Results

• Test Case 3: test for variation of object rotation

Page 36:

Experimental Results

• Test Case 4: test for partial occlusion

Page 37:

Experimental Results

• Test Case 5: test for partial occlusion; the man walks behind the woman

Page 38:

Experimental Results

• Test Case 6: test for a strong change in illumination

Page 39:

Experimental Results

• Test Case 7: test for human activity (orientation of a glass)

Page 40:

Experimental Results

• Test Case 8: test for human activity (hands are articulated objects)

Page 41:

Experimental Results

• Test Case 9: test for human activity

Page 42:

Experimental Results

• Test Case 10: test for face tracking

Page 43:

Experimental Results

Page 44:

Overview

• Introduction

• Proposed method

• Experimental Results

• Conclusion

Page 45:

Conclusion

• The tracker extracts a representation of the target object based on LSKs and CHs at frame t and tries to find its location in frame t+1.

• The proposed method is effective in object tracking under severe changes in appearance, affine transformations, and partial occlusion.

• The method cannot handle the case of full occlusion (the tracker continues by tracking another object in the background).

• The Kalman filter cannot follow sudden changes in the object direction or speed. (A larger search region might solve this issue, but it would result in a rapid decrease of tracking speed.)