[IEEE 2012 Visual Communications and Image Processing (VCIP) - San Diego, CA, USA...



ENHANCED MOVING OBJECT DETECTION USING TRACKING SYSTEM FOR VIDEO SURVEILLANCE PURPOSES

Axel Beaugendre, Chenyuan Zhang, Jiu Xu, Satoshi Goto

Waseda University, Japan

ABSTRACT

Foreground extraction and moving object detection are often used in human tracking systems. However, those methods cannot produce accurate results when objects are too close or when occlusions happen, since the result is generally a single big blob which contains all the different objects. In this paper we propose a novel and efficient moving object detection enhancement method. By using the results of the previous tracking iteration, we keep a correct number of moving objects by removing useless blobs and by splitting those which contain more than one tracker. The method can also speed up the tracking by creating a link between a tracker and a blob, avoiding unnecessary processing in some situations.

Index Terms— tracking, foreground, detection, moving object, refinement.

1. INTRODUCTION

Detecting, recognizing and tracking objects in video sequences are important topics in computer vision. The final objective is to get a complete understanding of the scene. Researchers have been, and still are, trying to create automatic systems which can substitute for human beings for certain purposes like surveillance. One of the most important features of intelligent video surveillance is to detect and track objects over time in natural busy scenes and to keep a consistent identity for each target object through sequences [1]-[3]. In many systems, the first step for automatic video surveillance is foreground extraction, using for example an adaptive background subtraction [1] or a foreground subtraction [6]. Tracking from this foreground can be easy for single local targets, but the accuracy is very low when multiple tracked objects merge into groups. The occlusion complexity makes it a very challenging task. Although many algorithms for multi-object tracking have been proposed in the past years, both accuracy and speed can still be improved [13]. Using a human appearance model can achieve good results for tracking multiple standing and walking humans. However, when occlusions occur, especially in large crowds, the visual appearance alone is not enough to deal with that kind of problem.

Many algorithms to extract moving objects have been proposed in the past years. In 1999 the popular Gaussian Mixture Model (GMM) was proposed by Stauffer and Grimson [5]. In 2005, Kim [6] introduced the codebook, a non-parametric algorithm for background subtraction, and in 2011 Xu [8] proposed a block-based codebook model with an oriented-gradient feature. A fast object detection method based on a likelihood background model has also been proposed by Ikeda [7]. All of those methods, however, are unable to detect individual targets when two moving objects are too close and fuse into a single blob.

Among the proposed tracking methods, researchers have used joint data association or a joint state space representation to handle interactions among objects when multiple filters are used to track multiple objects [9, 10]. In [10] an interaction feature between filters was introduced to solve the merging and splitting problems. Okuma et al. [12] developed a boosted particle filter (BPF) which is an improvement of the mixture particle filter of Vermaak et al. In the BPF, the proposal distribution of each object is estimated by integrating object detection and dynamic models. Li et al. [13] proposed another efficient tracking system combining dominant color histograms (DCH), directed acyclic graphs and depth order into a sequential strategy.

The common process of tracking systems which use moving object detection is to first extract the foreground from the current frame and then perform the tracking using those extracted foreground elements. Foreground extraction is, however, very sensitive to group situations and also to uninteresting moving objects in the background, like trees on windy days. Indeed, in cases where people are walking in a group or where two people are crossing each other, the extraction results in a single blob for the whole group instead of one blob per moving target. The method we propose here is both an efficient moving-noise removal feature and a moving object refinement which is able to recover the proper number of moving objects where common methods detect a single blob. Our method is also adaptable to any foreground detector and to any tracking system, since it provides the same kind of output data as the foreground detection.

2. MOVING OBJECT DETECTION ENHANCEMENT USING THE TRACKING SYSTEM

The level of scene understanding needed to achieve good results is not the same for object detection and for tracking. The detection process works with low-level data (the pixel values) while the tracking process works with high-level data, like information about moving objects (speed, size, appearance, etc.). The main idea in this proposal is to bring high-level information into the moving object detection part in order to enhance the detection results. First, by bringing in the notion of targets, we can refine the results, remove unnecessary blobs and create new blobs where moving objects should be. Then, by bringing in the notions of position, speed and space occupancy, we can split and move the blobs to fit a more realistic state of the scene.

Fig. 1. Refinement process incorporated into a classic tracking system.

2.1. Refinement Using Last Tracking Results

In moving object detection, the background can bring a lot of noise. Indeed, moving trees for example are a major source of uninteresting and noisy moving objects. It is, however, quite difficult to erase all that noise automatically. The purpose of the refinement process is to remove the noise by keeping only interesting moving regions, which are, for a tracking purpose, those where the targets are located.

Once the background subtraction is done and the blobs are detected, we check for the presence of trackers on the blobs by counting their number inside each blob. The position of each tracker is determined by its position in the last frame and by its speed according to the previous iterations. A probable position is calculated and four situations can occur. First, if no tracker is on a blob, then this blob is very probably noise and we can erase it. The second scenario is when only one tracker is on a blob. Since no other tracker shares the blob, the tracker can be associated to this blob. Then a distance check between the center of the blob and the center of the tracker determines whether the blob has to be replaced by a copy of the tracker or not. Indeed, even if there is only one target inside the blob, some noise from the background could have merged with it, increasing the size of the blob with background parts. The third case appears when two or more trackers share the same blob. This is an indication that the blob obviously contains at least two targets but, depending on their positions, the blob might also contain a lot of background parts. This is a problem because we want moving blobs which fit reality by being exactly at the place of the targets, with their shape. In this situation, our aim is to split the blob into the number of trackers it contains. Finally, the last situation which can happen is a tracker that is not on any blob. This can mean either that the target stays still, in which case the background subtraction does not detect it anymore, or that the target has left the scene. To deal with this, we give the tracker a chance by creating a blob at its position and by starting a counter. After a few frames, if there is still no moving blob under the tracker, then the tracker is deactivated.

In order to remove and split blobs efficiently, targets must not be split into multiple blobs during the blob extraction process. Since we want to remove useless blobs, we do not want to remove blobs which represent a part of a target. By merging all blobs which are close to each other we can avoid this kind of situation. There is also absolutely no problem in fusing a blob representing a target with one from the background, since it is the purpose of this work to sort the good from the bad.
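The close-blob merging described above can be sketched as an iterative fusion of bounding boxes. This is a minimal sketch, assuming blobs are axis-aligned rectangles (x1, y1, x2, y2); the `gap` threshold and function names are illustrative assumptions, not the authors' implementation:

```python
def boxes_close(a, b, gap):
    """True if boxes (x1, y1, x2, y2) overlap or lie within `gap` pixels."""
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    return (ax1 - gap <= bx2 and bx1 - gap <= ax2 and
            ay1 - gap <= by2 and by1 - gap <= ay2)

def merge_close_blobs(boxes, gap=10):
    """Repeatedly fuse boxes closer than `gap` until no pair remains close."""
    boxes = [tuple(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        out = []
        while boxes:
            cur = boxes.pop()
            i = 0
            while i < len(boxes):
                if boxes_close(cur, boxes[i], gap):
                    o = boxes.pop(i)
                    # Replace the pair by their bounding union
                    cur = (min(cur[0], o[0]), min(cur[1], o[1]),
                           max(cur[2], o[2]), max(cur[3], o[3]))
                    merged = True
                else:
                    i += 1
            out.append(cur)
        boxes = out
    return boxes
```

For example, two boxes 2 pixels apart fuse into one while a distant box is left alone; a real system would run this on the connected components of the foreground mask.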

Algorithm 1 Refinement Algorithm
1: Input: tracker results {Ti, i = 1, ..., Nt}; blobs {Mj, j = 1, ..., Nb}
2: Assignment
3: for all Ti do
4:   move Ti according to its prediction model
5:   for all Mj do
6:     if the center ci of Ti is inside the area of Mj then assign tracker Ti to the blob Mj
7:   end for
8:   if Ti is not assigned to any blob then create a new blob as a copy of Ti, assign it and start a counter
9: end for
10: Refinement
11: for all Mj do
12:   r.1: if the number of assigned trackers njt = 1 then: if δic < γ, link the assigned tracker to the blob; otherwise create a copy of the tracker and link it
13:   r.2: if njt >= 2 then split the current blob into njt blobs according to Algorithm 2
14: end for
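Algorithm 1 can be sketched in Python. The tracker representation (a dict carrying a predicted bounding box), the names `refine` and `gamma`, and the return convention are illustrative assumptions, not the authors' implementation:

```python
import math

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def contains(box, pt):
    x1, y1, x2, y2 = box
    return x1 <= pt[0] <= x2 and y1 <= pt[1] <= y2

def refine(trackers, blobs, gamma=20):
    """trackers: dicts with a predicted 'box' (x1, y1, x2, y2); blobs: boxes.
    Returns (kept_blobs, to_split) where to_split pairs a blob with its trackers."""
    assigned = {i: [] for i in range(len(blobs))}
    lost = []                          # trackers whose center hits no blob
    for t in trackers:
        c = center(t['box'])
        for i, b in enumerate(blobs):
            if contains(b, c):
                assigned[i].append(t)
                break
        else:
            lost.append(t)
    kept, to_split = [], []
    for i, b in enumerate(blobs):
        ts = assigned[i]
        if not ts:
            continue                   # no tracker: the blob is probably noise
        if len(ts) == 1:               # r.1: one tracker on the blob
            t = ts[0]
            if math.dist(center(b), center(t['box'])) < gamma:
                kept.append(b)         # centers agree: link blob and tracker
            else:
                kept.append(t['box'])  # blob drifted: replace by tracker copy
        else:
            to_split.append((b, ts))   # r.2: split into len(ts) blobs
    for t in lost:
        kept.append(t['box'])          # give the tracker a chance: new blob
    return kept, to_split
```

A counter per newly created blob (deactivating the tracker after a few unmatched frames) would sit on top of this loop in a full system.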

2.2. Splitting Process

The moving object feature can be a powerful tool for tracking, especially for a single target. The limitation of this feature is, in a multiple-target case, the low accuracy when the detector is unable to separate the targets as soon as they get too close to each other. As a result we get a big blob which is a fusion of the moving targets and possibly also of some moving background parts. Our aim during this step is to split the original blob into Nt new blobs, Nt being the number of trackers on the original blob.

The splitting works in two passes: the new blob creation and the repositioning.

2.2.1. First Pass

From the previous iterations we have an estimation of the targets' properties in the current frame. For all the targets we have their size and their estimated position, which we have already calculated using their speed. Thus, we can create artificial blobs, copies of the trackers which are in the blob being split. Those new blobs Mi have exactly the same size and position properties. Once we have all those new blobs, we want to optimize the occupancy of the original blob with the new ones by moving them to fit the borders of the original. We calculate the distances δi_b1 and δi_b2, on the x and y axes, between each pair of corresponding borders of the new blob Mi and the original blob M0. For each pair of borders of Mi, four situations can happen depending on the values of δi_b1 and δi_b2 (Figure 2). If Mi is slightly outside (situations 1 to 3) then Mi is moved inside from the closest border. If all the borders of Mi are inside, then nothing has to be done; the blob may be moved, only if necessary, during the second pass. All new blobs are now prepositioned.

Fig. 2. The four possible situations for the top and bottom borders. The left and right borders follow an identical approach.
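Per axis, the four situations of the first pass amount to clamping each new blob Mi against the borders of M0. A sketch of one axis, with Mi as the interval [lo, hi] and M0 as [lo0, hi0] (the function name and sign conventions are assumptions):

```python
def clamp_axis(lo, hi, lo0, hi0):
    """First pass, one axis: move/resize interval [lo, hi] (new blob Mi)
    so it lies inside [lo0, hi0] (original blob M0)."""
    d1 = lo - lo0           # distance to M0's first border (negative: outside)
    d2 = hi0 - hi           # distance to M0's second border (negative: outside)
    w, w0 = hi - lo, hi0 - lo0
    if d1 <= 0 and d2 <= 0:            # s.1: Mi overflows both borders
        return lo0, hi0                # center on M0 and set width to W0
    if d1 <= 0:                        # s.2: overflows the first border only
        return lo0, lo0 + min(w, w0)   # shift flush with b1, cap width at W0
    if d2 <= 0:                        # s.3: overflows the second border only
        return hi0 - min(w, w0), hi0   # shift flush with b2, cap width at W0
    return lo, hi                      # s.4: already inside, nothing to do
```

Running it on both axes prepositions a copied tracker box inside the blob being split.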

2.2.2. Second Pass

After the first pass, instead of having one big blob M0, we now have one blob for each tracker on M0. However, the positions of the blobs are not ideal. Since a blob is a representation of a moving object, the new blobs should cover every border of the original blob M0 if possible. The repositioning algorithm is based on the distance δi from the blobs to the borders b1 and b2 on each axis and works by couples of borders (left-right, top-bottom). For each couple we calculate the distance from all the blobs to the borders and select the two closest. If the blob which is closest to one border of M0 is also the closest to the opposite border, then we check which of the two borders is closer to the blob and associate the blob to that border; for the other border, the blob which was second closest becomes first and is associated to it. The exception is when a border is too far from any blob (δi >= ε): in that case we do not move any blob to this border, and the border is left unassociated. Those situations occur when moving noise has merged with the other moving objects and shifting a blob would give bad results. When the blobs and borders of M0 have been associated, we move each blob to match the border it has been associated to. M0 is removed at the end of the splitting process since it has been replaced by the new blobs Mi. Figure 3 shows the blob state before and after the splitting process.

Algorithm 2 Splitting Algorithm
1: Input: blob M0; tracker results {Ti, i = 1, ..., Nt}
2: 1st pass: new moving object creation and prepositioning
3: for all Ti associated to M0 do
4:   create a new blob Mi as a copy of Ti
5:   calculate δx1, δx2, δy1 and δy2, the distances between the tracker's borders and M0's respective borders bx1, bx2, by1 and by2
6:   check the situation:
7:   s.1: if δx1 <= 0 && δx2 <= 0
8:     center Mi on M0, associate Mi to the borders bx1 and bx2, and set its width to wi = W0
9:   s.2: if δx1 <= 0 && δx2 > 0
10:    shift Mi on the axis until δx1 = 0, associate Mi to the border bx1, and if wi > W0 set its width to wi = W0
11:   s.3: if δx1 > 0 && δx2 <= 0
12:    shift Mi on the axis until δx2 = 0, associate Mi to the border bx2, and if wi > W0 set its width to wi = W0
13:   s.4: if δx1 > 0 && δx2 > 0
14:     nothing has to be done in this case
15:   same process with δy1 and δy2
16: end for
17: 2nd pass: repositioning
18: for all borders b1,2 on the x and y axes do
19:   if the border has no association then
20:     if the opposite border is associated to 2 or more objects then
21:       select Mi by min(δi)
22:       shift and associate to the border if δi < ε
23:     else if the opposite border is associated to 1 object only then
24:       select Mi by min(δi) with δi to the opposite border != 0
25:       shift and associate to the border if δi < ε
26:     else
27:       select the Mi whose center is closest to the border
28:       shift and associate to the border if δi < ε
29:     end if
30:   end if
31: end for
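The second pass of Algorithm 2 can likewise be sketched per axis: each border of M0 attracts its nearest new blob, unless that blob is already claimed by the closer opposite border (in which case the second-nearest blob is promoted) or the distance exceeds ε. This is a simplified one-axis sketch of that idea, with names and conventions that are assumptions rather than the authors' code:

```python
def reposition_axis(blobs, lo0, hi0, eps=30):
    """Second pass, one axis: push the nearest new blob flush against each
    border of M0 ([lo0, hi0]); blobs are intervals (lo, hi). A border farther
    than `eps` from every blob is left unassociated."""
    if not blobs:
        return blobs
    blobs = list(blobs)
    d_lo = [(b[0] - lo0, i) for i, b in enumerate(blobs)]  # dist to first border
    d_hi = [(hi0 - b[1], i) for i, b in enumerate(blobs)]  # dist to second border
    lo_i = min(d_lo)[1]
    hi_i = min(d_hi)[1]
    if lo_i == hi_i and len(blobs) > 1:
        # Same blob is closest to both borders: keep it on the closer one
        # and promote the second-closest blob on the other border.
        if min(d_lo)[0] <= min(d_hi)[0]:
            hi_i = sorted(d_hi)[1][1]
        else:
            lo_i = sorted(d_lo)[1][1]
    if blobs[lo_i][0] - lo0 < eps:           # only shift if the border is near
        w = blobs[lo_i][1] - blobs[lo_i][0]
        blobs[lo_i] = (lo0, lo0 + w)         # flush with the first border
    if hi0 - blobs[hi_i][1] < eps:
        w = blobs[hi_i][1] - blobs[hi_i][0]
        blobs[hi_i] = (hi0 - w, hi0)         # flush with the second border
    return blobs
```

Applied to both axes, this spreads the new blobs so they cover the borders of the original blob before M0 is discarded.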

Fig. 3. On the left, the trackers and the main blob M0 (in red) before the splitting. In the middle, the moving objects after the first pass. On the right, the main blob and the new set of blobs resulting from the splitting process.

2.3. Tracking System

The second part of the system is the tracking. The particle filter here is based on the work of Beaugendre et al. [9], who used a particle filter tracking system based not only on moving objects but also on color information and interaction between objects.

The particle filter is a Bayesian sequential sampling technique which recursively approximates the posterior distribution using a finite set of weighted samples. The particle filter predicts the posterior over the variable of interest x_k (a rectangle represented by the position and the size of the object) using the previous observations Y_0^{k-1} = {y_1, ..., y_{k-1}} up to time k-1 and the state transition probability p(x_k | x_{k-1}), at time k, as follows:

p(x_k | Y_0^{k-1}) = ∫ p(x_k | x_{k-1}) p(x_{k-1} | Y_0^{k-1}) dx_{k-1}    (1)

The tracking works in two steps, the prediction and the update. The prediction stage uses a model f(k, x_k) with appropriate noise w_k to simulate the movement of the set of particles S_k^i = [x_k^i, b_k^i], knowing the previous position of each particle at instant k-1, the algorithm being recursive. Then, in the update step, we update the particles' weights (or matching probabilities) using the latest measurements to estimate the moving object's probability density function. The measurements are based on an appearance feature (using color and size), a moving object feature and an interaction feature. Finally, degeneracy is limited by the resampling process, which removes particles with low likelihoods and duplicates those with high likelihoods.

The particle filter is a powerful tool but it has some drawbacks. For example, the bigger the set of particles, the bigger the time consumption. While a large number of samples is needed when many targets are getting close to each other, it is a waste when there is only one moving object around. In this paper we propose to deal with this problem by associating a blob and a tracker whenever no other tracker is on the blob. Instead of having to compare each particle to its model, we already have the information needed, which is the position of the target in the current frame. Thus we can skip the whole measurement part and reduce the time consumption of the given tracker.

3. RESULTS

Each tracker's particle filter contains 200 particles for the first two experiments and 300 particles for the third. All other parameters are the same: the position variances and the size variances for the x and y axes are respectively 50, 50, 2.4 and 2.8 for the first two videos and 25, 25, 1.2 and 1.4 for the last one.

Each experiment shows one aspect of the present algorithm. Experiment 1 focuses on the background noise removal while Experiment 2 presents the splitting process through a simple case. The last experiment shows the effectiveness of the algorithm in a more natural case. Different moving object detection methods have been used to emphasize the adaptability of the algorithm to existing moving object detection solutions.

3.1. Experiment One

Fig. 4. On the left, the original images (frames 20 and 31). In the middle, the codebook foreground detection. On the right, the blobs after refinement.

The first experiment is the detection and tracking of a single target. The 1080p video shows a woman walking from left to right with moving trees in the background. In the second column we can observe in the results of the codebook method [8] that the trees, which are supposed to be fixed as part of the background, are in fact detected as moving objects because of the wind. The last column shows the noise removal effect of the refinement. Instead of having multiple blobs, most of which were background noise, we now have only one moving object, which is the one focused on the target. The time consumption of the tracking part for a single target without the refinement is about 0.05 s/frame. Associating the tracker to the blob reduces the time consumption to 0.015 s/frame. This means the tracking part was short-circuited by the association at every frame and the particles did not need to be measured; only the position update was performed in order to calculate the movement of the target and to move all the particles to the estimated position.

3.2. Experiment Two

The second experiment is a 1280x720 sequence where two people cross each other (Figure 5). One of the two men falls, causing both targets to stop for a few seconds. In this example a total occlusion occurs between the two targets. The codebook method [8] (top row) detects only one blob during the whole crossing (frames 70-107) due to the proximity of the two targets. With our method we are able, from the results given by the codebook algorithm, to get two moving objects at all times, even during the crossing. Moreover, the positions of the newly created blobs fit the real targets correctly. Also, even when the blob has the size of a single target, we always create two blobs, one for each tracker, to ease the tracking when the targets start to separate after the crossing.

Fig. 5. Comparison between the moving object extraction results from the codebook method [8] (top) and the results after our refinement method (bottom), at frames 53, 92 and 215.

3.3. Experiment Three

The third sequence (Figure 6) is taken from PETS2009; it is a short video of 768x576 pixels where people are gathering. For this video we compared our results with [7], which we used for the detection part. The tracking stage uses up to 5 trackers of 200 particles and runs at an average speed of 9 fps. The enhancement's speed for 5 trackers is around 0.003 s/frame. The refinement works the same way as in the previous experiments regardless of how the original detection is done, since the refinement directly uses the results from the detection part. It is thus possible to use this method after any moving/foreground detection system. In this scene the moving object detection method [7] produces a lot of small noisy moving objects, and close moving objects are merged into a single blob (frames 6(b), 6(c)). Sometimes there are as many blobs as there are trackers, but it can be just a coincidence, as we can see at frame 6(c); indeed, small parts of the background are detected. We cannot always apply too strong a threshold on the size of the blobs, especially if the field of view is large: we could mistakenly remove blobs which in fact represent real targets. With our method, the background has much less influence since we remove everything which cannot represent a target.

In this example, the targets are getting close to each other, thus forming blobs which are fusions of targets. At this stage, the splitting process shows its value. Since the targets are just starting to get really close to each other, the fused blob of course contains the targets but also a lot of background parts, making the tracking more difficult for a tracker using a moving object feature. In frames 6(e)-6(f) we can observe the effects of the splitting process. All the blobs which contained more than one tracker have been divided into sub-blobs. Not only can we keep a correct number of moving objects, but their positions are also much more accurate and fit the real positions of the targets better.

Fig. 6. Top, the moving object extraction results from the method of [7] (frames 14, 17 and 20). Bottom, the results after our refinement method. Original moving objects (in dark) are kept if necessary or are split into new blobs (in white) if they contain more than one tracker.

4. CONCLUSION

An efficient moving object detection enhancement method has been proposed. It removes uninteresting blobs and also splits blobs containing more than one target. By checking the number of trackers, for which we have information about position and speed, the refinement can delete blobs which do not have any tracker in their proximity. Also, if there is more than one tracker on a blob, then a splitting process creates new blobs as copies of the trackers and then moves them to optimize the space occupancy of the blob being split. The results show that our method is efficient both at removing moving noise from the background and at splitting blobs so as to have the same number of blobs as the number of active trackers. Another use of this method could be to separate targets from all other moving objects, like cars for example, as soon as we know what the targets are.

However, this method has some limitations. Indeed, to be efficient, the detection system has to provide blobs big enough to contain the whole target and not just parts of it. Otherwise the system will consider that the target is either static or has disappeared: even though a new blob is created at first, if the tracker does not match any blob after a few frames it will be deactivated. The second limitation is the dependency on the tracking system. The system has to be able at least to focus correctly on the targets. Swaps of targets are not a problem since there is no affiliation between a tracker and a blob in the splitting process. However, if many targets are in the same blob and the tracking results were not accurate in the last iterations, the new blobs might not be moved correctly.

We now want to carry out more experiments and complete the solution to solve those problems. One approach could be to use a local human detector in the blobs being split. Another would be to extend this method to network camera systems. Indeed, we think that this enhancement process would give even better results on such systems: with different viewing angles the refinement and splitting should give better results, and a new objective would be to maximize the association between blobs and trackers in order to skip all or most of the particle filters' processing.

Acknowledgment

This work is supported by the Kitakyushu Foundation for the Advancement of Industry, Science and Technology (FAIS).

5. REFERENCES

[1] W. Hu, T. Tan, L. Wang and S. Maybank, "A survey on visual surveillance of object motion and behaviors," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 34, no. 3, pp. 334-352, Aug. 2004.

[2] T. Yu and Y. Wu, "Collaborative tracking of multiple targets," in Proc. IEEE Conf. Comput. Vis. Pattern Recog., 2004, vol. 1, pp. 834-841.

[3] T. Zhao and R. Nevatia, "Tracking multiple humans in complex situations," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, pp. 1208-1221, Sep. 2004.

[4] S. L. Dockstader and A. M. Tekalp, "Tracking multiple objects in the presence of articulated and occluded motion," in Proc. Workshop Human Motion, 2000, pp. 88-95.

[5] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking," in Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. 2, 1999.

[6] K. Kim, T. H. Chalidabhongse, D. Harwood and L. Davis, "Real-time foreground-background segmentation using codebook model," Elsevier Real-Time Imaging, vol. 11, no. 3, pp. 167-256, June 2005.

[7] H. Ikeda and E. Ishidera, "An object detection method based on a likelihood background model," FIT2006 (Forum on Information Technology 2006), vol. 3, I-072, pp. 175-176, 2006.

[8] J. Xu, N. Jiang and S. Goto, "Block-based codebook model with oriented-gradient feature for real-time foreground detection," MMSP 2011.

[9] A. Beaugendre, H. Miyano, E. Ishidera and S. Goto, "Human tracking system for automatic surveillance with particle filters," IEEE Asia Pacific Conference on Circuits and Systems, Dec. 2010, pp. 152-155.

[10] S. L. Tang, Z. Kadim, K. M. Liang and M. K. Lim, "Hybrid blob and particle filter tracking approach for robust object tracking," ICCS 2010.

[11] A. Elgammal, D. Harwood and L. Davis, "Non-parametric model for background subtraction," in Proc. 6th European Conference on Computer Vision (ECCV), vol. 2, pp. 751-767, 2000.

[12] K. Okuma, A. Taleghani, N. de Freitas, J. J. Little and D. G. Lowe, "A boosted particle filter: multitarget detection and tracking," in Proc. Eur. Conf. Comput. Vis., 2004, pp. 28-39.

[13] L. Li, W. Huang, I. Yu-Hua Gu, R. Luo and Q. Tian, "An efficient sequential approach to tracking multiple objects through crowds for real-time intelligent CCTV systems," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 38, pp. 1254-1269, Oct. 2008.

[14] R. Hess and A. Fern, "Discriminatively trained particle filters for complex multi-object tracking," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), 2009.