

Page 1: An Automated Vehicle Counting System Based on Blob Analysis …worldcomp-proceedings.com/proc/p2012/IPC4516.pdf · 2014-01-19 · An Automated Vehicle Counting System Based on Blob

An Automated Vehicle Counting System Based on Blob Analysis for Traffic Surveillance

G. Salvi
Department of Economics Studies, University of Naples "Parthenope", Naples, Italy

Abstract— A robust and reliable traffic surveillance system is urgently needed to improve traffic control and management. Vehicle flow detection is an important part of such a system: the traffic flow describes the traffic state over a fixed time interval and supports management and control, especially during traffic jams. In this paper, we propose a traffic surveillance system for vehicle counting. The proposed algorithm is composed of five steps: background subtraction, blob detection, blob analysis, blob tracking and vehicle counting. A vehicle is modeled as a rectangular patch and classified via blob analysis. By analyzing the blobs of vehicles, meaningful features are extracted. Moving targets are tracked by comparing the extracted features and measuring the minimal distance between consecutive frames. The experimental results show that the proposed system can provide real-time and useful information for traffic surveillance.

Keywords: Background subtraction, blob analysis, blob tracking, vehicle counting.

1. Introduction

In recent years, traffic surveillance systems have been extensively discussed and studied because they can provide meaningful and useful information such as traffic flow density, queue length, average traffic speed and the total number of vehicles in a fixed time interval. Generally, a traffic surveillance system relies on one or more sensors. Common traffic sensors include (1) push buttons (detecting pedestrian demand), (2) loop detectors (detecting vehicle presence at one point), (3) magnetic sensors (magnetometers), (4) radar sensors, (5) microwave detectors, and (6) video cameras. The video camera is a promising traffic sensor because of its low cost and its potential ability to collect a large amount of information (such as the number of vehicles, vehicle speed/acceleration, vehicle class, and vehicle tracks), from which higher-level information can also be inferred (incidents, speeding, origin-destination of vehicles, macroscopic traffic statistics, etc.). The video cameras (CCD or CMOS) are connected to a computer that performs image/video processing, object recognition and object tracking. Numerous research projects aiming to detect and track vehicles from stationary rectilinear cameras have been carried out to measure traffic performance over the past decades [1]-[3]. It is widely recognized that vision-based systems are flexible and versatile in traffic monitoring applications if they can be made sufficiently reliable and robust [4], [5]. As the key goal of a traffic surveillance system, the evaluation of traffic conditions can be represented by the following parameters: traffic flow rate, average traffic speed, queue length and traffic density. Many of the proposed methods for extracting traffic condition information are based on vehicle detection and tracking techniques, in which robust and reliable vehicle detection and tracking is a critical step. In this paper, we describe a computer vision system to count vehicles moving on roads. The system analyzes a sequence of road images representing the flow of traffic for a given time period and place. The approach analyzes traffic videos using the following module pipeline:

1. Background Subtraction [6].
2. Blob Detection [7].
3. Blob Analysis.
4. Blob Tracking.
5. Vehicle Counting.

The system works by detecting objects entering the scene and tracking them throughout the video. The input to the algorithm is the raw video data of a site. The algorithm then performs the following steps. First, a statistical background model of the scene is built using the first few frames of the video. This background model collects the statistics of the background of the recorded scene, such as roads, trees and buildings. The model is then used to distinguish the objects of interest (vehicles) from the surroundings. In the next step, the detected foreground parts of the scene are grouped together by a neighborhood analysis, and a filtering process is applied to remove noise and mis-detections. The objects of interest obtained at the end of this step are then tracked throughout the video until they leave the scene. The remainder of this paper is organized as follows. Section II describes the software structure of the vehicle counting system; experimental results are discussed in Section III, followed by a conclusion in Section IV.

2. System Review

The proposed system employs a loop-based approach to detect and count moving vehicles in the scene. By means of the PC software, the user is able to view the real-time image sequence and define a set of regions of interest (ROIs) in a video image. Each ROI is denoted as a virtual detector on every lane (see Fig 1).

Fig. 1: Representative ROIs in the image.

The ROIs are laid out to facilitate vehicle detection and may be linked to make the counting more accurate. Loop-based vehicle detection methods have two main advantages:

1. only the ROIs in the image are processed, which reduces the computational load;

2. object tracking, occlusion handling, and other complex processing steps are not required to count vehicles.

On the other hand, the major disadvantage of such methods is clearly their limited monitoring ability due to the reduced processing areas. The flowchart of the proposed system is shown in Fig 2. First, moving objects are segmented from the captured image sequence using a background subtraction algorithm. Then each segmented object, denoting a vehicle, is bounded by a rectangle, and the center and area of that rectangle are regarded as features of the vehicle. Based on those features, each vehicle is counted.

2.1 Background subtraction

In our application, the camera is fixed above a road; therefore the background seen in the camera images is nearly static. This is the motivation for using background subtraction approaches, which compute a background model of the camera images and update it continuously. We have mainly utilized the algorithm in [6], which is robust and has high detection performance, although it has higher computational requirements than other algorithms in the literature [8], [9]. The performance achieved at this step greatly affects the subsequent stages, and therefore a robust and accurate algorithm is desired.

Fig. 2: System Overview.

The background of a video is generally defined as the objects that stay constant (such as buildings and roads) or move only slightly (such as trees) in the scene. The background subtraction model first builds a histogram-based statistical model of the video image background for each pixel. This model collects the color co-occurrences in the video and applies quantization to decrease the computational load. This selection of color co-occurrences increases the accuracy of the algorithm in the presence of moving background objects. The algorithm then employs a Bayes decision rule to classify pixels as background or foreground. The Bayes decision rule uses the posterior probabilities of the feature vectors, calculating the probability that a feature vector (color co-occurrence) belongs to a specific class; the classification is then made simply by comparing these probabilities. An important property of any effective background subtraction algorithm is that it adapts the background model to changing conditions in the scene. These conditions include the change of lighting during the day and sudden changes in illumination (such as passing clouds blocking the sun), among others. To account for these conditions, the implemented background subtraction algorithm updates the background model at each video frame as follows: after the classification, new features from the background of the scene are collected and used to update the model with a weighted average filter (an infinite impulse response (IIR) filter); that is, both new and old features affect the background model, but to different degrees. Combined with the Bayes decision rule mentioned above, both gradual and sudden changes in the scene can be accounted for and used to update the statistics. Finally, the algorithm maintains a reference image of the background. The block diagram of the algorithm is shown in Fig 3. Because similar pixels exist in both foreground and background, several holes are generated in the binary mask


of the moving object. To solve this hollow phenomenon, the erosion and dilation morphological operations [10] are used. They not only fill the holes but also remove noise from the binary image, thus providing a complete mask of the moving object for better extraction later. An example result of the background subtraction module can be seen in Fig 4.

Fig. 3: Block diagram of the background subtraction algorithm.

Fig. 4: Left: Original video frame, Right: The result of the background subtraction module.
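The background subtraction stage described above can be sketched as follows. The paper uses the Bayes-rule color co-occurrence model of Li et al. [6]; here a much simpler running-average (IIR) model stands in for it, operating on grayscale frames stored as lists of lists. The threshold, the learning rate alpha, and the 3×3 structuring element are illustrative assumptions, not values from the paper.

```python
def update_background(bg, frame, alpha=0.05):
    """IIR update: old and new pixel values both contribute, weighted by alpha."""
    return [[(1 - alpha) * b + alpha * f for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def foreground_mask(bg, frame, thresh=30):
    """Pixels far from the background model are marked foreground (1)."""
    return [[1 if abs(f - b) > thresh else 0 for b, f in zip(brow, frow)]
            for brow, frow in zip(bg, frame)]

def _morph(mask, op):
    """Apply op (max = dilation, min = erosion) over each 3x3 neighborhood."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neigh = [mask[y + dy][x + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = op(neigh)
    return out

def close_holes(mask):
    """Dilation followed by erosion fills small holes in the binary mask."""
    return _morph(_morph(mask, max), min)
```

This is only a sketch of the update-and-classify loop; the full Bayes-rule model of [6] additionally maintains per-pixel feature histograms.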

2.2 Blob Detection

The background subtraction module supplies the pixels detected as foreground. In the blob detection module, these pixels are grouped together in the current frame using a contour detection algorithm [7]. The contour detection algorithm groups the individual pixels into disconnected classes and then finds the contours surrounding each class. Each class is marked as a candidate blob (CB). These CBs are then checked by size, and small blobs are removed to reduce false detections. An example result of the blob detection module can be seen in Fig 5.

Fig. 5: Left: Original video frame, Right: The result of the blob detection module.
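A minimal sketch of this stage is shown below. The paper traces contours with the Suzuki-Abe border-following algorithm [7]; connected-component labelling by flood fill is used here as a stand-in, since both group foreground pixels into candidate blobs. The min_size threshold is an illustrative value.

```python
def detect_blobs(mask, min_size=3):
    """Group 8-connected foreground pixels into blobs; drop blobs below min_size."""
    h, w = len(mask), len(mask[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:                     # iterative flood fill
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not seen[ny][nx]):
                                seen[ny][nx] = True
                                stack.append((ny, nx))
                if len(pixels) >= min_size:      # size filter removes noise
                    blobs.append(pixels)
    return blobs
```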

2.3 Blob Analysis

The blob analysis module is one of the most important stages in the pipeline. As shown in Fig 6, this module receives the CBs with their positions as inputs, and provides the new blobs in the current video frame.

Fig. 6: Blob Analysis module.

The blob analysis module identifies which CBs in the current frame belong to the same vehicle. For instance, many CBs may correspond to the same vehicle owing to errors in foreground detection. This correspondence is computed using position: the positions of the CBs in the current frame are compared using k-means clustering. The optimal k for k-means is determined as follows: we run k-means on the given dataset multiple times for different values of k and select the best result. The best value of k is defined as follows:

Algorithm 1 Compute k

k ← n
for i = 1 to n − 1 do
    Ci ← kmeans(cb, i, labelsi)
    if Ci ≤ δ then
        k ← i
        return
    end if
end for

where n and cb are, respectively, the number and the vector of centroids of the CBs in the current frame, and δ is a threshold that determines whether a set of CBs belongs to the same vehicle. The function kmeans implements a k-means algorithm that finds the centers of i clusters and groups the input samples around them. On output, labelsi contains a 0-based cluster index for each sample stored in the rows of the samples matrix. The function returns the compactness measure, which is computed as:

∑_{j=1}^{n} ‖cb_j − centers_{labels_j}‖² (1)

The output of this module is the set of blobs that identify the vehicles in each ROI. Fig 7 shows a result of the blob analysis module.
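Algorithm 1 and the compactness measure of Eq. (1) can be sketched as follows. The kmeans function below is a plain Lloyd's iteration on 2-D centroids and returns the compactness of Eq. (1), i.e. the summed squared distance of each candidate-blob centroid to its assigned cluster center. Seeding the clusters with the first i points and the fixed iteration count are illustrative choices, not details from the paper.

```python
import math

def kmeans(cb, i, iters=20):
    """Lloyd's k-means on 2-D points; returns the compactness of Eq. (1)."""
    centers = list(cb[:i])                       # simple deterministic seed
    for _ in range(iters):
        groups = [[] for _ in range(i)]
        for p in cb:                             # assign each point to its nearest center
            j = min(range(i), key=lambda c: math.dist(p, centers[c]))
            groups[j].append(p)
        centers = [(sum(x for x, _ in g) / len(g),
                    sum(y for _, y in g) / len(g)) if g else centers[j]
                   for j, g in enumerate(groups)]
    return sum(min(math.dist(p, c) ** 2 for c in centers) for p in cb)

def compute_k(cb, delta):
    """Algorithm 1: the smallest i whose compactness falls below delta."""
    n = len(cb)
    for i in range(1, n):
        if kmeans(cb, i) <= delta:
            return i
    return n
```

For example, with two pairs of nearby centroids, compute_k stops at k = 2 as soon as the two-cluster compactness drops below the threshold δ.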

2.4 Blob Tracking

To achieve vehicle-flow counting, the proposed method tracks each blob across successive image frames. After blob analysis, the blobs with their bounding boxes and centroids are extracted from each frame. Intuitively, two objects that are spatially closest in adjacent frames are connected. The Euclidean distance is used to measure the distance between their centroids. In addition, the area of a vehicle is also considered to enhance the tracking. For each object in the current frame, an object with the minimum distance and a similar size is searched for in the previous frames. The matching functions are described in equations (2) and (3).

dist_n = √((x_Bc − x_Bn)² + (y_Bc − y_Bn)²) < δ (2)

Fig. 7: Left: Original video frame, Right: The result of theblob analysis module.

ρ_n = |(w_Bc × h_Bc) − (w_Bn × h_Bn)| < γ (3)

where n is the index of the previous frame, B_c and B_n are, respectively, the blobs in the current and previous frames, and δ and γ are thresholds. If dist_n is minimal and the conditions dist_n < δ and ρ_n < γ are met, the object in the current frame is considered to be the same object as in the previous frame, and the blob that identifies it is assigned the same label.

2.5 Vehicle counting

The main objective of this part is to count and register the vehicle flow for each lane. To achieve automatic bi-directional counting of passing vehicles, the proposed method sets two base lines for each ROI, as shown in Fig 8. A moving vehicle is counted when it passes the base line. When the vehicle passes through area-R, the frame is recorded. In each ROI, the blobs (computed by the blob tracking module) with the same label are analyzed, and the vehicle count is incremented by one if the following constraints are satisfied:

|BLOB_ROI| > 4 (4)

|BLOB_ROI[start].y − BLOB_ROI[stop].y| > δ (5)


Fig. 8: The setting of the proposed vehicle-counting system.

where BLOB_ROI denotes the set of blobs with the same label in a ROI, δ is an appropriate threshold, and BLOB_ROI[start].y and BLOB_ROI[stop].y denote the y coordinates of the first and last blobs in the set BLOB_ROI. Specifically, the method is able to count, for each ROI, vehicles traveling in either direction. The system also calculates the velocity of each vehicle during the counting process: given the corresponding distance on the real road and the frame rate of capture, the velocity can be deduced while the vehicle is passing through area-R.
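The counting rule of Eqs. (4) and (5) and the velocity estimate can be sketched as below. Here blob_track is the list of (x, y) positions of same-label blobs recorded inside one ROI; a vehicle is counted when the track holds more than 4 samples and its vertical displacement exceeds δ. The ROI length in meters and the frame rate used for the velocity are illustrative assumptions.

```python
def count_vehicle(blob_track, delta=20):
    """Eqs. (4)-(5): enough samples and enough vertical travel through the ROI."""
    if len(blob_track) <= 4:                       # Eq. (4)
        return False
    start_y, stop_y = blob_track[0][1], blob_track[-1][1]
    return abs(start_y - stop_y) > delta           # Eq. (5)

def velocity_kmh(roi_length_m, n_frames, fps=25):
    """Speed from the known real-road ROI length and the frames spent crossing it."""
    return roi_length_m / (n_frames / fps) * 3.6   # m/s -> km/h
```

For example, a vehicle crossing a 10 m area-R in 25 frames at 25 fps travels 10 m/s, i.e. 36 km/h.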

3. Experimental Results

The above system has been implemented in C++. It can process around 25 frames per second on a dual-core processor at 2.4 GHz. We evaluated the system on vehicle videos of 320×240 pixels under different illumination conditions, for instance in the morning and in the afternoon. For a fair evaluation, three situations of vehicle flow with different moving directions were simulated, and the results are tabulated in Table 1.

          Precision   Recall    F-Measure
CarFlow1  99.1%       99.22%    99.15%
CarFlow2  99.1%       98.01%    98.55%
CarFlow3  98.5%       97.55%    98.02%
Average   98.9%       98.26%    98.57%

Table 1: Rate of accuracy under different situations.

The evaluation consists of comparing the automatic count of vehicles in the videos against the manual count (ground truth). In the table, CarFlow1 denotes the bi-directional flow situation shown in Fig 9(a), while CarFlow2 and CarFlow3 denote the two uni-directional flow situations shown in Fig 9(b) and (c), respectively. The table shows that the average F-Measure [11], [12], which combines precision (PR) and recall (RE) as their harmonic mean and thus provides a more representative index than PR or RE alone, is above 98%.
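The evaluation metric above is straightforward to reproduce: the F-measure is the harmonic mean of precision and recall, computed from the automatic counts versus the manual ground truth.

```python
def f_measure(precision, recall):
    """Harmonic mean of precision and recall, as used in Table 1."""
    return 2 * precision * recall / (precision + recall)
```

Applied to the CarFlow1 row of Table 1 (PR = 99.1%, RE = 99.22%), this reproduces the reported F-Measure of about 99.15%.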


Fig. 9: Results of vehicle counting in different situations. (a) Bi-directional situation, (b) Single-directional situation-1, (c) Single-directional situation-2.


4. Conclusions

In this paper we presented a system to detect and count moving vehicles in traffic scenes. A virtual loop-based method is used for detection and counting. Long-term tests on actual traffic scenes show that the proposed system reliably estimates the real-time traffic flow rate. The current version is not able to recognize vehicle types (lorry, car, motorcycle); a future improvement could be to implement vehicle classification for each detector in order to enrich the statistics.

References

[1] M. Cao, A. Vu, and M. Barth, "A Novel Omni-Directional Vision Sensing Technique for Traffic Surveillance", IEEE Intelligent Transportation Systems Conference, 2007, pp. 678-683.
[2] M. Bertozzi, A. Broggi, M. Cellario, A. Fascioli, P. Lombardi, and M. Porta, "Artificial Vision in Road Vehicles", Proc. IEEE, vol. 90, no. 7, pp. 1258-1271, 2002.
[3] R. Bishop, "Intelligent Vehicle Applications Worldwide", IEEE Intelligent Systems, 2000.
[4] R. Cucchiara, M. Piccardi, and P. Mello, "Image analysis and rule-based reasoning for a traffic monitoring system", IEEE Trans. Intell. Transport. Syst., vol. 1, no. 2, pp. 119-130, June 2002.
[5] H. Veeraraghavan, O. Masoud, and N. P. Papanikolopoulos, "Computer vision algorithms for intersection monitoring", IEEE Trans. Intell. Transport. Syst., vol. 4, no. 2, pp. 78-89, June 2003.
[6] L. Li, W. Huang, I. Y. Gu, and Q. Tian, "Foreground object detection from videos containing complex background", Proc. of the Eleventh ACM International Conference on Multimedia, Berkeley, CA, USA, November 2-8, 2003.
[7] S. Suzuki and K. Abe, "Topological Structural Analysis of Digital Binary Images by Border Following", CVGIP, vol. 30, no. 1, 1985, pp. 32-46.
[8] C. Stauffer and W. E. L. Grimson, "Adaptive background mixture models for real-time tracking", Proc. IEEE CVPR 1999, pp. 246-252, June 1999.
[9] M. Piccardi, "Background subtraction techniques: a review", IEEE International Conference on Systems, Man and Cybernetics, 2004.
[10] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 2nd ed. New Jersey: Prentice Hall, 2002, pp. 523-527.
[11] P. L. Rosin and E. Ioannidis, "Evaluation of global image thresholding for change detection", Pattern Recognit. Lett., vol. 24, no. 14, pp. 2345-2356, Oct. 2003.
[12] A. Ilyas, M. Scuturici, and S. Miguet, "Real time foreground-background segmentation using a modified Codebook model", in Proc. 6th IEEE Int. Conf. AVSS, 2009, pp. 454-459.