
Video Based System for Railroad Collision Warning

Jonny A. Uribe¹, Luis Fonseca², J. F. Vargas³

¹GITA – Grupo de Investigación en Comunicaciones Aplicadas

²GEPAR – Grupo Electrónica de Potencia, Automatización y Robótica

³Grupo de Investigación Microelectrónica y Control

Departamento de Ingeniería Electrónica

Universidad de Antioquia,

Medellín - Colombia

{jauribe, luis.fonseca, jfvargas}@udea.edu.co

Abstract— Autonomous systems can assist humans in the important task of safe driving. Such systems can warn people about possible risks, take actions to avoid accidents, or guide the vehicle without human supervision. In railway scenarios, a camera in front of the train can aid drivers with the identification of obstacles or strange objects that may pose a danger to the route. Image processing in these applications is not easy to perform: the changing conditions create scenes where the background is hard to detect, lighting varies, and processing speed must be fast. This article describes a first approximation to the solution of the problem, in which two complementary approaches are followed for detecting and tracking obstacles in videos captured from the train driver's perspective. The first strategy is a single-frame approach in which every video frame is analyzed using the Hough transform to detect the rails. Along each rail a systematic search is performed, detecting obstacles that can be dangerous for the train's course. The second approach uses consecutive frames to detect the trajectories of moving objects. By analyzing the sparse optical flow, candidate objects are tracked and their trajectories computed in order to determine whether they are on a collision course. For testing the system we used videos in which preselected fixed and moving obstacles were superimposed using the chroma key effect. The system showed real-time performance in detecting and tracking the objects. Future work includes testing the system in real scenarios and validating it under changing weather conditions.

Index Terms—Obstacle detection, Object tracking, Autonomous train driving, Digital image processing, Hough transform, Optical flow

I. INTRODUCTION

Safe autonomous driving is a huge technical challenge that still lacks a general solution. Researchers have explored different artificial vision methods for implementing automatic driving algorithms, but many issues such as changing lighting, changing backgrounds, and processing speed turn this problem into a complex task. Autonomous train driving is a particular case where the previous problems are also present, but whose specific characteristics suggest the use of different methodologies for the design and implementation of vision algorithms.

We present a first approximation to the problem of detecting both fixed and moving objects in a railway system. Our intention was to explore different techniques for creating a system capable of warning a driver about possible threats on the route. We employed two complementary approaches for detecting and tracking the objects. The first strategy detects the rails and scans the area near them in a bottom-up way, searching for possible obstacles. This method was effective in detecting fixed objects in front of the train and obstacles just beside the rails. The second approach is based on the optical flow between frames. Discarding background moving elements, our algorithm finds candidate dangerous objects, tracks their trajectories, and predicts their paths to determine whether a collision course exists. This method has the advantage of warning in advance when an object could pose a danger to the safety of the train.

Our work is a first local approximation to the creation of an automatic train driving system and a continuation of the work described in [1]. Nowadays, metropolitan train systems have been forced to find new strategies for keeping user experience quality high and complying with requirements to increase the number of people using the transportation system. The high rate of passing trains puts the driver in a delicate situation in which he must keep a continuously high level of attention on the railway scene. In these environments it is quite easy to lose concentration, and the chances of making a human mistake increase. Providing the train driver with an automatic aid system could alleviate the effort required and contribute to the safety of the transportation system.

The article is structured as follows: Section II describes related work in the field of artificial vision and automatic driver assistance in the railway domain. Section III presents the proposed system and implementation details. Experiments and results are described in Section IV. Conclusions are given in Section V and future work is detailed in Section VI.

II. RELATED WORK

Primarily two strategies guide the design and implementation of autonomous train driving systems: active and passive approaches. In active systems, emitting devices and associated sensors are placed at the train's head to scan the nearby area. For example, some systems use infrared lasers that emit a beam pattern whose reflection is then analyzed in order to detect obstacles or judge whether the route is safe. Works that follow this approach include [2][3]. The main disadvantages of these kinds of systems are their difficulty in identifying obstacle boundaries reliably, their short range of action, and their low accuracy on curved sections. In contrast, passive systems use video cameras at the front of the train and rely on image processing algorithms for obstacle detection. A complementary approach consists of mixing active and passive systems and integrating the multiple sources of information (see for example [4][5][6]). Our work focuses on a passive system where the only source of data is the video obtained from a single camera attached to the front of the train.

Following this approach, the work proposed by Ukai [7] used the Hough transform to track the rails and compute the vanishing point using a moving camera. The camera is dynamically adjusted so that this point is always near the center of the image. For obstacle detection, optical flow is employed together with several algorithms for mitigating the adverse effect of the moving background. In order to detect fixed objects, any rail interruption in the images is labeled as a possible obstacle. A low-luminosity lens on the camera and anti-blurring techniques complement the system.

In the work described in [8] the authors use clothoid segments for rail detection. In each segment several algorithms are used for detecting obstacles, including discontinuities in the lines (rails), gray-scale variability between adjacent segments, optical flow between video frames, and texture statistics within the segments.

In [9], dynamic programming is used to extract the rails and the area between them, but the obstacle detection problem is not considered. By calculating the image gradient with the Sobel operator, the rail segments nearest to the driver are extracted. The Hough transform is then employed to estimate the rails' vanishing point and delimit the area enclosed by the rails. The remaining segments in the upper part of the image are treated recursively.

In [10], a projective transform is used to create a new image in which the rails become parallel, or keep a constant separation between them, as in a bird's-eye view. The Hough transform is employed on this image for tracking and detecting the railway. If the system fails to track the rails at any point, the possibility arises that they are covered by an obstacle. This system requires extensive prior configuration for the projective transform: during the configuration phase an operator must manually indicate, along a complete train route, where the rails are and which area lies between them, and estimate near and far distances.

In [11] rail detection is considered but not obstacle identification. Using the inverse perspective mapping transform [12], a bird's-eye view is computed. With this transformation the distance between the rails remains constant from the bottom to the top of the image, altering the train driver's perspective. Blurring is mitigated using Gaussian filters, and every frame is segmented into ten consecutive parts. In each segment the rail detection task is initiated using a polynomial model. This model computes the distance between pairs of lines and selects the most likely candidates to represent the rails. Information from every segment is then combined and the imaginary central line between the rails is obtained. Using this information the area of interest is identified, but obstacle identification is left as future work.
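As a generic illustration of this kind of perspective remapping (not the specific configuration used in [10] or [11]), a planar homography computed with OpenCV can warp the driver's view into an approximate bird's-eye view. The four source points below are hypothetical calibration values; in practice they would come from a manual setup like the one described above.

// Generic bird's-eye-view remapping via a planar homography; the four source
// points are hypothetical and would normally come from manual calibration of
// the camera/track geometry.
#include <opencv2/opencv.hpp>

cv::Mat toBirdsEyeView(const cv::Mat& frame)
{
    // Trapezoid around the track in the source image (assumed values)...
    cv::Point2f src[4] = { {220.f, 479.f}, {420.f, 479.f}, {360.f, 300.f}, {280.f, 300.f} };
    // ...mapped to a rectangle so that the rails become roughly parallel.
    cv::Point2f dst[4] = { {220.f, 479.f}, {420.f, 479.f}, {420.f, 0.f}, {220.f, 0.f} };

    cv::Mat H = cv::getPerspectiveTransform(src, dst);
    cv::Mat topView;
    cv::warpPerspective(frame, topView, H, frame.size());
    return topView;
}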

All of the previous works had problems identifying the rails under changing environmental conditions, in scenes with low luminosity, and in pronounced curves. Obstacle detection in front of the train is such a hard problem that many authors prefer not to confront it, limiting their approaches to the identification of the rails and the nearby area.

Our work deals with railway identification through the Hough transform and initiates obstacle detection using artificial vision algorithms. Autonomous train driving is still an open research area with no general solution. Our work is a first exploration of the topic, in which we begin the design and implementation of signal processing methods to extract the rails and the area of interest near them, to detect obstacles in front of the train, and to identify moving objects on a collision course. To our knowledge this is the first initiative in our country in this important and relevant area of research.

In the next section we describe the proposed system, its processing phases, and the resources used for adjusting and testing it.

III. METHODOLOGY

A. Motivation

Fig. 1 shows an image in which the rails are captured from the train driver's perspective.

The rails look like lines projecting from the bottom of the image towards the horizon, and they seem to converge at a far point known as the vanishing point. In the zone near the driver the appearance of the rails is quite monotonous along the whole route, while near the horizon there is greater variability, mainly in curved sections. The presence of the rails can be used as a first input for computer vision algorithms.

Fig. 1: Typical railway appearance from the driver perspective

One strategy for detecting the rails consists of extracting the lines in the image and selecting the most likely rail candidates using criteria such as length, angle, and position in the image. After the rails are extracted, the area near the railway and between the rails becomes the zone of interest for detecting obstacles on the route.

Nevertheless, this strategy is insufficient to deal with moving objects that are traveling on a collision path but are still far from the rail area. Detecting these kinds of objects before they reach the rails or hit the train is highly important in order to take action as soon as possible.

Accomplishing this early detection imposes huge challenges on artificial vision algorithms, mainly because, from the train's perspective, all of the background is moving too; discriminating dangerous objects from mere background elements undergoing normal perspective displacement requires a fine tuning that can fail when faced with changing environments.

To detect moving objects we used the optical flow algorithm proposed by Lucas and Kanade [13], and we created a set of rules for discriminating features belonging to the moving background from those belonging to passing objects. The next section describes our system and its complementary approaches for dealing with fixed obstacles over the rails and moving objects on a collision course.

B. System Diagram

As stated, we followed two approaches in order to detect obstacles posing a danger to the traveling train. The first strategy deals with obstacles in the area near the rails. Fig. 2 shows a general diagram of this part of the system. The input to the system is a video recorded from the train driver's perspective, but here we process the video on a frame-by-frame basis. The first task is a preprocessing stage in which a contour-enhancing filter is applied, followed by gray-scale conversion and a dimension-reduction phase.

The Hough transform was the mechanism used to find the lines representing the rails. We found that using a smaller image could improve processing speed without considerably affecting precision, so the rail detection task resizes the image and applies a rectangular mask to eliminate irrelevant background objects. After Canny edge detection and a morphological closing, the Hough algorithm finds candidate lines. Applying several criteria, such as length and position, the lines most likely to represent the rails are selected.
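A minimal OpenCV C++ sketch of this rail-detection stage is shown below. The scale factor, mask geometry, Canny thresholds, Hough parameters, and line-filtering criteria are illustrative assumptions rather than the exact values used in our implementation.

// Sketch of the rail-detection stage (all threshold values are illustrative).
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

std::vector<cv::Vec4i> detectRailCandidates(const cv::Mat& frame)
{
    cv::Mat small, gray, edges, closed;

    // Resize to speed up processing (scale factor is an assumption).
    cv::resize(frame, small, cv::Size(), 0.5, 0.5, cv::INTER_AREA);
    cv::cvtColor(small, gray, cv::COLOR_BGR2GRAY);

    // Rectangular mask keeping the lower region of the image, where the rails appear.
    cv::Mat mask = cv::Mat::zeros(gray.size(), CV_8UC1);
    cv::rectangle(mask,
                  cv::Rect(0, gray.rows / 3, gray.cols, 2 * gray.rows / 3),
                  cv::Scalar(255), cv::FILLED);
    cv::Mat roi;
    gray.copyTo(roi, mask);

    // Edge detection followed by a morphological closing to join broken edges.
    cv::Canny(roi, edges, 50, 150);
    cv::morphologyEx(edges, closed, cv::MORPH_CLOSE,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3, 3)));

    // Probabilistic Hough transform to extract candidate line segments.
    std::vector<cv::Vec4i> lines, rails;
    cv::HoughLinesP(closed, lines, 1, CV_PI / 180, 50, 40, 10);

    // Keep long, steep segments: rails seen from the cab are close to vertical
    // near the driver. Length/angle limits are assumed.
    for (const cv::Vec4i& l : lines) {
        double dx = l[2] - l[0], dy = l[3] - l[1];
        double angle = std::abs(std::atan2(dy, dx)) * 180.0 / CV_PI;
        double length = std::hypot(dx, dy);
        if (length > 40.0 && angle > 30.0 && angle < 150.0)
            rails.push_back(l);
    }
    return rails;
}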

Regarding the obstacle detection stage, the first task is to apply the Canny algorithm. A morphological closing follows, and then the contours in the image are identified. Small and disconnected objects are then eliminated. After filling the contours we start a systematic search using the rails as a guide.

An independent tracking of each rail is conducted using a small dynamic area that moves from the bottom of the rail to its upper part. In each analysis step the area and centroid are computed. When these metrics are far from the expected values, the selected area grows and its centroid is shifted with the intention of enclosing the strange object (see Fig. 3).
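The following sketch illustrates one possible implementation of this bottom-up scan along a single detected rail. The window size, the expected fill ratio, and the deviation thresholds are assumed values for illustration only.

// Sketch of the bottom-up scan along one detected rail (window size and the
// deviation thresholds are illustrative assumptions).
#include <opencv2/opencv.hpp>
#include <cmath>

// 'binary' is the filled-contour image produced by the obstacle-detection branch;
// 'rail' is one candidate rail segment (x1, y1, x2, y2), assumed to start at the
// bottom of the image.
bool scanRailForObstacle(const cv::Mat& binary, const cv::Vec4i& rail,
                         cv::Rect& obstacleBox)
{
    const int winW = 40, winH = 20;       // dynamic search window (assumed)
    const double expectedFill = 0.15;     // expected fraction of rail pixels (assumed)
    const double maxDeviation = 0.25;     // tolerated fill-ratio deviation (assumed)

    // Walk from the bottom of the rail towards the vanishing point.
    cv::Point2f p0(rail[0], rail[1]), p1(rail[2], rail[3]);
    for (float t = 0.f; t <= 1.f; t += 0.05f) {
        cv::Point2f c = p0 + t * (p1 - p0);
        cv::Rect win(int(c.x) - winW / 2, int(c.y) - winH / 2, winW, winH);
        win &= cv::Rect(0, 0, binary.cols, binary.rows);   // clip to image
        if (win.area() == 0) continue;

        // Area (fill ratio) and centroid of the window via image moments.
        cv::Moments m = cv::moments(binary(win), true);
        double fill = m.m00 / double(win.area());
        cv::Point2d centroid = (m.m00 > 0)
            ? cv::Point2d(m.m10 / m.m00, m.m01 / m.m00)
            : cv::Point2d(win.width / 2.0, win.height / 2.0);
        double centroidShift = std::hypot(centroid.x - win.width / 2.0,
                                          centroid.y - win.height / 2.0);

        // A strong deviation from the empty-rail statistics suggests a foreign
        // object; grow the window so that it encloses the candidate obstacle.
        if (std::abs(fill - expectedFill) > maxDeviation || centroidShift > winW / 4.0) {
            obstacleBox = cv::Rect(win.x - winW / 2, win.y - winH / 2,
                                   win.width + winW, win.height + winH);
            return true;
        }
    }
    return false;
}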

In order to detect moving objects that could represent a future danger to the train route but are not yet near the railway zone, we use a strategy based on optical flow. Fig. 4 shows a scheme of the steps followed for detecting and tracking this kind of obstacle. After preprocessing every image, the main features are extracted using the algorithm described in [14].

Fig. 3: Rail tracking for finding obstacles in front of the train

These features represent zones in the image that have a higher chance of being found again in consecutive frames; zones where the corners of objects are clear often allow their displacement and rotation to be tracked across frames.

Fig. 2: System diagram for detecting obstacles over the railways (blocks: Image, Preprocessing, Resize, Mask, Border Detection, Hough Transform, Rail Detection, Close, Contour Detection, Residues Elimination, Contour Filling, Rail Tracking, Obstacle Identification)

Applying the algorithm proposed by Lucas and Kanade [13], the sparse optical flow between frames is computed. After that, a new set of points of interest is obtained, in which several of the previously found features still persist. If the background in the video were relatively static, it would be easy to find the set of features undergoing motion or rotation and therefore to find the foreground objects. Unfortunately, the train's movement creates the visual effect that almost all objects are moving, and in this case the discrimination between foreground and background becomes more challenging.
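A compact sketch of this feature extraction and tracking step, based on the standard OpenCV implementations of [14] and [13], is given below; the parameter values are illustrative assumptions.

// Sketch of feature extraction and sparse optical-flow tracking between two
// consecutive gray-scale frames (parameter values are illustrative).
#include <opencv2/opencv.hpp>
#include <vector>

void trackFeatures(const cv::Mat& prevGray, const cv::Mat& currGray,
                   std::vector<cv::Point2f>& prevPts,
                   std::vector<cv::Point2f>& currPts)
{
    // Shi-Tomasi "good features to track" on the previous frame [14].
    cv::goodFeaturesToTrack(prevGray, prevPts,
                            200,      // max corners (assumed)
                            0.01,     // quality level (assumed)
                            10);      // min distance between corners (assumed)

    // Pyramidal Lucas-Kanade sparse optical flow [13].
    std::vector<uchar> status;
    std::vector<float> err;
    cv::calcOpticalFlowPyrLK(prevGray, currGray, prevPts, currPts, status, err,
                             cv::Size(21, 21), 3);

    // Keep only the features successfully found again in the current frame.
    std::vector<cv::Point2f> keptPrev, keptCurr;
    for (size_t i = 0; i < status.size(); ++i) {
        if (status[i]) {
            keptPrev.push_back(prevPts[i]);
            keptCurr.push_back(currPts[i]);
        }
    }
    prevPts = keptPrev;
    currPts = keptCurr;
}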

The set of features found with the optical flow and under tracking therefore includes several background elements that appear to be moving, although their movement is only apparent and does not represent a danger to the train. This is why we analyze the motion pattern of every detected feature in order to discard harmless background elements. The first step of this procedure is to obtain the velocity field of every relevant feature in the image; features with low velocity magnitude are discarded. The direction of displacement is then considered. Analyzing the motion pattern of objects, we found that many background elements often seem to move away from the rails. Quantifying this observation gives us the following rules:

1. Features on the left moving at an angle between 180 and 270 degrees belong to the background.

2. Features on the right moving at an angle between 270 and 360 degrees belong to the background.

Although these cannot be considered general rules, they give us a first criterion for discriminating typical foreground-background movement patterns. The remaining features are then clustered around three possible centers, and nearby centers are merged as representing the same object. Using the mean and variance of the feature velocities, points are associated as representing a foreground object. If the points' velocities are consistent in their displacement, a candidate obstacle arises and is enclosed to signal its detection. After this, the trajectory of the object is computed and its intersection with the rails predicted. Objects on a collision course generate a warning and are tracked along their entire trajectories.
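The following sketch shows one way this displacement analysis could be realized: slow features are discarded, the two angle rules are applied according to the image half in which each feature lies, and the surviving features are clustered around up to three centers. The thresholds, the angle convention, and the use of k-means are our assumptions for illustration, not necessarily the exact procedure implemented.

// Sketch of the displacement analysis: discard slow features, apply the two
// angle rules, and cluster the survivors into (up to) three groups.
// Thresholds and the clustering call are illustrative assumptions.
#include <opencv2/opencv.hpp>
#include <vector>
#include <cmath>

std::vector<int> selectCandidateFeatures(const std::vector<cv::Point2f>& prevPts,
                                         const std::vector<cv::Point2f>& currPts,
                                         int imageWidth)
{
    const double minSpeed = 2.0;             // pixels per frame (assumed)
    std::vector<int> kept;
    std::vector<cv::Point2f> keptPos;

    for (size_t i = 0; i < prevPts.size(); ++i) {
        cv::Point2f v = currPts[i] - prevPts[i];
        double speed = std::hypot(v.x, v.y);
        if (speed < minSpeed) continue;      // too slow: treated as background

        // Displacement angle measured counter-clockwise in [0, 360) (assumed convention).
        double angle = std::atan2(-v.y, v.x) * 180.0 / CV_PI;
        if (angle < 0) angle += 360.0;

        bool leftHalf = currPts[i].x < imageWidth / 2.0f;
        // Rule 1: left-side features moving at 180-270 degrees -> background.
        if (leftHalf && angle >= 180.0 && angle <= 270.0) continue;
        // Rule 2: right-side features moving at 270-360 degrees -> background.
        if (!leftHalf && angle >= 270.0 && angle <= 360.0) continue;

        kept.push_back(int(i));
        keptPos.push_back(currPts[i]);
    }

    // Cluster the remaining features around up to three centers; nearby clusters
    // can later be merged and treated as a single candidate object.
    if (keptPos.size() >= 3) {
        cv::Mat data(int(keptPos.size()), 2, CV_32F);
        for (int i = 0; i < data.rows; ++i) {
            data.at<float>(i, 0) = keptPos[i].x;
            data.at<float>(i, 1) = keptPos[i].y;
        }
        cv::Mat labels, centers;
        cv::kmeans(data, 3, labels,
                   cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0),
                   3, cv::KMEANS_PP_CENTERS, centers);
        // 'labels' assigns each kept feature to one of the three clusters.
    }
    return kept;
}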

C. Data set

Due to the difficulty of using real videos in which obstacles appear in front of the train, we initially chose to use freely available recordings from the Internet. We searched for high-resolution videos recorded from the driver's perspective in which the route followed had no incidents. The obstacles were then digitally added through the superimposition of chroma key videos; for this task we used the Kdenlive software [15]. The system works with image resolutions of 640x480 pixels. All of the system implementation was done using the C++ libraries of OpenCV [16].

IV. RESULTS

Our system was tested using several modified videos from the Internet, but the video used for measuring the system's performance shows the route from the city of Landskrona to Ramlösa (Sweden) [17]. The author granted us permission to modify the original video. This video was recorded from the driver's perspective under favorable weather conditions, during the day, without turnouts and with excellent visibility.

For testing the system we used 7 minutes of recording to which we added a total of 31 digital obstacles of different natures, shapes, and obstruction trajectories.

The first strategy, which scans the rails and detects objects over them, successfully identified 30 of the 31 obstacles, but still has a high occurrence of false positives (15). Most false detections were due to the presence of objects near the track that do not actually represent a real risk to the train; these include signs, level crossings, bridges over the rails, some platforms, etc.

Figs. 5 and 6 show some examples of positive obstacle identification. One of the false positives is displayed in Fig. 7, where a signal is erroneously classified as an obstacle due to its proximity to the rail. The second strategy, which finds objects approaching the rails, correctly identified 20 obstacles before they reached the rails or the train; we did not get any false detections in this case. Fig. 8 shows an example of moving obstacle detection.

Fig. 4: System diagram for detecting obstacles in route to collision (blocks: Feature Extraction, Sparse Optical Flow, Features Tracking, Features Lost?, Displacement Analysis, Obstacles Selection, Trajectory Prediction, Identification of Objects in Route to Collision)

Fig. 5: Image example showing a moving obstacle over the rails and its detection

Fig. 6: Image example showing a fixed obstacle over the railway and its identification

Fig. 7: Incorrect classification of a signal as an obstacle due to its close position to the rail

Table I summarizes the results for both strategies considered separately, where M1 represents the first approach, M2 the second approach, TP is the number of true positive cases, FP stands for false positives, and FN for false negatives.

TABLE I. RESULTS

      TP   FP   FN   Precision   Sensitivity
M1    30   15    1      0.67        0.97
M2    20    0   11      1.00        0.65
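For reference, the precision and sensitivity values in Table I follow the standard definitions; taking M1 as an example:

\mathrm{Precision} = \frac{TP}{TP + FP} = \frac{30}{30 + 15} \approx 0.67, \qquad
\mathrm{Sensitivity} = \frac{TP}{TP + FN} = \frac{30}{30 + 1} \approx 0.97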

The system achieved real-time performance, reaching rates of around 30 fps at an image resolution of 640x480 pixels.

Fig. 8: Image example showing a positive detection of an approaching obstacle

V. CONCLUSIONS

Our work represents a first approach to the complex task of creating an automatic tool for supporting the train driver's labor. Two related challenges were considered:

1. Detection of fixed or moving obstacles on the rails or in the nearby area.

2. Detection of moving obstacles far from the rails and on a collision course.

Both tasks are highly complex and are current research topics. In particular, the detection and tracking of moving obstacles presents many difficulties due to the high-speed motion, the blurring of shapes, and the changing nature of the background, which greatly complicates its extraction.

The strategy used in our system for obstacle identification is still very simplified; surprisingly, the results obtained show that it has high potential. It is clear that this strategy needs to be complemented with additional methodologies in order to allow a more precise identification of objects and to recognize those that pose no threat to the train, thereby decreasing the false positive rate even further.

VI. FUTURE WORK

Addressing this work showed us that the potential exists to build support systems for automatic train driving. In order to continue in the pursuit of this objective, multiple strategies can be followed to improve the system. Here we list some of them.

Correct identification of the rails is an important step for the system. In this work we used the Hough transform for line detection. However, when the track is curved the method is quite inaccurate in the distant portions of the rails. To compensate for this it is necessary to evaluate alternative strategies that capture the inclination of the rails more accurately and find a vanishing point more appropriate to the actual scene. A candidate method is the use of clothoid or parabolic segments, which can fit the rails much better.

Many objects along the railway can affect the monotony of the rail images and the nearby area without posing a real danger to the train's course. These include turnouts, signs, level crossings, tunnels, bridges, etc. Recognition of these objects is essential for the reliable behavior of a train driver assistance system. Learning and identification of benign objects provide interesting challenges for computer vision algorithms.

Another important aspect that should be explored is the robustness of the algorithms to changes in lighting and weather conditions. Driving at night or amidst storms creates a vast number of difficulties for automated driving systems based on vision. It is possible that strategies based on adaptive parameters can provide a solution, but their use is currently complex and difficult to implement.

REFERENCES

[1] L. A. Fonseca, J. A. Uribe, and F. Vargas, "Obstacle Detection over Rails Using Hough Transform", presented at the XVII Simposio de Tratamiento de Señales, Imágenes y Visión Artificial, STSIVA-2012, Medellín, Antioquia, Colombia, 2012.

[2] J. R. Jamieson and M. D. Ray, "Railway obstacle detection system and method", U.S. Patent App. 10/251,422, Sep. 2002.

[3] R. Passarella, B. Tutuko, and A. P. P. Prasetyo, "Design Concept of Train Obstacle Detection System in Indonesia", IJRRAS, pp. 453-460, Dec. 2011.

[4] F. Kruse, S. Milch, and H. Rohling, "Multi Sensor System for Obstacle Detection in Train Applications", Proc. of IEEE Tr., June, pp. 42-46, 2003.

[5] S. Sugimoto, H. Tateda, H. Takahashi, and M. Okutomi, "Obstacle detection using millimeter-wave radar and its visualization on image sequence", in Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on, 2004, vol. 3, pp. 342-345.

[6] J. J. D. Garcia, J. U. Urena, A. A. Hernandez, M. Q. Mazo, J. F. Vazquez, and M.-J. Diaz, "Multi-sensory system for obstacle detection on railways", in Instrumentation and Measurement Technology Conference Proceedings, 2008. IMTC 2008. IEEE, 2008, pp. 2091-2096.

[7] M. Ukai, "A New System for Detecting Obstacles in Front of a Train", vol. 2006, p. 73, Fall 2006.

[8] M. Ruder, N. Mohler, and F. Ahmed, "An obstacle detection system for automated trains", in Intelligent Vehicles Symposium, 2003. Proceedings. IEEE, 2003, pp. 180-185.

[9] F. Kaleli and Y. S. Akgul, "Vision-based railroad track extraction using dynamic programming", in 12th International IEEE Conference on Intelligent Transportation Systems, 2009. ITSC '09, 2009, pp. 1-6.

[10] F. Maire and A. Bigdeli, "Obstacle-free range determination for rail track maintenance vehicles", in 2010 11th International Conference on Control Automation Robotics & Vision (ICARCV), 2010, pp. 2172-2178.

[11] M. Gschwandtner, W. Pree, and A. Uhl, "Track detection for autonomous trains", Advances in Visual Computing, pp. 19-28, 2010.

[12] H. A. Mallot, H. H. Bülthoff, J. J. Little, and S. Bohrer, "Inverse perspective mapping simplifies optical flow computation and obstacle detection", Biological Cybernetics, vol. 64, no. 3, pp. 177-185, 1991.

[13] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision", in Proceedings of the 7th International Joint Conference on Artificial Intelligence, 1981.

[14] J. Shi and C. Tomasi, "Good features to track", in Computer Vision and Pattern Recognition, 1994. Proceedings CVPR '94., 1994 IEEE Computer Society Conference on, 1994, pp. 593-600.

[15] "Kdenlive | Free and open source video editor for GNU/Linux, Mac OS X and FreeBSD". [Online]. Available: http://www.kdenlive.org/. [Accessed: 08-Jul-2012].

[16] "Welcome - OpenCV Wiki". [Online]. Available: http://opencv.willowgarage.com/wiki/. [Accessed: 08-Jul-2012].

[17] "Train Driver's View: Landskrona - Ramlösa - YouTube". [Online]. Available: http://www.youtube.com/watch?v=ParZVLmuxbo. [Accessed: 08-Jul-2012].