
Stereo Vision Based Self-Localization of Autonomous Mobile Robots

Abdul Bais1, Robert Sablatnig2, Jason Gu3, Yahya M. Khawaja1, Muhammad Usman1, Ghulam M. Hasan1, and Mohammad T. Iqbal4

1 NWFP University of Engineering and Technology, Peshawar, Pakistan
{bais, yahya.khawja, usman, gmjally}@nwfpuet.edu.pk
2 Vienna University of Technology, Vienna, Austria
[email protected]
3 Dalhousie University, Halifax, Canada
[email protected]
4 Islamia College, Peshawar, Pakistan
[email protected]

Abstract. This paper presents vision based self-localization of tiny autonomous mobile robots in a known but highly dynamic environment. The problem ranges from tracking the robot position with an initial estimate to global self-localization. The algorithm enables the robot to find its initial position and to verify its location during every movement. The global position of the robot is estimated using trilateration based techniques whenever distinct landmark features are extracted. Distance measurements are used as they require fewer landmarks compared to methods using angle measurements. However, the minimum required features for global position estimation are not available through the entire state space. Therefore, the robot position is tracked once a global position estimate is available. An extended Kalman filter is used to fuse information from multiple heterogeneous sensors. Simulation results show that the new method that combines global position estimation with tracking results in a significant performance gain.

Keywords: self-localization, stereo vision, autonomous robots, Kalman filter, soccer robots

1 Introduction

In an application where multiple robots are autonomously working on a common global task, knowledge of the position of individual robots turns out to be a basic requirement for successful completion of any global strategy. One of the solutions for robot position estimation is to start at a known location and track the robot position using methods such as odometry or inertial navigation [1]. These methods suffer from unbounded error growth due to the integration of minute measurements to obtain the final estimate [2].

Another approach is to estimate the robot position globally using external sensors [3]. To simplify the process of global localization the robot environment is often engineered with active beacons or other artificial landmarks such as bar code reflectors and visual patterns. If it is not allowed to modify the robot environment, global localization has to be based on naturally occurring landmarks in the environment. Such methods are less accurate and demand significantly more computational power [2]. Additionally, the minimum features required for global self-localization are not available through the entire state space. Thus it is clear that a solution based only on local sensors, or one which computes the global position at every step, is unlikely to work. A hybrid approach to self-localization that is based on information from the local sensors as well as the external sensors is required. The two sources of information are complementary and compensate for each other's shortcomings.

The Extended Kalman Filter (EKF) [4,5] has been extensively used for information fusion in robot navigation problems [6,7]. However, due to its unimodal nature, the EKF is not suitable for global localization in recurrent environments. The problem of handling ambiguities can be addressed using methods that are capable of representing position beliefs other than Gaussian, i.e., multi-hypothesis tracking systems [8], Markov localization [9] or Monte Carlo localization [10]. These methods are capable of global self-localization and can recover the robot from tracking failures but are computationally expensive [11].

We propose that the problem of global localization should be addressed separately at startup and when/if the robot loses track of its position. The global position, once calculated, is used to initialize the EKF for tracking the robot position. The accumulating errors in robot odometry are mitigated by acquiring information from the robot environment. Simulation results show that this approach results in a marked performance gain over a randomly initialized starting position.

Depth information of interesting objects in the robot environment is a requirement for position estimation and interaction with other robots. This can be done with a wide range of sensors. Ranging devices such as laser range finders and sonar require integration over time and high-level reasoning to accomplish localization. In contrast, vision has the potential to provide enough information to uniquely identify the robot's position [12]. Depth estimation using a single image is too erroneous and the approach cannot be used all the time [13]. Therefore, a pivoted stereo vision system is used to obtain 3D information from the robot environment. The stereo vision system can be built small enough to fit the dimensions needed.
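For illustration, a minimal rectified-stereo triangulation sketch (in Python) shows how a landmark's position might be recovered from its image coordinates; the pinhole/disparity relation and the parameters f_px, baseline and cx are generic assumptions, not the calibrated camera model used on the robot.

```python
def landmark_in_left_camera_frame(u_l, u_r, f_px, baseline, cx):
    """Rectified-stereo triangulation sketch (hypothetical parameters).

    u_l, u_r : column of the landmark in the left/right image (pixels)
    f_px     : focal length in pixels; baseline : camera separation
    cx       : principal point column of the left camera
    Returns (lateral offset, depth) in the left camera frame; mapping
    these to the robot frame depends on how the camera head is mounted.
    """
    d = float(u_l - u_r)                 # disparity
    if d <= 0.0:
        raise ValueError("landmark must lie in front of both cameras")
    depth = f_px * baseline / d          # distance along the optical axis
    lateral = (u_l - cx) * depth / f_px  # offset perpendicular to it
    return lateral, depth
```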

The robot environment is known but highly dynamic. It is not allowed to modify the robot environment with navigational aids. Features found in the robot environment can be divided into two types: line-based features and color transitions [14]. Line-based features are extracted using a gradient based Hough transform. Corners, junctions and line intersections are determined by semantic interpretation of the detected line segments. Color transitions between distinct colored patches are found using segmentation methods.

Distinct landmarks are scarce in our application environment. Therefore, our global position estimation algorithm has to be based on a minimum number of distinct landmarks. The global position of the robot is estimated whenever a range estimate of a single landmark and the absolute orientation of the robot are available [15]. If the robot orientation cannot be estimated independently then two distinct landmarks are required [3]. However, even a single distinct landmark is not available through the entire state space and the robot position has to be tracked once a global estimate is available.

For extensive testing of the algorithm the experiment is repeated in three different scenarios. Firstly, the filter is manually initialized. Secondly, the robot is given a random start position with a high uncertainty. This is to study the convergence of the robot position belief when features are available. Finally, initialization of the filter is done with the help of the global position estimation methods discussed in this paper.

The balance of the paper is structured as follows: Section 2 explains the state transition and observation models in the framework of an EKF and presents the geometric construction of a differential drive robot, the formation of the control vector and its uncertainty analysis. Illustration of the robot observation vector, observation prediction and derivation of the robot observation uncertainty is also included in this section. Estimation of the global position and its uncertainty analysis is given in Section 3. Experimental results are presented in Section 4 and finally the paper is concluded in Section 5.

2 Extended Kalman Filter

Position tracking using the EKF is the subject of this section. Using the global localization techniques, the filter is provided with optimal initial conditions (in the sense of minimum mean square error) [3,15]. The state transition and observation models in the framework of an EKF, the geometric construction of a differential drive robot, and the formation of the control vector and its uncertainty analysis are explained.

2.1 State Transition Model

Beginning with the assumption of having a function $f$ that models the transition from state $p_{k-1}$ to $p_k$ in the presence of the control vector $u_{k-1}$ at time $k$. The control vector is independent of the state $p_{k-1}$ and is supposed to be corrupted by an additive zero mean Gaussian noise $\tilde u_k$ of strength $U_k$. This model is formally stated by the following equation:

$$ p_k = f(p_{k-1}, u_{k-1}, k) \qquad (1) $$

5 The notation used here is a slightly changed version of [5]. In our case $\hat p_{i|j}$ is the minimum mean square estimate of $p_i$ given observations until time $j$, i.e. $Z^j$, where $j \leq i$. Similarly, $P_{i|j}$ is the uncertainty of the estimate $\hat p_{i|j}$.

where $\hat u_k = u_k + \tilde u_k$. The quantities $p_{k-1}$ and $p_k$ are the desired (unknown) states of the robot. Given the robot observations $Z^{k-1}$, a minimum mean square estimate of the robot state $p_{k-1}$ at time $k-1$ is defined as [5]:

$$ \hat p_{k-1|k-1} = E\{p_{k-1} \mid Z^{k-1}\} \qquad (2) $$

The uncertainty of this estimate is denoted by $P_{k-1|k-1}$ and can be written as follows:

$$ P_{k-1|k-1} = E\{\tilde p_{k-1|k-1}\,\tilde p_{k-1|k-1}^T \mid Z^{k-1}\} \qquad (3) $$

where $\tilde p_{k-1|k-1}$ is the zero mean estimation error. The next step is to derive an expression for $\hat p_{k|k-1}$, an estimate of $p_k$ using all measurements but the one at time $k$, and its uncertainty $P_{k|k-1}$ using (1). This is called prediction. Similar to (2), this estimate can be written as follows:

$$ \hat p_{k|k-1} = E\{p_k \mid Z^{k-1}\} \qquad (4) $$

Using a multivariate Taylor series expansion, $p_k$ is linearized around $\hat p_{k-1|k-1}$ and $\hat u_k$ as given by the following expression:

$$ p_k = f(\hat p_{k-1|k-1}, \hat u_k, k) + J_1(p_{k-1} - \hat p_{k-1|k-1}) + J_2(u_k - \hat u_k) + \ldots \qquad (5) $$

where $J_1$ and $J_2$ are the Jacobians of (1) w.r.t. $p_{k-1}$ and $u_k$ evaluated at $\hat p_{k-1|k-1}$ and $\hat u_k$, respectively. Substitution of (5) in (4) results in the following:

$$ \hat p_{k|k-1} = f(\hat p_{k-1|k-1}, \hat u_k, k) \qquad (6) $$

From (5) and (6) the expression for the prediction error, $\tilde p_{k|k-1}$, evaluates to $J_1\tilde p_{k-1|k-1} + J_2\tilde u_k$, which is used to calculate the prediction covariance as given below:

$$ P_{k|k-1} = J_1 P_{k-1|k-1} J_1^T + J_2 U_k J_2^T \qquad (7) $$

Equations (6) and (7) comprise the prediction phase of the EKF. The uncertainty of the control vector, $U_k$, is presented in Section 2.3.
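As a minimal sketch of the prediction phase, the following Python fragment evaluates (6) and (7) for a generic motion model; the names ekf_predict, jac_f_p and jac_f_u are illustrative and do not refer to the actual implementation.

```python
import numpy as np

def ekf_predict(p_est, P_est, u_meas, U, f, jac_f_p, jac_f_u):
    """EKF prediction step, Eqs. (6) and (7).

    p_est, P_est     : previous state estimate and its covariance
    u_meas, U        : measured control vector and its covariance U_k
    f                : motion model f(p, u)
    jac_f_p, jac_f_u : Jacobians J1 (w.r.t. state) and J2 (w.r.t. control)
    """
    p_pred = f(p_est, u_meas)                    # Eq. (6)
    J1 = jac_f_p(p_est, u_meas)
    J2 = jac_f_u(p_est, u_meas)
    P_pred = J1 @ P_est @ J1.T + J2 @ U @ J2.T   # Eq. (7)
    return p_pred, P_pred
```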

2.2 Geometric Construction of a Differential Drive Robot

It is assumed that the two wheeled differential drive robot is moving on a flat surface with the robot pose having 3 degrees of freedom, i.e. $p_k = [x_k\ y_k\ \theta_k]^T$. The separation between the two wheels of the robot is $w$. Movement of the robot center is considered as motion of the whole robot. Fig. 1 shows the robot's trajectory between time steps $k-1$ and $k$. The distance covered (per unit time) by the left and right wheels of the robot is denoted by $v_{lk}$ and $v_{rk}$, respectively. The three components of state change in the robot coordinate system are represented by $\delta x_k$, $\delta y_k$ and $\delta\theta_k$, whereas $\alpha_k$ denotes the change in robot orientation, $c_k$ the radius of curvature and $v_k$ the robot velocity.

Fig. 1. Geometric construction of differential drive robot

Using the geometric relationship between the different elements of Fig. 1, the control vector can be formally stated as follows:

$$ u_k = \begin{bmatrix} \delta x_k \\ \delta y_k \\ \delta\theta_k \end{bmatrix} = g\!\left(\begin{bmatrix} v_{lk} \\ v_{rk} \end{bmatrix}\right) = \begin{bmatrix} \dfrac{w(v_{rk}+v_{lk})}{2(v_{rk}-v_{lk})}\sin\!\left(\dfrac{v_{rk}-v_{lk}}{w}\right) \\[2ex] \dfrac{w(v_{rk}+v_{lk})}{2(v_{rk}-v_{lk})}\left(1-\cos\!\left(\dfrac{v_{rk}-v_{lk}}{w}\right)\right) \\[2ex] \dfrac{v_{rk}-v_{lk}}{w} \end{bmatrix} \qquad (8) $$

This is a generalized expression for the control vector for the case when the robot is following a curved path, and it is required to predict the future state of the robot as given by (6). Uncertainty analysis of the control vector is given in the next section.
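A minimal sketch of (8), assuming the wheel travel per time step is given; the explicit straight-line branch handles the limit $v_{rk} \to v_{lk}$, where (8) is indeterminate, and the tolerance used for it is an arbitrary choice.

```python
import numpy as np

def control_vector(v_l, v_r, w):
    """Control vector u_k = [dx, dy, dtheta] in the robot frame, Eq. (8).

    v_l, v_r : distance covered by the left and right wheel
    w        : wheel separation (75 mm for the robot used here)
    """
    d_theta = (v_r - v_l) / w
    if abs(d_theta) < 1e-9:                       # straight-line limit of Eq. (8)
        return np.array([0.5 * (v_r + v_l), 0.0, 0.0])
    c = w * (v_r + v_l) / (2.0 * (v_r - v_l))     # radius of curvature c_k
    return np.array([c * np.sin(d_theta),
                     c * (1.0 - np.cos(d_theta)),
                     d_theta])
```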

2.3 Control Vector Uncertainty

For uncertainty analysis of the control vector it is assumed that the robot's odometry is calibrated for systematic errors using methods such as the University of Michigan Benchmark (UMBmark) [16]. For the non-systematic errors, it is assumed that the error in the distance traveled by each wheel of the robot is zero mean Gaussian. This error is then propagated to the robot control vector as follows. Suppose the deviation $\tilde v_k$ of the estimated velocity vector $\hat v_k$ from its true value $v_k$ is a random vector of zero mean and covariance $\Sigma_v$. As the two wheels are driven by two different motors and the distance covered by each wheel is measured by two independent encoders, it is reasonable to assume that the error in the distance covered by the left wheel is independent of the error in the distance covered by the right wheel [1]. The covariance matrix of this error is given by the following equation:

$$ \Sigma_v = E\{\tilde v_k \tilde v_k^T\} = \begin{bmatrix} \sigma_l^2 & 0 \\ 0 & \sigma_r^2 \end{bmatrix} \qquad (9) $$

where $\sigma_l^2$ and $\sigma_r^2$ are the variances of $v_{lk}$ and $v_{rk}$ and are proportional to their absolute values. The covariance matrix of the error in $u_k$, $U_k$, can be derived using a Taylor series expansion around $\hat v_k$ as follows:

$$ U_k = J_u \Sigma_v J_u^T \qquad (10) $$

where $J_u$ is the Jacobian of $u_k$ with respect to $v_k$ evaluated at $\hat v_k$.
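A sketch of (9) and (10), reusing the control_vector() sketch above; the proportionality constants k_l and k_r and the finite-difference Jacobian are assumptions, since the closed form of $J_u$ is not reproduced here.

```python
import numpy as np

def control_covariance(v_l, v_r, w, k_l, k_r, eps=1e-6):
    """Control covariance U_k of Eqs. (9)-(10).

    Wheel variances are modelled as proportional to the absolute wheel
    travel via hypothetical constants k_l, k_r; the Jacobian J_u of the
    control_vector() sketch above is taken by central differences.
    """
    sigma_v = np.diag([k_l * abs(v_l), k_r * abs(v_r)])        # Eq. (9)
    v = np.array([v_l, v_r], dtype=float)
    J_u = np.zeros((3, 2))
    for j in range(2):
        vp = v.copy(); vp[j] += eps
        vm = v.copy(); vm[j] -= eps
        J_u[:, j] = (control_vector(vp[0], vp[1], w) -
                     control_vector(vm[0], vm[1], w)) / (2.0 * eps)
    return J_u @ sigma_v @ J_u.T                               # Eq. (10)
```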

2.4 Observation Model

The robot observation model links the current state of the robot $p_k$ with its observation $z_k$ and is given by the following transformation:

$$ z_k = h(p_k, k) + w_k \qquad (11) $$

The robot observation is assumed to be corrupted by zero mean Gaussian noise $w_k$ of strength $R$. Using a Taylor series expansion of (11) around the predicted state and retaining only the first two terms, the following expression for the observation model can be obtained:

$$ z_k \approx h(\hat p_{k|k-1}) + J_{zp}(p_k - \hat p_{k|k-1}) + w_k \qquad (12) $$

where $\hat z_{k|k-1} = E\{z_k \mid Z^{k-1}\} = h(\hat p_{k|k-1})$ is the observation prediction and $J_{zp}$ is the observation Jacobian evaluated at $\hat p_{k|k-1}$. The deviation of the actual observation from the predicted one is called the innovation and is evaluated as follows:

$$ \tilde z_{k|k-1} = J_{zp}\tilde p_{k|k-1} + w_k \qquad (13) $$

After having derived the expression for the innovation, the expression for its covariance matrix can be derived as follows [17]:

$$ S = J_{zp} P_{k|k-1} J_{zp}^T + R \qquad (14) $$

The innovation is a measure of disagreement between the actual and predicted state of the robot and is used to evaluate the state estimate $\hat p_{k|k}$ as follows [5]:

$$ \hat p_{k|k} = \hat p_{k|k-1} + K(z_k - \hat z_{k|k-1}) \qquad (15) $$

where $K$ is the gain. A value of $K$ that minimizes the mean square estimation error is called the Kalman gain [5]. Using (15), the estimation error can be calculated as follows:

$$ \tilde p_{k|k} = (I - KJ_{zp})\tilde p_{k|k-1} - Kw_k \qquad (16) $$

and its covariance matrix by the following expression:

$$ P_{k|k} = (I - KJ_{zp})P_{k|k-1}(I - KJ_{zp})^T + KRK^T \qquad (17) $$

Equations (15) and (17) constitute the update phase of the EKF. Our robot specific observation model is given in the next section.
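A minimal sketch of the update phase (14), (15) and (17); the gain $K = P_{k|k-1}J_{zp}^T S^{-1}$ is the standard Kalman gain (the text above only states that $K$ minimizes the mean square error), and the bearing wrap-around is a practical detail added here.

```python
import numpy as np

def ekf_update(p_pred, P_pred, z, z_pred, J_zp, R):
    """EKF update step, Eqs. (14), (15) and (17).

    z, z_pred : actual and predicted observation [range, bearing]
    J_zp      : observation Jacobian at the predicted state
    R         : observation noise covariance of Eq. (21)
    """
    innov = z - z_pred
    innov[1] = np.arctan2(np.sin(innov[1]), np.cos(innov[1]))  # wrap bearing
    S = J_zp @ P_pred @ J_zp.T + R                             # Eq. (14)
    K = P_pred @ J_zp.T @ np.linalg.inv(S)                     # Kalman gain
    p_upd = p_pred + K @ innov                                 # Eq. (15)
    I_KJ = np.eye(p_pred.shape[0]) - K @ J_zp
    P_upd = I_KJ @ P_pred @ I_KJ.T + K @ R @ K.T               # Eq. (17)
    return p_upd, P_upd
```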

Fig. 2. Illustration of the robot observation: the robot stereo vision system is used to estimate range and bearing with respect to a landmark feature

2.5 Robot Observation

The robot is equipped with a stereo vision system which provides it with the values $[x^{C_0}\ y^{C_0}]^T$, the location of the landmark $p_l = [x_l\ y_l]^T$ in the robot coordinate system. These values are used to construct the robot observation vector $z_k$. The construction of the robot observation vector is illustrated in Fig. 2 and is given by (18).

$$ z_k = \begin{bmatrix} r_k \\ \psi_k \end{bmatrix} = \begin{bmatrix} \sqrt{(x^{C_0})^2 + (y^{C_0})^2} \\ \operatorname{atan2}(y^{C_0}, x^{C_0}) \end{bmatrix} \qquad (18) $$

From its predicted position $\hat p_{k|k-1}$, the robot predicts what it may see, which is used for the position update. Construction of the robot observation prediction vector is illustrated in Fig. 2 and is given by the following expression:

$$ \hat z_{k|k-1} = \begin{bmatrix} \hat r_{k|k-1} \\ \hat\psi_{k|k-1} \end{bmatrix} = \begin{bmatrix} \sqrt{(x_l - \hat x_{k|k-1})^2 + (y_l - \hat y_{k|k-1})^2} \\ \operatorname{atan2}(y_l - \hat y_{k|k-1},\ x_l - \hat x_{k|k-1}) - \hat\theta_{k|k-1} \end{bmatrix} \qquad (19) $$
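A minimal sketch of (18) and (19), assuming the stereo system has already delivered the landmark coordinates in the robot frame; function names are illustrative.

```python
import numpy as np

def observe_landmark(x_c, y_c):
    """Observation z_k = [r_k, psi_k] from the landmark position
    (x_c, y_c) measured in the robot frame, Eq. (18)."""
    return np.array([np.hypot(x_c, y_c), np.arctan2(y_c, x_c)])

def predict_observation(p_pred, landmark):
    """Predicted observation from the predicted pose, Eq. (19)."""
    x, y, theta = p_pred
    dx, dy = landmark[0] - x, landmark[1] - y
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - theta])
```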

For the robot observation uncertainty it is assumed that the error $[\tilde u_l\ \tilde u_r]^T$ in $[u_l\ u_r]^T$ is zero mean Gaussian. The covariance matrix of this error is given by the following expression:

$$ \Sigma_i = E\{[\tilde u_l\ \tilde u_r]^T[\tilde u_l\ \tilde u_r]\} = \sigma_{uu}^2\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \qquad (20) $$

where $\sigma_{uu}^2$ is the variance of $u_l$ or $u_r$. In deriving (20) it is assumed that $u_l$ and $u_r$ are not correlated. The error in $u_l$ and $u_r$ is propagated to the observation $z_k$ by the transformation (18). It is assumed that these systems can be represented by the first two terms of a Taylor series expansion around the estimated values of $u_l$ and $u_r$. This assumption leads to the following expression for the covariance matrix of the error in the robot observation [18]:

$$ R = J_z \Sigma_i J_z^T \qquad (21) $$

where $J_z$ is the Jacobian of (18) with respect to $[u_l\ u_r]^T$ and $\hat z_k$ is the estimated observation. The error model discussed above captures the uncertainty due to image quantization, yet it fails to handle gross segmentation problems [19].

The types of features used for localization play an important role in the success or failure of a method. Features used for localization can broadly be divided into two groups: natural [20,21] and artificial [22]. Artificial landmarks are specially designed and placed in the environment as an aid in robot navigation. It is desirable to base the localization algorithm on natural landmarks, as engineering the environment is expensive and is not possible under all circumstances. Data from a variety of sensors can be processed to extract natural landmarks. The drawback of using natural landmarks is that they are difficult to detect with high accuracy [2].

3 Global Position Estimation

Global position estimation is required at an initial startup and if/when the robot is lost. The robot is unable to rectify the odometry error solely by observing landmarks and applying corrective steps if it is moved to another location or bumps against another robot. In such situations, according to our experience, it is useless to carry on with tracking the wrong position. The best choice is to declare the robot lost and perform global localization.

The robot position $p = [x\ y]^T$ can be calculated using the observations $x^{C_0}_1$ and $y^{C_0}_1$ with respect to landmark $p_{l_1} = [x_{l_1}\ y_{l_1}]^T$ and the absolute orientation $\theta$ as follows [15]:

$$ p = \begin{bmatrix} x \\ y \end{bmatrix} = g\!\left(\begin{bmatrix} u_{l_1} \\ u_{r_1} \\ \theta \end{bmatrix}\right) = \begin{bmatrix} x_{l_1} - r_1\cos\varphi \\ y_{l_1} - r_1\sin\varphi \end{bmatrix} \qquad (22) $$

where $\varphi = \psi + \theta$ with $\psi = \operatorname{atan2}(y^{C_0}_1, x^{C_0}_1)$, and $r_1 = \sqrt{(x^{C_0}_1)^2 + (y^{C_0}_1)^2}$. $u_{l_1}$ and $u_{r_1}$ represent the location of the landmark in the left and right camera images; these values are used to calculate $x^{C_0}_1$ and $y^{C_0}_1$. This equation calculates the intersection of the line $L'$ and the circle $C'$ as shown in Fig. 3(a).

Perfect measurements will result in a single point of intersection between the line and the circle; however, measurements are never perfect and errors in the input quantities result in an uncertain position estimate, as shown by the gray area in Fig. 3(a). We assume that each element of the input vector is corrupted by independent zero mean Gaussian noise with covariance $\Sigma_i = \operatorname{diag}([\sigma_{uu}^2\ \sigma_{uu}^2\ \sigma_{\theta\theta}^2])$, where $\sigma_{uu}^2$ is the variance of $u_{l_1}$ or $u_{r_1}$ and $\sigma_{\theta\theta}^2$ is the variance of the robot absolute orientation. This error is propagated into the position estimate by the transformation (22). The system given by (22) is nonlinear and the resulting error distribution will not be Gaussian. However, we assume that it can be adequately represented by the first two terms of a Taylor series expansion around the estimated input $\hat i$. We proceed as follows:

$$ p = g(\hat i) + J_i\tilde i + \ldots \qquad (23) $$

where $\hat p = g(\hat i)$ is the position estimate, $\tilde p \approx J_i\tilde i$ its error and $J_i$ is the Jacobian of $p$ with respect to $i$ evaluated at its estimated value $\hat i$.

Fig. 3. Global position estimation: (a) the robot position is determined by the intersection of line L′ and circle C′; (b) the intersection of C′ and C′′ is the robot position

From (23) the expression for the covariance matrix of the position estimate is derived as follows:

$$ \Sigma_p = J_i \Sigma_i J_i^T $$

When the robot orientation estimate is not reliable or not available, the robot needs to estimate the location of two globally known landmarks in its coordinate system [3]. With this information the robot location is constrained to the intersection of two circles, as shown in Fig. 3(b), and given by the following expression [3]:

$$ p = \begin{bmatrix} x \\ y \\ \theta \end{bmatrix} = \begin{bmatrix} \dfrac{-D \pm \sqrt{D^2 - 4CE}}{2C} \\[2ex] A + B\!\left(\dfrac{-D \pm \sqrt{D^2 - 4CE}}{2C}\right) \\[2ex] \operatorname{atan2}(y_{l_1} - y,\ x_{l_1} - x) - \operatorname{atan2}(y^{C_0}, x^{C_0}) \end{bmatrix} \qquad (24) $$

where $A = \dfrac{r_1^2 - r_2^2 + x_{l_2}^2 - x_{l_1}^2 + y_{l_2}^2 - y_{l_1}^2}{2a}$, $B = \dfrac{x_{l_1} - x_{l_2}}{a}$, $C = B^2 + 1$, $D = 2AB - 2y_{l_1}B - 2x_{l_1}$, $E = A^2 + x_{l_1}^2 + y_{l_1}^2 - 2y_{l_1}A - r_1^2$ and $a = y_{l_2} - y_{l_1}$. The subscripts 1 and 2 differentiate between quantities related to the two landmarks.

The input vector $i = [u_{l_1}\ u_{r_1}\ u_{l_2}\ u_{r_2}]^T$ has four components which correspond to the location of the two landmarks in the left and right camera images. The imperfection in the estimation of the input vector is propagated into the robot location using (24), which results in an uncertain position estimate, as illustrated in Fig. 3(b). We model this imperfection with a zero mean Gaussian having the following covariance matrix:

$$ \Sigma_i = \sigma_{uu}^2 I_{4\times 4} \qquad (25) $$

where $I_{4\times 4}$ is the $4\times 4$ identity matrix. Using the same principles as adopted in the previous section, we arrive at the following expression for the covariance matrix of the position estimate using the new method:

$$ \Sigma_p = J_i \Sigma_i J_i^T \qquad (26) $$
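A sketch of (24); both roots of the quadratic are returned, since the text does not state which intersection is kept, and degenerate configurations ($y_{l_2} = y_{l_1}$) are not handled.

```python
import numpy as np

def global_pose_two_landmarks(r1, r2, lm1, lm2, x_c1, y_c1):
    """Global pose candidates from ranges r1, r2 to two known
    landmarks lm1, lm2, Eq. (24).

    x_c1, y_c1 : landmark 1 observed in the robot frame (for theta).
    Returns both circle intersections; choosing the one inside the
    field is left to the caller.
    """
    x1, y1 = lm1
    x2, y2 = lm2
    a = y2 - y1                                   # assumed non-zero here
    A = (r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2) / (2.0 * a)
    B = (x1 - x2) / a
    C = B**2 + 1.0
    D = 2.0 * A * B - 2.0 * y1 * B - 2.0 * x1
    E = A**2 + x1**2 + y1**2 - 2.0 * y1 * A - r1**2
    disc = max(D**2 - 4.0 * C * E, 0.0)           # guard against noisy ranges
    poses = []
    for sign in (+1.0, -1.0):
        x = (-D + sign * np.sqrt(disc)) / (2.0 * C)
        y = A + B * x
        theta = np.arctan2(y1 - y, x1 - x) - np.arctan2(y_c1, x_c1)
        poses.append(np.array([x, y, theta]))
    return poses
```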

4 Experimental Results

The performance of our algorithm is tested with a simulation as real-world implementations are in progress. Our robot, Tinyphoon [23], is a two wheeled differential drive robot with a wheel base of 75 mm. The stereo vision system is mounted at a height of 70 mm. The separation between the two cameras (stereo baseline) is 30 mm. Image resolution for the experiment was set to QVGA (320 × 240 pixels). The size of the robot and the dimensions of the field conform with the FIRA MiroSot Small League6. Image resolution and other simulation parameters were set to satisfy the constraints of the real robots.

In order to improve the statistics regarding the growth of uncertainty and robot position error, different types of trajectories with multiple trials were tested. In each of these trials, the robot either follows a circular path of different diameters, or the starting position of the robot and/or direction of motion is changed.

As stated earlier, the whole process is repeated three times with different initial conditions. Firstly, the position tracker is initialized manually with a starting position and its corresponding uncertainty. The initial position is obtained by adding random noise to the true position. Secondly, the robot is given a random starting position with a high uncertainty. Our intention behind this is to test if the robot can recover from tracking failures. Providing the starting position and the covariance matrix violates the robot autonomy. For autonomous behavior the robot must be equipped with tools that enable it to localize itself from scratch. This capability of the algorithm is demonstrated in the final run of the experiment, where initialization of the tracker is done with the help of the global position estimation methods discussed in this paper.

A sample trial of the experiment where the robot moves along a circular path is shown in Fig. 4. The ideal path that the robot is supposed to follow is a perfect circle. However, due to imperfections of its sensors the robot deviates from the true path. In Fig. 4(a) the robot starts at a known position with low uncertainty. The robot observes its first landmark feature when it has covered more than a quarter of its intended path. As can be seen in the figure, the uncertainty (represented as ellipses) of the robot position increases unbounded and its pose drifts away from the true value. When the robot observes one of the two features on the left side, its uncertainty is reduced and its position adjusted. However, as soon as the landmarks are out of sight, the drift from the true position starts again and the uncertainty increases. This drift is corrected when the landmarks are visible again.

Fig. 4(b) illustrates the case where the robot is given a random starting point with a very high uncertainty. As can be seen in the figure, the robot is constantly moving away from its intended path and its uncertainty is growing rapidly. However, when landmark features become available to the robot, corrective steps are applied and the robot gets closer and closer to its true path. Global self-localization for the same path is shown in Fig. 4(c).

6 http://www.fira.net

Fig. 4. Robot desired trajectory is a circle of radius 600 mm: (a) known start position; (b) random start position; (c) here the robot does not initialize its tracker but actively searches for landmarks until it finds enough required for global position estimation; (d) 3σ bound on error in x, y and θ for the known start position; (e) error bounds for the random starting position; (f) the improvement due to proper initialization of the tracker is obvious

The 3σ error bounds on each component of the robot pose for a run of this trial are shown in Fig. 4(d), Fig. 4(e) and Fig. 4(f).

Similarly, Fig. 5 shows a sample trial of the category where the robot is moving along a straight line. The robot orientation is fixed at 180° and it starts at a point on the left side and moves in reverse direction towards the right in small steps. The orientation of the robot is such that the landmarks are always in sight. The first run of this trial, where the robot starts at a known position with low uncertainty, is shown in Fig. 5(a). Also in this case we repeat the experiment when the robot starts at a random location with high uncertainty or does global position estimation, as shown in Fig. 5(b) and Fig. 5(c), respectively. The error in each component of the robot pose and the corresponding 3σ uncertainty bounds are shown in Fig. 5(d), Fig. 5(e) and Fig. 5(f).

When the robot gets farther from the landmarks, the effect of the robot observation decreases as its uncertainty increases, and the accumulation of odometry error dominates. Statistical results for the error in the robot observation are shown in Table 1. In this table δr and δϕ refer to the error in the range and bearing components of the observation vector. δr is expressed in millimeters and δϕ is given in degrees. The first row shows values for the average error.

Fig. 5. Robot desired path is a straight line: (a) known starting position; (b) starting at a random position; (c) the location tracker is initialized by estimating the robot location and its uncertainty using landmark based methods; (d) error and 3σ error bound for the known starting location; (e) error bounds for the random start position; (f) error bounds for (c)

The standard deviation (Std), minimum (Min) and maximum (Max) values are presented in the second, third and fourth rows. The Min and Max shown are absolute values. The first two columns list the error in range and bearing when the robot follows circular paths of different diameters. Results for straight line motion are shown in the third and fourth columns.

Table 1. The robot observation error

             Circular path             Line
          δr (mm)    δϕ (°)      δr (mm)    δϕ (°)
Mean       -5.82     -0.15       -50.808     0.03
Std        22.24      0.13        43.66      0.52
Min         0.07      0.004        0.15      0.001
Max        67.07      0.43       204.60      1.17

Simulation results illustrate that the algorithm can successfully localize the robot despite the fact that landmarks are sparse. The linearization of the observation model introduces problems for distant features. The robot position error is perfectly bounded when the robot is following a circular path. However, the error is not always bounded in the latter case. The reason for this is that in motion along the circle, features are always observed from a short distance, whereas in the other case features are observed from a distance towards the end of the trials. This problem is made more prominent by the narrow baseline of the stereo system. Currently we are in the process of implementation and will have real-world results in the near future.

5 Conclusion

In this paper we discussed all aspects of real-time mobile robot self-localization in a known but highly dynamic environment. The problem ranged from tracking the position with an initial estimate to estimating it without any prior position knowledge. The localization algorithm is not dependent on the presence of artificial landmarks or special structures such as rectangular walls in the robot environment. Furthermore, it is not required that features lie on or close to the ground plane.

A stereo vision based approach is used to identify and measure distance to landmark features. Distance measurements are used as they require fewer landmarks compared to methods using angle measurements. Global self-localization is computationally slow and sometimes impossible if enough features are not available. Therefore the robot position is tracked between intermittent global self-localizations. Simulation parameters correspond to those of our real robots. The next step is to use all features found in the robot environment and to analyze the effect of error in feature location in the global coordinate system and in correspondence analysis.

References

1. Chong, K.S., Kleeman, L.: Accurate odometry and error modelling for a mobile robot. In: Proceedings of the IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico (1997) 2783–2788

2. Borenstein, J., Everett, H.R., Feng, L.: Navigating Mobile Robots: Systems and Techniques. A. K. Peters, Ltd. (1996)

3. Bais, A., Sablatnig, R.: Landmark based global self-localization of mobile soccer robots. In: Narayanan, P.J., Nayar, S.K., Shum, H.Y., eds.: Proceedings of the Asian Conference on Computer Vision. Volume 3852 of LNCS, Hyderabad, India, Springer-Verlag GmbH (2006) 842–851

4. Crowley, J.L.: Mathematical foundations of navigation and perception for an autonomous mobile robot. In: Proceedings of the International Workshop on Reasoning with Uncertainty in Robotics (1995) 9–51

5. Newman, P.: An Introduction to Estimation and its Application to Mobile Robotics. Robotics Research Group, Department of Engineering Science, University of Oxford. 2.0 edn. (2005) Lecture notes.

6. Lee, J.M., Son, K., Lee, M.C., Choi, J.W., Han, S.H., Lee, M.H.: Localization of a mobile robot using the image of a moving object. IEEE Transactions on Industrial Electronics 50 (2003) 612–619

7. Ohno, K., Tsubouchi, T., Yuta, S.: Outdoor map building based on odometry and RTK-GPS positioning fusion. In: Proceedings of the IEEE International Conference on Robotics and Automation. Volume 1 (2004) 684–690

8. Arras, K.O., Castellanos, J.A., Schilt, M., Siegwart, R.: Feature-based multi-hypothesis localization and tracking using geometric constraints. Robotics and Autonomous Systems 44 (2003) 41–53

9. Fox, D., Burgard, W., Dellaert, F., Thrun, S.: Markov localization for mobile robots in dynamic environments. Journal of Artificial Intelligence Research 11 (1999) 391–427

10. Thrun, S., Fox, D., Burgard, W., Dellaert, F.: Robust Monte Carlo localization for mobile robots. Artificial Intelligence 128 (2001) 99–141

11. Gutmann, J., Fox, D.: An experimental comparison of localization methods continued. In: Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2002)

12. Lamon, P., Nourbakhsh, I., Jensen, B., Siegwart, R.: Deriving and matching image fingerprint sequences for mobile robot localization. In: Proceedings of the IEEE International Conference on Robotics and Automation, Seoul, Korea (2001) 1609–1614

13. Nickerson, S.B., Jasiobedzki, P., Wilkes, D., Jenkin, M., Milios, E., Tsotsos, J., Jepson, A., Bains, O.N.: The ARK project: Autonomous mobile robots for known industrial environments. Robotics and Autonomous Systems 25 (1998) 83–104

14. Bais, A., Sablatnig, R., Novak, G.: Line-based landmark recognition for self-localization of soccer robots. In: Proceedings of the IEEE International Conference on Emerging Technologies, Islamabad, Pakistan (2005) 132–137

15. Bais, A., Sablatnig, R., Gu, J., Mahlknecht, S.: Active single landmark based global localization of autonomous mobile robots. In: Bebis, G., et al., eds.: Proceedings of the International Conference on Visual Computing. Number 4291 in Lecture Notes in Computer Science, Lake Tahoe, Nevada, USA, Springer-Verlag Berlin Heidelberg (2006) 202–211

16. Borenstein, J., Feng, L.: Measurement and correction of systematic odometry errors in mobile robots. IEEE Transactions on Robotics and Automation 12 (1996) 869–880

17. Bais, A., Sablatnig, R., Gu, J., Khawaja, Y.M.: Location tracker for a mobile robot. In: Proceedings of the IEEE International Conference on Industrial Informatics (2007) 479–484

18. Bais, A.: Real-Time Mobile Robot Self-localization: A Stereo Vision Based Approach. PhD thesis, Vienna University of Technology, Vienna, Austria (2007)

19. Matthies, L., Shafer, S.: Error modeling in stereo navigation. IEEE Journal of Robotics and Automation RA-3 (1987) 239–248

20. Panzieri, S., Pascucci, F., Setola, R., Ulivi, G.: A low cost vision based localization system for mobile robots. In: Proceedings of the Mediterranean Conference on Control and Automation, Dubrovnik, Croatia (2001)

21. Lingemann, K., Nüchter, A., Hertzberg, J., Surmann, H.: High-speed laser localization for mobile robots. Robotics and Autonomous Systems 51 (2005) 275–296

22. Li, X.J., So, A.T.P., Tso, S.K.: CAD-vision-range-based self-localization for mobile robot using one landmark. Journal of Intelligent and Robotic Systems 35 (2002) 61–81

23. Novak, G., Mahlknecht, S.: TINYPHOON a tiny autonomous mobile robot. In: Proceedings of the IEEE International Symposium on Industrial Electronics (2005) 1533–1538