
Decoupled Visual Servoing from a set of points imaged by an omnidirectional camera

Hicham Hadj-Abdelkader*, Youcef Mezouar* and Philippe Martinet*†

Abstract— This paper presents a hybrid decoupled vision-based control scheme valid for the entire class of central catadioptric sensors (including conventional perspective cameras). First, we consider the structure from motion problem using imaged 3D points. Geometrical relationships are exploited to enable a partial Euclidean reconstruction by decoupling the interaction between the translation and rotation components of a homography matrix. The information extracted from the homography is then used to design a control law which allows us to fully decouple rotational motions from translational motions. Real-time experimental results using an eye-to-hand robotic system with a paracatadioptric camera are presented and confirm the validity of our approach.

I. INTRODUCTION

Vision-based servoing schemes are flexible and effective methods to control robot motion from camera observations. They are generally classified into three groups, namely position-based, image-based and hybrid-based control [10], [16], [20]. These three schemes make assumptions on the link between the initial, current and desired images, since they require correspondences between the visual features extracted from the initial image and those obtained from the desired one. These features are then tracked during the camera (and/or object) motion. If these steps fail, the visually based robotic task cannot be achieved [7]. Typical cases of failure arise when matching joint image features is impossible (for example when no joint feature belongs to the initial and desired images) or when some parts of the image features leave the field of view during the servoing. Some methods have been investigated to resolve this deficiency, based on path planning [21], switching control [8], zoom adjustment [24], and geometrical and topological considerations [9], [26]. However, such strategies are sometimes delicate to adapt to a generic setup. Conventional cameras thus suffer from a restricted field of view, and there is significant motivation for increasing the field of view of cameras [4]. Many applications in vision-based robotics, such as mobile robot localization [5] and navigation [29], can benefit from the panoramic field of view provided by omnidirectional cameras. In the literature, several methods have been proposed for increasing the field of view of camera systems [4]. One effective way is to combine mirrors with a conventional imaging system. The obtained sensors are referred to as catadioptric imaging systems.

* is with LASMEA - UMR 6602 du CNRS, 24 avenue des Landais, 63177 Aubiere Cedex, France. hadj,mezouar,[email protected]
† is with ISRC - Intelligent Systems Research Center, Sungkyunkwan University, Suwon, South Korea. [email protected]

The resulting imaging systems have been termed central catadioptric when a single projection center describes the world-to-image mapping. From a theoretical and practical viewpoint, a single center of projection is a desirable property for an imaging system [1]. Baker and Nayar [1] derive the entire class of catadioptric systems with a single viewpoint. Clearly, visual servoing applications can also benefit from such sensors, since the latter naturally overcome the visibility constraint. Vision-based control of robotic arms, single mobile robots or formations of mobile robots thus appears in the literature with omnidirectional cameras (refer for example to [3], [6], [23], [28], [22]). Image-based visual servoing with central catadioptric cameras using points was studied in [3]. The use of straight lines has also been investigated in [22]. As is well known, the catadioptric projection of a 3D line in the image plane is a conic curve. In [22], the authors propose to directly use the coordinates of the polar lines of the image center with respect to the conic curves as the input of the vision-based control scheme. This paper is concerned with homography-based visual servo control techniques with central catadioptric cameras. This framework, also called 2 1/2 D visual servoing [20] in the case where the image features are points, exploits a combination of reconstructed Euclidean information and image features in the control design. The 3D information is extracted from a homography matrix relating two views of a reference plane. As a consequence, the 2 1/2 D visual servoing scheme does not require any 3D model of the target. Unfortunately, with such an approach, when conventional cameras are used the image of the target is not guaranteed to remain in the camera field of view. To overcome this deficiency, 2 1/2 D visual servoing has been extended to an entire class of omnidirectional cameras in [15]. The resulting interaction matrices are triangular with partial decoupling properties (refer to [20], [15]).

In this paper, a new approach to homography-based visual servoing using points imaged with any type of central camera is presented. The structure from motion problem using imaged 3D points is first studied. Geometrical relationships are exploited to linearly estimate a generic homography matrix from which a partial Euclidean reconstruction is obtained. The information extracted from the homography is then used to design a control law which allows us to fully decouple rotational motions from translational motions (i.e., the resulting interaction matrix is square block-diagonal). Real-time experimental results using an eye-to-hand robotic system are presented and confirm the validity of our approach.

2007 IEEE International Conference on Robotics and Automation, Roma, Italy, 10-14 April 2007


Fig. 1. Central catadioptric image formation

II. MODELING

The central catadioptric projection can be modeled by a central projection onto a virtual unitary sphere, followed by a perspective projection onto an image plane. This virtual unitary sphere is centered at the single effective viewpoint and the image plane is attached to the perspective camera. In this model, called the unified model and proposed by Geyer and Daniilidis in [13], the conventional perspective camera appears as a particular case.

A. Projection of a point

Let Fc and Fm be the frames attached to the conventional camera and to the mirror respectively. In the sequel, we suppose that Fc and Fm are related by a simple translation along the Z-axis (Fc and Fm have the same orientation, as depicted in Figure 1). The origins C and M of Fc and Fm will be termed the optical center and the principal projection center respectively. The optical center C has coordinates [0 0 −ξ]⊤ with respect to Fm and the image plane Z = f(ψ − 2ξ) is orthogonal to the Z-axis, where f is the focal length of the conventional camera and ξ and ψ describe the type of sensor and the shape of the mirror, and are functions of the mirror shape parameters (refer to [2]).

Consider the virtual unitary sphere centered in M as shown in Fig. 1 and let X be a 3D point with coordinates X = [X Y Z]⊤ with respect to Fm. The world point X is projected in the image plane into the point of homogeneous coordinates x_i = [x_i y_i 1]⊤. The image formation process can be split into three steps:

- First step: The 3D world point X is first projected on the unit sphere surface into a point of coordinates in Fm: $\mathbf{X}_m = \frac{1}{\rho}[X\ Y\ Z]^\top$, where

$$\rho = \|\mathbf{X}\| = \sqrt{X^2 + Y^2 + Z^2} \quad (1)$$

The projective ray Xm passes through the principal projection center M and the world point X.

- Second step: The point Xm lying on the unitary sphere is then perspectively projected onto the normalized image plane Z = 1 − ξ. This projection is a point of homogeneous coordinates $\mathbf{x} = [x\ y\ 1]^\top = f(\mathbf{X})$:

$$\mathbf{x} = f(\mathbf{X}) = \left[\frac{X}{Z + \xi\rho}\ \ \frac{Y}{Z + \xi\rho}\ \ 1\right]^\top \quad (2)$$

- Third step: Finally, the point of homogeneous coordinates $\mathbf{x}_i$ in the image plane is obtained after a plane-to-plane collineation K of the 2D projective point $\mathbf{x}$: $\mathbf{x}_i = \mathbf{K}\mathbf{x}$. The matrix K can be written as K = K_c M, where the upper triangular matrix K_c contains the conventional camera intrinsic parameters and the diagonal matrix M contains the mirror intrinsic parameters:

$$\mathbf{M} = \begin{bmatrix} \psi - \xi & 0 & 0 \\ 0 & \psi - \xi & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad \mathbf{K}_c = \begin{bmatrix} f_u & \alpha_{uv} & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Note that, setting ξ = 0, the general projection model becomes the well-known perspective projection model.
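To make these three steps concrete, the following is a minimal Python/NumPy sketch of the forward projection of the unified model (world point, unit sphere, normalized plane, pixel). The function name project_unified and the numerical intrinsics in the example are illustrative choices, not values from the paper.

```python
import numpy as np

def project_unified(X, xi, psi, Kc):
    """Unified central catadioptric projection of a 3D point X = [X, Y, Z]
    expressed in the mirror frame Fm (illustrative sketch)."""
    X = np.asarray(X, dtype=float)
    rho = np.linalg.norm(X)                      # rho = ||X||, Eq. (1)
    Xm = X / rho                                 # step 1: point on the unit sphere (shown for clarity)
    x = np.array([X[0] / (X[2] + xi * rho),      # step 2: Eq. (2), projection of Xm from [0, 0, -xi]
                  X[1] / (X[2] + xi * rho),      # onto the normalized plane Z = 1 - xi
                  1.0])
    M = np.diag([psi - xi, psi - xi, 1.0])       # mirror intrinsic parameters
    xi_img = (Kc @ M) @ x                        # step 3: collineation K = Kc M
    return xi_img / xi_img[2]                    # homogeneous pixel coordinates x_i

# Example with xi = 1 (paracatadioptric-like sensor) and made-up intrinsics
Kc = np.array([[300.0, 0.0, 320.0],
               [0.0, 300.0, 240.0],
               [0.0, 0.0, 1.0]])
print(project_unified([0.5, 0.2, 2.0], xi=1.0, psi=1.2, Kc=Kc))
```

Setting xi = 0 in this sketch reduces the middle step to the usual pinhole division by Z, matching the remark above.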

In the sequel, we assume that Z ≠ 0. Let us denote $\eta = s\rho/|Z| = s\sqrt{1 + X^2/Z^2 + Y^2/Z^2}$, where s is the sign of Z. The coordinates of the image point can then be rewritten as:

$$x = \frac{X/Z}{1 + \xi\eta}; \qquad y = \frac{Y/Z}{1 + \xi\eta}$$

By combining the two previous equations, it is easy to show that η is a solution of the following second order equation:

$$\eta^2 - (x^2 + y^2)(1 + \xi\eta)^2 - 1 = 0$$

with the following potential solutions:

$$\eta_{1,2} = \frac{\pm\gamma - \xi(x^2 + y^2)}{\xi^2(x^2 + y^2) - 1} \quad (3)$$

where $\gamma = \sqrt{1 + (1 - \xi^2)(x^2 + y^2)}$. Note that the sign of η is equal to the sign of Z, and it can then be shown (refer to the Appendix) that the exact solution is:

$$\eta = \frac{-\gamma - \xi(x^2 + y^2)}{\xi^2(x^2 + y^2) - 1} \quad (4)$$

Equation (4) shows that η can be computed as a function of the image coordinates x and the sensor parameter ξ. Noticing that:

$$\mathbf{X}_m = (\eta^{-1} + \xi)\,\bar{\mathbf{x}} \quad (5)$$

where $\bar{\mathbf{x}} = [x\ \ y\ \ \frac{1}{1+\xi\eta}]^\top$, we deduce that Xm can also be computed as a function of the image coordinates x and the sensor parameter ξ.
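As a sketch of this back-projection, assuming the pixel has already been mapped back to normalized coordinates (x, y) through K⁻¹, Equations (4) and (5) can be implemented as follows; the function name lift_to_sphere is an illustrative choice.

```python
import numpy as np

def lift_to_sphere(x, y, xi):
    """Lift a normalized image point (x, y) onto the unit sphere using Eqs. (4)-(5)."""
    r2 = x * x + y * y
    gamma = np.sqrt(1.0 + (1.0 - xi * xi) * r2)        # gamma of Eq. (3)
    eta = (-gamma - xi * r2) / (xi * xi * r2 - 1.0)    # exact solution, Eq. (4)
    x_bar = np.array([x, y, 1.0 / (1.0 + xi * eta)])   # augmented point x_bar
    Xm = (1.0 / eta + xi) * x_bar                      # Eq. (5)
    return Xm, eta

# Sanity check: the lifted point should lie on the unit sphere
Xm, eta = lift_to_sphere(0.12, -0.05, xi=1.0)
print(Xm, np.linalg.norm(Xm))   # norm should be 1 up to numerical precision
```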

III. SCALED EUCLIDEAN RECONSTRUCTION USING HOMOGRAPHY MATRIX OF CATADIOPTRIC VISION

Several methods have been proposed to obtain a Euclidean reconstruction from two views [11]. They are generally based on the estimation of the fundamental matrix [18] in pixel space or on the estimation of the essential matrix [17] in normalized space. However, for control purposes, the methods based on the essential matrix are not well suited since degenerate configurations can occur (such as pure rotational motion). Homography-based and essential-matrix-based approaches do not share the same degenerate configurations; for example, pure rotational motion is not a degenerate configuration when using a homography-based method. The epipolar geometry of central catadioptric systems has been investigated more recently [14], [27]. The central catadioptric fundamental and essential matrices share degenerate configurations similar to those observed with conventional perspective cameras, which is why we focus on the homographic relationship. In the sequel, the collineation matrix K and the mirror parameter ξ are supposed known. To estimate these parameters, the algorithm proposed in [2] can be used. In the next section, we show how to compute homographic relationships between two central catadioptric views of points.

Let R and t be the rotation matrix and the translation vector between two positions Fm and F*m of the central catadioptric camera (see Figure 2). Consider a 3D reference plane (π) given in F*m by the vector $\pi^{*\top} = [\mathbf{n}^{*\top}\ -d^*]$, where n* is its unitary normal expressed in F*m and d* is the distance from (π) to the origin of F*m.

A. Homography matrix from points

Let X be a 3D point with coordinates $\mathbf{X} = [X\ Y\ Z]^\top$ with respect to Fm and coordinates $\mathbf{X}^* = [X^*\ Y^*\ Z^*]^\top$ with respect to F*m. Its projections onto the unit sphere for the two camera positions are:

$$\mathbf{X}_m = (\eta^{-1} + \xi)\,\bar{\mathbf{x}} = \frac{1}{\rho}[X\ Y\ Z]^\top, \qquad \mathbf{X}_m^* = (\eta^{*-1} + \xi)\,\bar{\mathbf{x}}^* = \frac{1}{\rho^*}[X^*\ Y^*\ Z^*]^\top$$

Using the homogeneous coordinates $\mathbf{X} = [X\ Y\ Z\ H]^\top$ and $\mathbf{X}^* = [X^*\ Y^*\ Z^*\ H^*]^\top$, we can write:

$$\rho(\eta^{-1} + \xi)\,\bar{\mathbf{x}} = [\,I_3\ \ 0\,]\,\mathbf{X} = [\,\mathbf{R}\ \ \mathbf{t}\,]\,\mathbf{X}^* \quad (6)$$

The distance d(X, π) from the world point X to the plane (π) is given by the scalar product $\pi^{*\top}\cdot\mathbf{X}^*$, and:

$$d(\mathbf{X}^*, \pi^*) = \rho^*(\eta^{*-1} + \xi)\,\mathbf{n}^{*\top}\bar{\mathbf{x}}^* - d^*H^*$$

As a consequence, the unknown homogeneous component H* is given by:

$$H^* = \frac{\rho^*(\eta^{*-1} + \xi)}{d^*}\,\mathbf{n}^{*\top}\bar{\mathbf{x}}^* - \frac{d(\mathbf{X}^*, \pi^*)}{d^*} \quad (7)$$

The homogeneous coordinates of X with respect to F*m can be rewritten as:

$$\mathbf{X}^* = \rho^*(\eta^{*-1} + \xi)\,[\,I_3\ \ 0\,]^\top\bar{\mathbf{x}}^* + [\,0_{1\times 3}\ \ H^*\,]^\top \quad (8)$$

By combining Equations (7) and (8), we obtain:

$$\mathbf{X}^* = \rho^*(\eta^{*-1} + \xi)\,\mathbf{A}_\pi^*\,\bar{\mathbf{x}}^* + \mathbf{b}_\pi^* \quad (9)$$

where

$$\mathbf{A}_\pi^* = \begin{bmatrix} I_3 & \dfrac{\mathbf{n}^*}{d^*} \end{bmatrix}^\top \quad \text{and} \quad \mathbf{b}_\pi^* = \begin{bmatrix} 0_{1\times 3} & -\dfrac{d(\mathbf{X},\pi)}{d^*} \end{bmatrix}^\top$$

According to (9), expression (6) can be rewritten as:

$$\rho(\eta^{-1} + \xi)\,\bar{\mathbf{x}} = \rho^*(\eta^{*-1} + \xi)\,\mathbf{H}\,\bar{\mathbf{x}}^* + \alpha\mathbf{t} \quad (10)$$

with $\mathbf{H} = \mathbf{R} + \frac{\mathbf{t}}{d^*}\mathbf{n}^{*\top}$ and $\alpha = -\frac{d(\mathbf{X},\pi)}{d^*}$.

Fig. 2. Geometry of two views of points

H is the Euclidean homography matrix written as a function of the camera displacement and of the plane coordinates with respect to F*m. It has the same form as in the conventional perspective case (it is decomposed into a rotation matrix and a rank-1 matrix). If the world point X belongs to the reference plane (π) (i.e. α = 0), then Equation (10) becomes:

$$\bar{\mathbf{x}} \propto \mathbf{H}\,\bar{\mathbf{x}}^* \quad (11)$$

Note that Equation (11) can be turned into a linear homogeneous equation $\bar{\mathbf{x}} \otimes \mathbf{H}\bar{\mathbf{x}}^* = 0$ (where ⊗ denotes the cross product). As usual, the homography matrix related to (π) can thus be estimated up to a scale factor using four couples of coordinates $(\bar{\mathbf{x}}_k; \bar{\mathbf{x}}_k^*)$, k = 1…4, corresponding to the projections in the image space of world points X_k belonging to (π). If only three points belonging to (π) are available, then at least five supplementary points are necessary to estimate the homography matrix, using for example the linear algorithm proposed in [19]. From the estimated homography matrix, the camera motion parameters (that is, the rotation R and the scaled translation $\mathbf{t}_{d^*} = \mathbf{t}/d^*$) and the structure of the observed scene (for example the vector n*) can thus be determined (refer to [11], [30]). It can also be shown that the ratio $\sigma = \rho/\rho^*$ can be estimated as follows:

$$\sigma = \frac{\rho}{\rho^*} = (1 + \mathbf{n}^{*\top}\mathbf{R}^\top\mathbf{t}_{d^*})\,\frac{(\eta^{*-1} + \xi)\,\mathbf{n}^{*\top}\bar{\mathbf{x}}^*}{(\eta^{-1} + \xi)\,\mathbf{n}^{*\top}\mathbf{R}^\top\bar{\mathbf{x}}} \quad (12)$$

This parameter is used in our 2 1/2 D visual servoing control scheme.

IV. CONTROL SCHEME

As usual when designing a 2 1/2 D visual servoing scheme, the feature vector used as input of the control law combines 2D and 3D information [20]:

$$\mathbf{s} = [\,\mathbf{s}_i^\top\ \ \theta\mathbf{u}^\top\,]^\top$$

where $\mathbf{s}_i$ is a 3-dimensional vector containing the 2D features, and u and θ are respectively the axis and the rotation angle obtained from R (the rotation matrix between the mirror frames at the current and desired camera positions).


Noticing that the parameter ρ does not depend on the camera orientation, and in order to decouple the rotational motions from the translational ones, $\mathbf{s}_i$ can be chosen as follows:

$$\mathbf{s}_i = [\,\log(\rho_1)\ \ \log(\rho_2)\ \ \log(\rho_3)\,]^\top$$

where $\rho_{k=1,2,3}$ are the distances from the 3D points $\mathbf{X}_{k=1,2,3}$ to the camera center (refer to Equation (1)). The task function e to regulate to 0 [25] is given by:

$$\mathbf{e} = \mathbf{s} - \mathbf{s}^* = [\,\Gamma_1\ \ \Gamma_2\ \ \Gamma_3\ \ \theta\mathbf{u}^\top\,]^\top \quad (13)$$

where $\mathbf{s}^*$ is the desired value of s and $\Gamma_k = \log(\rho_k/\rho_k^*) = \log(\sigma_k)$, k = 1, 2, 3. The first three components of e can be estimated using Equation (12). The rotational part of e is estimated using the partial Euclidean reconstruction from the homography matrix derived in Section III. The exponential decay of e toward 0 can be obtained by imposing $\dot{\mathbf{e}} = -\lambda\mathbf{e}$ (λ being a proportional gain); the corresponding control law is:

$$\tau = -\lambda\,\mathbf{L}^{-1}(\mathbf{s} - \mathbf{s}^*) \quad (14)$$

where τ is a 6-dimensional vector denoting the velocity screw of the central catadioptric camera. It contains the instantaneous angular velocity ω and the instantaneous linear velocity v. L is the interaction matrix related to s; it links the variation of s to the camera velocity: $\dot{\mathbf{s}} = \mathbf{L}\tau$. It is thus necessary to compute the interaction matrix in order to derive the control law given by Equation (14). The time derivative of the rotation vector uθ can be expressed as a function of the catadioptric camera velocity vector τ as:

$$\frac{d(\mathbf{u}\theta)}{dt} = [\,0_3\ \ \mathbf{L}_\omega\,]\,\tau \quad (15)$$

where $\mathbf{L}_\omega$ is given by [20]:

$$\mathbf{L}_\omega(\mathbf{u},\theta) = I_3 - \frac{\theta}{2}[\mathbf{u}]_\times + \left(1 - \frac{\mathrm{sinc}(\theta)}{\mathrm{sinc}^2(\theta/2)}\right)[\mathbf{u}]_\times^2 \quad (16)$$

with $\mathrm{sinc}(\theta) = \frac{\sin(\theta)}{\theta}$ and $[\mathbf{u}]_\times$ the antisymmetric matrix associated with the vector u.

To control the 3 translational degrees of freedom, the visual observations and the ratio σ expressed in (12) are used. Consider a 3D point $\mathbf{X}_k$; the time derivative of its coordinates with respect to the current catadioptric frame Fm is given by:

$$\dot{\mathbf{X}}_k = [\,-I_3\ \ [\mathbf{X}_k]_\times\,]\,\tau \quad (17)$$

$[\mathbf{X}_k]_\times$ being the antisymmetric matrix associated with the vector $\mathbf{X}_k$. The time derivative of Γk can be written as:

$$\dot{\Gamma}_k = \frac{\partial\Gamma_k}{\partial\mathbf{X}_k}\,\dot{\mathbf{X}}_k \quad (18)$$

with:

$$\frac{\partial\Gamma_k}{\partial\mathbf{X}_k} = \frac{1}{\rho_k^2}[\,X_k\ \ Y_k\ \ Z_k\,]$$

By combining Equations (17), (18) and (12), it can be shown that:

$$\dot{\mathbf{s}}_i = [\,\mathbf{A}\ \ 0_3\,]\,\tau \quad (19)$$

with

$$\mathbf{A} = \begin{bmatrix} \frac{\Phi_1}{\sigma_1\rho_1^*} & 0 & 0 \\ 0 & \frac{\Phi_2}{\sigma_2\rho_2^*} & 0 \\ 0 & 0 & \frac{\Phi_3}{\sigma_3\rho_3^*} \end{bmatrix} \begin{bmatrix} -x_1 & -y_1 & \frac{\xi^2(x_1^2+y_1^2)-1}{1+\gamma_1\xi} \\ -x_2 & -y_2 & \frac{\xi^2(x_2^2+y_2^2)-1}{1+\gamma_2\xi} \\ -x_3 & -y_3 & \frac{\xi^2(x_3^2+y_3^2)-1}{1+\gamma_3\xi} \end{bmatrix} \quad (20)$$

where $\Phi_k = \frac{1+\gamma_k\xi}{\gamma_k+\xi(x_k^2+y_k^2)}$. The task function e (see Equation (13)) can thus be regulated to 0 using the control law (14) with the following interaction matrix L:

$$\mathbf{L} = \begin{bmatrix} \mathbf{A} & 0_3 \\ 0_3 & \mathbf{L}_\omega \end{bmatrix} \quad (21)$$

In practice, an approximated interaction matrix $\widehat{\mathbf{L}}$ is used: the parameters $\rho_k^*$ can be estimated only once, during an off-line learning stage.
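For illustration, the sketch below assembles the translational block A of Equation (20), the rotational block L_ω of Equation (16) and the control law (14). The function names and the gain value are illustrative; σ_k, ρ_k*, the normalized coordinates (x_k, y_k) and the axis/angle (u, θ) are assumed to have been obtained as described above.

```python
import numpy as np

def skew(u):
    """Antisymmetric matrix [u]_x associated with a 3-vector u."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def L_omega(u, theta):
    """Rotational interaction matrix of Eq. (16)."""
    sinc = lambda a: 1.0 if abs(a) < 1e-12 else np.sin(a) / a
    S = skew(u)
    return np.eye(3) - 0.5 * theta * S + (1.0 - sinc(theta) / sinc(0.5 * theta) ** 2) * (S @ S)

def A_block(points, sigmas, rho_stars, xi):
    """Translational block A of Eq. (20); points is a list of normalized (x_k, y_k)."""
    A = np.zeros((3, 3))
    for k, ((x, y), sig, rho_s) in enumerate(zip(points, sigmas, rho_stars)):
        r2 = x * x + y * y
        gamma = np.sqrt(1.0 + (1.0 - xi * xi) * r2)
        Phi = (1.0 + gamma * xi) / (gamma + xi * r2)
        A[k] = (Phi / (sig * rho_s)) * np.array([-x, -y, (xi * xi * r2 - 1.0) / (1.0 + gamma * xi)])
    return A

def control_law(A, u, theta, e, lam=0.5):
    """Velocity screw tau = -lambda L^{-1} (s - s*), Eqs. (14) and (21)."""
    L = np.block([[A, np.zeros((3, 3))],
                  [np.zeros((3, 3)), L_omega(u, theta)]])
    return -lam * np.linalg.solve(L, e)   # e = [log(sigma_1), log(sigma_2), log(sigma_3), theta*u]
```

Because L is block-diagonal, the linear velocity depends only on the translational error and the angular velocity only on the rotational error, which is the decoupling property exploited here.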

V. EXPERIMENTAL RESULTS

Fig. 3. Experimental setup: eye-to-hand configuration (figure labels: omnidirectional camera, target, end-effector)

The proposed control law has been tested on a six-DOF eye-to-hand system (refer to Figure 3). In this configuration, the interaction matrix has to take into account the mapping from the camera frame onto the robot control frame [12]. If we denote this mapping [R_e, t_e], the eye-to-hand interaction matrix L_e is related to the eye-in-hand one L by:

$$\mathbf{L}_e = \mathbf{L}\begin{bmatrix} \mathbf{R}_e & [\mathbf{t}_e]_\times\mathbf{R}_e \\ 0_3 & \mathbf{R}_e \end{bmatrix} \quad (22)$$

where $[\mathbf{t}_e]_\times$ is the skew-symmetric matrix associated with the translation vector t_e. The interaction matrix L_e is used in the control law (14). The omnidirectional camera used is a parabolic mirror combined with an orthographic lens. Since we were not interested in image processing in this paper, the target is composed of white marks (see Figure 3). The extracted visual features are the image coordinates of the center of gravity of each mark. From an initial position, the robot has to reach a desired position, specified as a desired 2 1/2 D observation vector s*. Three experiments are presented (a code sketch of the frame mapping (22) is given after the list below):

• First experiment (Figures 4 and 5): rotational motion only (θu_x = 18 deg, θu_y = 20 deg, θu_z = 25 deg),
• Second experiment (Figures 6 and 7): translational motion only (t_x = 0.3 m, t_y = −0.4 m, t_z = 0.1 m),
• Third experiment (Figures 8 and 9): generic motion (t_x = 0.5 m, t_y = −0.35 m, t_z = 0.1 m, θu_x = 2 deg, θu_y = 35 deg, θu_z = 31 deg).
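As referenced above, the following is a minimal sketch of the eye-to-hand mapping of Equation (22), assuming the pose (R_e, t_e) between the camera frame and the robot control frame is known from calibration; the function name is an illustrative choice.

```python
import numpy as np

def eye_to_hand_L(L, Re, te):
    """Map the eye-in-hand interaction matrix L to its eye-to-hand counterpart Le, Eq. (22)."""
    te_x = np.array([[0.0, -te[2], te[1]],
                     [te[2], 0.0, -te[0]],
                     [-te[1], te[0], 0.0]])     # skew-symmetric matrix [te]_x
    V = np.block([[Re, te_x @ Re],
                  [np.zeros((3, 3)), Re]])      # frame-change matrix of Eq. (22)
    return L @ V
```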

For each experiment, the images corresponding to the initial and desired configurations, the trajectories of four image points, the error s_i − s_i*, the rotational error uθ, and the translational and rotational velocities are presented. The convergence of the error s − s* demonstrates the correct realization of the task. The computed control laws are given in Figures 5(c)-(d), 7(c)-(d) and 9(c)-(d). We can note their satisfactory behavior, due to the full decoupling between rotational and translational motions.

Fig. 4. First experiment: (a) initial and (b) desired images of the target, and trajectories of four image points

Fig. 5. First experiment: (a) error s_i − s_i*, (b) rotational error uθ (rad), (c) translational velocities (m/s), (d) rotational velocities (rad/s)

Fig. 6. Second experiment: (a) initial and (b) desired images of the target, and trajectories of four image points

Fig. 7. Second experiment: (a) error s_i − s_i*, (b) rotational error uθ (rad), (c) translational velocities (m/s), (d) rotational velocities (rad/s)

Fig. 8. Third experiment: (a) initial and (b) desired images of the target, and trajectories of four image points

Fig. 9. Third experiment: (a) error s_i − s_i*, (b) rotational error uθ (rad), (c) translational velocities (m/s), (d) rotational velocities (rad/s)

VI. CONCLUSION

In this paper, a hybrid decoupled vision-based control scheme valid for the entire class of central cameras was presented. The geometrical relationship between two views of imaged points was exploited to estimate a generic homography matrix from which a partial Euclidean reconstruction can be obtained. The information extracted from the homography matrix was then used to design a hybrid control law which allowed us to fully decouple the rotational motions from the translational motions. Experimental results show the validity of the proposed approach.

REFERENCES

[1] S. Baker and S. K. Nayar. A theory of single-viewpoint catadioptric image formation. International Journal of Computer Vision, 35(2):1–22, November 1999.

[2] J. Barreto and H. Araujo. Geometric properties of central catadioptric line images. In 7th European Conference on Computer Vision, ECCV'02, pages 237–251, Copenhagen, Denmark, May 2002.

[3] J. P. Barreto, F. Martin, and R. Horaud. Visual servoing/tracking using central catadioptric images. In ISER 2002 - 8th International Symposium on Experimental Robotics, pages 863–869, Bombay, India, July 2002.

[4] R. Benosman and S. Kang. Panoramic Vision. Springer Verlag, ISBN 0-387-95111-3, 2000.

[5] P. Blaer and P. K. Allen. Topological mobile robot localization using fast vision techniques. In IEEE International Conference on Robotics and Automation, pages 1031–1036, Washington, USA, May 2002.

[6] D. Burshka, J. Geiman, and G. Hager. Optimal landmark configuration for vision based control of mobile robot. In IEEE International Conference on Robotics and Automation, pages 3917–3922, Taipei, Taiwan, September 2003.

[7] F. Chaumette. Potential problems of stability and convergence in image-based and position-based visual servoing. The Confluence of Vision and Control, D. Kriegman, G. Hager, A. Morse (eds), LNCIS Series, Springer Verlag, 237:66–78, 1998.

[8] G. Chesi, K. Hashimoto, D. Prattichizzo, and A. Vicino. A switching control law for keeping features in the field of view in eye-in-hand visual servoing. In IEEE International Conference on Robotics and Automation, pages 3929–3934, Taipei, Taiwan, September 2003.

[9] N. J. Cowan, J. D. Weingarten, and D. E. Koditschek. Visual servoing via navigation functions. IEEE Transactions on Robotics and Automation, 18(4):521–533, August 2002.

[10] B. Espiau, F. Chaumette, and P. Rives. A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313–326, June 1992.

[11] O. Faugeras and F. Lustman. Motion and structure from motion in a piecewise planar environment. Int. Journal of Pattern Recognition and Artificial Intelligence, 2(3):485–508, 1988.

[12] G. Flandin, F. Chaumette, and E. Marchand. Eye-in-hand / eye-to-hand cooperation for visual servoing. In IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, April 2000.

[13] C. Geyer and K. Daniilidis. A unifying theory for central panoramic systems and practical implications. In European Conference on Computer Vision, volume 29, pages 159–179, Dublin, Ireland, May 2000.

[14] C. Geyer and K. Daniilidis. Mirrors in motion: Epipolar geometry and motion estimation. In International Conference on Computer Vision, ICCV'03, pages 766–773, Nice, France, 2003.

[15] H. Hadj-Abdelkader, Y. Mezouar, N. Andreff, and P. Martinet. 2 1/2 D visual servoing with central catadioptric cameras. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'05, pages 2342–2347, Edmonton, Canada, August 2005.

[16] S. Hutchinson, G. D. Hager, and P. I. Corke. A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651–670, October 1996.

[17] H. C. Longuet-Higgins. A computer algorithm for reconstructing a scene from two projections. Nature, 293:133–135, September 1981.

[18] Q.-T. Luong and O. Faugeras. The fundamental matrix: theory, algorithms, and stability analysis. Int. Journal of Computer Vision, 17(1):43–76, 1996.

[19] E. Malis and F. Chaumette. 2 1/2 D visual servoing with respect to unknown objects through a new estimation scheme of camera displacement. International Journal of Computer Vision, 37(1):79–97, June 2000.

[20] E. Malis, F. Chaumette, and S. Boudet. 2 1/2 D visual servoing. IEEE Transactions on Robotics and Automation, 15(2):238–250, April 1999.

[21] Y. Mezouar and F. Chaumette. Path planning for robust image-based control. IEEE Transactions on Robotics and Automation, 18(4):534–549, August 2002.

[22] Y. Mezouar, H. Hadj-Abdelkader, P. Martinet, and F. Chaumette. Central catadioptric visual servoing from 3D straight lines. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, IROS'04, volume 1, pages 343–349, Sendai, Japan, September 2004.

[23] A. Paulino and H. Araujo. Multiple robots in geometric formation: Control structure and sensing. In International Symposium on Intelligent Robotic Systems, pages 103–112, University of Reading, UK, July 2000.

[24] S. Benhimane and E. Malis. Vision-based control with respect to planar and non-planar objects using a zooming camera. In IEEE International Conference on Advanced Robotics, pages 863–869, July 2003.

[25] C. Samson, B. Espiau, and M. Le Borgne. Robot Control: The Task Function Approach. Oxford University Press, 1991.

[26] B. Thuilot, C. Cariou, P. Martinet, and M. Berducat. Automatic guidance of a farm tractor relying on a single CP-DGPS. Autonomous Robots, 13, 2002.

[27] T. Svoboda, T. Pajdla, and V. Hlavac. Motion estimation using central panoramic cameras. In IEEE Conference on Intelligent Vehicles, Stuttgart, Germany, October 1998.

[28] R. Vidal, O. Shakernia, and S. Sastry. Formation control of nonholonomic mobile robots with omnidirectional visual servoing and motion segmentation. In IEEE International Conference on Robotics and Automation, pages 584–589, Taipei, Taiwan, September 2003.

[29] N. Winter, J. Gaspar, G. Lacey, and J. Santos-Victor. Omnidirectional vision for robot navigation. In Proc. IEEE Workshop on Omnidirectional Vision, OMNIVIS, pages 21–28, South Carolina, USA, June 2000.

[30] Z. Zhang and A. R. Hanson. Scaled Euclidean 3D reconstruction based on externally uncalibrated cameras. In IEEE Symposium on Computer Vision, Coral Gables, FL, 1995.
