This article was downloaded by: [Dalhousie University] on 12 November 2014, at 19:48. Publisher: Taylor & Francis. Informa Ltd, registered in England and Wales, registered number 1072954, registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK.
Advanced Robotics. Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/tadr20

Localization and guidance with an embarked camera on a mobile robot
K. Achour & A. O. Djekoune
Published online: 02 Apr 2012.

To cite this article: K. Achour & A. O. Djekoune (2002) Localization and guidance with an embarked camera on a mobile robot, Advanced Robotics, 16:1, 87-102, DOI: 10.1163/156855302317413754

To link to this article: http://dx.doi.org/10.1163/156855302317413754
Taylor & Francis makes every effort to ensure the accuracy of all the information (the "Content") contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to, or arising out of the use of the Content.

This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden. Terms & Conditions of access and use can be found at http://www.tandfonline.com/page/terms-and-conditions
Advanced Robotics, Vol. 16, No. 1, pp. 87–102 (2002)
© VSP and Robotics Society of Japan 2002
Short paper
Localization and guidance with an embarked camera on a mobile robot
K. ACHOUR* and A. O. DJEKOUNE
Robotics and Artificial Intelligence Laboratory, Development Center of the Advanced Technologies, 128 Chemin Mohamed Gacem, El-Madania, 16075 Algiers, Algeria
Received 28 December 2000; accepted 5 August 2001
Abstract—In this paper, a new method using the Hough transform to correct the drift of the mobile robot CESA is presented. The corrections are made by direct observation. As an illustration, the implemented algorithm is detailed, and experimental results for CESA navigation in our laboratory's corridor and for trajectory generation are given.
Keywords: robotic vision; mobile robot; Hough transform; navigation strategy; calibration.
1. INTRODUCTION
Autonomous vehicles are being increasingly used in automated factories and other such reasonably well-structured environments. They are capable of intelligent motion (and action) without requiring either a guide to follow or teleoperator control.
For practical applications in the real world, sensory information is required for autonomous navigation in unstructured environments. Ultrasonic sensors have been employed by many researchers for environment recognition and obstacle avoidance. They are relatively inexpensive and easy to use, but provide only limited information about the environment. Moreover, the wide beam-opening angle and specular reflections of ultrasonic waves can cause measurement errors [1].
CCD cameras are analogous to human eyes, which project the three-dimensional (3D) environment onto a 2D plane. They can provide much useful information to the vehicle for autonomous navigation. Considerable work has been reported concerning the development of image sensing devices and vision-based navigation algorithms for intelligent autonomous vehicles. An image processing system for extracting
*To whom correspondence should be addressed. E-mail: kachour@cdta.dz
(a)
(b)
Figure 1. (a) The CESA vehicle. (b) Structure of the vehicle and its reference center coordinates.
image features in real time was developed in [2]. A feedback control scheme was developed for high-speed motion of an autonomous vehicle. A multisensor perception system for the mobile robot Hilare was developed in [3]. In their system, a scene acquisition module was developed using stereo cameras and a laser range finder. This dynamic vision module was used for robot position correction and feature tracking. Song and Tang proposed a sensor system exploiting double ultrasonic sensors and a CCD camera for environment perception [4]. Several investigators have presented dynamic feature tracking systems [3, 5]. In their approach, the vision subsystem and control subsystem work in parallel to accomplish visual servoing. On the other hand, image sequences have been used for motion detection. Aloimonos proposed a design for obstacle avoidance of mobile robots based on image flow [6].
This paper presents a visual guidance control design, with its experimentation, for the CESA autonomous vehicle (Fig. 1a). In this experimentation, visual guidance was solved as a dynamic control problem where real-time image processing was
required to provide feedback signals. As an illustration, we present the CESA navigation experiment results in our laboratory's corridor and free trajectory generation. The drift corrections are made by direct observation, and the rotation/translation parameters are computed taking into account the robot neighborhood parameters. These parameters are obtained by applying the Hough transform (HT) to the scene images acquired by an embarked CCD camera. We have used the HT because it has long been recognized as a robust technique for detecting multi-dimensional features in an image and estimating their parameters. It has many applications, as most manufactured parts contain feature boundaries which can be described by regular curves or straight lines. The main advantage of the HT is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise.
Algorithms for the autonomous vehicle navigation system are being developed and tested both on a computer-aided design workstation and on the experimental CESA autonomous vehicle.
The CESA vehicle shown in Fig. 1a is an experimental vehicle designed to operate autonomously within a structured office or factory environment. It has been designed with several goals in mind. First and foremost, CESA is a testbed with which to experiment with such things as robot programming languages and sensor integration/data fusion techniques. Second, CESA is designed to be low cost and to depend on only one camera and two active telemetric sensors.
CESA is a tricycle configuration with a single front wheel, which serves both for steering and for driving the vehicle, and two passive load-bearing rear wheels. The onboard control system uses active telemetric sensors and a vision system to provide the position and heading information needed to guide the vehicle along specified paths between stations. The external or off-board part of the system consists of a global path planner, which stores the structured environment data and produces the path plans from one station to the next. It also includes a radio communication link between this supervisory controller and the vehicle.
This paper is organized as follows: in Section 2 we present the CESA onboard control system. The vision system calibration and the embarked sensors are presented in Section 3. In Section 4 we describe the line segment extraction algorithm based on the HT, and the algorithms developed and used for CESA navigation in our laboratory's corridor. Finally, we present the obtained results and a conclusion.
2. VEHICLE CONFIGURATION
2.1. Mechanical configuration

Figure 1b illustrates the basic structure of the CESA vehicle, a photograph of which is shown in Fig. 1a. It consists of a single steerable drive wheel at the front and two passive rear wheels, i.e. it is a tricycle configuration.

An optical encoder is mounted on the steering motor to provide feedback control. In addition, optical encoders are mounted on each of the two rear independent non-load-bearing wheels.
The vehicle is powered by two sealed 12 V, 60 Ah batteries, which in the current configuration provide a useful lifetime of approximately 7 h.
2.2. Computer hardware and software

Control of the vehicle is composed of several modules. Software development is performed on a Sun SPARC station running under UNIX. All software is written in the C language. The diagnostic data from the vehicle may be downloaded to the host via a radio communication link for analysis.
3. SENSORS

The vehicle is equipped with three different sensors: odometry on each wheel, ultrasonic and infrared sensors, and a vision system. These sensors provide all the navigational sensing information available to the vehicle. They deliver information which can be used to define the planning tasks strategy of the vehicle, and which is needed to guide it along specified paths between stations without collisions.

3.1. Odometry

Odometry, i.e. the estimation of position and orientation by counting the revolutions of a wheel, dates back to the time of Archimedes [7]. The advantage of odometry is that it is both simple and inexpensive. However, it is prone to several sources of error: (i) surface roughness and undulations may cause the distance to be over-estimated; (ii) wheel slippage can cause the distance to be under-estimated; and (iii) variations in load can distort the odometer wheels and introduce additional errors.
3.2. Ultrasonic and infrared sensors

The CESA vehicle is equipped with two kinds of active telemetric sensors. The proximetric (infrared) sensors at the front detect the presence of an object situated at a distance of less than 1 m, and the ultrasonic sensors around the vehicle detect objects situated at greater distances.
3.3. Vision system

The CESA vision system is composed of an iVC 500 CCD camera and a microcontroller-based data acquisition card. The camera delivers images in the CCIR standard (625 lines, 50 Hz). An AD9502CM video digitizer performs the conversion of the analog video signal into digital data and separates the synchronization signals (horizontal and vertical).

The digitized image has a resolution of 464 × 256 pixels, and is transferred to the supervisory controller at 9600 baud. An 80C51 microcontroller controls the whole operation of the card. The acquisition is performed in real time, in 40 ms.

A binary edge-detection algorithm was implemented on a field-programmable gate array (FPGA) on this card.
Figure 2. The camera support.
Figure 3. Image transverse section.
3.3.1. Camera calibration. The CCD camera is modeled by the pin-hole model [8]. It is held by a support and makes an angle α with the horizontal. Considering both the geometric model of the vehicle and of the embarked camera (Figs 2 and 3), any binary edge image point (i, j) can be represented in the vehicle coordinate system by a point (x, y, z) by applying the following equation systems [9]:

\[
\begin{cases}
x = C_y \sin\alpha + C_z \cos\alpha + d_x,\\
y = C_x + d_y,\\
z = C_y \cos\alpha - C_z \sin\alpha + d_z,
\end{cases}
\tag{1}
\]

with

\[
\begin{cases}
C_x = -\dfrac{C_z - \lambda}{\lambda}\, l_x \left(\dfrac{j}{r_x} - 0.5\right),\\[2mm]
C_y = -\dfrac{C_z - \lambda}{\lambda}\, l_y \left(\dfrac{i}{r_y} - 0.5\right),\\[2mm]
C_z = \dfrac{d}{\sin\alpha} + \left(\dfrac{d}{\sin\alpha} - \lambda\right)\dfrac{\sin\alpha\,\cos\beta}{\sin(\alpha - \beta)},
\end{cases}
\tag{2}
\]
and

\[
\beta = \tan^{-1}\left[\dfrac{l_y}{\lambda}\left(\dfrac{i}{r_y} - 0.5\right)\right],
\]

where r_x is the column number of the screen (pixels), r_y is the row number of the screen (pixels), l_x is the width of the image plane (m), l_y is the height of the image plane (m), λ is the focal distance of the lens used, d is the vertical distance between the camera and the ground (mm), α is the camera slant angle measured from the horizontal, and (d_x, d_y, d_z) are the components of the vector from the reference point of the mobile robot to the image plane center of the camera (mm).
4. LOCALIZATION AND NAVIGATION STRATEGY

4.1. Robot's localization

The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by position control in the Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 are the result of the corridor surface's roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle from its desired position, which in our case is the middle of the corridor (the robot's position setpoint).

The attributes used are segments. They are used for mobile robot localization and navigation, as in [9, 10], because they faithfully capture the evolution of the robot's environment and present the best compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT to the images directly at the output of the vision system, because the HT is a well-known method for the detection of parametric curves in binary images, and has been recognized as an important means of searching for objects and features in binary images. It converts
Figure 4. Position control.
the problem of detecting collinear points, or lines, in the feature space into the problem of locating concurrent or intersecting curves in the parameter space.

Vehicle localization in the corridor consists of defining its reference center coordinates (X_G, Y_G) (Fig. 1b) with regard to both the left and right corridor border extremities, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor borders extraction

From binary edge images, the HT allows us to extract pertinent information such as lines, curves, etc., according to the needs of the application. In our case, we seek to detect all line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:

\[
x \cos\theta + y \sin\theta = \rho, \tag{3}
\]

where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are points on the line (Fig. 5).
The parameters of all lines going through a point (x_i, y_i) in the image constitute a sinusoidal curve in the parameter space, given by (Fig. 5):

\[
x_i \cos\theta + y_i \sin\theta = \rho. \tag{4}
\]

The sinusoidal curves corresponding to the collinear points of a line with parameters (ρ_j, θ_j) will cross each other at the point (ρ_j, θ_j) in the parameter space. So line parameters can be obtained from the crossing-point of sinusoidal curves in the parameter space.
The HT of a point (x_i, y_i) is performed by computing ρ from (4) for all n values of θ_k into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized into n_ρ intervals of width ρ_k. In this way, a quantized sinusoidal curve is obtained, and each cell along the quantized curve is incremented with one '1' value. This procedure is repeated for all points of the binary edge image.
The line segment extraction from the binary edge image is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the
Figure 5. The line segment parameters.
Figure 6. The parameter space.
parameter space (Fig. 6). So, for each peak, we eliminate the effect of all points belonging to this peak in the parameter space [9].
The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:

• an index: its position in the line segment list;
• ρ: its normal distance relative to the image coordinate system;
• θ: its orientation in this coordinate system;
• (x1, y1) and (x2, y2): the coordinates of its extremities.
We model the corridor by its two borders, left and right, situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:

• The corridor's borders are always oblique.
• The corridor's right border orientation θ belongs to the interval ]0, π/2[.
• The corridor's left border orientation θ belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor's border is minimal.
• The detected corridor's borders belong to the floor.
This selection is very important, because a single detection error can generate a poor position control and therefore a poor trajectory for the vehicle.
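Assuming segments are already stored with their (ρ, θ) attributes, the orientation and minimal-ρ conditions above can be sketched as below. The `Segment` record and function name are hypothetical, and the floor-membership test is omitted for brevity:

```c
#include <math.h>
#include <stddef.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Hypothetical attribute record for one extracted line segment. */
typedef struct {
    double rho, theta;          /* normal parameterization, theta in rad */
    double x1, y1, x2, y2;      /* extremity coordinates */
} Segment;

/* Returns the index of the right corridor border (0 < theta < pi/2 with
   minimal rho) in segs[0..n-1], or -1 if none qualifies.  Passing
   left = 1 selects among pi/2 < theta < pi instead. */
static int find_border(const Segment *segs, size_t n, int left)
{
    int best = -1;
    for (size_t k = 0; k < n; k++) {
        double th = segs[k].theta;
        int in_range = left ? (th > M_PI / 2 && th < M_PI)
                            : (th > 0.0 && th < M_PI / 2);
        if (!in_range) continue;
        if (best < 0 || segs[k].rho < segs[best].rho)
            best = (int)k;
    }
    return best;
}
```

Running the selection once per acquired image yields the (ρ_R, θ_R) and (ρ_L, θ_L) pairs used by the navigation equations of Section 4.3.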
4.3. Navigation

To permit its displacement in the corridor, the floor must not be damaged and no obstacles should be present on the path. The CCD camera provides a very detailed description of the robot environment. The mobile robot can therefore be in one of the possible situations illustrated in Fig. 7.

The observed scene is segmented with the HT, and then the corridor borders identification algorithm is started. This algorithm gives the polar coordinates (ρ_R, θ_R, ρ_L and θ_L) and the extremity coordinates (P_R1, P_R2, P_L1 and P_L2) of each detected corridor border. These coordinates are necessary for the calculation of the robot's position and for path execution.
Figure 7. The different positions of the mobile robot in the corridor.
Figure 8. Our laboratory's corridor.
Starting from its current station, the vehicle follows a path that permits it to move toward the next target station while fixing the end of the corridor (Figs 8 and 9).
The target position (X_Dest, Y_Dest) in the robot coordinate system is computed from the extremity coordinates of the detected corridor borders (P_R1, P_R2, P_L1 and P_L2) (Fig. 9). It is formulated by the following expressions.

If two corridor borders are detected:

\[
X_{\mathrm{Dest}} = \frac{X_{P_{R1}} + X_{P_{R2}} + X_{P_{L1}} + X_{P_{L2}}}{4},
\qquad
Y_{\mathrm{Dest}} = \frac{Y_{P_{R1}} + Y_{P_{R2}} + Y_{P_{L1}} + Y_{P_{L2}}}{4}. \tag{5}
\]
Figure 9. The navigation strategy.
If one corridor border is detected:

\[
X_{\mathrm{Dest}} = \frac{X_{P_1} + X_{P_2}}{2},
\qquad
Y_{\mathrm{Dest}} = \frac{Y_{P_1} + Y_{P_2}}{2} + (-1)^n L_{\mathrm{arg}}, \tag{6}
\]

with

\[
n =
\begin{cases}
0 & \text{if the right corridor border is detected,}\\
1 & \text{if the left corridor border is detected,}
\end{cases}
\]

and L_arg is half the robot's width.

Three operations are therefore necessary for the onboard control of the vehicle:
• a first deviation (θ_D1 or θ_G1) toward the target station in the corridor;
• the distance D to travel in order to arrive at this target station;
• a second deviation (θ_D2 or θ_G2) so that the vehicle fixes the corridor's end.
These operations are calculated from the following equations.

The first deviation θ_1:

\[
\theta_1 = \tan^{-1}\left(\frac{|Y_{\mathrm{Dest}}|}{X_{\mathrm{Dest}}}\right),
\qquad
\begin{cases}
\theta_{G1}\ \text{(left deviation)} & \text{if } Y_{\mathrm{Dest}} > 0,\\
\theta_{D1}\ \text{(right deviation)} & \text{if } Y_{\mathrm{Dest}} < 0.
\end{cases} \tag{7}
\]

The second deviation θ_2:

\[
\theta_2 =
\begin{cases}
\theta_{G2} & \text{(left deviation)},\\
\theta_{D2} & \text{(right deviation)}.
\end{cases}
\]
If only the right corridor border (ρ_A, θ_A) is detected:

\[
\theta_{G2} =
\begin{cases}
\theta_A - \theta_{G1} & \text{if } \theta_{G1} \le \theta_A,\\
\theta_A + \theta_{D1} & \text{if } \theta_{D1} \le \theta_A,
\end{cases}
\qquad
\theta_{D2} = \theta_{G1} - \theta_A \quad \text{if } \theta_{G1} > \theta_A. \tag{8}
\]

If only the left corridor border (ρ_B, θ_B) is detected:

\[
\theta_{D2} =
\begin{cases}
\theta_{G1} + \pi - \theta_B & \text{if } \theta_{G1} \le \pi - \theta_B,\\
\pi - \theta_B - \theta_{D1} & \text{if } \theta_{D1} \le \pi - \theta_B,
\end{cases}
\qquad
\theta_{G2} = \theta_{D1} - \pi + \theta_B \quad \text{if } \theta_{D1} > \pi - \theta_B. \tag{9}
\]
If both the left and right corridor borders (ρ_A, θ_A) and (ρ_B, θ_B) are detected, let

\[
\Delta\theta = \frac{\theta_B - \theta_A}{2}.
\]

Then

\[
\theta_{G2} =
\begin{cases}
\theta_{D1} + \theta_B - \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A,\\[1mm]
\theta_{D1} + \theta_A + \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\dfrac{\pi}{2} - \theta_A - \Delta\theta\right) < \theta_{D1},\\[1mm]
\theta_B - \theta_{G1} - \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \dfrac{\pi}{2} - \Delta\theta\right) > \theta_{G1},
\end{cases}
\]

\[
\theta_{D2} =
\begin{cases}
\dfrac{\pi}{2} - \theta_A + \theta_{D1} - \Delta\theta & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\dfrac{\pi}{2} - \theta_A - \Delta\theta\right) > \theta_{D1},\\[1mm]
\theta_{G1} - \theta_B + \Delta\theta + \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \dfrac{\pi}{2} - \Delta\theta\right) < \theta_{G1},\\[1mm]
\theta_{G1} - \theta_A - \Delta\theta + \dfrac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A.
\end{cases}
\tag{10}
\]
Once these three operations are executed, a new image of the corridor is acquired and then segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path, provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's setpoint position). The cycle restarts until the CESA vehicle arrives at its final station.
5. EXPERIMENTAL RESULTS

The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun SPARC station. The current version of the program requires as input data the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time
Table 1. The CESA navigation experimental results.

Image   Right border (ρ_R, θ_R)   Left border (ρ_L, θ_L)   Course D (mm)   First deviation θ_1     Second deviation θ_2     Figure
Res1    98, 50°                   117, 130°                7168            1.17° on the left       1.66° on the right       10
Resd1   72, 50°                   125, 120°                7314            2.68° on the left       7.68° on the right       11
Resg1   115, 60°                  100, 140°                7180            0.78° on the right      10.78° on the left       12
Resg3   —                         −10, 120°                7136            7.65° on the right      52.35° on the right      13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.

To illustrate the benefits of our approach, we have tested these procedures during CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with a 10° step size for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10–13), taken at four different stations in the corridor. The results have been obtained with the following parameters:

• the focal distance of the camera, λ = 25;
• the camera slant angle with the horizontal, α = 9°;
• the vertical distance between the camera and the ground, d = 550 mm;
• the column number of the screen, r_x = 464;
• the row number of the screen, r_y = 256;
• the components of the vector from the reference point of the mobile robot to the image plane center of the camera, (d_x, d_y, d_z) = (317, 0, 240) mm.
In Figs 10–13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
During this experiment, the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor borders detection algorithm. Several algorithms have been developed to address this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and better segmentation, and (ii) the algorithms
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.
Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.
developed in [6, 14, 15], using the stereo matching technique to compute the 3D model of the environment, to allow the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now in the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.
Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6. CONCLUSION

Guidance and control procedures that enable a robot vehicle to navigate accurately and autonomously around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented on the experimental robot vehicle 'CESA'. They permit the vehicle to correct its drift by direct observation using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images, and has been recognized as an important means of searching for objects and features in binary images.
During this procedure, the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor borders detection algorithm. Several algorithms have been developed to solve this problem, e.g. algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES

1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223–240 (1988).
3. P. A. Grandjean, Robert and V. De Saint, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARINBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, J. Signal Process. 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restauration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758, MS86-758-19 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS

A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher on vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.

Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications for conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman. He has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
or howsoever caused arising directly or indirectly in connection with inrelation to or arising out of the use of the Content
This article may be used for research teaching and private studypurposes Any substantial or systematic reproduction redistributionreselling loan sub-licensing systematic supply or distribution in any formto anyone is expressly forbidden Terms amp Conditions of access and usecan be found at httpwwwtandfonlinecompageterms-and-conditions
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Advanced Robotics Vol 16 No 1 pp 87ndash102 (2002)Oacute VSP and Robotics Society of Japan 2002
Short paper
Localization and guidance with an embarked cameraon a mobile robot
K ACHOUR curren and A O DJEKOUNERobotics and Arti cial Intelligence Laboratory Development Center of the Advanced Technologies128 Chemin Mohamed Gacem El-Madania 16075 Algiers Algeria
Received 28 December 2000 accepted 5 August 2001
AbstractmdashIn this paper a new method using the Hough transform to correct the drift of the mobilerobot CESA is presented The corrections are made by direct observation As an illustration analgorithm implemented is detailed and experiment results for CESA navigation in our laboratoryrsquoscorridor and trajectory generation are given
Keywords Robotic vision mobile robot Hough transform navigation strategy calibration
1. INTRODUCTION

Autonomous vehicles are increasingly being used in automated factories and other such reasonably well-structured environments. They are capable of intelligent motion (and action) without requiring either a guide to follow or teleoperator control.

For practical applications in the real world, sensory information is required for autonomous navigation in unstructured environments. Ultrasonic sensors have been employed by many researchers for environment recognition and obstacle avoidance. They are relatively inexpensive and easy to use, but provide limited information about the environment. Moreover, the wide beam-opening angle and specular reflections of ultrasonic waves can cause measurement errors [1].

CCD cameras are analogous to human eyes, projecting the three-dimensional (3D) environment onto a 2D plane. They can provide much useful information to the vehicle for autonomous navigation. Considerable work has been reported concerning the development of image sensing devices and vision-based navigation algorithms for intelligent autonomous vehicles. An image processing system for extracting

*To whom correspondence should be addressed. E-mail: kachour@cdta.dz
Figure 1. (a) The CESA vehicle. (b) Structure of the vehicle and its reference center coordinates.
image features in real time was developed in [2], and a feedback control scheme was developed for high-speed motion of an autonomous vehicle. A multisensor perception system for the mobile robot Hilare was developed in [3]. In their system, a scene acquisition module was developed using stereo cameras and a laser range finder. This dynamic vision module was used for robot position correction and feature tracking. Song and Tang proposed a sensor system exploiting double ultrasonic sensors and a CCD camera for environment perception [4]. Several investigators have presented dynamic feature tracking systems [3, 5]. In their approach, the vision subsystem and the control subsystem work in parallel to accomplish visual servoing. On the other hand, image sequences have been used for motion detection. Aloimonos proposed a design for obstacle avoidance of mobile robots based on image flow [6].
This paper presents a visual guidance control design with its experimentation forthe CESA autonomous vehicle (Fig 1a) In this experimentation visual guidancewas solved as a dynamic control problem where real-time image processing was
required to provide feedback signals. As an illustration, we present the CESA navigation experiment results in our laboratory's corridor and free trajectory generation. The drift corrections are made by direct observation, and the rotation/translation parameters are computed taking into account the robot neighborhood parameters. These parameters are obtained by applying the Hough transform (HT) to the scene images acquired by an embarked CCD camera. We have used the HT because it has long been recognized as a robust technique for detecting multi-dimensional features in an image and estimating their parameters. It has many applications, as most manufactured parts contain feature boundaries which can be described by regular curves or straight lines. The main advantage of the HT is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise.
Algorithms for the autonomous vehicle navigation system are being developedand tested both on a computer-aided design workstation and on the experimentalCESA autonomous vehicle
The CESA vehicle shown in Fig. 1a is an experimental vehicle designed to operate autonomously within a structured office or factory environment. It has been designed with several goals in mind. First and foremost, CESA is a testbed with which to experiment with such things as robot programming languages and sensor integration/data fusion techniques. Second, CESA is designed to be low cost and to depend on only one camera and two active telemetric sensors.

CESA is a tricycle configuration with a single front wheel, which serves both for steering and driving the vehicle, and two passive load-bearing rear wheels. The onboard control system uses active telemetric sensors and a vision system to provide the position and heading information needed to guide the vehicle along specified paths between stations. The external or off-board part of the system consists of a global path planner, which stores the structured environment data and produces the path plans from one station to the next. It also includes a radio communication link between this supervisory controller and the vehicle.

This paper is organized as follows: in Section 2 we present the CESA onboard control system. The vision system calibration and the embarked sensors are presented in Section 3. In Section 4 we describe the line segment extraction algorithm based on the HT, and the algorithms developed and used for CESA navigation in our laboratory's corridor. Finally, we present the obtained results and a conclusion.

2. VEHICLE CONFIGURATION

2.1. Mechanical configuration

Figure 1b illustrates the basic structure of the CESA vehicle, a photograph of which is shown in Fig. 1a. It consists of a single steerable drive wheel at the front and two passive rear wheels, i.e. it is a tricycle configuration.

An optical encoder is mounted on the steering motor to provide feedback control. In addition, optical encoders are mounted on each of the two rear independent non-load-bearing wheels.
The vehicle is powered by two sealed 12 V, 60 A batteries, which in the current configuration provide a useful lifetime of approximately 7 h.
2.2. Computer hardware and software

Control of the vehicle is composed of several modules. Software development is performed on a Sun SPARCstation running under UNIX. All software is written in the C language. The diagnostic data from the vehicle may be downloaded to the host via a radio communication link for analysis.

3. SENSORS

The vehicle is equipped with three different sensors: odometry on each wheel, ultrasonic and infrared sensors, and a vision system. These sensors provide all the navigational sensing information available to the vehicle. They deliver information which can be used to define the planning task strategy of the vehicle, and which is needed to guide it along specified paths between stations without collisions.
3.1. Odometry

Odometry, i.e. the estimation of position and orientation by counting the revolutions of a wheel, dates back to the time of Archimedes [7]. The advantage of odometry is that it is both simple and inexpensive. However, it is prone to several sources of error: (i) surface roughness and undulations may cause the distance to be over-estimated, (ii) wheel slippage can cause the distance to be under-estimated, and (iii) variations in load can distort the odometer wheels and introduce additional errors.
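The revolution-counting estimate described above can be sketched as a standard dead-reckoning update from the two rear encoder wheels. This is a minimal illustrative sketch: the function name, wheel geometry and units are assumptions, not CESA's actual firmware.

```python
import math

def update_pose(x, y, theta, d_left, d_right, wheelbase):
    """Dead-reckoning pose update from the distances travelled by the
    left and right encoder wheels (differential-odometry sketch).
    All distances in metres, angles in radians."""
    d_center = (d_left + d_right) / 2.0       # distance of the midpoint
    d_theta = (d_right - d_left) / wheelbase  # heading change
    # Integrate along the arc, approximated at the mid-heading
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta += d_theta
    return x, y, theta

# Straight-line sanity check: both wheels travel 1 m, heading unchanged
x, y, th = update_pose(0.0, 0.0, 0.0, 1.0, 1.0, 0.5)
```

The error sources listed above enter this model directly: roughness inflates `d_left`/`d_right`, slippage deflates them, and load changes alter the effective `wheelbase` and wheel radii.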
3.2. Ultrasonic and infrared sensors

The CESA vehicle is equipped with two types of active telemetric sensors. The proximetric (infrared) sensors at the front detect the presence of an object situated at a distance of less than 1 m, and the surrounding ultrasonic sensors detect objects situated at greater distances from the mobile robot.

3.3. Vision system

The CESA vision system is composed of an iVC 500 CCD camera and a microcontroller-based data acquisition card. The camera delivers images in the CCIR standard (624 lines, 50 Hz). An AD9502CM video digitizer performs the conversion of the analog video signal to digital data and separates the synchronization signals (horizontal and vertical).

The digitized image has a resolution of 464 × 256 pixels, and it is transferred to the supervisory controller at 9600 baud. An 80C51 microcontroller controls the whole operation of the card. The acquisition is performed in real time, in 40 ms.

A binary edge-detection algorithm was implemented on a field-programmable gate array (FPGA) on this card.
Figure 2. The camera support.

Figure 3. Image transverse section.
3.3.1. Camera calibration. The CCD camera is modeled by the pin-hole model [8]. It is held by a support and makes an angle α with the horizontal. Considering both the geometric model of the vehicle and the embarked camera (Figs 2 and 3), any binary edge image point (i, j) can be represented in the vehicle coordinate system by a point (x, y, z) by applying the following equation systems [9]:

   x = Cy·sin α + Cz·cos α + dx
   y = Cx + dy                                                  (1)
   z = Cy·cos α − Cz·sin α + dz

with

   Cx = −((Cz − λ)/λ)·lx·(j/rx − 0.5)
   Cy = −((Cz − λ)/λ)·ly·(i/ry − 0.5)                           (2)
   Cz = d/sin α + (d/sin α − λ)·(sin α·cos β)/sin(α − β)

and

   β = tan⁻¹[(ly/λ)·(i/ry − 0.5)],

where rx is the column number of the screen (pixels), ry is the row number of the screen (pixels), lx is the width of the image plane (m), ly is the height of the image plane (m), λ is the focal distance of the lens used, d is the vertical distance between the camera and the ground (mm), α is the camera slant angle measured from the horizontal, and (dx, dy, dz) are the components of the vector from the reference point of the mobile robot to the image plane center of the camera (mm).
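A direct transcription of equation systems (1) and (2) might look like the following sketch. Variable names mirror the text; the function name and any parameter values used for testing are illustrative assumptions, not values from the paper's implementation.

```python
import math

def pixel_to_vehicle(i, j, rx, ry, lx, ly, lam, d, alpha, dx, dy, dz):
    """Map a binary-edge image pixel (i, j) to vehicle coordinates
    (x, y, z) using equation systems (1) and (2); alpha in radians."""
    # Elevation angle of the pixel row relative to the optical axis
    beta = math.atan((ly / lam) * (i / ry - 0.5))
    # Depth term Cz, then the lateral/vertical camera-frame terms
    Cz = (d / math.sin(alpha)
          + (d / math.sin(alpha) - lam)
          * (math.sin(alpha) * math.cos(beta)) / math.sin(alpha - beta))
    Cx = -((Cz - lam) / lam) * lx * (j / rx - 0.5)
    Cy = -((Cz - lam) / lam) * ly * (i / ry - 0.5)
    # Rotate by the slant angle and translate to the robot reference point
    x = Cy * math.sin(alpha) + Cz * math.cos(alpha) + dx
    y = Cx + dy
    z = Cy * math.cos(alpha) - Cz * math.sin(alpha) + dz
    return x, y, z
```

For the image center pixel (i = ry/2, j = rx/2), both Cx and Cy vanish, so y reduces to dy, which is a convenient sanity check on any transcription.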
4. LOCALIZATION AND NAVIGATION STRATEGY

4.1. Robot localization

The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by a position control in the Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 are the result of the corridor's surface roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle away from its desired position, which in our case is the middle of the corridor (the robot's position consign).

The attributes used are line segments. They are used for mobile robot localization and navigation, as in [9, 10], because they faithfully reproduce the evolution of the robot's environment and present the best compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT to the images directly at the output of the vision system, because the HT is a well-known method for the detection of parametric curves in binary images, recognized as an important means of searching for objects and features in binary images. It converts

Figure 4. Position control.
the problem of detecting collinear points or lines in the feature space into the problem of locating concurrent or intersecting curves in the parameter space.

Vehicle localization in the corridor consists of defining its reference center coordinates (XG, YG) (Fig. 1b) with regard to the extremities of both the left and right corridor borders, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor border extraction

From binary edge images, the HT allows us to extract pertinent information such as lines and curves, according to the needs of the application. In our case, we seek to detect all line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:

   x·cos θ + y·sin θ = ρ,                                       (3)

where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are the points on the line (Fig. 5).

The parameters of all lines going through a point (xi, yi) in the image constitute a sinusoidal curve in the parameter space, given by (Fig. 5):

   xi·cos θ + yi·sin θ = ρ.                                     (4)

The sinusoidal curves corresponding to the collinear points of a line with parameters (ρj, θj) will cross each other at the point (ρj, θj) in the parameter space. So line parameters can be obtained from the crossing-point of sinusoidal curves in the parameter space.

The HT of a point (xi, yi) is performed by computing ρ from (4) for all n values θk into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized in nρ intervals of width Δρ. In this way a quantized sinusoidal curve is obtained, and along the quantized curve each cell is incremented by one. This procedure is repeated for all points of the binary edge image.

The line segment extraction from the binary edge image is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the
Figure 5. The line segment parameters.
Figure 6. The parameter space.

parameter space (Fig. 6). So, for each peak, we eliminate the effect of all the points belonging to that peak in the parameter space [9].
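The accumulate-then-search-peaks procedure described above can be sketched as follows. The quantization steps mirror the experiment reported later (10° in θ, one pixel in ρ), but the function name and data layout are illustrative assumptions, not the paper's implementation.

```python
import math

def hough_peak(edge_points, width, height, theta_step_deg=10, rho_step=1.0):
    """Accumulate the normal-parameterization HT (eq. 3) of a binary edge
    image, given as a list of (x, y) pixel coordinates, and return the
    strongest accumulator cell as (rho, theta in degrees, votes)."""
    thetas = [math.radians(t) for t in range(0, 180, theta_step_deg)]
    rho_max = math.hypot(width, height)          # largest possible |rho|
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = [[0] * len(thetas) for _ in range(n_rho)]
    for (x, y) in edge_points:
        for k, th in enumerate(thetas):
            rho = x * math.cos(th) + y * math.sin(th)   # eq. (4)
            r = int(round((rho + rho_max) / rho_step))  # quantize rho
            acc[r][k] += 1                              # one vote per cell
    # Peak cell = parameters of the strongest line
    r_pk, k_pk = max(((r, k) for r in range(n_rho) for k in range(len(thetas))),
                     key=lambda rk: acc[rk[0]][rk[1]])
    rho_pk = r_pk * rho_step - rho_max
    return rho_pk, math.degrees(thetas[k_pk]), acc[r_pk][k_pk]

# Collinear points on the horizontal line y = 5 vote into one cell
pts = [(x, 5) for x in range(20)]
rho, theta_deg, votes = hough_peak(pts, 464, 256)
```

A full segment extractor would repeat the peak search, removing each peak's supporting points as the text describes, until no significant peak remains.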
The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:

• an index: its position in the line segment list;
• ρ: its normal distance relative to the image coordinate system;
• θ: its orientation in this coordinate system;
• (x1, y1) and (x2, y2): its extremity coordinates.
We model the corridor by its two left and right borders situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:

• the corridor's borders are always oblique;
• the corridor's right border orientation θ belongs to the interval ]0, π/2[;
• the corridor's left border orientation θ belongs to the interval ]π/2, π[;
• the normal distance ρ of the corridor's border is minimal;
• the detected corridor's borders belong to the floor.
This selection is very important because one detection error can generate a poorcontrol position and therefore poor trajectory generation for the vehicle
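The selection conditions above can be sketched as a filter over the extracted segment list. The dictionary layout is an assumed representation, not the paper's data structure, and the floor-membership test is omitted for brevity.

```python
import math

def pick_corridor_borders(segments):
    """Select the right and left corridor borders from an HT segment list:
    oblique orientation (theta strictly inside ]0, pi/2[ for the right
    border, ]pi/2, pi[ for the left) and minimal normal distance rho.
    Each segment is a dict with keys 'rho', 'theta', 'p1', 'p2'."""
    eps = 1e-6  # excludes horizontal/vertical (non-oblique) segments
    right = [s for s in segments if eps < s['theta'] < math.pi / 2 - eps]
    left = [s for s in segments if math.pi / 2 + eps < s['theta'] < math.pi - eps]
    by_rho = lambda s: s['rho']
    return (min(right, key=by_rho) if right else None,
            min(left, key=by_rho) if left else None)
```

Returning `None` for a missing side matches the navigation cases below, where either one or both borders may be detected.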
4.3. Navigation

To permit its displacement in the corridor, the floor must not be damaged and no obstacles should be present on the path. The CCD camera provides a very detailed description of the robot environment. The mobile robot can therefore be in one of the possible situations illustrated in Fig. 7.

The observed scene is segmented with the HT, and then the corridor border identification algorithm is started. This algorithm gives the polar coordinates (ρR, θR) and (ρL, θL) and the extremity coordinates (PR1, PR2, PL1 and PL2) of each detected corridor border. These coordinates are necessary for the calculation of the robot's position and for path execution.
Figure 7. The different positions of the mobile robot in the corridor.

Figure 8. Our laboratory's corridor.

Starting from its station, the vehicle follows a path that permits it to move toward the next target station while fixing the end of the corridor (Figs 8 and 9).
The target position (XDest, YDest) in the robot coordinate system is computed from the extremity coordinates of the detected corridor borders (PR1, PR2, PL1 and PL2) (Fig. 9). It is formulated by the following expressions.
If the two corridor borders are detected:

   XDest = (XPR1 + XPR2 + XPL1 + XPL2) / 4
                                                                (5)
   YDest = (YPR1 + YPR2 + YPL1 + YPL2) / 4
Figure 9. The navigation strategy.
If one corridor border is detected:

   XDest = (XP1 + XP2) / 2
                                                                (6)
   YDest = (YP1 + YP2) / 2 + (−1)^n · Larg

with

   n = 0 if the right corridor border is detected, 1 if the left corridor border is detected,

and Larg is half the robot width.
Three operations are therefore necessary for onboard control of the vehicle:

• a first deviation (θD1 or θG1) toward the target station in the corridor;
• the distance D to travel in order to arrive at this target station;
• a second deviation (θD2 or θG2) so that the vehicle fixes the corridor's end.
These operations are calculated from the following equations.
The first deviation θ1:

   θ1 = tan⁻¹(|YDest| / XDest),                                 (7)

taken as θG1 (left deviation) if YDest > 0 and as θD1 (right deviation) if YDest < 0.

The second deviation θ2:

   θ2 = θG2 (left deviation) or θD2 (right deviation).
If only the right corridor border (ρA, θA) is detected:

   θG2 = θA − θG1   if θG1 ≤ θA
   θG2 = θA + θD1   if θD1 ≤ θA                                 (8)
   θD2 = θG1 − θA   if θG1 > θA

If only the left corridor border (ρB, θB) is detected:

   θD2 = θG1 + π − θB   if θG1 ≤ π − θB
   θD2 = π − θB − θD1   if θD1 ≤ π − θB                         (9)
   θG2 = θD1 − π + θB   if θD1 > π − θB

If both the left and right corridor borders (ρA, θA) and (ρB, θB) are detected, let

   Δθ = (θB − θA) / 2

Then

   θG2 = θD1 + θB − Δθ − π/2   if θB > π − θA
   θG2 = θD1 + θA + Δθ − π/2   if θB < π − θA and (π/2 − θA − Δθ) < θD1
   θG2 = θB − θG1 − Δθ − π/2   if θB > π − θA and (θB − π/2 − Δθ) > θG1
                                                                (10)
   θD2 = π/2 − θA + θD1 − Δθ   if θB < π − θA and (π/2 − θA − Δθ) > θD1
   θD2 = θG1 − θB + Δθ + π/2   if θB > π − θA and (θB − π/2 − Δθ) < θG1
   θD2 = θG1 − θA − Δθ + π/2   if θB < π − θA
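The target-position and first-deviation rules, equations (5)–(7), can be sketched as follows. This is a hedged illustration: the function name and point layout are assumptions, and the target is assumed to lie ahead of the robot (XDest > 0).

```python
import math

def navigation_target(right=None, left=None, larg=0.3):
    """Target position (XDest, YDest) from the detected border extremities
    (eqs 5 and 6) and the first deviation theta1 (eq. 7). 'right' and
    'left' are pairs of extremity points ((x1, y1), (x2, y2)) in the
    robot frame; larg is half the robot width."""
    if right and left:
        pts = right + left
        x_dest = sum(p[0] for p in pts) / 4.0          # eq. (5)
        y_dest = sum(p[1] for p in pts) / 4.0
    else:
        p1, p2 = right if right else left
        n = 0 if right else 1                          # side offset sign
        x_dest = (p1[0] + p2[0]) / 2.0                 # eq. (6)
        y_dest = (p1[1] + p2[1]) / 2.0 + (-1) ** n * larg
    theta1 = math.atan(abs(y_dest) / x_dest)           # eq. (7)
    side = 'left' if y_dest > 0 else 'right'           # thetaG1 vs thetaD1
    return x_dest, y_dest, theta1, side

# Both borders detected: aim at the mean of the four extremities
x_d, y_d, th1, side = navigation_target(right=((1, -1), (3, -1)),
                                        left=((1, 2), (3, 2)))
```

The second deviation θ2 would then be selected from equations (8)–(10) according to which borders were found and on which side θ1 was taken.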
Once these three operations are executed, a new corridor image acquisition is made and then segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path, provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's consign position). The cycle restarts until the CESA vehicle arrives at its final station.

5. EXPERIMENTAL RESULTS

The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun SPARCstation. The current version of the program requires as input data the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time
Table 1.
The CESA navigation experimental results

Image   Polar coordinates of the detected borders    Course D   First deviation θ1   Second deviation θ2   Figure
        Right (ρR, θR)     Left (ρL, θL)             (mm)
Res1    98, 50°            117, 130°                 7168       1.17° on the left    1.66° on the right    10
Resd1   72, 50°            125, 120°                 7314       2.68° on the left    7.68° on the right    11
Resg1   115, 60°           100, 140°                 7180       0.78° on the right   10.78° on the left    12
Resg3   —                  −10, 120°                 7136       7.65° on the right   52.35° on the right   13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.

To illustrate the benefits of our approach, we have tested these procedures in CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with a step size of 10° for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10–13) taken at four different stations in the corridor. The results were obtained with the following parameters:

• the focal distance of the camera: λ = 25;
• the camera slant angle with the horizontal: α = 9°;
• the vertical distance between the camera and the ground: d = 550;
• the column number of the screen: rx = 464;
• the row number of the screen: ry = 256;
• the components of the vector from the reference point of the mobile robot to the image plane center of the camera: (dx, dy, dz) = (317, 0, 240).
In Figs 10–13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.

During this experiment the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed for solving this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and a better segmentation, and (ii) the algorithm
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.

Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.

developed in [6, 14, 15] using the stereo matching technique to compute the 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now at the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.

Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.

6. CONCLUSION

Guidance and control procedures enabling a robot vehicle to navigate accurately and autonomously around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented on the experimental robot vehicle CESA. They permit the vehicle to correct its drift by direct observation using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images, recognized as an important means of searching for objects and features in binary images.

During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed for solving this problem, e.g. algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES

1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARINBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, Signal Process. 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restauration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758MS86-75819 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS

A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.

Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman, and has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
Advanced Robotics Vol 16 No 1 pp 87ndash102 (2002)Oacute VSP and Robotics Society of Japan 2002
Short paper
Localization and guidance with an embarked cameraon a mobile robot
K ACHOUR curren and A O DJEKOUNERobotics and Arti cial Intelligence Laboratory Development Center of the Advanced Technologies128 Chemin Mohamed Gacem El-Madania 16075 Algiers Algeria
Received 28 December 2000 accepted 5 August 2001
AbstractmdashIn this paper a new method using the Hough transform to correct the drift of the mobilerobot CESA is presented The corrections are made by direct observation As an illustration analgorithm implemented is detailed and experiment results for CESA navigation in our laboratoryrsquoscorridor and trajectory generation are given
Keywords Robotic vision mobile robot Hough transform navigation strategy calibration
1 INTRODUCTION
Autonomous vehicles are being increased used in automated factories and othersuch reasonably well-structured environments They are capable of intelligentmotion (and action) without requiring either a guide to follow or teleoperatorcontrol
For practical applications in the real world sensory information is required forautonomous navigation in unstructured environments Ultrasonic sensors have beenemployed by many researchers for environment recognition and obstacle avoidanceThey are relatively inexpensive and easy to use but provide limited information ofthe environment Moreover the wide beam-opening angle and specular re ectionsof ultrasonic waves can cause measurement errors [1]
CCD cameras are analogous to human eyes which project the three-dimensional(3D) environment to a 2D plane They can provide much useful information to thevehicle for autonomous navigation Considerable work has been reported concern-ing the development of image sensing devices and vision-based navigation algo-rithms for intelligent autonomous vehicles Image processing system for extracting
To whom correspondence should be addressed E-mail kachourcdtadz
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
88 K Achour and A O Djekoune
(a)
(b)
Figure 1 (a) The CESA vehicle (b) Structure of the vehicle and its reference center coordinates
image features in real time was developed in [2] A feedback control scheme was de-veloped for high-speed motion of an autonomous vehicle A multisensor perceptionsystem for the mobile robot Hilare was developed in [3] In their system a scene ac-quisition module was developed using stereo cameras and a laser range nder Thisdynamic vision module was used for robot position correction and feature trackingSong and Tang proposed a sensor system exploiting double ultrasonic sensors and aCCD camera for environment perception [4] Several investigators have presenteddynamic feature tracking systems [3 5] In their approach the vision subsystemand control subsystem work in parallel to accomplish visual servoing On the otherhand image sequences have been used for motion detection Aloimonos proposeda design for obstacle avoidance of mobile robots based on image ow [6]
This paper presents a visual guidance control design with its experimentation forthe CESA autonomous vehicle (Fig 1a) In this experimentation visual guidancewas solved as a dynamic control problem where real-time image processing was
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 89
required to provide feedback signals As an illustration we present the CESA navi-gation experiment results in our laboratoryrsquos corridor and free trajectory generationThe drift corrections are made by direct observation and the rotationtranslationparameters are computed taking into account the robot neighborhood parametersThese parameters are obtained by applying the Hough transform (HT) on the sceneimages acquired by an embarked CCD camera We have used the HT because ithas long been recognized as a robust technique for detecting multi-dimensional fea-tures in an image and estimating their parameters It has many applications as mostmanufactured parts contain feature boundaries which can be described by regularcurves or straight lines The main advantage of the HT is that it is tolerant of gapsin feature boundary descriptions and is relatively unaffected by image noise
Algorithms for the autonomous vehicle navigation system are being developedand tested both on a computer-aided design workstation and on the experimentalCESA autonomous vehicle
The CESA vehicle shown in Fig 1a is an experimental vehicle designed tooperate autonomously within a structured of ce or factory environment It has beendesigned with several goals in mind First and foremost CESA is a testbed withwhich to experiment with such things as robot programming language and sensorintegrationdata fusion techniques Second CESA is designed to be low cost anddepend on only one camera and two active telemetric sensors
CESA is a tricycle con guration with a single front wheel which serves bothfor steering and driving the vehicle and two passive load-bearing rear wheels Theonboard control system uses active telemetric sensors and a vision system to providethe position and heading information needed to guide the vehicle along speci edpaths between stations The external or off-board part of the system consists of aglobal path planner which stores the structured environment data and produces thepath plans from one station to the next It also includes a radio communication linkbetween this supervisory controller and the vehicle
This paper is organized as follows: in Section 2 we present the CESA onboard control system. The vision system calibration and the embarked sensors are presented in Section 3. In Section 4 we show the line segment extraction algorithm based on the HT, and the algorithms developed and used for CESA navigation in our laboratory's corridor. Finally, we present the obtained results and a conclusion.
2. VEHICLE CONFIGURATION

2.1. Mechanical configuration
Figure 1b illustrates the basic structure of the CESA vehicle, a photograph of which is shown in Fig. 1a. It consists of a single steerable drive wheel at the front and two passive rear wheels, i.e. it is a tricycle configuration.
An optical encoder is mounted on the steering motor to provide feedback control. In addition, two optical encoders are mounted, one on each of the two rear independent non-load-bearing wheels.
Downloaded by [Dalhousie University] at 19:48, 12 November 2014
90 K Achour and A O Djekoune
The vehicle is powered by two sealed 12 V, 60 A batteries which, in the current configuration, provide a useful lifetime of approximately 7 h.
2.2. Computer hardware and software
Control of the vehicle is composed of several modules. Software development is performed on a SUN Sparc station running under UNIX. All software is written in the C language. The diagnostic data from the vehicle may be downloaded to the host via a radio communication link for analysis.
3. SENSORS
The vehicle is equipped with three different sensors: odometry on each wheel, ultrasonic and infrared sensors, and a vision system. These sensors provide all the navigational sensing information available to the vehicle. They deliver information which could be used to define the planning tasks strategy of the vehicle and which is needed to guide it along specified paths between stations without collisions.
3.1. Odometry
Odometry, i.e. the estimation of position and orientation by counting the revolutions of a wheel, dates back to the time of Archimedes [7]. The advantage of odometry is that it is both simple and inexpensive. However, it is prone to several sources of errors: (i) surface roughness and undulations may cause the distance to be over-estimated, (ii) wheel slippage can cause the distance to be under-estimated, and (iii) variations in load can distort the odometer wheels and introduce additional errors.
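As a minimal illustration of the principle (not the CESA implementation), the rolled distance recovered from an encoder is simply wheel circumference times revolutions; the wheel radius and encoder resolution below are assumed values:

```python
import math

def odometry_distance(ticks, wheel_radius_m=0.1, ticks_per_rev=1024):
    """Estimate the distance rolled by a wheel from encoder ticks.

    wheel_radius_m and ticks_per_rev are illustrative values; slippage,
    surface undulations and load-induced wheel distortion all bias this
    estimate, as noted above.
    """
    revolutions = ticks / ticks_per_rev
    return 2 * math.pi * wheel_radius_m * revolutions
```

The error sources listed above are exactly why such an estimate must be corrected by exteroceptive sensing (here, vision).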
3.2. Ultrasonic and infrared sensors
The CESA vehicle is equipped with two kinds of active telemetric sensors. The proximetric (infrared) sensors at the front detect the presence of an object situated at a distance of less than 1 m, and the ultrasonic sensors around the vehicle detect objects situated at greater distances from the mobile robot.
3.3. Vision system
The CESA vision system is composed of an iVC 500 CCD camera and a microcontroller-based data acquisition card. The camera delivers images in the CCIR standard (624 lines, 50 Hz). An AD9502 CM video digitizer performs the conversion of the analog video signal to digital data and separates the synchronization signals (horizontal and vertical).
The digitized image has a resolution of 464 × 256 pixels and it is transferred to the supervisory controller at 9600 baud. The 80C51 microcontroller controls the whole operation of the card. The acquisition is performed in real time, in 40 ms.
A binary edge-detection algorithm was implemented on a field-programmable gate array (FPGA) on this card.
Localization and guidance of a mobile robot 91
Figure 2 The camera support
Figure 3 Image transverse section
3.3.1. Camera calibration. The CCD camera is modeled by the pin-hole model [8]. It is held by a support and makes an angle α with the horizontal. Considering both the geometric model of the vehicle and the embarked camera (Figs 2 and 3), any binary edge image point (i, j) can be represented in the vehicle coordinate system by the point (x, y, z) by applying the following equation systems [9]:
\[
\begin{cases}
x = C_y \sin\alpha + C_z \cos\alpha + d_x,\\
y = C_x + d_y,\\
z = C_y \cos\alpha - C_z \sin\alpha + d_z,
\end{cases}
\tag{1}
\]

with

\[
\begin{cases}
C_x = -\dfrac{C_z - \lambda}{\lambda}\, l_x \left(\dfrac{j}{r_x} - 0.5\right),\\[2ex]
C_y = -\dfrac{C_z - \lambda}{\lambda}\, l_y \left(\dfrac{i}{r_y} - 0.5\right),\\[2ex]
C_z = \dfrac{d}{\sin\alpha} + \left(\dfrac{d}{\sin\alpha} - \lambda\right) \dfrac{\sin\alpha \cos\beta}{\sin(\alpha - \beta)},
\end{cases}
\tag{2}
\]
and

\[
\beta = \tan^{-1}\left[\frac{l_y}{\lambda}\left(\frac{i}{r_y} - 0.5\right)\right],
\]
where rx is the column number of the screen (pixels), ry is the row number of the screen (pixels), lx is the width of the image plane (m), ly is the height of the image plane (m), λ is the focal distance of the lens used, d is the vertical distance between the camera and the ground (mm), α is the camera slant angle measured from the horizontal, and (dx, dy, dz) are the components of the vector from the reference point of the mobile robot to the image plane center of the camera (mm).
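A Python sketch of this mapping under the reconstruction of (1) and (2) given above (the sign conventions follow that reconstruction, so treat them as assumptions; the numeric values in the sample call reuse the parameters reported in Section 5, while lx and ly are placeholders the paper does not specify):

```python
import math

def pixel_to_vehicle(i, j, alpha, d, lam, lx, ly, rx, ry, dx, dy, dz):
    """Map a binary-edge image pixel (i, j) to vehicle coordinates (x, y, z)
    via the intermediate camera-frame terms Cx, Cy, Cz of equation system (2)."""
    beta = math.atan((ly / lam) * (i / ry - 0.5))
    cz = d / math.sin(alpha) + (d / math.sin(alpha) - lam) * (
        math.sin(alpha) * math.cos(beta) / math.sin(alpha - beta))
    cx = -((cz - lam) / lam) * lx * (j / rx - 0.5)
    cy = -((cz - lam) / lam) * ly * (i / ry - 0.5)
    # Equation system (1): rotate by the slant angle and add the camera offset.
    x = cy * math.sin(alpha) + cz * math.cos(alpha) + dx
    y = cx + dy
    z = cy * math.cos(alpha) - cz * math.sin(alpha) + dz
    return x, y, z

# A pixel on the central image column (j = rx/2) gives Cx = 0, hence y = dy.
x, y, z = pixel_to_vehicle(100, 232, alpha=math.radians(9), d=550, lam=25,
                           lx=4.8, ly=3.6, rx=464, ry=256,
                           dx=317, dy=0, dz=240)
```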
4. LOCALIZATION AND NAVIGATION STRATEGY

4.1. Robot's localization
The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by a position control in the Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 are the result of the corridor's surface roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle from its desired position, which in our case is the middle of the corridor (the robot's position setpoint).
The attributes used are line segments. They are used for mobile robot localization and navigation, as in [9, 10], because they faithfully reproduce the robot environment evolution and present a better compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT to images directly at the output of the vision system, because the HT is a well-known method for the detection of parametric curves in binary images and is recognized as an important means of searching for objects and features in binary images. It converts
Figure 4 Position control
the problem of detecting collinear points or lines in feature space to the problem of locating concurrent or intersecting curves in parameter space.
Vehicle localization in the corridor consists of defining its reference center coordinates (XG, YG) (Fig. 1b) with regard to the extremities of both the left and right corridor borders, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor borders extraction
From binary edge images, the HT allows us to extract pertinent information like lines, curves, etc., according to the needs of the application. In our case, we seek to detect all line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:

\[
x \cos\theta + y \sin\theta = \rho,
\tag{3}
\]

where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are points on the line (Fig. 5).
The parameters of all lines going through a point (xi, yi) in the image constitute a sinusoidal curve in the parameter space, given by (Fig. 5):

\[
x_i \cos\theta + y_i \sin\theta = \rho.
\tag{4}
\]
The sinusoidal curves corresponding to the collinear points of a line with parameters (ρj, θj) will cross each other at the point (ρj, θj) in the parameter space. So line parameters can be obtained from the crossing-point of sinusoidal curves in the parameter space.
The HT of a point (xi, yi) is performed by computing ρ from (4) for all n values θk into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized in nρ intervals of width Δρ. In this way, a quantized sinusoidal curve is obtained and, along the quantized curve, each cell is incremented by one. This procedure is repeated for all points of the binary edge image.
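The voting procedure just described can be sketched as follows (the bin counts and ρ range are illustrative choices, not the paper's exact quantization, apart from the 10° θ step reported in Section 5):

```python
import math

def hough_accumulate(edge_points, n_theta=18, n_rho=200, rho_max=200.0):
    """Vote along each point's sinusoid rho = x cos(theta) + y sin(theta),
    eq. (4), with theta quantized into n_theta steps over [0, pi)."""
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in edge_points:
        for k in range(n_theta):
            theta = k * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)
            b = round(rho / rho_max * (n_rho - 1))  # quantize rho
            if 0 <= b < n_rho:
                acc[b][k] += 1
    return acc

# 20 collinear points on the vertical line x = 50 pile all their votes
# into a single accumulator cell (rho = 50, theta = 0).
acc = hough_accumulate([(50, y) for y in range(20)])
```

Searching this accumulator for local maxima then yields the (ρ, θ) parameters of the detected lines, as described next.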
The line segment extraction from the binary edge image is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the
Figure 5 The line segment parameter
Figure 6 The parameter space
parameter space (Fig. 6). So, for each peak, we eliminate the effect of all points belonging to this peak in the parameter space [9].
The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:

• An index: its position in the line segment list.
• ρ: its normal distance value relative to the image coordinate system.
• θ: its orientation value in this coordinate system.
• (x1, y1) and (x2, y2): its extremity coordinates.
We model the corridor by its two left and right borders situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:

• The corridor's borders are always oblique.
• The corridor's right border orientation θ belongs to the interval ]0, π/2[.
• The corridor's left border orientation θ belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor's border is minimal.
• The detected corridor's borders belong to the floor.
This selection is very important because a single detection error can generate a poor position control and therefore poor trajectory generation for the vehicle.
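A schematic rendering of these selection rules (the segment record and the precomputed floor flag are assumptions of this sketch; the paper does not detail its floor-membership test):

```python
import math
from dataclasses import dataclass

@dataclass
class Segment:
    rho: float       # normal distance in the image coordinate system
    theta: float     # orientation, radians
    x1: float
    y1: float
    x2: float
    y2: float
    on_floor: bool   # assumed precomputed floor-membership flag

def pick_borders(segments):
    """Return (right, left) corridor borders: oblique segments with
    theta in ]0, pi/2[ (right) or ]pi/2, pi[ (left), lying on the floor,
    keeping the minimal-rho candidate on each side."""
    right = [s for s in segments if 0 < s.theta < math.pi / 2 and s.on_floor]
    left = [s for s in segments if math.pi / 2 < s.theta < math.pi and s.on_floor]
    pick = lambda cands: min(cands, key=lambda s: s.rho) if cands else None
    return pick(right), pick(left)

r, l = pick_borders([
    Segment(98, math.radians(50), 0, 0, 1, 1, True),
    Segment(200, math.radians(50), 0, 0, 1, 1, True),   # farther: rejected
    Segment(117, math.radians(130), 0, 0, 1, 1, True),
])
```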
4.3. Navigation
To permit its displacement in the corridor, it is necessary that the floor is not damaged and that no obstacles are present on the path. The CCD camera provides a very detailed description of the robot environment. Therefore, the mobile robot can be in one of the possible situations illustrated in Fig. 7.
The observed scene is segmented with the HT and then the corridor borders identification algorithm is started. This algorithm gives the polar coordinates (ρR, θR, ρL and θL) and the extremity coordinates (PR1, PR2, PL1 and PL2) of each detected corridor border. These coordinates are necessary for the robot's position calculation and path execution.
Figure 7 The different positions of the mobile robot in the corridor
Figure 8. Our laboratory's corridor.
Starting from its station, the vehicle follows a path that permits it to move toward the next target station while fixing the end of the corridor (Figs 8 and 9).
The target position (XDest, YDest) in the robot coordinate system is computed according to the extremity coordinates of the detected corridor borders (PR1, PR2, PL1 and PL2) (Fig. 9). It is formulated by the following expressions.

If the two corridor borders are detected:

\[
X_{Dest} = \frac{X_{P_{R1}} + X_{P_{R2}} + X_{P_{L1}} + X_{P_{L2}}}{4},
\qquad
Y_{Dest} = \frac{Y_{P_{R1}} + Y_{P_{R2}} + Y_{P_{L1}} + Y_{P_{L2}}}{4}.
\tag{5}
\]
Figure 9 The navigation strategy
If one corridor border is detected:

\[
X_{Dest} = \frac{X_{P_1} + X_{P_2}}{2},
\qquad
Y_{Dest} = \frac{Y_{P_1} + Y_{P_2}}{2} + (-1)^n \, Larg,
\tag{6}
\]

with

\[
n = \begin{cases}
0 & \text{if the right corridor border is detected},\\
1 & \text{if the left corridor border is detected},
\end{cases}
\]
and Larg is half the robot thickness.

Three operations are therefore necessary for onboard control of the vehicle:
• A first deviation (θD1 or θG1) toward the target station in the corridor.
• The distance D to travel in order to arrive at this target station.
• A second deviation (θD2 or θG2) so that the vehicle fixes the corridor's end.
These operations are calculated from the following equations.

The first deviation θ1:

\[
\theta_1 = \tan^{-1}\left(\frac{|Y_{Dest}|}{X_{Dest}}\right)
=
\begin{cases}
\theta_{G1}\ \text{(left deviation)} & \text{if } Y_{Dest} > 0,\\
\theta_{D1}\ \text{(right deviation)} & \text{if } Y_{Dest} < 0.
\end{cases}
\tag{7}
\]
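Equations (5)–(7) can be sketched together in Python (the endpoint coordinates in the sample call are made-up values; Larg, half the robot thickness, is likewise an assumed number):

```python
import math

def target_position(right=None, left=None, larg=250.0):
    """Target station (XDest, YDest) from border extremities, eqs (5)-(6).
    `right`/`left` are endpoint pairs ((x1, y1), (x2, y2))."""
    if right and left:
        pts = list(right) + list(left)
        return sum(p[0] for p in pts) / 4, sum(p[1] for p in pts) / 4
    (x1, y1), (x2, y2) = right or left
    n = 0 if right else 1  # offset by larg toward the corridor middle, eq. (6)
    return (x1 + x2) / 2, (y1 + y2) / 2 + (-1) ** n * larg

def first_deviation(x_dest, y_dest):
    """First deviation angle theta_1 and its direction, eq. (7)."""
    theta1 = math.atan(abs(y_dest) / x_dest)
    return theta1, ('left' if y_dest > 0 else 'right')

xd, yd = target_position(right=((0, -100), (4000, -80)),
                         left=((0, 100), (4000, 80)))
theta1, side = first_deviation(xd, yd)
```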
The second deviation θ2:

\[
\theta_2 = \begin{cases}
\theta_{G2} & \text{(left deviation)},\\
\theta_{D2} & \text{(right deviation)}.
\end{cases}
\]
If only the right corridor border (ρA, θA) is detected:

\[
\theta_{G2} = \begin{cases}
\theta_A - \theta_{G1} & \text{if } \theta_{G1} \le \theta_A,\\
\theta_A + \theta_{D1} & \text{if } \theta_{D1} \le \theta_A,
\end{cases}
\qquad
\theta_{D2} = \theta_{G1} - \theta_A \quad \text{if } \theta_{G1} > \theta_A.
\tag{8}
\]
If only the left corridor border (ρB, θB) is detected:

\[
\theta_{D2} = \begin{cases}
\theta_{G1} + \pi - \theta_B & \text{if } \theta_{G1} \le \pi - \theta_B,\\
\pi - \theta_B - \theta_{D1} & \text{if } \theta_{D1} \le \pi - \theta_B,
\end{cases}
\qquad
\theta_{G2} = \theta_{D1} - \pi + \theta_B \quad \text{if } \theta_{D1} > \pi - \theta_B.
\tag{9}
\]
If both the left and right corridor borders (ρA, θA) and (ρB, θB) are detected, let

\[
\Delta\theta = \frac{\theta_B - \theta_A}{2}.
\]

Then

\[
\theta_{G2} = \begin{cases}
\theta_{D1} + \theta_B - \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A,\\[1.5ex]
\theta_{D1} + \theta_A + \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\dfrac{\pi}{2} - \theta_A - \Delta\theta\right) < \theta_{D1},\\[1.5ex]
\theta_B - \theta_{G1} - \Delta\theta - \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \dfrac{\pi}{2} - \Delta\theta\right) > \theta_{G1},
\end{cases}
\]

\[
\theta_{D2} = \begin{cases}
\dfrac{\pi}{2} - \theta_A + \theta_{D1} - \Delta\theta & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\dfrac{\pi}{2} - \theta_A - \Delta\theta\right) > \theta_{D1},\\[1.5ex]
\theta_{G1} - \theta_B + \Delta\theta + \dfrac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \dfrac{\pi}{2} - \Delta\theta\right) < \theta_{G1},\\[1.5ex]
\theta_{G1} - \theta_A - \Delta\theta + \dfrac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A.
\end{cases}
\tag{10}
\]
Once these three operations are executed, a new corridor image acquisition is made and then segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's position setpoint). The cycle restarts until the CESA vehicle arrives at its final station.
5. EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun Sparc station. The current version of the program requires as input data the state information from the vehicle. It produces as output the run operations downloaded via radio waves to the onboard real-time
Table 1. The CESA navigation experimental results.

Image   Right border (ρR, θR)   Left border (ρL, θL)   Course D (mm)   First deviation θ1    Second deviation θ2    Figure
Res1    98, 50°                 117, 130°              7168            1.17° on the left     1.66° on the right     10
Resd1   72, 50°                 125, 120°              7314            2.68° on the left     7.68° on the right     11
Resg1   115, 60°                100, 140°              7180            0.78° on the right    10.78° on the left     12
Resg3   —                       −10, 120°              7136            7.65° on the right    52.35° on the right    13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.
To illustrate the benefits of our approach, we have tested these procedures in CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with a 10° step size for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10–13), taken at four different stations in the corridor. The results have been obtained with the following parameters:
• The focal distance of the camera: λ = 25.
• The camera slant angle with the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550.
• The column number of the screen: rx = 464.
• The row number of the screen: ry = 256.
• The components of the vector from the reference point of the mobile robot to the image plane center of the camera: (dx, dy, dz) = (317, 0, 240).
In Figs 10–13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
During this experiment, the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor borders detection algorithm. Several algorithms have been developed for solving this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and better segmentation, and (ii) the algorithms
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.
Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.
developed in [6, 14, 15] using the stereo matching technique to compute the 3D model of the environment, to allow the mobile robot to control its trajectory and eventually have automatic guidance. These algorithms are now at the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.
Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6. CONCLUSION
Guidance and control procedures enabling a robot vehicle to accurately and autonomously navigate around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented in an experimental robot vehicle, 'CESA'. They permit the vehicle to correct its drift by direct observation using a unique embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images and is recognized as an important means of searching for objects and features in binary images.
During this procedure, the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor borders detection algorithm. Several algorithms have been developed for solving this problem, e.g. the algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES
1. P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Applicat. 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, IEEE J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARUNOBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, Signal Process. 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restoration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758MS86-75819 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS
A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993, and the Magister degree in signals and systems in real time from USTHB in 1997. Today, he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman. He has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
Figure 1. (a) The CESA vehicle. (b) Structure of the vehicle and its reference center coordinates.
image features in real time was developed in [2]. A feedback control scheme was developed for high-speed motion of an autonomous vehicle. A multisensor perception system for the mobile robot Hilare was developed in [3]. In their system, a scene acquisition module was developed using stereo cameras and a laser range finder. This dynamic vision module was used for robot position correction and feature tracking. Song and Tang proposed a sensor system exploiting double ultrasonic sensors and a CCD camera for environment perception [4]. Several investigators have presented dynamic feature tracking systems [3, 5]. In their approach, the vision subsystem and control subsystem work in parallel to accomplish visual servoing. On the other hand, image sequences have been used for motion detection. Aloimonos proposed a design for obstacle avoidance of mobile robots based on image flow [6].
This paper presents a visual guidance control design with its experimentation for the CESA autonomous vehicle (Fig. 1a). In this experimentation, visual guidance was solved as a dynamic control problem, where real-time image processing was required to provide feedback signals.
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 89
required to provide feedback signals As an illustration we present the CESA navi-gation experiment results in our laboratoryrsquos corridor and free trajectory generationThe drift corrections are made by direct observation and the rotationtranslationparameters are computed taking into account the robot neighborhood parametersThese parameters are obtained by applying the Hough transform (HT) on the sceneimages acquired by an embarked CCD camera We have used the HT because ithas long been recognized as a robust technique for detecting multi-dimensional fea-tures in an image and estimating their parameters It has many applications as mostmanufactured parts contain feature boundaries which can be described by regularcurves or straight lines The main advantage of the HT is that it is tolerant of gapsin feature boundary descriptions and is relatively unaffected by image noise
Algorithms for the autonomous vehicle navigation system are being developedand tested both on a computer-aided design workstation and on the experimentalCESA autonomous vehicle
The CESA vehicle shown in Fig 1a is an experimental vehicle designed tooperate autonomously within a structured of ce or factory environment It has beendesigned with several goals in mind First and foremost CESA is a testbed withwhich to experiment with such things as robot programming language and sensorintegrationdata fusion techniques Second CESA is designed to be low cost anddepend on only one camera and two active telemetric sensors
CESA is a tricycle con guration with a single front wheel which serves bothfor steering and driving the vehicle and two passive load-bearing rear wheels Theonboard control system uses active telemetric sensors and a vision system to providethe position and heading information needed to guide the vehicle along speci edpaths between stations The external or off-board part of the system consists of aglobal path planner which stores the structured environment data and produces thepath plans from one station to the next It also includes a radio communication linkbetween this supervisory controller and the vehicle
This paper is organized as follow in Section 2 we present the CESA onboardcontrol system The vision system calibration and the embarked sensors arepresented in Section 3 In Section 4 we show the line segment extraction algorithmbased on the HT and the developed and used algorithms for CESA navigation inour laboratoryrsquos corridor Finally we present the obtained results and a conclusion
2 VEHICLE CONFIGURATION
21 Mechanical con guration
Figure 1b illustrates the basic structure of the CESA vehicle a photograph of whichis shown in Fig 1a It consists of a single steerable drive wheel at the front and twopassive rear wheels ie it is a tricycle con guration
An optical encoder is mounted on the steering motor to provide feedback controlIn addition two optical encoders are mounted on each of two rear independentnon-load-bearing wheels
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
90 K Achour and A O Djekoune
The vehicle is powered by two sealed 12 V 60 A batteries which in the currentcon guration provide a useful lifetime of approximately 7 h
22 Computer hardware and software
Control of the vehicle is composed on several modules Software development isperformed on a SUN Spark station running under UNIX All software is written inC language The diagnostic data from the vehicle may be downloaded to the hostvia a radio communication link for analysis
3 SENSORS
The vehicle is equipped with three different sensors odometry on each wheelultrasonic and infrared sensors and a vision system These sensors provide all thenavigational sensing information available to the vehicle They deliver informationwhich could be used to de ne the planning tasks strategy of the vehicle and neededto guide it along speci ed paths between stations without collisions
31 Odometry
Odometry ie the estimation of position and orientation by counting the revolutionsof a wheel dates back to the time of Archimedes [7] The advantage of odometryis that it is both simple and inexpensive However it is prone to several sourcesof errors (i) surface roughness and undulations may cause the distance to be over-estimated (ii) wheel slippage can cause the distance to be under-estimated and(iii) variations in load can distort the odometer wheels and introduce additionalerrors
32 Ultrasonic and infrared sensors
The CESA vehicle is equipped with two active telemetric sensors The proximetricsensors (infrared) at the front detect the presence of an object situated at a distanceof less than 1 m and the ultrasonic sensors around detect objects situated at a moreimportant distance for the mobile robot
33 Vision system
The CESA vision system is composed of a camera iVC 500 CCD and a microcon-troller-based data acquisition card The camera delivers images in the CIIR standard(624 lines 50 Hz) A AD9502 CM video digitalizer performs the conversion ofthe analog video signal to digital data and separates the synchronization signals(horizontal and vertical)
The digitized image has a resolution of 464pound256 pixels and it is transferred to thesupervisory controller at 9600 baud The 80C51 microcontroller controls the wholeoperation of the card The acquisition is performed in real time in 40 ms
A binary edge-detection algorithm was implemented on a eld programmable gatearray (FPGA) on this card
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 91
Figure 2 The camera support
Figure 3 Image transverse section
331 Camera calibration The CCD camera is modeled on by the pin-holemodel [8] It is sustained by a support and makes an reg angle with the horizontalConsidering both the geometric model of the vehicle and the embarked camera(Figs 2 and 3) any binary edge image point i j could be represented in thevehicle coordinate system by the point x y z by applying the following equationsystems [9]
8lt
x D Cy curren sin reg C Cz curren cos reg C dx
y D Cx C dy
z D Cz curren cos reg iexcl Cz curren sin reg C dz
(1)
with
8gtgtgtgtgtgtgtgtlt
gtgtgtgtgtgtgtgt
Cx D iexclCz iexcl cedil
cedillx
sup3j
rx
iexcl 05
acute
Cy D iexclCz iexcl cedil
cedilly
sup3i
ry
iexcl 05
acute
Cz Dd
sin regC
sup3d
sin regiexcl cedil
acutesin reg curren cos macr
sinreg iexcl macr
(2)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
92 K Achour and A O Djekoune
and
macr D tgiexcl1
microly
cedil
sup3i
ry
iexcl 05
acutepara
where rx is the column number of the screen (pixel) ry is the row number of thescreen (pixel) lx is the width of image plan (m) ly is the height of image plan (m) cedil
is the focal distance of the used lentil d is the vertical distance between the cameraand the ground (mm) reg is the camera slant angle measured with the horizontal anddx dy dz is the component of the vector from the reference point of the mobilerobot to the image plan center of the camera (mm)
4. LOCALIZATION AND NAVIGATION STRATEGY

4.1. Robot's localization

The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by position control in Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 result from the corridor's surface roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle away from its desired position, which in our case is the middle of the corridor (the robot's position setpoint).

The attributes used are line segments. They serve for mobile robot localization and navigation, as in [9, 10], because they faithfully capture the evolution of the robot's environment and offer the best compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT directly to the output images of the vision system, because the HT is a well-known method for detecting parametric curves in binary images and has long been recognized as an important means of searching for objects and features in binary images. It converts

Figure 4. Position control.
Localization and guidance of a mobile robot 93
the problem of detecting collinear points or lines in feature space into the problem of locating concurrent or intersecting curves in parameter space.

Vehicle localization in the corridor consists of defining the robot's reference center coordinates (XG, YG) (Fig. 1b) with regard to the extremities of both the left and right corridor borders, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor border extraction

From binary edge images, the HT allows us to extract pertinent information such as lines, curves, etc., according to the needs of the application. In our case we seek to detect all line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:
x cos θ + y sin θ = ρ        (3)
where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are points on the line (Fig. 5).

The parameters of all lines going through a point (xi, yi) in the image constitute a sinusoidal curve in the parameter space, given by (Fig. 5):
xi cos θ + yi sin θ = ρ        (4)
The sinusoidal curves corresponding to the collinear points of a line with parameters (ρj, θj) will cross each other at the point (ρj, θj) in the parameter space. Line parameters can therefore be obtained from the crossing-point of sinusoidal curves in the parameter space.

The HT of a point (xi, yi) is performed by computing ρ from (4) for all n values θk into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized in nρ intervals of width ρk. In this way a quantized sinusoidal curve is obtained, and each cell along the quantized curve is incremented by one. This procedure is repeated for all points of the binary edge image.
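The quantized voting procedure just described can be sketched as follows (illustrative pure Python; the quantization steps and the rho_max bound are placeholders, not the authors' implementation):

```python
import math

def hough_accumulate(edge_points, n_theta=180, rho_step=1.0, rho_max=545.0):
    """Vote in (rho, theta) space for x*cos(theta) + y*sin(theta) = rho."""
    n_rho = int(2 * rho_max / rho_step) + 1
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in edge_points:
        for k in range(n_theta):
            theta = k * math.pi / n_theta          # quantized: 0 <= theta < pi
            rho = x * math.cos(theta) + y * math.sin(theta)
            # increment the cell of the quantized sinusoidal curve
            acc[int(round((rho + rho_max) / rho_step))][k] += 1
    return acc
```

Collinear edge points then pile their votes into one cell, which appears as a peak of the accumulator.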
The line segment extraction from the binary edge image is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the

Figure 5. The line segment parameters.
Figure 6 The parameter space
parameter space (Fig. 6). So for each peak, we eliminate the effect of all points belonging to this peak in the parameter space [9].

The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:

• An index: its position in the line segment list.
• ρ: its normal distance relative to the image coordinate system.
• θ: its orientation in this coordinate system.
• (x1, y1) and (x2, y2): its extremity coordinates.
We model the corridor by its two borders, left and right, situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:

• The corridor's borders are always oblique.
• The corridor's right border orientation θ belongs to the interval ]0, π/2[.
• The corridor's left border orientation θ belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor's border is minimal.
• The detected corridor's borders belong to the floor.

This selection is very important because one detection error can generate a poor position control and therefore poor trajectory generation for the vehicle.
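These conditions map onto a small filter over the extracted segment list (an illustrative sketch; the segment tuple layout (ρ, θ, extremity 1, extremity 2) is an assumption, not the paper's data structure, and the floor-membership test is omitted):

```python
import math

def identify_borders(segments):
    """Pick the corridor's right/left borders from (rho, theta, p1, p2) tuples.

    Right border: 0 < theta < pi/2; left border: pi/2 < theta < pi;
    among the candidates on each side, minimal |rho| wins.
    """
    right = [s for s in segments if 0.0 < s[1] < math.pi / 2]
    left = [s for s in segments if math.pi / 2 < s[1] < math.pi]

    def pick(cands):
        return min(cands, key=lambda s: abs(s[0])) if cands else None

    return pick(right), pick(left)
```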
4.3. Navigation

To permit its displacement in the corridor, the floor must not be damaged and no obstacles should be present on the path. The CCD camera provides a very detailed description of the robot environment. The mobile robot can therefore be in one of the possible situations illustrated in Fig. 7.

The observed scene is segmented with the HT and then the corridor border identification algorithm is started. This algorithm gives the polar coordinates (ρR, θR, ρL and θL) and the extremity coordinates (PR1, PR2, PL1 and PL2) of each detected corridor border. These coordinates are necessary for the robot's position calculation and path execution.
Figure 7. The different positions of the mobile robot in the corridor.

Figure 8. Our laboratory's corridor.
By referring to its station, the vehicle follows a path that permits it to move from its current station toward the next target station while facing the end of the corridor (Figs 8 and 9).

The target position (XDest, YDest) in the robot coordinate system is computed according to the extremity coordinates of the detected corridor borders (PR1, PR2, PL1 and PL2) (Fig. 9). It is formulated by the following expressions.

If two corridor borders are detected:

XDest = (X_PR1 + X_PR2 + X_PL1 + X_PL2)/4
YDest = (Y_PR1 + Y_PR2 + Y_PL1 + Y_PL2)/4        (5)
Figure 9 The navigation strategy
If one corridor border is detected:

XDest = (X_P1 + X_P2)/2
YDest = (Y_P1 + Y_P2)/2 + (−1)^n Larg        (6)

with

n = 0 if the right corridor border is detected,
n = 1 if the left corridor border is detected,

and Larg is half the robot's width.

Three operations are therefore necessary for onboard control of the vehicle:
• A first deviation (θD1 or θG1) toward the target station in the corridor.
• The distance D to travel in order to arrive at this target station.
• A second deviation (θD2 or θG2) so that the vehicle faces the corridor's end.
These operations are calculated from the following equations.

The first deviation θ1:

θ1 = tan⁻¹(|YDest| / XDest), taken as
  θG1 (left deviation) if YDest > 0,
  θD1 (right deviation) if YDest < 0.        (7)
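Equations (5)-(7) can be sketched as follows (illustrative only; the point conventions, the placeholder Larg value and the helper names are assumptions, not the paper's code):

```python
import math

def target_position(right=None, left=None, larg=250.0):
    """Target (XDest, YDest) from border extremities, eqs (5)-(6).

    `right`/`left` are ((x1, y1), (x2, y2)) extremity pairs or None;
    `larg` is half the robot's width (placeholder value, in mm).
    """
    if right and left:
        pts = [*right, *left]
        return (sum(p[0] for p in pts) / 4.0, sum(p[1] for p in pts) / 4.0)
    (p1, p2), n = (right, 0) if right else (left, 1)
    x = (p1[0] + p2[0]) / 2.0
    y = (p1[1] + p2[1]) / 2.0 + (-1) ** n * larg
    return x, y

def first_deviation(x_dest, y_dest):
    """Eq (7): magnitude of theta1 and its direction."""
    theta1 = math.atan(abs(y_dest) / x_dest)
    return theta1, ('left' if y_dest > 0 else 'right')
```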
The second deviation θ2:

θ2 = θG2 (left deviation) or θD2 (right deviation), as follows.
If only the right corridor border (ρA, θA) is detected:

θG2 = θA − θG1  if θG1 ≤ θA
θG2 = θA + θD1  if θD1 ≤ θA
θD2 = θG1 − θA  if θG1 > θA        (8)

If only the left corridor border (ρB, θB) is detected:

θD2 = θG1 + π − θB  if θG1 ≤ π − θB
θD2 = π − θB − θD1  if θD1 ≤ π − θB
θG2 = θD1 − π + θB  if θD1 > π − θB        (9)
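For the single-border cases, equations (8) and (9) reduce to a small decision function. The sketch below is illustrative only; where the paper leaves a case's side condition implicit, the corresponding branch is marked as assumed:

```python
import math

def second_deviation_single(theta_border, side, theta1, direction):
    """Second deviation (magnitude, direction) from eqs (8)-(9).

    theta_border: theta_A (right border) or theta_B (left border);
    side: which border was detected, 'right' or 'left';
    theta1, direction: first deviation magnitude and 'left'/'right'.
    """
    if side == 'right':                       # eq (8), border angle theta_A
        if direction == 'left':
            if theta1 <= theta_border:
                return theta_border - theta1, 'left'
            return theta1 - theta_border, 'right'
        # first deviation was to the right; theta_D1 <= theta_A assumed
        return theta_border + theta1, 'left'
    # eq (9), left border with angle theta_B
    limit = math.pi - theta_border
    if direction == 'left':
        return theta1 + limit, 'right'        # theta_G1 <= pi - theta_B assumed
    if theta1 <= limit:
        return limit - theta1, 'right'
    return theta1 - limit, 'left'
```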
If both the left and right corridor borders (ρA, θA) and (ρB, θB) are detected, let

Δθ = (θB − θA)/2.

θG2 = θD1 + θB − Δθ − π/2  if θB > π − θA
θG2 = θD1 + θA + Δθ − π/2  if θB < π − θA and (π/2 − θA − Δθ) < θD1
θG2 = θB − θG1 − Δθ − π/2  if θB > π − θA and (θB − π/2 − Δθ) > θG1

θD2 = π/2 − θA + θD1 − Δθ  if θB < π − θA and (π/2 − θA − Δθ) > θD1
θD2 = θG1 − θB + Δθ + π/2  if θB > π − θA and (θB − π/2 − Δθ) < θG1
θD2 = θG1 − θA − Δθ + π/2  if θB < π − θA        (10)
Once these three operations are executed, a new image of the corridor is acquired and then segmented. After identification of the corridor's borders, the CESA vehicle localizes itself again and then follows a path, provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's position setpoint). The cycle restarts until the CESA vehicle arrives at its final station.
5. EXPERIMENTAL RESULTS

The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun Sparc station. The current version of the program requires as input the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time
Table 1.
The CESA navigation experimental results

Image   Right border (ρR, θR)   Left border (ρL, θL)   Course D (mm)   First deviation θ1      Second deviation θ2     Figure
Res1    98, 50°                 117, 130°              7168            1.17° on the left       1.66° on the right      10
Resd1   72, 50°                 125, 120°              7314            2.68° on the left       7.68° on the right      11
Resg1   115, 60°                100, 140°              7180            0.78° on the right      10.78° on the left      12
Resg3   —                       −10, 120°              7136            7.65° on the right      52.35° on the right     13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.

To illustrate the benefits of our approach, we have tested these procedures in CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with 10° step sizes for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10-13), taken at four different stations in the corridor. The results have been obtained with the following parameters:

• The focal distance of the camera: λ = 25.
• The camera slant angle with the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550 mm.
• The column number of the screen: rx = 464.
• The row number of the screen: ry = 256.
• The components of the vector from the reference point of the mobile robot to the image plane center of the camera: (dx, dy, dz) = (317, 0, 240).
In Figs 10-13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square and the orientation that the vehicle must have once at this target station by a small triangle.

During these experiments the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed for solving this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and better segmentation, and (ii) the algorithms
Figure 10 Image taken by the camera when CESA is in the middle of the corridor
Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.
developed in [6, 14, 15] using the stereo matching technique to compute the 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now at the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.

Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6. CONCLUSION

Guidance and control procedures that enable a robot vehicle to navigate accurately and autonomously around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented on the experimental robot vehicle 'CESA'. They permit the vehicle to correct its drift by direct observation using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images and has long been recognized as an important means of searching for objects and features in binary images.

During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to solve this problem, e.g. algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES

1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44-56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223-240 (1988).
3. P. A. Grandjean, Robert and V. De Saint, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681-687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372-379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404-417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843-858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188-198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARINBU-4, in Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525-530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11-15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, J. Signal Process. 81, 1295-1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restoration by Markovian approach, in Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343-350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758MS86-75819 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS

A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.

Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman, and has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
required to provide feedback signals. As an illustration, we present the CESA navigation experimental results in our laboratory's corridor and free trajectory generation. The drift corrections are made by direct observation, and the rotation/translation parameters are computed taking into account the robot neighborhood parameters. These parameters are obtained by applying the Hough transform (HT) to the scene images acquired by an embarked CCD camera. We have used the HT because it has long been recognized as a robust technique for detecting multi-dimensional features in an image and estimating their parameters. It has many applications, as most manufactured parts contain feature boundaries which can be described by regular curves or straight lines. The main advantage of the HT is that it is tolerant of gaps in feature boundary descriptions and is relatively unaffected by image noise.

Algorithms for the autonomous vehicle navigation system are being developed and tested both on a computer-aided design workstation and on the experimental CESA autonomous vehicle.

The CESA vehicle, shown in Fig. 1a, is an experimental vehicle designed to operate autonomously within a structured office or factory environment. It has been designed with several goals in mind. First and foremost, CESA is a testbed with which to experiment with such things as robot programming languages and sensor integration/data fusion techniques. Second, CESA is designed to be low cost and to depend on only one camera and two active telemetric sensors.

CESA is a tricycle configuration with a single front wheel, which serves both for steering and driving the vehicle, and two passive load-bearing rear wheels. The onboard control system uses active telemetric sensors and a vision system to provide the position and heading information needed to guide the vehicle along specified paths between stations. The external or off-board part of the system consists of a global path planner, which stores the structured environment data and produces the path plans from one station to the next. It also includes a radio communication link between this supervisory controller and the vehicle.

This paper is organized as follows: in Section 2 we present the CESA onboard control system. The vision system calibration and the embarked sensors are presented in Section 3. In Section 4 we show the line segment extraction algorithm based on the HT and the algorithms developed and used for CESA navigation in our laboratory's corridor. Finally, we present the obtained results and a conclusion.
2. VEHICLE CONFIGURATION

2.1. Mechanical configuration

Figure 1b illustrates the basic structure of the CESA vehicle, a photograph of which is shown in Fig. 1a. It consists of a single steerable drive wheel at the front and two passive rear wheels, i.e. it is a tricycle configuration.

An optical encoder is mounted on the steering motor to provide feedback control. In addition, two optical encoders are mounted on each of the two rear independent non-load-bearing wheels.
The vehicle is powered by two sealed 12 V, 60 A batteries, which in the current configuration provide a useful lifetime of approximately 7 h.

2.2. Computer hardware and software

Control of the vehicle is composed of several modules. Software development is performed on a SUN Sparc station running under UNIX. All software is written in the C language. The diagnostic data from the vehicle may be downloaded to the host via a radio communication link for analysis.
3. SENSORS

The vehicle is equipped with three different kinds of sensors: odometry on each wheel, ultrasonic and infrared sensors, and a vision system. These sensors provide all the navigational sensing information available to the vehicle. They deliver information which can be used to define the planning task strategy of the vehicle and which is needed to guide it along specified paths between stations without collisions.

3.1. Odometry

Odometry, i.e. the estimation of position and orientation by counting the revolutions of a wheel, dates back to the time of Archimedes [7]. The advantage of odometry is that it is both simple and inexpensive. However, it is prone to several sources of error: (i) surface roughness and undulations may cause the distance to be over-estimated, (ii) wheel slippage can cause the distance to be under-estimated, and (iii) variations in load can distort the odometer wheels and introduce additional errors.
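As a hedged sketch of this idea (not CESA's actual odometry code; the wheel radius, track and encoder resolution are invented placeholders), a differential dead-reckoning update from the two rear-wheel encoders might look like:

```python
import math

def odometry_step(x, y, heading, ticks_left, ticks_right,
                  ticks_per_rev=1024, wheel_radius=0.1, track=0.5):
    """Dead-reckoning pose update from two rear-wheel encoder counts.

    All geometric parameters are illustrative placeholders (m, rad).
    """
    dl = 2 * math.pi * wheel_radius * ticks_left / ticks_per_rev
    dr = 2 * math.pi * wheel_radius * ticks_right / ticks_per_rev
    d = (dl + dr) / 2.0                 # distance travelled by the midpoint
    dtheta = (dr - dl) / track          # heading change
    x += d * math.cos(heading + dtheta / 2.0)
    y += d * math.sin(heading + dtheta / 2.0)
    return x, y, heading + dtheta
```

The error sources listed above enter through dl and dr: slippage and surface undulations corrupt the tick-to-distance conversion, which is why the vision system is needed to correct the drift.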
3.2. Ultrasonic and infrared sensors

The CESA vehicle is equipped with two kinds of active telemetric sensors. The proximetric (infrared) sensors at the front detect the presence of an object situated at a distance of less than 1 m, and the ultrasonic sensors around the vehicle detect objects situated at greater distances from the mobile robot.
3.3. Vision system

The CESA vision system is composed of an iVC 500 CCD camera and a microcontroller-based data acquisition card. The camera delivers images in the CCIR standard (625 lines, 50 Hz). An AD9502CM video digitizer performs the conversion of the analog video signal into digital data and separates the synchronization signals (horizontal and vertical).

The digitized image has a resolution of 464 × 256 pixels and is transferred to the supervisory controller at 9600 baud. An 80C51 microcontroller controls the whole operation of the card. The acquisition is performed in real time, in 40 ms.

A binary edge-detection algorithm was implemented on a field-programmable gate array (FPGA) on this card.
Figure 2 The camera support
Figure 3 Image transverse section
3.3.1. Camera calibration. The CCD camera is modeled by the pin-hole model [8]. It is sustained by a support and makes an angle α with the horizontal. Considering both the geometric model of the vehicle and of the embarked camera (Figs 2 and 3), any binary edge image point (i, j) can be represented in the vehicle coordinate system by the point (x, y, z) by applying the following equation systems [9]:

x = Cy sin α + Cz cos α + dx
y = Cx + dy
z = Cy cos α − Cz sin α + dz        (1)

with

Cx = −((Cz − λ)/λ) lx (j/rx − 0.5)
Cy = −((Cz − λ)/λ) ly (i/ry − 0.5)
Cz = d/sin α + (d/sin α − λ) (sin α cos β)/sin(α − β)        (2)
92 K Achour and A O Djekoune
and
macr D tgiexcl1
microly
cedil
sup3i
ry
iexcl 05
acutepara
where rx is the column number of the screen (pixel) ry is the row number of thescreen (pixel) lx is the width of image plan (m) ly is the height of image plan (m) cedil
is the focal distance of the used lentil d is the vertical distance between the cameraand the ground (mm) reg is the camera slant angle measured with the horizontal anddx dy dz is the component of the vector from the reference point of the mobilerobot to the image plan center of the camera (mm)
4 LOCALIZATION AND NAVIGATION STRATEGY
41 Robotrsquos localization
The current vehicle position is obtained by interpreting the visual informationextracted from binary edge images The vehicle displacement is then insured bya position control in the Cartesian space (Fig 4) The control position is realized bylsquostatic look and moversquo [9] The measurement errors in Fig 4 are the result of thecorridorrsquos surface roughness and undulations wheel slippage and active telemetricsensor interference These errors drift the vehicle from its desired position whichin our case is the middle of the corridor (robotrsquos position consign)
The used attributes are the segments They are used for mobile robot localizationand navigation as in [9 10] because they faithfully produce the robot environmentevolution and present a better compromise among the different primitives in termsof hardiness and time processing The segments are obtained by the HT on imagesdirectly at the output of the vision system because it is a well-known methodfor detection of parametric curves in binary images and it was recognized as animportant means of searching for objects and features in binary images It converts
Figure 4 Position control
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 93
the problem of detecting collinear points or lines in feature space to the problemof locating concurrent or intersecting curves in parameter space
Vehicle localization in the corridor consists of de ning its reference centercoordinates (XG YG (Fig 1b) with regard to both the left and right corridor bordersextremities extracted from the scene image by using the two above-mentionedequation systems (1) and (2)
42 Image segmentation and corridor borders extraction
From binary edge images the HT allows us to extract pertinent information likelines curves etc according to the needs of applications In our case we seekto detect all line segments contained in an image The HT used in the normalparameterization of a line in an image given by [11 12]
x cos micro C y sin micro D frac12 (3)
where frac12 is the normal distance of the line to the origin micro is the angle between frac12 andthe x-axis and x y are points on the line (Fig 5)
The parameters of all lines going through a point xi yi in the image constitutea sinusoidal curve in the parameter space given by (Fig 5)
xi cos micro C yi sin micro D frac12 (4)
The sinusoidal curves corresponding to collinear points of a line with parametersfrac12j microj will cross each other at the point frac12j microj in the parameter space So lineparameters can be obtained from the crossing-point of sinusoidal curves in theparameter space
The HT of a point xi yi is performed by computing frac12 from (4) for all n valuesof microk into which micro is quantized 0 6 micro lt frac14 The values of frac12 are quantized in nfrac12
intervals of width frac12k In this way a quantized sinusoidal curve is obtained and alongthe quantized curve each cell is incremented with one lsquo1rsquo value This procedure isrepeated for all points of the binary edge image
The line segment extraction from the binary edge image is performed by searchingthe collinear points in the binary edge image These points show up as peaks in the
Figure 5 The line segment parameter
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
94 K Achour and A O Djekoune
Figure 6 The parameter space
parameter space (Fig 6) So for each peak we eliminate the effect of all pointsbelonging to this peak in the parameter space [9]
The result of the line segment extraction is a line segment list where each linesegment is stored with the following attributes
sup2 An index its position in the line segments list
sup2 frac12 its normal distance value relative to the image coordinate system
sup2 micro its orientation value in this coordinate system
sup2 x1 y1 and x2 y2 its extremities coordinates
We model the corridor by its two left and right borders situated at the corridorrsquoswalls (Fig 7) Their identi cation among other segments contained in the observedscene must satisfy the following conditions
sup2 The corridorrsquos borders are always oblique
sup2 The corridorrsquos right border orientation micro belongs to the intervalcurren0 frac14=2
pound
sup2 The corridorrsquos left border orientation micro belongs to the intervalcurrenfrac14=2 frac14
pound
sup2 The normal distance frac12 of the corridorrsquos border is minimal
sup2 The detected corridorrsquos borders belong to the oor
This selection is very important because one detection error can generate a poorcontrol position and therefore poor trajectory generation for the vehicle
43 Navigation
To permit its displacement in the corridor it is necessary that the oor should not bedamaged and no obstacles should be present on the path The CCD camera providesa very detailed description of the robot environment Therefore the mobile robotcan be in one of the possible situations that is illustrated in Fig 7
The observed scene is segmented with the HT and then the corridor bordersidenti cation algorithm is started This algorithm gives the polar coordinates (frac12RmicroR frac12L and microL) and the extremities coordinates (PR1 PR2 PL1 and PL2 ) of eachdetected corridorrsquos border These coordinates are necessary for the robotrsquos positioncalculation and path execution
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 95
Figure 7 The different positions of the mobile robot in the corridor
Figure 8 Our laboratoryrsquos corridor
By referring from its station the vehicle follows a path that permits it to movefrom its station toward the next target station by xing the end of the corridor(Figs 8 and 9)
The target position XDest YDest in the robot coordinate system is computedaccording to the detected corridorrsquos borders coordinate extremities (PR1 PR2 PL1
and PL2 ) (Fig 9) It is formulated by the following expressionIf two corridorrsquos borders are detected
XDest DXPR1 C XPR2 C XPL1 C XPL2
4
YDest DY PR1 C Y PR2 C Y PL1 C Y PL2
4
(5)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
96 K Achour and A O Djekoune
Figure 9 The navigation strategy
If one corridorrsquos border is detected
XDest DXP1 C XP2
2
YDest DY P1 C Y P2
2C iexcl1nLarg
(6)
with
n Draquo
0 if the right cooridorrsquos border is detected1 if the left cooridorrsquos border is detected
and Larg is the half robot thicknessThree operations are therefore necessary for onboard control of the vehicle
sup2 A rst deviation (microD1 or microG1) toward the target station in the corridor
sup2 The distance D to travel in order to arrive at this target station
sup2 A second deviation (microD2 or microG2) so that the vehicle xes the corridorrsquos end
These operations are calculated from the following equationsThe rst deviation micro1
micro1 D tgiexcl1
AacuteshyshyYDest
shyshy
XDest
Draquo
microG1 Left deviation if YDest gt 0microD1 Right deviation if YDest lt 0
(7)
The second deviation micro2
micro2 Draquo
microG2 Left deviationmicroD2 Right deviation
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both the left and right corridor borders (ρ_A, θ_A) and (ρ_B, θ_B) are detected, let

$$\Delta\theta = \frac{\theta_B - \theta_A}{2}.$$

Then

$$\theta_{G2} = \begin{cases} \theta_{D1} + \theta_B - \Delta\theta - \frac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \\[4pt] \theta_{D1} + \theta_A + \Delta\theta - \frac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\frac{\pi}{2} - \theta_A - \Delta\theta\right) < \theta_{D1} \\[4pt] \theta_B - \theta_{G1} - \Delta\theta - \frac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \frac{\pi}{2} - \Delta\theta\right) > \theta_{G1} \end{cases}$$

$$\theta_{D2} = \begin{cases} \frac{\pi}{2} - \theta_A + \theta_{D1} - \Delta\theta & \text{if } \theta_B < \pi - \theta_A \text{ and } \left(\frac{\pi}{2} - \theta_A - \Delta\theta\right) > \theta_{D1} \\[4pt] \theta_{G1} - \theta_B + \Delta\theta + \frac{\pi}{2} & \text{if } \theta_B > \pi - \theta_A \text{ and } \left(\theta_B - \frac{\pi}{2} - \Delta\theta\right) < \theta_{G1} \\[4pt] \theta_{G1} - \theta_A - \Delta\theta + \frac{\pi}{2} & \text{if } \theta_B < \pi - \theta_A \end{cases} \tag{10}$$
Once these three operations are executed, a new image of the corridor is acquired and then segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path provided by the off-board part of the system, which always keeps it in the middle of the corridor (the robot's consign position). The cycle restarts until the CESA vehicle arrives at its final station.
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun SPARC station. The current version of the program requires as input data the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time
Table 1. The CESA navigation experimental results.

Image   Right border (ρ_R, θ_R)   Left border (ρ_L, θ_L)   Course D (mm)   First deviation θ₁    Second deviation θ₂    Figure
Res1    (98, 50°)                 (117, 130°)              7168            1.17° on the left     1.66° on the right     10
Resd1   (72, 50°)                 (125, 120°)              7314            2.68° on the left     7.68° on the right     11
Resg1   (115, 60°)                (100, 140°)              7180            0.78° on the right    10.78° on the left     12
Resg3   —                         (−10, 120°)              7136            7.65° on the right    52.35° on the right    13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.
To illustrate the benefits of our approach, we tested these procedures during CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with a step size of 10° for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10–13), taken at four different stations in the corridor. The results were obtained with the following parameters:
• The focal distance of the camera: λ = 25.
• The camera slant angle with the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550 mm.
• The column number of the screen: r_x = 464.
• The row number of the screen: r_y = 256.
• The vector from the reference point of the mobile robot to the image plane center of the camera: (d_x, d_y, d_z) = (317, 0, 240) mm.
In Figs 10–13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
During this experiment, the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem: (i) the algorithm developed in [13] to restore gray-level images in order to obtain better edge images and better segmentation, and (ii) the algorithm
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.

Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.
developed in [6, 14, 15], which uses the stereo matching technique to compute a 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now in the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.

Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6 CONCLUSION
Guidance and control procedures enabling a robot vehicle to accurately and autonomously navigate around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented on an experimental robot vehicle, 'CESA'. They permit the vehicle to correct its drift by direct observation, using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images, recognized as an important means of searching for objects and features in binary images.
During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem, e.g. algorithms to restore the observed scene and to compute a 3D model of the mobile robot environment.
REFERENCES
1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision and Applications 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARUNOBU-4, in Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, J. Signal Process. 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restoration by Markovian approach, in Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. MS86-758 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS
A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman. He has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
The vehicle is powered by two sealed 12 V, 60 A batteries, which in the current configuration provide a useful lifetime of approximately 7 h.
2.2. Computer hardware and software
Control of the vehicle is composed of several modules. Software development is performed on a Sun SPARC station running under UNIX. All software is written in the C language. The diagnostic data from the vehicle may be downloaded to the host via a radio communication link for analysis.
3 SENSORS
The vehicle is equipped with three different sensors: odometry on each wheel, ultrasonic and infrared sensors, and a vision system. These sensors provide all the navigational sensing information available to the vehicle. They deliver information which could be used to define the planning tasks strategy of the vehicle, and which is needed to guide it along specified paths between stations without collisions.
3.1. Odometry
Odometry, i.e. the estimation of position and orientation by counting the revolutions of a wheel, dates back to the time of Archimedes [7]. The advantage of odometry is that it is both simple and inexpensive. However, it is prone to several sources of error: (i) surface roughness and undulations may cause the distance to be over-estimated, (ii) wheel slippage can cause the distance to be under-estimated, and (iii) variations in load can distort the odometer wheels and introduce additional errors.
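The paper gives no odometry equations; a generic differential-drive dead-reckoning update (not CESA-specific, all names hypothetical) looks like this:

```python
import math

def odometry_step(x, y, heading, d_left, d_right, track_width):
    """One dead-reckoning update from incremental wheel distances.

    d_left/d_right are the distances covered by each wheel since the
    last update (wheel revolutions times wheel circumference), and
    track_width is the wheel separation.
    """
    d_center = (d_left + d_right) / 2.0          # distance of the robot center
    d_theta = (d_right - d_left) / track_width   # change of heading
    # Integrate along the mean heading of the step:
    x += d_center * math.cos(heading + d_theta / 2.0)
    y += d_center * math.sin(heading + d_theta / 2.0)
    return x, y, heading + d_theta

# Straight 1 m run: both wheels advance equally, heading unchanged.
x, y, th = odometry_step(0.0, 0.0, 0.0, 1.0, 1.0, 0.5)
```

The error sources listed above enter exactly here: any systematic bias in d_left/d_right (slippage, load, surface) accumulates without bound, which is why the vision system is needed to correct the drift.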
3.2. Ultrasonic and infrared sensors
The CESA vehicle is equipped with two kinds of active telemetric sensors: proximity (infrared) sensors at the front, which detect the presence of an object situated at a distance of less than 1 m, and ultrasonic sensors all around, which detect objects situated at a greater distance from the mobile robot.
3.3. Vision system
The CESA vision system is composed of an iVC 500 CCD camera and a microcontroller-based data acquisition card. The camera delivers images in the CCIR standard (624 lines, 50 Hz). An AD9502CM video digitizer performs the conversion of the analog video signal to digital data and separates the synchronization signals (horizontal and vertical).
The digitized image has a resolution of 464 × 256 pixels and is transferred to the supervisory controller at 9600 baud. The 80C51 microcontroller controls the whole operation of the card. The acquisition is performed in real time, in 40 ms.
A binary edge-detection algorithm was implemented on a field programmable gate array (FPGA) on this card.
Figure 2. The camera support.

Figure 3. Image transverse section.
3.3.1. Camera calibration. The CCD camera is modeled by the pin-hole model [8]. It is sustained by a support and makes an angle α with the horizontal. Considering both the geometric model of the vehicle and the embarked camera (Figs 2 and 3), any binary edge image point (i, j) can be represented in the vehicle coordinate system by a point (x, y, z) by applying the following equation systems [9]:
$$\begin{cases} x = C_y \sin\alpha + C_z \cos\alpha + d_x \\ y = C_x + d_y \\ z = C_y \cos\alpha - C_z \sin\alpha + d_z \end{cases} \tag{1}$$

with

$$\begin{cases} C_x = -\dfrac{C_z - \lambda}{\lambda}\, l_x \left(\dfrac{j}{r_x} - 0.5\right) \\[8pt] C_y = -\dfrac{C_z - \lambda}{\lambda}\, l_y \left(\dfrac{i}{r_y} - 0.5\right) \\[8pt] C_z = \dfrac{d}{\sin\alpha} + \left(\dfrac{d}{\sin\alpha} - \lambda\right) \dfrac{\sin\alpha \cos\beta}{\sin(\alpha - \beta)} \end{cases} \tag{2}$$
and

$$\beta = \tan^{-1}\left[\frac{l_y}{\lambda}\left(\frac{i}{r_y} - 0.5\right)\right],$$

where r_x is the column number of the screen (pixels), r_y is the row number of the screen (pixels), l_x is the width of the image plane (m), l_y is the height of the image plane (m), λ is the focal distance of the lens used, d is the vertical distance between the camera and the ground (mm), α is the camera slant angle measured from the horizontal, and (d_x, d_y, d_z) are the components of the vector from the reference point of the mobile robot to the image plane center of the camera (mm).
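Equation systems (1) and (2) can be transcribed directly. The sketch below assumes consistent units for λ, d, l_x and l_y (the text mixes m and mm), and the numerical values in the usage line are arbitrary test numbers, not the paper's calibration:

```python
import math

def pixel_to_robot(i, j, *, alpha, lam, d, lx, ly, rx, ry, dx, dy, dz):
    """Back-project an edge pixel (i, j) to robot coordinates (x, y, z).

    Transcription of (1) and (2): lam = focal distance, alpha = camera
    slant angle, d = camera height over the ground, lx/ly = image-plane
    size, rx/ry = image resolution, (dx, dy, dz) = vector from the robot
    reference point to the image-plane center.
    """
    beta = math.atan((ly / lam) * (i / ry - 0.5))
    cz = d / math.sin(alpha) + (d / math.sin(alpha) - lam) * \
         math.sin(alpha) * math.cos(beta) / math.sin(alpha - beta)
    cx = -(cz - lam) / lam * lx * (j / rx - 0.5)
    cy = -(cz - lam) / lam * ly * (i / ry - 0.5)
    x = cy * math.sin(alpha) + cz * math.cos(alpha) + dx
    y = cx + dy
    z = cy * math.cos(alpha) - cz * math.sin(alpha) + dz
    return x, y, z

# Center pixel of a 464 x 256 image; lx/ly are placeholder values.
x, y, z = pixel_to_robot(128, 232, alpha=math.radians(9), lam=25.0, d=550.0,
                         lx=4.7, ly=3.5, rx=464, ry=256,
                         dx=317.0, dy=0.0, dz=240.0)
```

A useful sanity check: the image-center pixel gives C_x = C_y = 0, so it lands on the robot's longitudinal axis (y = d_y), ahead of and below the camera offset.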
4 LOCALIZATION AND NAVIGATION STRATEGY
4.1. Robot's localization
The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by a position control in the Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 are the result of the corridor's surface roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle from its desired position, which in our case is the middle of the corridor (robot's consign position).
The attributes used are segments. They are used for mobile robot localization and navigation, as in [9, 10], because they faithfully reproduce the evolution of the robot environment and present the best compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT to images directly at the output of the vision system, because the HT is a well-known method for the detection of parametric curves in binary images and was recognized as an important means of searching for objects and features in binary images. It converts
Figure 4. Position control.
the problem of detecting collinear points or lines in feature space into the problem of locating concurrent or intersecting curves in parameter space.
Vehicle localization in the corridor consists of defining its reference center coordinates (X_G, Y_G) (Fig. 1b) with regard to both the left and right corridor border extremities, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor borders extraction
From binary edge images, the HT allows us to extract pertinent information such as lines, curves, etc., according to the needs of the application. In our case, we seek to detect all line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:
$$x \cos\theta + y \sin\theta = \rho \tag{3}$$
where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are the points on the line (Fig. 5).
The parameters of all lines going through a point (x_i, y_i) in the image constitute a sinusoidal curve in the parameter space (Fig. 5), given by:
$$x_i \cos\theta + y_i \sin\theta = \rho \tag{4}$$
The sinusoidal curves corresponding to the collinear points of a line with parameters (ρ_j, θ_j) will cross each other at the point (ρ_j, θ_j) in the parameter space. So line parameters can be obtained from the crossing-point of sinusoidal curves in the parameter space.
The HT of a point (x_i, y_i) is performed by computing ρ from (4) for all n values of θ_k into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized into n_ρ intervals of width ρ_k. In this way a quantized sinusoidal curve is obtained, and along the quantized curve each cell is incremented by one. This procedure is repeated for all points of the binary edge image.
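The accumulation procedure above can be sketched as follows, using a 10° θ step as in the experiments; the ρ range and bin count are illustrative:

```python
import math

def hough_accumulate(points, n_theta=18, rho_max=200.0, n_rho=200):
    """Accumulate edge points into a quantized (rho, theta) parameter space.

    theta is quantized into n_theta values in [0, pi) (18 cells = 10-degree
    step), rho into n_rho intervals over [-rho_max, rho_max]; each point
    votes once per theta cell, following eq. (4).
    """
    acc = [[0] * n_theta for _ in range(n_rho)]
    for x, y in points:
        for k in range(n_theta):
            theta = k * math.pi / n_theta
            rho = x * math.cos(theta) + y * math.sin(theta)   # eq. (4)
            r = int((rho + rho_max) * n_rho / (2 * rho_max))  # signed-rho bin
            if 0 <= r < n_rho:
                acc[r][k] += 1
    return acc

# Ten points on the vertical line x = 40 all vote into the same cell:
pts = [(40.0, float(y)) for y in range(0, 50, 5)]
acc = hough_accumulate(pts)
votes, r_peak, k_peak = max(
    (acc[r][k], r, k) for r in range(len(acc)) for k in range(len(acc[0]))
)
```

The peak cell directly yields the line parameters: θ = k_peak · 10° and ρ recovered from the bin index, which is exactly the peak search described next.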
Line segment extraction from the binary edge image is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the
Figure 5. The line segment parameters.
Figure 6. The parameter space.
parameter space (Fig. 6). So for each peak, we eliminate the effect of all points belonging to this peak in the parameter space [9].
The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:
• An index: its position in the line segment list.
• ρ: its normal distance value relative to the image coordinate system.
• θ: its orientation value in this coordinate system.
• (x₁, y₁) and (x₂, y₂): its extremity coordinates.
We model the corridor by its two borders, left and right, situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:
• The corridor borders are always oblique.
• The corridor's right border orientation θ belongs to the interval ]0, π/2[.
• The corridor's left border orientation θ belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor border is minimal.
• The detected corridor borders belong to the floor.
This selection is very important, because one detection error can generate a poor position control and therefore a poor trajectory generation for the vehicle.
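A sketch of this selection over the segment list; the dict keys and the use of |ρ| for "minimal ρ" are assumptions, and the floor-membership test is taken as already done:

```python
import math

def identify_borders(segments):
    """Pick the right and left corridor borders from the HT segment list.

    Each segment carries its 'rho' and 'theta' attributes: the right
    border has theta in ]0, pi/2[, the left border theta in ]pi/2, pi[,
    and among the candidates the one with minimal |rho| wins.
    """
    right = [s for s in segments if 0.0 < s["theta"] < math.pi / 2]
    left = [s for s in segments if math.pi / 2 < s["theta"] < math.pi]

    def pick(group):
        return min(group, key=lambda s: abs(s["rho"])) if group else None

    return pick(right), pick(left)

segs = [
    {"rho": 98, "theta": math.radians(50)},    # right-border candidate
    {"rho": 150, "theta": math.radians(40)},   # farther segment on the right
    {"rho": 117, "theta": math.radians(130)},  # left-border candidate
]
r, l = identify_borders(segs)
```

Returning `None` for a missing side mirrors the cases below, where only one border (or none) may be detected.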
4.3. Navigation
To permit its displacement in the corridor, it is necessary that the floor is not damaged and that no obstacles are present on the path. The CCD camera provides a very detailed description of the robot environment. Therefore the mobile robot can be in one of the possible situations illustrated in Fig. 7.
The observed scene is segmented with the HT and then the corridor border identification algorithm is started. This algorithm gives the polar coordinates (ρ_R, θ_R, ρ_L and θ_L) and the extremity coordinates (P_R1, P_R2, P_L1 and P_L2) of each detected corridor border. These coordinates are necessary for the robot's position calculation and path execution.
Figure 7. The different positions of the mobile robot in the corridor.

Figure 8. Our laboratory's corridor.
Starting from its station, the vehicle follows a path that permits it to move toward the next target station while fixing the end of the corridor (Figs 8 and 9).
The target position (X_Dest, Y_Dest) in the robot coordinate system is computed from the extremity coordinates of the detected corridor borders (P_R1, P_R2, P_L1 and P_L2) (Fig. 9). It is formulated by the following expressions. If the two corridor borders are detected:
$$X_{Dest} = \frac{X_{P_{R1}} + X_{P_{R2}} + X_{P_{L1}} + X_{P_{L2}}}{4}, \qquad Y_{Dest} = \frac{Y_{P_{R1}} + Y_{P_{R2}} + Y_{P_{L1}} + Y_{P_{L2}}}{4} \tag{5}$$
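Equation (5) is simply the centroid of the four border extremities, which places the target on the corridor axis; a minimal sketch (function name illustrative):

```python
def target_from_borders(pr1, pr2, pl1, pl2):
    """Target position (X_Dest, Y_Dest) when both borders are seen (eq. 5).

    pr1..pl2 are the (x, y) extremities of the right and left corridor
    borders in the robot coordinate system; the target is the centroid
    of the four extremities, i.e. the middle of the corridor.
    """
    xs = [p[0] for p in (pr1, pr2, pl1, pl2)]
    ys = [p[1] for p in (pr1, pr2, pl1, pl2)]
    return sum(xs) / 4.0, sum(ys) / 4.0

# Symmetric corridor: borders 1 m to each side, target lands on the axis.
x_dest, y_dest = target_from_borders((0.0, -1.0), (4.0, -1.0),
                                     (0.0, 1.0), (4.0, 1.0))
```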
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
96 K Achour and A O Djekoune
Figure 9 The navigation strategy
If one corridorrsquos border is detected
XDest DXP1 C XP2
2
YDest DY P1 C Y P2
2C iexcl1nLarg
(6)
with
n Draquo
0 if the right cooridorrsquos border is detected1 if the left cooridorrsquos border is detected
and Larg is the half robot thicknessThree operations are therefore necessary for onboard control of the vehicle
sup2 A rst deviation (microD1 or microG1) toward the target station in the corridor
sup2 The distance D to travel in order to arrive at this target station
sup2 A second deviation (microD2 or microG2) so that the vehicle xes the corridorrsquos end
These operations are calculated from the following equationsThe rst deviation micro1
micro1 D tgiexcl1
AacuteshyshyYDest
shyshy
XDest
Draquo
microG1 Left deviation if YDest gt 0microD1 Right deviation if YDest lt 0
(7)
The second deviation micro2
micro2 Draquo
microG2 Left deviationmicroD2 Right deviation
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both left and right corridorrsquos borders frac12A microA and frac12B microB are detectedLet
1micro DmicroB iexcl microA
2
microG2 D
8gtgtgtgtgtlt
gtgtgtgtgt
microD1 C microB iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA
microD1 C microA C 1micro iexclfrac14
2if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2lt microD1
sup2
microB iexcl microG1 iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2gt microG1
sup2
microD2 D
8gtgtgtgtgtlt
gtgtgtgtgt
frac14
2iexcl microA C microD1 iexcl 1micro if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2gt microD1
sup2
microG1 iexcl microB C 1micro Cfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2lt microG1
sup2
microG1 iexcl microA iexcl 1micro Cfrac14
2if microB lt frac14 iexcl microA
(10)
Once these three operations are executed a new corridorrsquos image acquisition ismade and then segmented After identi cation of the corridorrsquos borders the CESAvehicle localizes itself again and then follows a path provided by the off-board partof the system which always permits to it to stay at the middle of the corridor (robotrsquosconsign position) The cycle restarts until the CESA vehicle arrives at its nalstation
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and4 have been implemented on a computer-aided design workstation The program iswritten in C language and runs on a Sun Sparc station The current version of theprogram requires as input data the state information from the vehicle It producesas output the run operations downloaded via radio waves to the onboard real-time
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
98 K Achour and A O Djekoune
Table 1The CESA navigation experimental results
Image The polar coordi-nates of the de-tected borders
ThecourseD
Firstdeviationmicro1
Seconddeviationmicro2
Figure
Right Left (mm)frac12R microR frac12L microL
Res1 98 50plusmn 117 130plusmn 7168 117plusmn 166plusmn 10on the left on the right
Resd1 72 50plusmn 125 120plusmn 7314 268plusmn 768plusmn 11on the left on the right
Resg1 115 60plusmn 100 140plusmn 7180 078plusmn 1078plusmn 12on the right on the left
Resg3 mdash iexcl10 120plusmn 7136 765plusmn 5235plusmn 13on the right on the right
system The diagnostic data from the vehicle may be downloaded to the supervisorycontroller for analysis
To illustrate the bene ts of our approach we have tested these procedures inCESA navigation in our laboratoryrsquos corridor The vehicle moves with a velocityof approximately 048 ms within an area of about 12 pound 20 m The vision systemoutput image consists of 464 pound 256 pixels and two gray levelspixels It is thensegmented by the HT with 10plusmn step sizes for the micro dimension and one pixel for thefrac12 dimension
Table 1 illustrates the experimental results for four corridor station images(Figs 10ndash13) taken at four different stations in the corridor The results have beenexecuted with the following parameters
sup2 The focal distance of the camera cedil D 25
sup2 The camera slant angle that makes with the horizontal reg D 9plusmn
sup2 The vertical distance between the camera and the ground d D 550
sup2 The column number of the screen rx D 464
sup2 The row number of the screen ry D 256
sup2 The component of the vector active from the reference point of the mobile robotto the image plan center of the camera dx dy dz D 317 0 240
In the Figs 10ndash13 we represented the detected corridorrsquos borders by the fullsegments the target station in the corridor by a small square and the orientationthat the vehicle must have once at this target station by a small triangle
During this experience the mobile robot is completely autonomous but thisautonomy is sensitive both to environment lighting and the corridorrsquos bordersdetection algorithm Several algorithms have been developed for solving thisproblem (i) the algorithm developed by [13] to restore the gray levels imagesto obtain some best edge images and best segmentation and (ii) the algorithm
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 99
Figure 10 Image taken by the camera when CESA is in the middle of the corridor
Figure 11 Image taken by the camera when CESA is in the middle and near to the right wall of thecorridor
developed by [6 14 15] using the stereo matching technique to compute the 3Dmodel of the environment to allow to the mobile robot to control its trajectoryand eventually have automatic guidance These algorithms are now the in theexperimentation stage in our laboratory
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
100 K Achour and A O Djekoune
Figure 12 Image taken by the camera when CESA is in the middle and near to the left wall of thecorridor
Figure 13 Image taken by the camera when CESA is in the middle and facing the left wall of thecorridor
6 CONCLUSION
Guidance and control procedures to enable a robot vehicle to accurately andautonomously navigate around a structured environment given a set of operationsfrom the host supervisory controller have been described These procedures havebeen implemented in an experimental robot vehicle lsquoCESArsquo They permit the vehicleto correct its drift by direct observation using a unique embarked CCD camera
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 101
The rotationtranslation parameters are computed taking into account the robotneighborhood parameters They are obtained by the HT from the environment sceneimages The HT is used because it is a well-known method for the detection ofparametric curves in binary images and it was recognized as an important means ofsearching for objects and features in binary images
During this procedure the mobile robot is completely autonomous but this au-tonomy is sensitive both to environment lighting and the corridor borders detectionalgorithm Several algorithms have been developed for solving to this problem egthe algorithms to restore the observed scene and to compute the 3D model of themobile robot environment
REFERENCES
1 A Allemagne P Dario E Guglielmelli B Allotta and M C Carrozza Robotics for medicalapplications IEEE Robotics Automat Mag 3 (3) 44ndash56 (1996)
2 E D Dickmanns and V Graefe Dynamic monocular machine vision Machine Vision Applica-tion 1 (4) 223ndash240 (1988)
3 P A Grandjean Robert and V De Saint 3-D modeling of indoor scenes by fusion of noisy rangeand stereo data in Proc IEEE Int Conf on Robotics and Automation Scottsdale AZ Vol 2pp 681ndash687 (1989)
4 W H Tang and K T Song Environment perception for a mobile robot using double Ultrasonicsensor and a CCD camera IEEE Trans Ind Electron 46 372ndash379 (1996)
5 L E Weiss A C Sanderson and C P Neuman Dynamic sensor-based control of robots withvisual feedback J Robotics Automat 3 404ndash417 (1987)
6 Y Aloimonos Is visual reconstruction necessary Obstacle avoidance without passive rangingJ Robotic Syst 9 843ndash 858 (1992)
7 A W Sleeswyk Vitruviusrsquo odometer Sci Am (October) 188ndash 198 (1981)8 Tarel Jean-PhilippeCalibrationde cameacutera fondeacutee sur les ellipses Research Report 2200 INRIA
(1994)9 O Djekoune Localisation et guidage par cameacutera embarqueacutee drsquoun robot mobile Thegravese de
Magister ref0797_EL Institut drsquoeacuteleacutectronique USTHB Algiers (1997)10 H Mori H Ishiguro and S Kotani A mobile robot strategy applied to HARINBU-4 in Proc
9th IEEE Int Conf on Pattern Recognition Rome Italy pp 525ndash530 (1988)11 O Richard and E P Hart Use of the Hough transform to detect lines and curves in pictures
Commun ACM 15 11ndash15 (1972)12 S Tagzout K Achour and O Djekoune Hough transform algorithm for FPGA implementation
J Signal Process 81 1295ndash 1301 (2001)13 K Achour and M Benkhelif Contribution to contours restauration by Markovian approach in
Proc ICSPAT rsquo96 Boston MA p 1017 (1996)14 K Achour and R Belaiden A new stereo matching using multiscale algorithm in Proc Int
Conf on Recent Advances in Mechatronics Istambul (1995)15 O Djekoune M Benkhelif and N Zenati Application de la transformeacutee de Hough dans
lrsquoappariement des images de scegravenes in Proc 5e Colloque Africain sur la Recherche enInformatique Antananarivo Madagascar pp 343ndash 350 (2000)
16 K Achour and S Monchaud Creation of 3D data base with a panoramic multisensor system inProc 2nd World Conf on Robotics Research Scottsdale AZ p 86-758MS86-75819 (1986)
17 T Tsumura Survey of automated guided vehicle in Japanese factory in Proc IEEE Int Confon Robotics and Automation San Francisco CA Vol II p 1329 (1986)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
102 K Achour and A O Djekoune
ABOUT THE AUTHORS
A Oualid Djekoune was born in Algiers (Algeria) in October 1969 He receivedthe Electronic Engineering degree from the Electronic Institute of USTHBAlgiers in 1993 and the Magister degree in signals and systems in real time fromUSTHB in 1997 Today he works as a Researcher in vision equipment in theRobotics and Arti cial Intelligence Laboratory of the Development Center of theAdvanced Technologies
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958 He received his PhDdegree in Electrical and Computer Engineering from Rennes University (France)in 1987 the MS Degree in Electrical Engineering from ENSM Nantes (France)and the BS at the Polytechnic School in Algiers (Algeria) He is the author ofnumerous publications for conference proceedings and journals He has beenactive in research on perception computer vision and pattern recognition Hehas participated in many conferences as an author a panel member and a sessionchairman He has been a reviewer for many conferences He was a Head of theRobotics and AI laboratory in the High Research Center in Algiers
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 91
Figure 2 The camera support
Figure 3 Image transverse section
331 Camera calibration The CCD camera is modeled on by the pin-holemodel [8] It is sustained by a support and makes an reg angle with the horizontalConsidering both the geometric model of the vehicle and the embarked camera(Figs 2 and 3) any binary edge image point i j could be represented in thevehicle coordinate system by the point x y z by applying the following equationsystems [9]
x = Cy·sin α + Cz·cos α + dx
y = Cx + dy
z = Cy·cos α − Cz·sin α + dz    (1)
with
Cx = −((Cz − λ)/λ)·lx·(j/rx − 0.5)
Cy = −((Cz − λ)/λ)·ly·(i/ry − 0.5)
Cz = d/sin α + (d/sin α − λ)·(sin α·cos β)/sin(α − β)    (2)
and

β = tg⁻¹[(ly/λ)·(i/ry − 0.5)]
where rx is the column number of the screen (pixels), ry is the row number of the screen (pixels), lx is the width of the image plane (m), ly is the height of the image plane (m), λ is the focal distance of the lens used, d is the vertical distance between the camera and the ground (mm), α is the camera slant angle measured from the horizontal, and (dx, dy, dz) are the components of the vector from the reference point of the mobile robot to the image-plane center of the camera (mm).
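As a concrete illustration, the back-projection of equation systems (1) and (2) can be sketched as follows. The numerical parameters are those reported in Section 5; the image-plane dimensions lx and ly are not given in the text, so the values below, like the function name itself, are assumptions for the sketch only.

```python
import math

# Calibration parameters (values from Section 5; LX and LY are assumed
# placeholders, since the paper does not report them).
LAM = 25.0                      # focal distance lambda
ALPHA = math.radians(9.0)       # camera slant angle alpha
D = 550.0                       # camera height above the ground (mm)
RX, RY = 464, 256               # screen columns and rows (pixels)
LX, LY = 8.9, 6.6               # image-plane width/height (mm) -- assumed
DX, DY, DZ = 317.0, 0.0, 240.0  # robot reference -> image-plane center (mm)

def pixel_to_vehicle(i, j):
    """Map a binary-edge image point (i, j) to vehicle coordinates (x, y, z)."""
    # beta: elevation of the pixel ray, as defined after equation (2)
    beta = math.atan((LY / LAM) * (i / RY - 0.5))
    # Equation (2): camera-frame coordinates
    cz = D / math.sin(ALPHA) + (D / math.sin(ALPHA) - LAM) * (
        math.sin(ALPHA) * math.cos(beta) / math.sin(ALPHA - beta))
    scale = -(cz - LAM) / LAM
    cx = scale * LX * (j / RX - 0.5)
    cy = scale * LY * (i / RY - 0.5)
    # Equation (1): rotate by alpha and translate into the vehicle frame
    x = cy * math.sin(ALPHA) + cz * math.cos(ALPHA) + DX
    y = cx + DY
    z = cy * math.cos(ALPHA) - cz * math.sin(ALPHA) + DZ
    return x, y, z
```

A point on the image's vertical center line (j = rx/2) maps to y = dy, which gives a quick sanity check of the transform.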
4. LOCALIZATION AND NAVIGATION STRATEGY
4.1. Robot's localization
The current vehicle position is obtained by interpreting the visual information extracted from binary edge images. The vehicle displacement is then ensured by position control in Cartesian space (Fig. 4). The position control is realized by 'static look and move' [9]. The measurement errors in Fig. 4 are the result of the corridor's surface roughness and undulations, wheel slippage and active telemetric sensor interference. These errors drift the vehicle from its desired position, which in our case is the middle of the corridor (the robot's set position).
The attributes used are segments. They are used for mobile robot localization and navigation, as in [9, 10], because they faithfully reproduce the evolution of the robot's environment and present the best compromise among the different primitives in terms of robustness and processing time. The segments are obtained by applying the HT directly to the output images of the vision system, because the HT is a well-known method for detecting parametric curves in binary images and is recognized as an important means of searching for objects and features in binary images. It converts the problem of detecting collinear points, or lines, in feature space into the problem of locating concurrent or intersecting curves in parameter space.

Figure 4. Position control.
Vehicle localization in the corridor consists of defining the robot's reference center coordinates (XG, YG) (Fig. 1b) with regard to the extremities of both the left and right corridor borders, extracted from the scene image by using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor border extraction
From binary edge images, the HT allows us to extract pertinent information such as lines and curves, according to the needs of the application. In our case, we seek to detect all the line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:

x·cos θ + y·sin θ = ρ    (3)

where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are points on the line (Fig. 5).
The parameters of all lines going through a point (xi, yi) in the image constitute a sinusoidal curve in the parameter space (Fig. 5), given by:

xi·cos θ + yi·sin θ = ρ    (4)
The sinusoidal curves corresponding to the collinear points of a line with parameters (ρj, θj) will cross each other at the point (ρj, θj) in the parameter space. Line parameters can therefore be obtained from the crossing point of the sinusoidal curves in the parameter space.
The HT of a point (xi, yi) is performed by computing ρ from (4) for all n values θk into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized into nρ intervals of width Δρ. In this way a quantized sinusoidal curve is obtained, and each cell along the quantized curve is incremented by one. This procedure is repeated for all points of the binary edge image.
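The accumulation procedure just described can be sketched as a minimal voting routine (the function name, array sizes and quantization defaults below are implementation choices for illustration, not values from the paper):

```python
import numpy as np

def hough_accumulator(points, n_theta=180, rho_max=300.0, d_rho=1.0):
    """Vote rho = x*cos(theta) + y*sin(theta) for every edge point.

    Assumes |rho| <= rho_max for all points. Returns the accumulator
    (rho bins x theta bins) and the sampled theta values, 0 <= theta < pi.
    """
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    n_rho = int(round(2.0 * rho_max / d_rho)) + 1
    acc = np.zeros((n_rho, n_theta), dtype=np.int32)
    cols = np.arange(n_theta)
    for x, y in points:
        rhos = x * cos_t + y * sin_t                    # one sinusoid per point
        rows = np.round((rhos + rho_max) / d_rho).astype(int)
        acc[rows, cols] += 1                            # increment each cell once
    return acc, thetas
```

For example, ten collinear points on the horizontal line y = 5 all vote into the cell (ρ = 5, θ = π/2), which then shows up as a peak of height ten.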
The line segments are extracted by searching for collinear points in the binary edge image. These points show up as peaks in the parameter space (Fig. 6), so for each peak we eliminate the contribution of all points belonging to that peak in the parameter space [9].

Figure 5. The line segment parameters.

Figure 6. The parameter space.
The result of the line segment extraction is a list in which each line segment is stored with the following attributes:

• an index: its position in the line segment list;
• ρ: its normal distance relative to the image coordinate system;
• θ: its orientation in this coordinate system;
• (x1, y1) and (x2, y2): the coordinates of its extremities.
We model the corridor by its two borders, left and right, situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:

• The corridor's borders are always oblique.
• The orientation θ of the corridor's right border belongs to the interval ]0, π/2[.
• The orientation θ of the corridor's left border belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor's border is minimal.
• The detected corridor borders belong to the floor.
This selection is very important because a single detection error can generate a poor position control and therefore a poor trajectory for the vehicle.
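Under the conditions listed above, a minimal selection routine might look like the following sketch. The segment representation and the function name are assumptions; the "belongs to the floor" test is omitted here because it requires the calibration of Section 3.

```python
import math

def identify_borders(segments):
    """Pick the right and left corridor borders from an HT segment list.

    Each segment is a dict with keys 'rho' and 'theta' (plus extremities).
    Right border: theta in ]0, pi/2[; left border: theta in ]pi/2, pi[.
    Among the candidates, the segment with minimal normal distance wins.
    """
    right = [s for s in segments if 0.0 < s["theta"] < math.pi / 2.0]
    left = [s for s in segments if math.pi / 2.0 < s["theta"] < math.pi]
    by_rho = lambda s: abs(s["rho"])   # minimal |rho|, since rho may be signed
    return (min(right, key=by_rho) if right else None,
            min(left, key=by_rho) if left else None)
```

Returning None for a missing border mirrors the single-border situations handled by the navigation equations that follow.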
4.3. Navigation
To permit displacement in the corridor, the floor must not be damaged and no obstacles should be present on the path. The CCD camera provides a very detailed description of the robot's environment. The mobile robot can therefore be in one of the possible situations illustrated in Fig. 7.
The observed scene is segmented with the HT and the corridor border identification algorithm is then started. This algorithm gives the polar coordinates (ρR, θR, ρL and θL) and the extremity coordinates (PR1, PR2, PL1 and PL2) of each detected corridor border. These coordinates are necessary for the robot's position calculation and path execution.
Figure 7. The different positions of the mobile robot in the corridor.
Figure 8. Our laboratory's corridor.
Starting from its current station, the vehicle follows a path that permits it to move toward the next target station while fixing the end of the corridor (Figs 8 and 9).
The target position (XDest, YDest) in the robot coordinate system is computed from the extremity coordinates of the detected corridor borders (PR1, PR2, PL1 and PL2) (Fig. 9). It is formulated by the following expressions.

If two corridor borders are detected:

XDest = (XPR1 + XPR2 + XPL1 + XPL2) / 4
YDest = (YPR1 + YPR2 + YPL1 + YPL2) / 4    (5)
Figure 9. The navigation strategy.
If one corridor border is detected:

XDest = (XP1 + XP2) / 2
YDest = (YP1 + YP2) / 2 + (−1)^n · Larg    (6)

with

n = 0 if the right corridor border is detected, n = 1 if the left corridor border is detected,

and Larg is the half-width of the robot.

Three operations are therefore necessary for onboard control of the vehicle:

• A first deviation (θD1 or θG1) toward the target station in the corridor.
• The distance D to travel in order to arrive at this target station.
• A second deviation (θD2 or θG2) so that the vehicle fixes the corridor's end.
These operations are calculated from the following equations.

The first deviation θ1:

θ1 = tg⁻¹(|YDest| / XDest),  with θG1 (left deviation) if YDest > 0, θD1 (right deviation) if YDest < 0    (7)
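Equations (5)–(7) translate directly into code. The sketch below is illustrative: the function names are assumptions, and the half-width value Larg = 350 mm is an assumed placeholder, since the paper does not report it.

```python
import math

HALF_WIDTH = 350.0  # Larg, half-width of the robot (mm) -- assumed value

def target_position(right=None, left=None):
    """Target (XDest, YDest) from border extremities, equations (5) and (6).

    right/left: pairs of extremity points ((x1, y1), (x2, y2)), or None
    when the corresponding border was not detected.
    """
    if right and left:                       # equation (5): both borders
        pts = list(right) + list(left)
        return (sum(p[0] for p in pts) / 4.0,
                sum(p[1] for p in pts) / 4.0)
    p1, p2 = right if right else left        # equation (6): one border
    n = 0 if right else 1
    x = (p1[0] + p2[0]) / 2.0
    y = (p1[1] + p2[1]) / 2.0 + (-1.0) ** n * HALF_WIDTH
    return x, y

def first_deviation(x_dest, y_dest):
    """Equation (7): magnitude and direction of the first deviation."""
    theta1 = math.atan(abs(y_dest) / x_dest)
    return theta1, ("left" if y_dest > 0 else "right")
```

With only the right border detected, the (−1)^n term shifts the target by +Larg, i.e. away from the right wall toward the corridor middle.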
The second deviation θ2:

θ2 = θG2 (left deviation) or θD2 (right deviation)
If only the right corridor border (ρA, θA) is detected:

θG2 = θA − θG1,  if θG1 ≤ θA
θG2 = θA + θD1,  if θD1 ≤ θA
θD2 = θG1 − θA,  if θG1 > θA    (8)
If only the left corridor border (ρB, θB) is detected:

θD2 = θG1 + π − θB,  if θG1 ≤ π − θB
θD2 = π − θB − θD1,  if θD1 ≤ π − θB
θG2 = θD1 − π + θB,  if θD1 > π − θB    (9)
If both the left and right corridor borders, (ρA, θA) and (ρB, θB), are detected, let

Δθ = (θB − θA) / 2

Then:

θG2 = θD1 + θB − Δθ − π/2,  if θB > π − θA
θG2 = θD1 + θA + Δθ − π/2,  if θB < π − θA and (π/2 − θA − Δθ) < θD1
θG2 = θB − θG1 − Δθ − π/2,  if θB > π − θA and (θB − π/2 − Δθ) > θG1

θD2 = π/2 − θA + θD1 − Δθ,  if θB < π − θA and (π/2 − θA − Δθ) > θD1
θD2 = θG1 − θB + Δθ + π/2,  if θB > π − θA and (θB − π/2 − Δθ) < θG1
θD2 = θG1 − θA − Δθ + π/2,  if θB < π − θA    (10)
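For the single-border situations, equations (8) and (9) can be transcribed as the following sketch. The function shape is an assumption, as is the reading that the θG1/θD1 branches encode the direction of the first deviation; the two-border case (10) extends the same pattern with Δθ = (θB − θA)/2 and is omitted here.

```python
import math

def second_deviation_one_border(theta1, first_dir, theta_border, side):
    """Second deviation from equations (8)/(9) when one border is detected.

    theta1: magnitude of the first deviation; first_dir: 'left' or 'right';
    theta_border: theta_A (right border) or theta_B (left border);
    side: which border was detected. Returns (magnitude, direction).
    """
    if side == "right":                      # equation (8)
        if first_dir == "left":
            return ((theta_border - theta1, "left") if theta1 <= theta_border
                    else (theta1 - theta_border, "right"))
        return theta_border + theta1, "left"
    phi = math.pi - theta_border             # equation (9), left border
    if first_dir == "left":
        return theta1 + phi, "right"
    return ((phi - theta1, "right") if theta1 <= phi
            else (theta1 - phi, "left"))
```

In each case the second deviation turns the vehicle so that it faces the corridor's end again after the first deviation has been executed.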
Once these three operations are executed, a new image of the corridor is acquired and segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path, provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's set position). The cycle restarts until the CESA vehicle arrives at its final station.
5. EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun SPARC station. The current version of the program requires as input the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.

Table 1. The CESA navigation experimental results

Image   Right border (ρR, θR)   Left border (ρL, θL)   Course D (mm)   First deviation θ1    Second deviation θ2    Figure
Res1    98, 50°                 117, 130°              7168            1.17° on the left     1.66° on the right     10
Resd1   72, 50°                 125, 120°              7314            2.68° on the left     7.68° on the right     11
Resg1   115, 60°                100, 140°              7180            0.78° on the right    10.78° on the left     12
Resg3   —                       −10, 120°              7136            7.65° on the right    52.35° on the right    13
To illustrate the benefits of our approach, we have tested these procedures with CESA navigating in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is segmented by the HT with a step size of 1.0° for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor images (Figs 10–13) taken at four different stations in the corridor. The results were obtained with the following parameters:

• The focal distance of the camera: λ = 25.
• The camera slant angle with the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550 mm.
• The column number of the screen: rx = 464.
• The row number of the screen: ry = 256.
• The vector from the reference point of the mobile robot to the image-plane center of the camera: (dx, dy, dz) = (317, 0, 240) mm.
In Figs 10–13 the detected corridor borders are represented by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
During this experiment the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem: (i) the algorithm developed in [13] to restore the gray-level images, in order to obtain better edge images and better segmentation; and (ii) the algorithms developed in [6, 14, 15], which use stereo matching to compute a 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now at the experimentation stage in our laboratory.

Figure 10. Image taken by the camera when CESA is in the middle of the corridor.

Figure 11. Image taken by the camera when CESA is in the middle and near the right wall of the corridor.
Figure 12. Image taken by the camera when CESA is in the middle and near the left wall of the corridor.
Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6. CONCLUSION
Guidance and control procedures enabling a robot vehicle to navigate accurately and autonomously around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented on an experimental robot vehicle, 'CESA'. They permit the vehicle to correct its drift by direct observation, using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot's neighborhood parameters. They are obtained by the HT from images of the environment scene. The HT is used because it is a well-known method for detecting parametric curves in binary images and is recognized as an important means of searching for objects and features in binary images.
During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed for solving this problem, e.g. algorithms to restore the observed scene and to compute a 3D model of the mobile robot's environment.
REFERENCES

1. P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARUNOBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transformation to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, Signal Processing 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contour restoration by a Markovian approach, in: Proc. ICSPAT'96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using a multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of a 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758/MS86-75819 (1986).
17. T. Tsumura, Survey of automated guided vehicles in Japanese factories, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
4 W H Tang and K T Song Environment perception for a mobile robot using double Ultrasonicsensor and a CCD camera IEEE Trans Ind Electron 46 372ndash379 (1996)
5 L E Weiss A C Sanderson and C P Neuman Dynamic sensor-based control of robots withvisual feedback J Robotics Automat 3 404ndash417 (1987)
6 Y Aloimonos Is visual reconstruction necessary Obstacle avoidance without passive rangingJ Robotic Syst 9 843ndash 858 (1992)
7 A W Sleeswyk Vitruviusrsquo odometer Sci Am (October) 188ndash 198 (1981)8 Tarel Jean-PhilippeCalibrationde cameacutera fondeacutee sur les ellipses Research Report 2200 INRIA
(1994)9 O Djekoune Localisation et guidage par cameacutera embarqueacutee drsquoun robot mobile Thegravese de
Magister ref0797_EL Institut drsquoeacuteleacutectronique USTHB Algiers (1997)10 H Mori H Ishiguro and S Kotani A mobile robot strategy applied to HARINBU-4 in Proc
9th IEEE Int Conf on Pattern Recognition Rome Italy pp 525ndash530 (1988)11 O Richard and E P Hart Use of the Hough transform to detect lines and curves in pictures
Commun ACM 15 11ndash15 (1972)12 S Tagzout K Achour and O Djekoune Hough transform algorithm for FPGA implementation
J Signal Process 81 1295ndash 1301 (2001)13 K Achour and M Benkhelif Contribution to contours restauration by Markovian approach in
Proc ICSPAT rsquo96 Boston MA p 1017 (1996)14 K Achour and R Belaiden A new stereo matching using multiscale algorithm in Proc Int
Conf on Recent Advances in Mechatronics Istambul (1995)15 O Djekoune M Benkhelif and N Zenati Application de la transformeacutee de Hough dans
lrsquoappariement des images de scegravenes in Proc 5e Colloque Africain sur la Recherche enInformatique Antananarivo Madagascar pp 343ndash 350 (2000)
16 K Achour and S Monchaud Creation of 3D data base with a panoramic multisensor system inProc 2nd World Conf on Robotics Research Scottsdale AZ p 86-758MS86-75819 (1986)
17 T Tsumura Survey of automated guided vehicle in Japanese factory in Proc IEEE Int Confon Robotics and Automation San Francisco CA Vol II p 1329 (1986)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
102 K Achour and A O Djekoune
ABOUT THE AUTHORS
A Oualid Djekoune was born in Algiers (Algeria) in October 1969 He receivedthe Electronic Engineering degree from the Electronic Institute of USTHBAlgiers in 1993 and the Magister degree in signals and systems in real time fromUSTHB in 1997 Today he works as a Researcher in vision equipment in theRobotics and Arti cial Intelligence Laboratory of the Development Center of theAdvanced Technologies
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958 He received his PhDdegree in Electrical and Computer Engineering from Rennes University (France)in 1987 the MS Degree in Electrical Engineering from ENSM Nantes (France)and the BS at the Polytechnic School in Algiers (Algeria) He is the author ofnumerous publications for conference proceedings and journals He has beenactive in research on perception computer vision and pattern recognition Hehas participated in many conferences as an author a panel member and a sessionchairman He has been a reviewer for many conferences He was a Head of theRobotics and AI laboratory in the High Research Center in Algiers
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 93
the problem of detecting collinear points, or lines, in feature space to the problem of locating concurrent, or intersecting, curves in parameter space.
Vehicle localization in the corridor consists of defining the robot's reference center coordinates (X_G, Y_G) (Fig. 1b) with regard to the left and right corridor border extremities extracted from the scene image using the two above-mentioned equation systems (1) and (2).
4.2. Image segmentation and corridor border extraction
From binary edge images, the HT allows us to extract pertinent information such as lines and curves, according to the needs of the application. In our case, we seek to detect all the line segments contained in an image. The HT uses the normal parameterization of a line in an image, given by [11, 12]:

    x cos θ + y sin θ = ρ,    (3)

where ρ is the normal distance of the line to the origin, θ is the angle between ρ and the x-axis, and (x, y) are the points on the line (Fig. 5).
The parameters of all the lines going through a point (x_i, y_i) in the image constitute a sinusoidal curve in the parameter space (Fig. 5), given by:

    x_i cos θ + y_i sin θ = ρ.    (4)

The sinusoidal curves corresponding to the collinear points of a line with parameters (ρ_j, θ_j) cross each other at the point (ρ_j, θ_j) in the parameter space. So the line parameters can be obtained from the crossing point of the sinusoidal curves in the parameter space.
The HT of a point (x_i, y_i) is performed by computing ρ from (4) for all n values θ_k into which θ is quantized, 0 ≤ θ < π. The values of ρ are quantized in n_ρ intervals of width Δρ. In this way a quantized sinusoidal curve is obtained, and along this quantized curve each cell is incremented by one '1' value. This procedure is repeated for all the points of the binary edge image.
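The accumulation procedure above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names, the 10° step (matching the segmentation step used in Section 5) and the accumulator sizing are assumptions.

```python
import numpy as np

def hough_accumulate(edge_img, theta_step_deg=10, rho_step=1.0):
    """Accumulate votes for x*cos(theta) + y*sin(theta) = rho (eqs (3)/(4)).
    Each edge pixel traces one quantized sinusoid in (rho, theta) space."""
    h, w = edge_img.shape
    thetas = np.deg2rad(np.arange(0.0, 180.0, theta_step_deg))  # 0 <= theta < pi
    rho_max = float(np.hypot(h, w))                  # largest possible |rho|
    n_rho = int(2.0 * rho_max / rho_step) + 1
    acc = np.zeros((n_rho, thetas.size), dtype=np.int32)
    ys, xs = np.nonzero(edge_img)                    # coordinates of edge pixels
    for x, y in zip(xs, ys):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        cells = np.round((rhos + rho_max) / rho_step).astype(int)
        acc[cells, np.arange(thetas.size)] += 1      # one cell per theta_k
    return acc, thetas, rho_max

def strongest_line(acc, thetas, rho_max, rho_step=1.0):
    """A peak in the accumulator gives the (rho, theta) of a detected line."""
    i, j = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return i * rho_step - rho_max, float(thetas[j])
```

For instance, a vertical edge along column x = 5 produces a peak at θ = 0 with ρ ≈ 5 (within one ρ bin, because of the accumulator offset).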
The line segment extraction is performed by searching for the collinear points in the binary edge image. These points show up as peaks in the
Figure 5. The line segment parameters.
Figure 6. The parameter space.
parameter space (Fig. 6). So for each peak, we eliminate the effect of all the points belonging to this peak in the parameter space [9].
The result of the line segment extraction is a line segment list, where each line segment is stored with the following attributes:
• An index: its position in the line segment list.
• ρ: its normal distance relative to the image coordinate system.
• θ: its orientation in this coordinate system.
• (x1, y1) and (x2, y2): the coordinates of its extremities.
We model the corridor by its two borders, left and right, situated at the corridor's walls (Fig. 7). Their identification among the other segments contained in the observed scene must satisfy the following conditions:
• The corridor's borders are always oblique.
• The corridor's right border orientation θ belongs to the interval ]0, π/2[.
• The corridor's left border orientation θ belongs to the interval ]π/2, π[.
• The normal distance ρ of the corridor's border is minimal.
• The detected corridor's borders belong to the floor.
This selection is very important, because one detection error can generate a poor control position and therefore poor trajectory generation for the vehicle.
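The selection rules above can be sketched as follows. The segment record is a hypothetical dictionary layout (not the paper's data structure), and the floor-membership condition is omitted since it relies on scene knowledge not shown here.

```python
import math

def identify_corridor_borders(segments):
    """Pick the right and left corridor borders from the HT segment list.
    Each segment is a dict with 'rho' and 'theta' (theta in radians).
    Rules: right border has theta in ]0, pi/2[, left border has theta in
    ]pi/2, pi[, and among the candidates the one with minimal rho wins.
    (The 'belongs to the floor' condition is not modeled here.)"""
    right_candidates = [s for s in segments if 0.0 < s['theta'] < math.pi / 2]
    left_candidates = [s for s in segments if math.pi / 2 < s['theta'] < math.pi]

    def closest(cands):
        # minimal normal distance rho among the candidates
        return min(cands, key=lambda s: s['rho']) if cands else None

    return closest(right_candidates), closest(left_candidates)
```

With the values of row Res1 in Table 1 (ρR = 98, θR = 50°; ρL = 117, θL = 130°) plus a more distant distractor segment, the two borders are recovered.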
4.3. Navigation
To permit its displacement in the corridor, the floor must not be damaged and no obstacles should be present on the path. The CCD camera provides a very detailed description of the robot's environment. The mobile robot can therefore be in one of the possible situations illustrated in Fig. 7.
The observed scene is segmented with the HT and then the corridor border identification algorithm is started. This algorithm gives the polar coordinates (ρR, θR, ρL and θL) and the extremity coordinates (PR1, PR2, PL1 and PL2) of each detected corridor border. These coordinates are necessary for the robot's position calculation and path execution.
Figure 7. The different positions of the mobile robot in the corridor.
Figure 8. Our laboratory's corridor.
Starting from its station, the vehicle follows a path that permits it to move toward the next target station while fixating the end of the corridor (Figs 8 and 9).
The target position (X_Dest, Y_Dest) in the robot coordinate system is computed from the coordinate extremities (PR1, PR2, PL1 and PL2) of the detected corridor borders (Fig. 9). It is formulated by the following expressions.
If the two corridor borders are detected:

    X_Dest = (X_PR1 + X_PR2 + X_PL1 + X_PL2) / 4,
    Y_Dest = (Y_PR1 + Y_PR2 + Y_PL1 + Y_PL2) / 4.    (5)
Figure 9. The navigation strategy.
If only one corridor border is detected:

    X_Dest = (X_P1 + X_P2) / 2,
    Y_Dest = (Y_P1 + Y_P2) / 2 + (−1)^n · Larg,    (6)

with

    n = 0 if the right corridor border is detected,
    n = 1 if the left corridor border is detected,

and Larg the half-width of the robot.
Three operations are therefore necessary for the onboard control of the vehicle:
• A first deviation (θD1 or θG1) toward the target station in the corridor.
• The distance D to travel in order to arrive at this target station.
• A second deviation (θD2 or θG2) so that the vehicle fixates the corridor's end.
These operations are calculated from the following equations.
The first deviation θ1:

    θ1 = tan⁻¹( |Y_Dest| / X_Dest ),    (7)

with θ1 = θG1 (left deviation) if Y_Dest > 0, and θ1 = θD1 (right deviation) if Y_Dest < 0.
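Equations (5)–(7) can be sketched as follows. This is a minimal illustration under stated assumptions: the function names are ours, and the half-width value Larg is an illustrative placeholder, not a figure from the paper.

```python
import math

LARG = 250.0  # Larg: half-width of the robot in mm (assumed illustrative value)

def target_position(right=None, left=None):
    """Eqs (5)/(6): destination (X_Dest, Y_Dest) in the robot frame from the
    border extremities; each border is a pair ((x1, y1), (x2, y2))."""
    if right and left:
        # Eq. (5): mean of the four extremities of the two borders
        pts = [*right, *left]
        return (sum(p[0] for p in pts) / 4.0,
                sum(p[1] for p in pts) / 4.0)
    # Eq. (6): midpoint of the single border, offset by (-1)^n * Larg,
    # with n = 0 for the right border and n = 1 for the left border
    border, n = (right, 0) if right else (left, 1)
    (x1, y1), (x2, y2) = border
    return ((x1 + x2) / 2.0,
            (y1 + y2) / 2.0 + (-1) ** n * LARG)

def first_deviation(x_dest, y_dest):
    """Eq. (7): theta1 = tan^-1(|Y_Dest| / X_Dest); a left deviation (theta_G1)
    if Y_Dest > 0, a right deviation (theta_D1) otherwise."""
    theta1 = math.atan(abs(y_dest) / x_dest)
    return theta1, ('left' if y_dest > 0 else 'right')
```

With two symmetric borders the destination falls on the corridor axis (Y_Dest = 0); with only the right border detected, the midpoint is shifted left by Larg.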
The second deviation θ2 is either θG2 (left deviation) or θD2 (right deviation), according to the following cases.
If only the right corridor border (ρA, θA) is detected:

    θG2 = θA − θG1   if θG1 ≤ θA,
    θG2 = θA + θD1   if θD1 ≤ θA,
    θD2 = θG1 − θA   if θG1 > θA.    (8)

If only the left corridor border (ρB, θB) is detected:

    θD2 = θG1 + π − θB   if θG1 ≤ π − θB,
    θD2 = π − θB − θD1   if θD1 ≤ π − θB,
    θG2 = θD1 − π + θB   if θD1 > π − θB.    (9)

If both the left and right corridor borders (ρA, θA) and (ρB, θB) are detected, let

    Δθ = (θB − θA) / 2.

Then:

    θG2 = θD1 + θB − Δθ − π/2   if θB > π − θA,
    θG2 = θD1 + θA + Δθ − π/2   if θB < π − θA and (π/2 − θA − Δθ) < θD1,
    θG2 = θB − θG1 − Δθ − π/2   if θB > π − θA and (θB − π/2 − Δθ) > θG1,

    θD2 = π/2 − θA + θD1 − Δθ   if θB < π − θA and (π/2 − θA − Δθ) > θD1,
    θD2 = θG1 − θB + Δθ + π/2   if θB > π − θA and (θB − π/2 − Δθ) < θG1,
    θD2 = θG1 − θA − Δθ + π/2   if θB < π − θA.    (10)
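For the single-border cases, equations (8) and (9) amount to the following case analysis. This is a sketch with assumed function and parameter names; angles are in radians, and the combinations that the equations leave unspecified raise an error here rather than guessing.

```python
import math

def second_deviation(border_side, theta_border, theta1, side1):
    """Eqs (8)/(9): second deviation when only one border is detected.
    border_side: 'right' (theta_A) or 'left' (theta_B). theta1/side1 are the
    first deviation and its direction ('left' = theta_G1, 'right' = theta_D1).
    Returns (theta2, side2), where side2 'left' means theta_G2."""
    if border_side == 'right':                      # eq. (8), border (rho_A, theta_A)
        if side1 == 'left' and theta1 <= theta_border:
            return theta_border - theta1, 'left'    # theta_G2 = theta_A - theta_G1
        if side1 == 'right' and theta1 <= theta_border:
            return theta_border + theta1, 'left'    # theta_G2 = theta_A + theta_D1
        if side1 == 'left':                         # theta_G1 > theta_A
            return theta1 - theta_border, 'right'   # theta_D2 = theta_G1 - theta_A
    else:                                           # eq. (9), border (rho_B, theta_B)
        if side1 == 'left' and theta1 <= math.pi - theta_border:
            return theta1 + math.pi - theta_border, 'right'   # theta_D2
        if side1 == 'right' and theta1 <= math.pi - theta_border:
            return math.pi - theta_border - theta1, 'right'   # theta_D2
        if side1 == 'right':                        # theta_D1 > pi - theta_B
            return theta1 - math.pi + theta_border, 'left'    # theta_G2
    raise ValueError('combination not specified by eqs (8)/(9)')
```

For example, with only the right border at θA = 50° and a first left deviation of 10°, the second deviation is 40° to the left.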
Once these three operations are executed, a new image of the corridor is acquired and then segmented. After identification of the corridor borders, the CESA vehicle localizes itself again and then follows a path, provided by the off-board part of the system, which always permits it to stay in the middle of the corridor (the robot's consign position). The cycle restarts until the CESA vehicle arrives at its final station.
5. EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and 4 have been implemented on a computer-aided design workstation. The program is written in the C language and runs on a Sun SPARC station. The current version of the program requires as input data the state information from the vehicle. It produces as output the run operations, downloaded via radio waves to the onboard real-time
Table 1. The CESA navigation experimental results.

Image    Right border (ρR, θR)   Left border (ρL, θL)   Course D (mm)   First deviation θ1     Second deviation θ2     Figure
Res1     (98, 50°)               (117, 130°)            7168            1.17° on the left      1.66° on the right      10
Resd1    (72, 50°)               (125, 120°)            7314            2.68° on the left      7.68° on the right      11
Resg1    (115, 60°)              (100, 140°)            7180            0.78° on the right     10.78° on the left      12
Resg3    —                       (−10, 120°)            7136            7.65° on the right     52.35° on the right     13
system. The diagnostic data from the vehicle may be downloaded to the supervisory controller for analysis.
To illustrate the benefits of our approach, we have tested these procedures in CESA navigation in our laboratory's corridor. The vehicle moves with a velocity of approximately 0.48 m/s within an area of about 12 × 20 m. The vision system output image consists of 464 × 256 pixels with two gray levels per pixel. It is then segmented by the HT with a 10° step size for the θ dimension and one pixel for the ρ dimension.
Table 1 illustrates the experimental results for four corridor station images (Figs 10–13) taken at four different stations in the corridor. The results have been obtained with the following parameters:
• The focal distance of the camera: λ = 25.
• The camera slant angle with the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550.
• The column number of the screen: rx = 464.
• The row number of the screen: ry = 256.
• The components of the vector from the reference point of the mobile robot to the image plane center of the camera: (dx, dy, dz) = (317, 0, 240).
In Figs 10–13 we represent the detected corridor borders by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
During this experiment the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to solve this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and a better segmentation, and (ii) the algorithm
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.
Figure 11. Image taken by the camera when CESA is in the middle, near the right wall of the corridor.
developed in [6, 14, 15], which uses the stereo matching technique to compute a 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are now in the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle, near the left wall of the corridor.
Figure 13. Image taken by the camera when CESA is in the middle, facing the left wall of the corridor.
6. CONCLUSION
Guidance and control procedures that enable a robot vehicle to accurately and autonomously navigate around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented in an experimental robot vehicle, 'CESA'. They permit the vehicle to correct its drift by direct observation, using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters. They are obtained by the HT from the environment scene images. The HT is used because it is a well-known method for the detection of parametric curves in binary images, and it is recognized as an important means of searching for objects and features in binary images.
During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to solve this problem, e.g. the algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES

1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARINBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, J. Signal Process. 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restoration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758/MS86-75819 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS
A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993, and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals. He has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman. He has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
94 K Achour and A O Djekoune
Figure 6 The parameter space
parameter space (Fig 6) So for each peak we eliminate the effect of all pointsbelonging to this peak in the parameter space [9]
The result of the line segment extraction is a line segment list where each linesegment is stored with the following attributes
sup2 An index its position in the line segments list
sup2 frac12 its normal distance value relative to the image coordinate system
sup2 micro its orientation value in this coordinate system
sup2 x1 y1 and x2 y2 its extremities coordinates
We model the corridor by its two left and right borders situated at the corridorrsquoswalls (Fig 7) Their identi cation among other segments contained in the observedscene must satisfy the following conditions
sup2 The corridorrsquos borders are always oblique
sup2 The corridorrsquos right border orientation micro belongs to the intervalcurren0 frac14=2
pound
sup2 The corridorrsquos left border orientation micro belongs to the intervalcurrenfrac14=2 frac14
pound
sup2 The normal distance frac12 of the corridorrsquos border is minimal
sup2 The detected corridorrsquos borders belong to the oor
This selection is very important because one detection error can generate a poorcontrol position and therefore poor trajectory generation for the vehicle
43 Navigation
To permit its displacement in the corridor it is necessary that the oor should not bedamaged and no obstacles should be present on the path The CCD camera providesa very detailed description of the robot environment Therefore the mobile robotcan be in one of the possible situations that is illustrated in Fig 7
The observed scene is segmented with the HT and then the corridor bordersidenti cation algorithm is started This algorithm gives the polar coordinates (frac12RmicroR frac12L and microL) and the extremities coordinates (PR1 PR2 PL1 and PL2 ) of eachdetected corridorrsquos border These coordinates are necessary for the robotrsquos positioncalculation and path execution
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 95
Figure 7 The different positions of the mobile robot in the corridor
Figure 8 Our laboratoryrsquos corridor
By referring from its station the vehicle follows a path that permits it to movefrom its station toward the next target station by xing the end of the corridor(Figs 8 and 9)
The target position XDest YDest in the robot coordinate system is computedaccording to the detected corridorrsquos borders coordinate extremities (PR1 PR2 PL1
and PL2 ) (Fig 9) It is formulated by the following expressionIf two corridorrsquos borders are detected
XDest DXPR1 C XPR2 C XPL1 C XPL2
4
YDest DY PR1 C Y PR2 C Y PL1 C Y PL2
4
(5)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
96 K Achour and A O Djekoune
Figure 9 The navigation strategy
If one corridorrsquos border is detected
XDest DXP1 C XP2
2
YDest DY P1 C Y P2
2C iexcl1nLarg
(6)
with
n Draquo
0 if the right cooridorrsquos border is detected1 if the left cooridorrsquos border is detected
and Larg is the half robot thicknessThree operations are therefore necessary for onboard control of the vehicle
sup2 A rst deviation (microD1 or microG1) toward the target station in the corridor
sup2 The distance D to travel in order to arrive at this target station
sup2 A second deviation (microD2 or microG2) so that the vehicle xes the corridorrsquos end
These operations are calculated from the following equationsThe rst deviation micro1
micro1 D tgiexcl1
AacuteshyshyYDest
shyshy
XDest
Draquo
microG1 Left deviation if YDest gt 0microD1 Right deviation if YDest lt 0
(7)
The second deviation micro2
micro2 Draquo
microG2 Left deviationmicroD2 Right deviation
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both left and right corridorrsquos borders frac12A microA and frac12B microB are detectedLet
1micro DmicroB iexcl microA
2
microG2 D
8gtgtgtgtgtlt
gtgtgtgtgt
microD1 C microB iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA
microD1 C microA C 1micro iexclfrac14
2if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2lt microD1
sup2
microB iexcl microG1 iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2gt microG1
sup2
microD2 D
8gtgtgtgtgtlt
gtgtgtgtgt
frac14
2iexcl microA C microD1 iexcl 1micro if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2gt microD1
sup2
microG1 iexcl microB C 1micro Cfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2lt microG1
sup2
microG1 iexcl microA iexcl 1micro Cfrac14
2if microB lt frac14 iexcl microA
(10)
Once these three operations are executed a new corridorrsquos image acquisition ismade and then segmented After identi cation of the corridorrsquos borders the CESAvehicle localizes itself again and then follows a path provided by the off-board partof the system which always permits to it to stay at the middle of the corridor (robotrsquosconsign position) The cycle restarts until the CESA vehicle arrives at its nalstation
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and4 have been implemented on a computer-aided design workstation The program iswritten in C language and runs on a Sun Sparc station The current version of theprogram requires as input data the state information from the vehicle It producesas output the run operations downloaded via radio waves to the onboard real-time
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
98 K Achour and A O Djekoune
Table 1The CESA navigation experimental results
Image The polar coordi-nates of the de-tected borders
ThecourseD
Firstdeviationmicro1
Seconddeviationmicro2
Figure
Right Left (mm)frac12R microR frac12L microL
Res1 98 50plusmn 117 130plusmn 7168 117plusmn 166plusmn 10on the left on the right
Resd1 72 50plusmn 125 120plusmn 7314 268plusmn 768plusmn 11on the left on the right
Resg1 115 60plusmn 100 140plusmn 7180 078plusmn 1078plusmn 12on the right on the left
Resg3 mdash iexcl10 120plusmn 7136 765plusmn 5235plusmn 13on the right on the right
system The diagnostic data from the vehicle may be downloaded to the supervisorycontroller for analysis
To illustrate the bene ts of our approach we have tested these procedures inCESA navigation in our laboratoryrsquos corridor The vehicle moves with a velocityof approximately 048 ms within an area of about 12 pound 20 m The vision systemoutput image consists of 464 pound 256 pixels and two gray levelspixels It is thensegmented by the HT with 10plusmn step sizes for the micro dimension and one pixel for thefrac12 dimension
Table 1 illustrates the experimental results for four corridor station images(Figs 10ndash13) taken at four different stations in the corridor The results have beenexecuted with the following parameters
sup2 The focal distance of the camera cedil D 25
sup2 The camera slant angle that makes with the horizontal reg D 9plusmn
sup2 The vertical distance between the camera and the ground d D 550
sup2 The column number of the screen rx D 464
sup2 The row number of the screen ry D 256
sup2 The component of the vector active from the reference point of the mobile robotto the image plan center of the camera dx dy dz D 317 0 240
In the Figs 10ndash13 we represented the detected corridorrsquos borders by the fullsegments the target station in the corridor by a small square and the orientationthat the vehicle must have once at this target station by a small triangle
During this experience the mobile robot is completely autonomous but thisautonomy is sensitive both to environment lighting and the corridorrsquos bordersdetection algorithm Several algorithms have been developed for solving thisproblem (i) the algorithm developed by [13] to restore the gray levels imagesto obtain some best edge images and best segmentation and (ii) the algorithm
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 99
Figure 10 Image taken by the camera when CESA is in the middle of the corridor
Figure 11 Image taken by the camera when CESA is in the middle and near to the right wall of thecorridor
developed by [6 14 15] using the stereo matching technique to compute the 3Dmodel of the environment to allow to the mobile robot to control its trajectoryand eventually have automatic guidance These algorithms are now the in theexperimentation stage in our laboratory
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
100 K Achour and A O Djekoune
Figure 12 Image taken by the camera when CESA is in the middle and near to the left wall of thecorridor
Figure 13 Image taken by the camera when CESA is in the middle and facing the left wall of thecorridor
6 CONCLUSION
Guidance and control procedures to enable a robot vehicle to accurately andautonomously navigate around a structured environment given a set of operationsfrom the host supervisory controller have been described These procedures havebeen implemented in an experimental robot vehicle lsquoCESArsquo They permit the vehicleto correct its drift by direct observation using a unique embarked CCD camera
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 101
The rotationtranslation parameters are computed taking into account the robotneighborhood parameters They are obtained by the HT from the environment sceneimages The HT is used because it is a well-known method for the detection ofparametric curves in binary images and it was recognized as an important means ofsearching for objects and features in binary images
During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem, e.g. algorithms to restore the observed scene and to compute a 3D model of the mobile robot's environment.
REFERENCES
1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision and Applications 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARUNOBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transformation to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, Signal Processing 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restauration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, p. 86-758/MS86-758-19 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS
A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of Advanced Technologies.
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals, and has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman, and has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.
Localization and guidance of a mobile robot 95
Figure 7 The different positions of the mobile robot in the corridor
Figure 8 Our laboratoryrsquos corridor
By referring from its station the vehicle follows a path that permits it to movefrom its station toward the next target station by xing the end of the corridor(Figs 8 and 9)
The target position XDest YDest in the robot coordinate system is computedaccording to the detected corridorrsquos borders coordinate extremities (PR1 PR2 PL1
and PL2 ) (Fig 9) It is formulated by the following expressionIf two corridorrsquos borders are detected
XDest DXPR1 C XPR2 C XPL1 C XPL2
4
YDest DY PR1 C Y PR2 C Y PL1 C Y PL2
4
(5)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
96 K Achour and A O Djekoune
Figure 9 The navigation strategy
If one corridorrsquos border is detected
XDest DXP1 C XP2
2
YDest DY P1 C Y P2
2C iexcl1nLarg
(6)
with
n Draquo
0 if the right cooridorrsquos border is detected1 if the left cooridorrsquos border is detected
and Larg is the half robot thicknessThree operations are therefore necessary for onboard control of the vehicle
sup2 A rst deviation (microD1 or microG1) toward the target station in the corridor
sup2 The distance D to travel in order to arrive at this target station
sup2 A second deviation (microD2 or microG2) so that the vehicle xes the corridorrsquos end
These operations are calculated from the following equationsThe rst deviation micro1
micro1 D tgiexcl1
AacuteshyshyYDest
shyshy
XDest
Draquo
microG1 Left deviation if YDest gt 0microD1 Right deviation if YDest lt 0
(7)
The second deviation micro2
micro2 Draquo
microG2 Left deviationmicroD2 Right deviation
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both left and right corridorrsquos borders frac12A microA and frac12B microB are detectedLet
1micro DmicroB iexcl microA
2
microG2 D
8gtgtgtgtgtlt
gtgtgtgtgt
microD1 C microB iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA
microD1 C microA C 1micro iexclfrac14
2if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2lt microD1
sup2
microB iexcl microG1 iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2gt microG1
sup2
microD2 D
8gtgtgtgtgtlt
gtgtgtgtgt
frac14
2iexcl microA C microD1 iexcl 1micro if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2gt microD1
sup2
microG1 iexcl microB C 1micro Cfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2lt microG1
sup2
microG1 iexcl microA iexcl 1micro Cfrac14
2if microB lt frac14 iexcl microA
(10)
Once these three operations are executed a new corridorrsquos image acquisition ismade and then segmented After identi cation of the corridorrsquos borders the CESAvehicle localizes itself again and then follows a path provided by the off-board partof the system which always permits to it to stay at the middle of the corridor (robotrsquosconsign position) The cycle restarts until the CESA vehicle arrives at its nalstation
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and4 have been implemented on a computer-aided design workstation The program iswritten in C language and runs on a Sun Sparc station The current version of theprogram requires as input data the state information from the vehicle It producesas output the run operations downloaded via radio waves to the onboard real-time
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
98 K Achour and A O Djekoune
Table 1The CESA navigation experimental results
Image The polar coordi-nates of the de-tected borders
ThecourseD
Firstdeviationmicro1
Seconddeviationmicro2
Figure
Right Left (mm)frac12R microR frac12L microL
Res1 98 50plusmn 117 130plusmn 7168 117plusmn 166plusmn 10on the left on the right
Resd1 72 50plusmn 125 120plusmn 7314 268plusmn 768plusmn 11on the left on the right
Resg1 115 60plusmn 100 140plusmn 7180 078plusmn 1078plusmn 12on the right on the left
Resg3 mdash iexcl10 120plusmn 7136 765plusmn 5235plusmn 13on the right on the right
system The diagnostic data from the vehicle may be downloaded to the supervisorycontroller for analysis
To illustrate the bene ts of our approach we have tested these procedures inCESA navigation in our laboratoryrsquos corridor The vehicle moves with a velocityof approximately 048 ms within an area of about 12 pound 20 m The vision systemoutput image consists of 464 pound 256 pixels and two gray levelspixels It is thensegmented by the HT with 10plusmn step sizes for the micro dimension and one pixel for thefrac12 dimension
Table 1 illustrates the experimental results for four corridor station images(Figs 10ndash13) taken at four different stations in the corridor The results have beenexecuted with the following parameters
sup2 The focal distance of the camera cedil D 25
sup2 The camera slant angle that makes with the horizontal reg D 9plusmn
sup2 The vertical distance between the camera and the ground d D 550
sup2 The column number of the screen rx D 464
sup2 The row number of the screen ry D 256
sup2 The component of the vector active from the reference point of the mobile robotto the image plan center of the camera dx dy dz D 317 0 240
In the Figs 10ndash13 we represented the detected corridorrsquos borders by the fullsegments the target station in the corridor by a small square and the orientationthat the vehicle must have once at this target station by a small triangle
During this experience the mobile robot is completely autonomous but thisautonomy is sensitive both to environment lighting and the corridorrsquos bordersdetection algorithm Several algorithms have been developed for solving thisproblem (i) the algorithm developed by [13] to restore the gray levels imagesto obtain some best edge images and best segmentation and (ii) the algorithm
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 99
Figure 10 Image taken by the camera when CESA is in the middle of the corridor
Figure 11 Image taken by the camera when CESA is in the middle and near to the right wall of thecorridor
developed by [6 14 15] using the stereo matching technique to compute the 3Dmodel of the environment to allow to the mobile robot to control its trajectoryand eventually have automatic guidance These algorithms are now the in theexperimentation stage in our laboratory
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
100 K Achour and A O Djekoune
Figure 12 Image taken by the camera when CESA is in the middle and near to the left wall of thecorridor
Figure 13 Image taken by the camera when CESA is in the middle and facing the left wall of thecorridor
6 CONCLUSION
Guidance and control procedures to enable a robot vehicle to accurately andautonomously navigate around a structured environment given a set of operationsfrom the host supervisory controller have been described These procedures havebeen implemented in an experimental robot vehicle lsquoCESArsquo They permit the vehicleto correct its drift by direct observation using a unique embarked CCD camera
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 101
The rotationtranslation parameters are computed taking into account the robotneighborhood parameters They are obtained by the HT from the environment sceneimages The HT is used because it is a well-known method for the detection ofparametric curves in binary images and it was recognized as an important means ofsearching for objects and features in binary images
During this procedure the mobile robot is completely autonomous but this au-tonomy is sensitive both to environment lighting and the corridor borders detectionalgorithm Several algorithms have been developed for solving to this problem egthe algorithms to restore the observed scene and to compute the 3D model of themobile robot environment
REFERENCES
1 A Allemagne P Dario E Guglielmelli B Allotta and M C Carrozza Robotics for medicalapplications IEEE Robotics Automat Mag 3 (3) 44ndash56 (1996)
2 E D Dickmanns and V Graefe Dynamic monocular machine vision Machine Vision Applica-tion 1 (4) 223ndash240 (1988)
3 P A Grandjean Robert and V De Saint 3-D modeling of indoor scenes by fusion of noisy rangeand stereo data in Proc IEEE Int Conf on Robotics and Automation Scottsdale AZ Vol 2pp 681ndash687 (1989)
4 W H Tang and K T Song Environment perception for a mobile robot using double Ultrasonicsensor and a CCD camera IEEE Trans Ind Electron 46 372ndash379 (1996)
5 L E Weiss A C Sanderson and C P Neuman Dynamic sensor-based control of robots withvisual feedback J Robotics Automat 3 404ndash417 (1987)
6 Y Aloimonos Is visual reconstruction necessary Obstacle avoidance without passive rangingJ Robotic Syst 9 843ndash 858 (1992)
7 A W Sleeswyk Vitruviusrsquo odometer Sci Am (October) 188ndash 198 (1981)8 Tarel Jean-PhilippeCalibrationde cameacutera fondeacutee sur les ellipses Research Report 2200 INRIA
(1994)9 O Djekoune Localisation et guidage par cameacutera embarqueacutee drsquoun robot mobile Thegravese de
Magister ref0797_EL Institut drsquoeacuteleacutectronique USTHB Algiers (1997)10 H Mori H Ishiguro and S Kotani A mobile robot strategy applied to HARINBU-4 in Proc
9th IEEE Int Conf on Pattern Recognition Rome Italy pp 525ndash530 (1988)11 O Richard and E P Hart Use of the Hough transform to detect lines and curves in pictures
Commun ACM 15 11ndash15 (1972)12 S Tagzout K Achour and O Djekoune Hough transform algorithm for FPGA implementation
J Signal Process 81 1295ndash 1301 (2001)13 K Achour and M Benkhelif Contribution to contours restauration by Markovian approach in
Proc ICSPAT rsquo96 Boston MA p 1017 (1996)14 K Achour and R Belaiden A new stereo matching using multiscale algorithm in Proc Int
Conf on Recent Advances in Mechatronics Istambul (1995)15 O Djekoune M Benkhelif and N Zenati Application de la transformeacutee de Hough dans
lrsquoappariement des images de scegravenes in Proc 5e Colloque Africain sur la Recherche enInformatique Antananarivo Madagascar pp 343ndash 350 (2000)
16 K Achour and S Monchaud Creation of 3D data base with a panoramic multisensor system inProc 2nd World Conf on Robotics Research Scottsdale AZ p 86-758MS86-75819 (1986)
17 T Tsumura Survey of automated guided vehicle in Japanese factory in Proc IEEE Int Confon Robotics and Automation San Francisco CA Vol II p 1329 (1986)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
102 K Achour and A O Djekoune
ABOUT THE AUTHORS
A Oualid Djekoune was born in Algiers (Algeria) in October 1969 He receivedthe Electronic Engineering degree from the Electronic Institute of USTHBAlgiers in 1993 and the Magister degree in signals and systems in real time fromUSTHB in 1997 Today he works as a Researcher in vision equipment in theRobotics and Arti cial Intelligence Laboratory of the Development Center of theAdvanced Technologies
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958 He received his PhDdegree in Electrical and Computer Engineering from Rennes University (France)in 1987 the MS Degree in Electrical Engineering from ENSM Nantes (France)and the BS at the Polytechnic School in Algiers (Algeria) He is the author ofnumerous publications for conference proceedings and journals He has beenactive in research on perception computer vision and pattern recognition Hehas participated in many conferences as an author a panel member and a sessionchairman He has been a reviewer for many conferences He was a Head of theRobotics and AI laboratory in the High Research Center in Algiers
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
96 K Achour and A O Djekoune
Figure 9 The navigation strategy
If one corridorrsquos border is detected
XDest DXP1 C XP2
2
YDest DY P1 C Y P2
2C iexcl1nLarg
(6)
with
n Draquo
0 if the right cooridorrsquos border is detected1 if the left cooridorrsquos border is detected
and Larg is the half robot thicknessThree operations are therefore necessary for onboard control of the vehicle
sup2 A rst deviation (microD1 or microG1) toward the target station in the corridor
sup2 The distance D to travel in order to arrive at this target station
sup2 A second deviation (microD2 or microG2) so that the vehicle xes the corridorrsquos end
These operations are calculated from the following equationsThe rst deviation micro1
micro1 D tgiexcl1
AacuteshyshyYDest
shyshy
XDest
Draquo
microG1 Left deviation if YDest gt 0microD1 Right deviation if YDest lt 0
(7)
The second deviation micro2
micro2 Draquo
microG2 Left deviationmicroD2 Right deviation
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both left and right corridorrsquos borders frac12A microA and frac12B microB are detectedLet
1micro DmicroB iexcl microA
2
microG2 D
8gtgtgtgtgtlt
gtgtgtgtgt
microD1 C microB iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA
microD1 C microA C 1micro iexclfrac14
2if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2lt microD1
sup2
microB iexcl microG1 iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2gt microG1
sup2
microD2 D
8gtgtgtgtgtlt
gtgtgtgtgt
frac14
2iexcl microA C microD1 iexcl 1micro if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2gt microD1
sup2
microG1 iexcl microB C 1micro Cfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2lt microG1
sup2
microG1 iexcl microA iexcl 1micro Cfrac14
2if microB lt frac14 iexcl microA
(10)
Once these three operations are executed a new corridorrsquos image acquisition ismade and then segmented After identi cation of the corridorrsquos borders the CESAvehicle localizes itself again and then follows a path provided by the off-board partof the system which always permits to it to stay at the middle of the corridor (robotrsquosconsign position) The cycle restarts until the CESA vehicle arrives at its nalstation
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and4 have been implemented on a computer-aided design workstation The program iswritten in C language and runs on a Sun Sparc station The current version of theprogram requires as input data the state information from the vehicle It producesas output the run operations downloaded via radio waves to the onboard real-time
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
98 K Achour and A O Djekoune
Table 1The CESA navigation experimental results
Image The polar coordi-nates of the de-tected borders
ThecourseD
Firstdeviationmicro1
Seconddeviationmicro2
Figure
Right Left (mm)frac12R microR frac12L microL
Res1 98 50plusmn 117 130plusmn 7168 117plusmn 166plusmn 10on the left on the right
Resd1 72 50plusmn 125 120plusmn 7314 268plusmn 768plusmn 11on the left on the right
Resg1 115 60plusmn 100 140plusmn 7180 078plusmn 1078plusmn 12on the right on the left
Resg3 mdash iexcl10 120plusmn 7136 765plusmn 5235plusmn 13on the right on the right
system The diagnostic data from the vehicle may be downloaded to the supervisorycontroller for analysis
To illustrate the bene ts of our approach we have tested these procedures inCESA navigation in our laboratoryrsquos corridor The vehicle moves with a velocityof approximately 048 ms within an area of about 12 pound 20 m The vision systemoutput image consists of 464 pound 256 pixels and two gray levelspixels It is thensegmented by the HT with 10plusmn step sizes for the micro dimension and one pixel for thefrac12 dimension
Table 1 illustrates the experimental results for four corridor station images(Figs 10ndash13) taken at four different stations in the corridor The results have beenexecuted with the following parameters
sup2 The focal distance of the camera cedil D 25
sup2 The camera slant angle that makes with the horizontal reg D 9plusmn
sup2 The vertical distance between the camera and the ground d D 550
sup2 The column number of the screen rx D 464
sup2 The row number of the screen ry D 256
sup2 The component of the vector active from the reference point of the mobile robotto the image plan center of the camera dx dy dz D 317 0 240
In the Figs 10ndash13 we represented the detected corridorrsquos borders by the fullsegments the target station in the corridor by a small square and the orientationthat the vehicle must have once at this target station by a small triangle
During this experience the mobile robot is completely autonomous but thisautonomy is sensitive both to environment lighting and the corridorrsquos bordersdetection algorithm Several algorithms have been developed for solving thisproblem (i) the algorithm developed by [13] to restore the gray levels imagesto obtain some best edge images and best segmentation and (ii) the algorithm
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 99
Figure 10 Image taken by the camera when CESA is in the middle of the corridor
Figure 11 Image taken by the camera when CESA is in the middle and near to the right wall of thecorridor
developed by [6 14 15] using the stereo matching technique to compute the 3Dmodel of the environment to allow to the mobile robot to control its trajectoryand eventually have automatic guidance These algorithms are now the in theexperimentation stage in our laboratory
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
100 K Achour and A O Djekoune
Figure 12 Image taken by the camera when CESA is in the middle and near to the left wall of thecorridor
Figure 13 Image taken by the camera when CESA is in the middle and facing the left wall of thecorridor
6 CONCLUSION
Guidance and control procedures to enable a robot vehicle to accurately andautonomously navigate around a structured environment given a set of operationsfrom the host supervisory controller have been described These procedures havebeen implemented in an experimental robot vehicle lsquoCESArsquo They permit the vehicleto correct its drift by direct observation using a unique embarked CCD camera
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 101
The rotationtranslation parameters are computed taking into account the robotneighborhood parameters They are obtained by the HT from the environment sceneimages The HT is used because it is a well-known method for the detection ofparametric curves in binary images and it was recognized as an important means ofsearching for objects and features in binary images
During this procedure the mobile robot is completely autonomous but this au-tonomy is sensitive both to environment lighting and the corridor borders detectionalgorithm Several algorithms have been developed for solving to this problem egthe algorithms to restore the observed scene and to compute the 3D model of themobile robot environment
REFERENCES
1 A Allemagne P Dario E Guglielmelli B Allotta and M C Carrozza Robotics for medicalapplications IEEE Robotics Automat Mag 3 (3) 44ndash56 (1996)
2 E D Dickmanns and V Graefe Dynamic monocular machine vision Machine Vision Applica-tion 1 (4) 223ndash240 (1988)
3 P A Grandjean Robert and V De Saint 3-D modeling of indoor scenes by fusion of noisy rangeand stereo data in Proc IEEE Int Conf on Robotics and Automation Scottsdale AZ Vol 2pp 681ndash687 (1989)
4 W H Tang and K T Song Environment perception for a mobile robot using double Ultrasonicsensor and a CCD camera IEEE Trans Ind Electron 46 372ndash379 (1996)
5 L E Weiss A C Sanderson and C P Neuman Dynamic sensor-based control of robots withvisual feedback J Robotics Automat 3 404ndash417 (1987)
6 Y Aloimonos Is visual reconstruction necessary Obstacle avoidance without passive rangingJ Robotic Syst 9 843ndash 858 (1992)
7 A W Sleeswyk Vitruviusrsquo odometer Sci Am (October) 188ndash 198 (1981)8 Tarel Jean-PhilippeCalibrationde cameacutera fondeacutee sur les ellipses Research Report 2200 INRIA
(1994)9 O Djekoune Localisation et guidage par cameacutera embarqueacutee drsquoun robot mobile Thegravese de
Magister ref0797_EL Institut drsquoeacuteleacutectronique USTHB Algiers (1997)10 H Mori H Ishiguro and S Kotani A mobile robot strategy applied to HARINBU-4 in Proc
9th IEEE Int Conf on Pattern Recognition Rome Italy pp 525ndash530 (1988)11 O Richard and E P Hart Use of the Hough transform to detect lines and curves in pictures
Commun ACM 15 11ndash15 (1972)12 S Tagzout K Achour and O Djekoune Hough transform algorithm for FPGA implementation
J Signal Process 81 1295ndash 1301 (2001)13 K Achour and M Benkhelif Contribution to contours restauration by Markovian approach in
Proc ICSPAT rsquo96 Boston MA p 1017 (1996)14 K Achour and R Belaiden A new stereo matching using multiscale algorithm in Proc Int
Conf on Recent Advances in Mechatronics Istambul (1995)15 O Djekoune M Benkhelif and N Zenati Application de la transformeacutee de Hough dans
lrsquoappariement des images de scegravenes in Proc 5e Colloque Africain sur la Recherche enInformatique Antananarivo Madagascar pp 343ndash 350 (2000)
16 K Achour and S Monchaud Creation of 3D data base with a panoramic multisensor system inProc 2nd World Conf on Robotics Research Scottsdale AZ p 86-758MS86-75819 (1986)
17 T Tsumura Survey of automated guided vehicle in Japanese factory in Proc IEEE Int Confon Robotics and Automation San Francisco CA Vol II p 1329 (1986)
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
102 K Achour and A O Djekoune
ABOUT THE AUTHORS
A Oualid Djekoune was born in Algiers (Algeria) in October 1969 He receivedthe Electronic Engineering degree from the Electronic Institute of USTHBAlgiers in 1993 and the Magister degree in signals and systems in real time fromUSTHB in 1997 Today he works as a Researcher in vision equipment in theRobotics and Arti cial Intelligence Laboratory of the Development Center of theAdvanced Technologies
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958 He received his PhDdegree in Electrical and Computer Engineering from Rennes University (France)in 1987 the MS Degree in Electrical Engineering from ENSM Nantes (France)and the BS at the Polytechnic School in Algiers (Algeria) He is the author ofnumerous publications for conference proceedings and journals He has beenactive in research on perception computer vision and pattern recognition Hehas participated in many conferences as an author a panel member and a sessionchairman He has been a reviewer for many conferences He was a Head of theRobotics and AI laboratory in the High Research Center in Algiers
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
Localization and guidance of a mobile robot 97
If only the right corridorrsquos border frac12A microA is detected
microG2 Draquo
microA iexcl microG1 if microG1 6 microAmicroA C microD1 if microD1 6 microA
microD2 D microG1 iexcl microA if microG1 gt microA
(8)
If only the left corridorrsquos border frac12B microB is detected
microD2 Draquo
microG1 C frac14 iexcl microB if microG1 6 frac14 iexcl microBfrac14 iexcl microB iexcl microD1 if microD1 6 frac14 iexcl microB
microG2 D microD1 iexcl frac14 C microB if microD1 gt frac14 iexcl microB
(9)
If both left and right corridorrsquos borders frac12A microA and frac12B microB are detectedLet
1micro DmicroB iexcl microA
2
microG2 D
8gtgtgtgtgtlt
gtgtgtgtgt
microD1 C microB iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA
microD1 C microA C 1micro iexclfrac14
2if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2lt microD1
sup2
microB iexcl microG1 iexcl 1micro iexclfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2gt microG1
sup2
microD2 D
8gtgtgtgtgtlt
gtgtgtgtgt
frac14
2iexcl microA C microD1 iexcl 1micro if microB lt frac14 iexcl microA and
plusmnplusmnfrac14
2iexcl microA iexcl 1micro
sup2gt microD1
sup2
microG1 iexcl microB C 1micro Cfrac14
2if microB gt frac14 iexcl microA and
plusmnplusmnmicroB iexcl
frac14
2iexcl 1micro
sup2lt microG1
sup2
microG1 iexcl microA iexcl 1micro Cfrac14
2if microB lt frac14 iexcl microA
(10)
Once these three operations are executed a new corridorrsquos image acquisition ismade and then segmented After identi cation of the corridorrsquos borders the CESAvehicle localizes itself again and then follows a path provided by the off-board partof the system which always permits to it to stay at the middle of the corridor (robotrsquosconsign position) The cycle restarts until the CESA vehicle arrives at its nalstation
5 EXPERIMENTAL RESULTS
The CESA vehicle localization and navigation procedures outlined in Sections 3 and4 have been implemented on a computer-aided design workstation The program iswritten in C language and runs on a Sun Sparc station The current version of theprogram requires as input data the state information from the vehicle It producesas output the run operations downloaded via radio waves to the onboard real-time
Dow
nloa
ded
by [
Dal
hous
ie U
nive
rsity
] at
19
48 1
2 N
ovem
ber
2014
98 K Achour and A O Djekoune
Table 1The CESA navigation experimental results
Image The polar coordi-nates of the de-tected borders
ThecourseD
Firstdeviationmicro1
Seconddeviationmicro2
Figure
Right Left (mm)frac12R microR frac12L microL
Res1 98 50plusmn 117 130plusmn 7168 117plusmn 166plusmn 10on the left on the right
Resd1 72 50plusmn 125 120plusmn 7314 268plusmn 768plusmn 11on the left on the right
Resg1 115 60plusmn 100 140plusmn 7180 078plusmn 1078plusmn 12on the right on the left
Resg3 mdash iexcl10 120plusmn 7136 765plusmn 5235plusmn 13on the right on the right
system The diagnostic data from the vehicle may be downloaded to the supervisorycontroller for analysis
To illustrate the bene ts of our approach we have tested these procedures inCESA navigation in our laboratoryrsquos corridor The vehicle moves with a velocityof approximately 048 ms within an area of about 12 pound 20 m The vision systemoutput image consists of 464 pound 256 pixels and two gray levelspixels It is thensegmented by the HT with 10plusmn step sizes for the micro dimension and one pixel for thefrac12 dimension
Table 1 illustrates the experimental results for four corridor station images(Figs 10ndash13) taken at four different stations in the corridor The results have beenexecuted with the following parameters
• The focal distance of the camera: λ = 25.
• The camera slant angle with respect to the horizontal: α = 9°.
• The vertical distance between the camera and the ground: d = 550.
• The number of columns of the screen: rx = 464.
• The number of rows of the screen: ry = 256.
• The components of the vector from the reference point of the mobile robot to the image-plane center of the camera: (dx, dy, dz) = (317, 0, 240).
In Figs 10–13, the detected corridor borders are represented by full segments, the target station in the corridor by a small square, and the orientation that the vehicle must have once at this target station by a small triangle.
Figure 10. Image taken by the camera when CESA is in the middle of the corridor.

Figure 11. Image taken by the camera when CESA is in the middle and near to the right wall of the corridor.

During this experiment the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem: (i) the algorithm developed in [13] to restore the gray-level images in order to obtain better edge images and better segmentation, and (ii) the algorithm developed in [6, 14, 15], which uses a stereo matching technique to compute a 3D model of the environment, allowing the mobile robot to control its trajectory and eventually achieve automatic guidance. These algorithms are currently at the experimentation stage in our laboratory.
Figure 12. Image taken by the camera when CESA is in the middle and near to the left wall of the corridor.
Figure 13. Image taken by the camera when CESA is in the middle and facing the left wall of the corridor.
6. CONCLUSION
Guidance and control procedures that enable a robot vehicle to navigate accurately and autonomously around a structured environment, given a set of operations from the host supervisory controller, have been described. These procedures have been implemented in an experimental robot vehicle, 'CESA'. They permit the vehicle to correct its drift by direct observation, using a single embarked CCD camera.
The rotation/translation parameters are computed taking into account the robot neighborhood parameters, which are obtained by the HT from images of the environment scene. The HT is used because it is a well-known method for the detection of parametric curves in binary images, and it is recognized as an important means of searching for objects and features in binary images.
During this procedure the mobile robot is completely autonomous, but this autonomy is sensitive both to the environment lighting and to the corridor border detection algorithm. Several algorithms have been developed to address this problem, e.g. algorithms to restore the observed scene and to compute the 3D model of the mobile robot environment.
REFERENCES
1. A. Allemagne, P. Dario, E. Guglielmelli, B. Allotta and M. C. Carrozza, Robotics for medical applications, IEEE Robotics Automat. Mag. 3 (3), 44–56 (1996).
2. E. D. Dickmanns and V. Graefe, Dynamic monocular machine vision, Machine Vision Application 1 (4), 223–240 (1988).
3. P. Grandjean and A. Robert de Saint Vincent, 3-D modeling of indoor scenes by fusion of noisy range and stereo data, in: Proc. IEEE Int. Conf. on Robotics and Automation, Scottsdale, AZ, Vol. 2, pp. 681–687 (1989).
4. W. H. Tang and K. T. Song, Environment perception for a mobile robot using double ultrasonic sensors and a CCD camera, IEEE Trans. Ind. Electron. 46, 372–379 (1996).
5. L. E. Weiss, A. C. Sanderson and C. P. Neuman, Dynamic sensor-based control of robots with visual feedback, J. Robotics Automat. 3, 404–417 (1987).
6. Y. Aloimonos, Is visual reconstruction necessary? Obstacle avoidance without passive ranging, J. Robotic Syst. 9, 843–858 (1992).
7. A. W. Sleeswyk, Vitruvius' odometer, Sci. Am. (October), 188–198 (1981).
8. J.-P. Tarel, Calibration de caméra fondée sur les ellipses, Research Report 2200, INRIA (1994).
9. O. Djekoune, Localisation et guidage par caméra embarquée d'un robot mobile, Thèse de Magister, ref. 0797_EL, Institut d'électronique, USTHB, Algiers (1997).
10. H. Mori, H. Ishiguro and S. Kotani, A mobile robot strategy applied to HARUNOBU-4, in: Proc. 9th IEEE Int. Conf. on Pattern Recognition, Rome, Italy, pp. 525–530 (1988).
11. R. O. Duda and P. E. Hart, Use of the Hough transform to detect lines and curves in pictures, Commun. ACM 15, 11–15 (1972).
12. S. Tagzout, K. Achour and O. Djekoune, Hough transform algorithm for FPGA implementation, Signal Processing 81, 1295–1301 (2001).
13. K. Achour and M. Benkhelif, Contribution to contours restoration by Markovian approach, in: Proc. ICSPAT '96, Boston, MA, p. 1017 (1996).
14. K. Achour and R. Belaiden, A new stereo matching using multiscale algorithm, in: Proc. Int. Conf. on Recent Advances in Mechatronics, Istanbul (1995).
15. O. Djekoune, M. Benkhelif and N. Zenati, Application de la transformée de Hough dans l'appariement des images de scènes, in: Proc. 5e Colloque Africain sur la Recherche en Informatique, Antananarivo, Madagascar, pp. 343–350 (2000).
16. K. Achour and S. Monchaud, Creation of 3D data base with a panoramic multisensor system, in: Proc. 2nd World Conf. on Robotics Research, Scottsdale, AZ, paper MS86-758 (1986).
17. T. Tsumura, Survey of automated guided vehicle in Japanese factory, in: Proc. IEEE Int. Conf. on Robotics and Automation, San Francisco, CA, Vol. II, p. 1329 (1986).
ABOUT THE AUTHORS
A. Oualid Djekoune was born in Algiers (Algeria) in October 1969. He received the Electronic Engineering degree from the Electronic Institute of USTHB, Algiers, in 1993 and the Magister degree in signals and systems in real time from USTHB in 1997. Today he works as a Researcher in vision equipment in the Robotics and Artificial Intelligence Laboratory of the Development Center of the Advanced Technologies.
Karim Achour was born in Tizi-Ouzou (Algeria) in 1958. He received his PhD degree in Electrical and Computer Engineering from Rennes University (France) in 1987, the MS degree in Electrical Engineering from ENSM, Nantes (France), and the BS at the Polytechnic School in Algiers (Algeria). He is the author of numerous publications in conference proceedings and journals, and has been active in research on perception, computer vision and pattern recognition. He has participated in many conferences as an author, a panel member and a session chairman, and has been a reviewer for many conferences. He was Head of the Robotics and AI Laboratory in the High Research Center in Algiers.