
Journal of Intelligent and Robotic Systems 30: 227–248, 2001. © 2001 Kluwer Academic Publishers. Printed in the Netherlands.


Mobile Robot with Wide Capture Active Laser Sensor and Environment Definition

J. L. LÁZARO, A. GARDEL, M. MAZO, C. MATAIX, J. C. GARCÍA and R. MATEOS
Electronics Department, Alcalá University, Campus Universitario, s/n. 28805 Alcalá de Henares, Madrid, Spain; e-mail: [email protected]

(Received: 18 May 1999; in final form: 22 May 2000)

Abstract. The sensorial system developed is based on the emission of an infrared beam, recovering the reflected beam and measuring distances to significant points in the environment. This system is able to detect and model obstacles in unknown environments. Features of the capture system allow large fields of view to be captured at short distances, aiding the robot's mobility. Several algorithms are used to find the formation centre of the image and so model the distortion introduced by the wide-angle lens. Parameters of the optical model are inserted into the calibration matrix to obtain the camera model. We also present an algorithm which extracts the points of the image that belong to the laser beam. All of the above works in unknown environments with variable conditions of illumination. The robot's trajectory is obtained and modified in real time, with a spline function, using four different reference systems. Finally, empirical tests have been carried out on a mobile platform, using a CCD camera with wide-angle lenses of 65° and 110°, a 15 mW laser emitter and a frame grabber for image processing.

Key words: robots, laser beam, wide-angle optics, environments modelling, obstacle, telemetry, trajectory planning.

1. Introduction

One of the models for an unknown environment in which a mobile robot is to be guided obtains 3-dimensional coordinates by both emitting and capturing structured light (Jarvis, 1983). However, before this can be achieved, the entire system of both detector and emitter must be jointly calibrated and referenced to the same original coordinates.

If the structured light emitted is a light plane, the third coordinate can be deduced from each of the 2-dimensional CCD pixels resulting from the impact of the above-said plane in the environment.

Obtaining as many coordinates as possible allows one to calculate the physical limits of the environment. It also makes possible the recognition of the precise location of objects that may hinder the robot's movements and allows the robot to circumnavigate them without colliding. Related works, making use of similar techniques to determine the position of objects and their orientation with respect to a robot, can be found in (Blais et al., 1988; Sato and Otsuki, 1993; Kemmotsu and Kanade, 1995; Lázaro et al., 1998). All these papers have been developed using small field depths and homogeneous background images. Motyl et al. (1993) and Khadraoui et al. (1996) made use of these techniques to position a robot arm with prior knowledge of the position of polygonal and spherical objects in an environment. Evans et al. (1990) and King (1990) used a structured light plane to detect obstacles encountered by a mobile robot. Fabrizi and Ulivi (1998) used it to create a plan of the robot's workspace by plotting its movements, but only in environments that allow the reflection of the laser beam to be segmented by a simple matching.

Other projects use the deflection of a laser beam to generate different light patterns. Gustavson and Davis (1995) and RIEGL (1994) obtained good resolutions within a range of several metres. However, measurement time becomes too long when the number of points to be converted to 3D coordinates increases. Moreover, in percentage terms the number of mistakes made over short distances is large; also, the black zones between samples at long distances are extensive. If the generation pattern is combined with geometric detection techniques (Loney and Beach, 1995), the measurements obtained are very accurate, but this technique can only be applied in small environments with a homogeneous background and well-defined objects.

The map of distances obtained may be used to plot a route through the obstacles, updating the information as different obstacles come within the robot's field of vision. Among the many path options available, the one that gives the greatest guarantee of safety should be chosen.

This paper builds paths based on pieces of cubic-spline curves. Latombe (1993) has developed a navigation algorithm with which the space configuration of the surrounding environment can be obtained. Koch (1985), Oommen (1987) and Krogh and Feng (1989) developed some path planning algorithms to extend the objective point. Payton (1986), Nitao and Parodi (1986), and Brooks (1986) work with dynamic trajectories, which are only recalculated when the surrounding environment changes. Whilst the precise position of the robot must be known at all times, it is also necessary to update the relative positions of the objects that crop up in its path in order to steer it towards its final goal. Finally, control of the robot's steering and movement will enable it to follow a particular path inside the global control loop.

This application involves the need to recognise environments with a large field of vision at short distances, thereby justifying the use of a wide-angle lens. As this type of optic has no linear response, traditional camera calibration methods cannot be used. To correct this fault the lens response has been modelled.

In (Theodoracatos and Calkins, 1993; Willson, 1994) the problem of determining the parameters of the system, such as the centre of image formation, has been solved by setting the lens distortion at a long focal length in order to take measurements. Stein (1993) has used geometrical spheres to find out the aspect relation and parallel lines to solve for lens distortion, main point and focal length. This process limits the position of the image's centre to a square of 20 × 20 pixels (Lázaro, 1998). The distortion aberration for wide-angle optics is modelled, taking into consideration its symmetrical axial revolution system. The centre of the image can be found within a margin of error of just 8 pixels and a linear error of 1%.

At the present time, there are several research projects attempting to determine the physical boundaries of the robot's workspace by calculating the amount of free space available. Fabrizi and Ulivi (1998) use a CCD sensor and a laser plane; they segment the reflected laser beam according to its brightness, but they work in structured environments. In (Lozano-Pérez, 1983; Latombe, 1993; Krogh and Feng, 1989) some projects are presented concerning the generation of virtual objects and their configuration (with prior knowledge of their location), reducing the navigation of the geometric robot to the movement of a single point. They also put forward algorithms for generating an optimum path using criteria such as minimum distance, potential fields and so on.

Tests carried out prove that the system developed is able to follow walls, cross doors and skirt obstacles, redefining its course online and arriving easily and accurately at a given goal. The aim of this work is to set up this system aboard an autonomous mobile robot that will be capable of avoiding head-on crashes and reaching its final objective without colliding.

2. Robot Guidance

2.1. MODELLING WIDE-ANGLE OPTICS

As an alternative, we propose to calculate the parameters of the optical system model based on knowledge of the image formation centre. The model considers all the errors (distortion without revolution symmetry, etc.), which makes it necessary to model the lens response in all directions.

Searching for the sensor centre. The algorithm consists of capturing the image of a scenario with parallel lines drawn on it. The Hough transform is applied to each line, analysing its curvature by means of the accumulation of points and the adjacency of large accumulations within the transformed space. The above procedure is repeated to capture straight lines in different directions (Lázaro, 1998). The image formation centre is a pixel located between the two straightest lines of each capture. The Hough transform is also applied to the lines crossing the intersection zone in different directions.
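As an illustration of the centre-search idea, the Python/NumPy sketch below replaces the Hough-accumulation curvature test with a simpler straightness measure: each detected stroke is fitted with a line, the stroke with the lowest residual in each capture direction is taken as the straightest, and the two fitted lines are intersected to approximate the formation centre. Function names and the residual criterion are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def straightness_residual(points):
    """RMS distance of a stroke's edge points to their best-fit line.
    A lower residual means a straighter stroke (proxy for the Hough test)."""
    pts = np.asarray(points, dtype=float)
    centred = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    direction = vt[0]                              # principal direction
    normal = np.array([-direction[1], direction[0]])
    return np.sqrt(np.mean((centred @ normal) ** 2))

def line_through(points):
    """Best-fit line of a stroke as (point_on_line, unit_direction)."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - mean, full_matrices=False)
    return mean, vt[0]

def estimate_centre(strokes_horizontal, strokes_vertical):
    """Intersect the straightest stroke of each capture direction; the
    intersection approximates the image formation centre."""
    best_h = min(strokes_horizontal, key=straightness_residual)
    best_v = min(strokes_vertical, key=straightness_residual)
    p1, d1 = line_through(best_h)
    p2, d2 = line_through(best_v)
    # solve p1 + t*d1 = p2 + s*d2 for (t, s)
    A = np.column_stack([d1, -d2])
    t, _ = np.linalg.solve(A, p2 - p1)
    return p1 + t * d1
```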

Bidimensional correction. A relation is made between the real 3-dimensional coordinates and the 2-dimensional CCD camera coordinates (Figure 1, dashed lines). We assume that the lens produces two different errors at each point of the captured image: one corresponds to a phase error and the other to a modulus error. Correction of these errors depends not only on the distance from the image formation centre, but also on the position of the points to be analysed. The angle at which the projection of a real point is received corresponds to the angle of the real coordinates. Data obtained from multiple points chosen at random are used to calculate the correction polynomials in Equations (1) and (2) by least-squares regression.

Figure 1. Ideal curve and error extraction.

The following procedure will be employed. The derivative of the polynomial at point (0, 0) is calculated, thus giving the hypothetical linear response (Figure 1, continuous line). Then, tables can be made with the error produced in pixels. By extrapolating the distance in the scene to the difference between the pixels captured on the camera, it is possible to locate the ideal pixel, thereby achieving field-depth independence. In this way, errors at different points can be obtained. Finally, working with the table of errors obtained, the correction coefficients can be calculated.

ε_phase = au + bv + cuv + du² + ev² + fuv² + gu²v + hu³ + jv³ + · · · ,    (1)

ε_modulus = au + bv + cuv + du² + ev² + fuv² + gu²v + hu³ + jv³ + · · · .    (2)
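A minimal sketch of this fitting step in Python/NumPy, assuming the error tables described above (one error value per sampled pixel) are already available as arrays; the function names are illustrative and NumPy's least-squares solver stands in for the regression.

```python
import numpy as np

def poly_terms(u, v):
    """Monomials of Equations (1)-(2) up to third order (no constant term)."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.column_stack([u, v, u*v, u**2, v**2, u*v**2, u**2*v, u**3, v**3])

def fit_correction(u, v, errors):
    """Least-squares coefficients (a ... j) mapping pixel position to error.
    Called once with the phase-error table and once with the modulus-error table."""
    A = poly_terms(u, v)
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(errors, float), rcond=None)
    return coeffs
```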

The errors of a sensor plane pixel are calculated as follows:
1. Phase β of the real point is calculated using the centre of the grid (3), and phase α of its corresponding point on the CCD image is calculated with reference to the image formation centre (4). Here p_i (i = u, v) is the coordinate of the pixel inside the image, pc_i is the coordinate of the image's centre pixel, ts_i is the size of the sensor, and pt_i the total number of pixels.

β = arctg(y/x),    (3)

α = arctg(u/v) = arctg[(p_v − pc_v)(ts_v/pt_v) / ((p_u − pc_u)(ts_u/pt_u))].    (4)

2. The modulus error can be calculated in the following way. Bearing in mind that very close to the optical centre of the lens there are almost no errors, the image of a point captured by the sensor at a short distance from the centre can be taken as error-free. Proportionately, the ideal modulus is deduced using the image of any point in the scene. Subtracting this from the real modulus, the error is

ε_phase = β − α,    (5)

ε_modulus = (k1/k2)·√(x² + y²) − √(u² + v²).    (6)

The phase and modulus of the corrected pixel can be expressed as

α′ = α + ε_phase,    (7)

mod′ = mod + ε_modulus.    (8)

The value of the corrected coordinates can be expressed as

u′ = mod′ · sin α′,    (9)

v′ = mod′ · cos α′.    (10)
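The correction chain of Equations (3)-(10) can be summarised in a short Python routine. This is a sketch under the assumption that u and v are the sensor-plane offsets of the pixel along their own axes and that the polynomial coefficients come from a fit such as the one sketched earlier; the names are illustrative.

```python
import numpy as np

def _terms(u, v):
    """Monomial basis shared with the correction polynomials (1)-(2)."""
    return np.array([u, v, u*v, u**2, v**2, u*v**2, u**2*v, u**3, v**3])

def correct_pixel(pu, pv, pcu, pcv, tsu, tsv, ptu, ptv, phase_coeffs, mod_coeffs):
    """Express a pixel in sensor coordinates relative to the formation centre,
    add the polynomial phase/modulus corrections (Eqs. 5-8), and return the
    corrected coordinates (u', v') of Eqs. (9)-(10)."""
    u = (pu - pcu) * (tsu / ptu)            # sensor-plane offsets (cf. Eq. 4)
    v = (pv - pcv) * (tsv / ptv)
    alpha = np.arctan2(u, v)                # phase alpha = arctg(u/v)
    mod = np.hypot(u, v)                    # modulus of the captured pixel
    alpha_c = alpha + float(_terms(u, v) @ phase_coeffs)   # Eq. (7)
    mod_c = mod + float(_terms(u, v) @ mod_coeffs)         # Eq. (8)
    return mod_c * np.sin(alpha_c), mod_c * np.cos(alpha_c)  # Eqs. (9)-(10)
```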

2.2. MATHEMATICAL MODEL OF THE CAMERA LENS WITH 2-DIMENSIONAL CORRECTION

Lens correction can be included in a traditional camera modelling system, so that a matrix model can be obtained for nonlinear optical systems. A traditional camera model can be expressed in homogeneous coordinates

u = U/t;    v = V/t    (11)

in the following form:

(C11 C12 C13 C14; C21 C22 C23 C24; C31 C32 C33 C34) · (X, Y, Z, 1)ᵀ = (U, V, t)ᵀ.    (12)

Developing the expressions of U′ and V′ (for systems with distortion), in order to model the parameters in accordance with (9) and (10), we have

U′ = [(U² + V²)^(1/2) + aU + bV + cUV + · · ·] · sin[arctg(V/U) + a′U + b′V + c′UV + · · ·],    (13)

V′ = [(U² + V²)^(1/2) + aU + bV + cUV + · · ·] · cos[arctg(V/U) + a′U + b′V + c′UV + · · ·].    (14)

If the coordinates of the points represented as

U′ = δ1(U, V);    V′ = δ2(U, V)    (15)

are inserted into the camera's calibration matrix, the following system of Equations (16) and (17) is obtained:

C11·X + C12·Y + C13·Z − C14 − δ1(U, V)·(C31·X + C32·Y + C33·Z + C34) = 0,    (16)

C21·X + C22·Y + C23·Z − C24 − δ2(U, V)·(C31·X + C32·Y + C33·Z + C34) = 0.    (17)

This system of equations, together with the equation of the laser light strip, gives us the real coordinates (x, y, z) with the optical aberrations corrected.

2.3. CCD-LASER COUPLING. OBTAINING 3D COORDINATES: POSITION AND ORIENTATION

The usual way to obtain 3D coordinates is to solve the equations (18) given by the camera model for each laser point captured, together with Equation (19) of the laser plane. This gives three equations which, in turn, give the coordinates (x, y, z):

a1x + b1y + c1z + d1 = 0,
a2x + b2y + c2z + d2 = 0,    (18)

Ax + By + Cz + 1 = 0.    (19)
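A sketch of this triangulation step in Python/NumPy, written with the usual homogeneous-coordinate sign convention: the distortion-corrected image coordinates δ1, δ2 of Equation (15) define the two back-projection constraints of Equations (16)-(17), which are solved together with the laser plane. The 3 × 3 linear solve below is an assumption about the implementation, not a routine taken from the paper.

```python
import numpy as np

def triangulate(C, delta1, delta2, plane):
    """Intersect the back-projection ray of a corrected pixel with the laser
    plane. C is the 3x4 calibration matrix, (delta1, delta2) the corrected
    image coordinates, plane = (A, B, C) from Eq. (19)."""
    # rows of the two camera constraints: (C1k - delta1*C3k)·[X Y Z 1] = 0,
    # and similarly for delta2 (standard homogeneous sign convention assumed)
    r1 = C[0] - delta1 * C[2]
    r2 = C[1] - delta2 * C[2]
    A_, B_, C_ = plane
    M = np.array([r1[:3], r2[:3], [A_, B_, C_]], dtype=float)
    rhs = np.array([-r1[3], -r2[3], -1.0])
    return np.linalg.solve(M, rhs)          # scene point (x, y, z)
```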

The aim of the transmitter-camera coupling is to calibrate both systems to the same reference whilst keeping the error to a minimum. Once the optimal position has been analysed, the transmitter is coupled to a global system and calibrated to a known reference (Figure 2).

Figure 2. System scheme and obtaining the plane equation.

Before the plane equation can be obtained, all the light that is not emitted by the laser must be eliminated from the image. It is also convenient to reduce the thickness of the laser beam to one pixel. To begin with, an optical filter is used to eliminate all light except the wavelength emitted as structured light. An active search is then made to locate very bright points belonging to the laser, and a spatial filter is passed over the image. Points belonging to horizontal or near-horizontal lines are enhanced, and the threshold is set as a function of the mean grey level. Thus, all those points that do not belong to narrow beams are eliminated. Afterwards, using the plane equation, which is always the same, and the equations given by the camera model for each laser point, the system of equations is solved and the coordinates of the captured laser points can be deduced. The depth of each pixel is the z-coordinate of the corresponding point in the scene. Using this application, any static object in the environment, the recognition of the physical boundaries, and the robot's errors of position or orientation can be processed and used in the control loop.

2.4. SEGMENTATION OF THE REFLECTION OF THE LASER BEAM

A large part of the light captured by the detector is external to the laser beam (LB), usually coming from the sun, area lighting or other incidental light sources, and produces intense spots with random forms on the image. To find the LB, the different absorption indices presented by the different materials in the scene must be taken into account. Changes in the level of intensity of the LB can be used as an indication of the distance at which the objects are found, since the intensity of the light emitted reduces as the distance from the source increases.

The light emitted by the LB becomes more attenuated as the angle of separation formed with the main axis increases. It also becomes attenuated when the distance to the point of impact is great. For this reason, a threshold which is dependent upon the position of the pixel has been chosen to analyse each case. In accordance with Equation (20), a pixel I(i, j) of the image will be considered as part of the LB if it fulfils two conditions. Firstly, the grey level value of that pixel must be greater than the threshold T(i, j) obtained by using Equation (21). The second condition takes into account the fact that the luminosity of the LB will be different from that of the surrounding area in the environment. The binary image G(i, j) with the points that fulfil both conditions can be expressed as

If I(i, j) > T(i, j)  and  I(i, j) − Average(local environment) > difference  ⇒  G(i, j) = 1.    (20)

The expression for the threshold value proposed and employed in the tests, starting from the mean grey level (MG) of the image, is

T(i, j) = MG · k1 · e^(−(1/2)·(δ(j − j_max/2)/σ)²) · (1 − k2·ϑ(i)/i_max).    (21)

Once the threshold values along the horizontal and vertical axes have been deduced according to a Gaussian distribution, the lateral origin and the depth of the point from the LB can be calculated. Thus, k1 is a constant that depends on the power of the light emitted, σ determines the attenuation of lateral energy, k2 is the attenuation constant in the depth function, j represents the column, i the row, ϑ(i) the relationship between the row of the detector pixels and the depth, and δ(j − j_max/2) the relationship between the column of the detector pixels and the lateral vision angle.

To obtain the expression for ϑ(i), it must be considered how the attenuation varies with the distance and with the row of the CCD that captures the image. All of the above is tested for different depths. The compensation between the energy distribution and the narrowing of the laser beam (as the camera is focused to the maximum distance) gives the parameter a unity value.
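A possible Python rendering of Equations (20)-(21), using NumPy and SciPy. The neighbourhood window size, the identity choice for δ and the default ϑ(i) ≈ 1 (following the compensation argument above), and the function names are assumptions made for illustration.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def threshold_map(shape, mean_grey, k1, k2, sigma, delta=1.0, theta=None):
    """Position-dependent threshold T(i, j) of Eq. (21)."""
    rows, cols = shape
    i = np.arange(rows, dtype=float)[:, None]
    j = np.arange(cols, dtype=float)[None, :]
    lateral = np.exp(-0.5 * ((delta * (j - cols / 2.0)) / sigma) ** 2)
    theta_i = np.ones_like(i) if theta is None else theta(i)   # theta(i) ~ 1
    depth = 1.0 - k2 * theta_i / rows
    return mean_grey * k1 * lateral * depth

def segment_laser(image, T, difference, win=5):
    """Binary image G(i, j) of Eq. (20): brighter than the threshold AND
    brighter than the local neighbourhood mean by at least `difference`."""
    local_mean = uniform_filter(image.astype(float), size=win)
    cond = (image > T) & (image.astype(float) - local_mean > difference)
    return cond.astype(np.uint8)
```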

Determination of the constants to use. To determine the constants σ, MG·k1 and k2, several tests should be made on multiple images captured in different situations with random contents. Thus, to deduce the value of σ, images of an object situated at the same distance from the LB are selected, noting down the grey level values for the different columns. Since all the LB impact points are in the same conditions, the grey level obtained in the different columns indicates the variation of the LB energy distribution with the lateral deviation. Obviously, the values should not be saturated at any point, to guarantee useful measurements.

To obtain the k2 value, an image sequence is captured with the LB impacting on the same object at multiple distances. Maintaining the scene characteristics constant, the grey levels of the LB captured in the central column of each image are recorded. Thus, the changes in the luminous intensity indicate how the energy of the LB is attenuated with the distance to the detector, since this is the only varying parameter.

Another possibility for finding this optimum value is to analyse the threshold value needed to capture the same length of LB in those images. This last method provides worse results, because the change in the object dimensions modifies the scene conditions.

To get the optimum value of the constant k1, the application of the searching algorithm with dynamical parameters is needed. The value of MG·k1 is varied in steps of 5 units and, for each value, the parameter "difference" is also modified in steps of 5 units. The tests have shown that the constant k1 depends on the emitted power and on the environment of the workspace.

With the results obtained, the optimum values of k1 and of the "difference" are identified as those that yield the largest number of LB pixels while selecting the smallest number of wrong pixels in their contiguous zones. Since the values of k2 and σ are known, upon applying this method to each image the optimum value of k1 is obtained. It was found empirically that the optimum value to use for the global threshold (MG·k1) is a percentage of the mean grey level that depends on the environment and on how clean the image is. For the "difference" value with respect to the neighbourhood, the conclusion reached is that a fixed percentage of the grey level of each pixel should be used as its threshold.

Characterisation of the laser segments. The binary image will contain different blobs, but only some of them are segments belonging to the LB. To remove erroneous segments, a characteristics vector of each one is obtained. This vector is composed of blob features such as area, elongation, width, position or centre of gravity. Figure 3 shows the flow chart with the phases of feature calculation and segment elimination. Knowing the number of segments and their area, the small objects are deleted, to reduce the number of objects to process. The threshold value for the minimum segment area to remove depends on the smallest size of the segmented LB blobs that could be eliminated. The risk assumed if a high value is used is losing parts of the LB at the image sides, where less energy is captured. LB segments must be more or less horizontal with respect to the CCD sensor, so the height (H) and width (W) of each segment may serve as a starting point to calculate its elongation and eliminate blobs that do not fulfil this additional restriction. Thus, the segment elongation ξ is given by ξ = W/H. The segments that do not possess an elongation higher than a minimum (calculated considering the largest slope that the LB could have upon impacting on a surface of the environment) are eliminated, since only segments having an elongated shape are sought. However, there are environments whose image has spots that, because of their similarity with the characteristics of the searched LB, would be impossible to detect correctly. Figure 4 shows an example of this phenomenon.
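A sketch of this blob filtering in Python using OpenCV connected components; the minimum area and minimum elongation values are placeholders, since the paper derives them from the laser-line geometry rather than fixing numbers.

```python
import cv2
import numpy as np

def filter_laser_blobs(binary, min_area=20, min_elongation=3.0):
    """Keep only blobs that look like horizontal laser segments: discard small
    blobs, then discard blobs whose elongation W/H is below the minimum."""
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary.astype(np.uint8))
    keep = np.zeros_like(binary, dtype=np.uint8)
    features = []
    for lbl in range(1, n):                       # label 0 is the background
        x, y, w, h, area = stats[lbl]
        if area < min_area:
            continue                              # too small to be part of the LB
        elongation = w / max(h, 1)
        if elongation < min_elongation:
            continue                              # not an elongated, near-horizontal blob
        keep[labels == lbl] = 1
        features.append({"label": lbl, "area": int(area),
                         "elongation": float(elongation),
                         "centroid": tuple(centroids[lbl])})
    return keep, features
```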

Differential method. To eliminate the noisy blobs similar in form to the LB, two images are captured consecutively, the first one without the LB emission and the second one with it. This part of the process will be referred to as the differential method.

Once the two images needed for the differential method are captured, all segment characteristics are computed (area, centre of gravity and elongation) and a correlation coefficient between zones is calculated. Thus, those segments presenting a high degree of correlation are eliminated, maintaining the rest. The flow chart of all the previously described operations can be seen in Figure 5.
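A simplified Python sketch of the differential test: instead of correlating segment feature vectors between zones as the paper describes, it correlates the raw pixel values of each blob footprint across the two captures, which conveys the same idea; the correlation limit is an assumed placeholder.

```python
import numpy as np

def differential_filter(blob_mask, labels, img_with, img_without, corr_max=0.9):
    """Drop blobs whose pixel pattern is nearly identical in both captures:
    a high normalised correlation indicates ambient light, not the laser."""
    keep = np.zeros_like(blob_mask)
    for lbl in np.unique(labels):
        if lbl == 0:
            continue                              # background label
        region = labels == lbl
        a = img_with[region].astype(float)
        b = img_without[region].astype(float)
        a -= a.mean()
        b -= b.mean()
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        corr = (a @ b) / denom if denom > 0 else 0.0
        if corr < corr_max:                       # laser-only structure survives
            keep[region] = 1
    return keep
```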


Figure 3. Flow chart of characteristics calculation and segment elimination.

Figure 4. Environments whose image has spots with characteristics similar to those of the searched LB.


Figure 5. Flow chart of the differential method.

2.5. ENVIRONMENT DEFINITION

Processing the information about the environment provided by the perception system (a coordinate matrix), the environment is analysed to a sufficient depth (safety distance) to be able to react to any obstacle. Due to the large number of points analysed, they are first filtered to unify the values of adjacent points, thereby deducing vacant and obstructed zones in a first sectorisation.

Straight line segments are used to approximately reproduce the perimeter of the obstacles, so that later processing is easier. The number of straight line segments to be used is a compromise between the model precision and the processing time.


Figure 6. Environment analysis sequence.

In order to prepare the quantity of values obtained for later information processing, a "median filter" is applied. This type of filtering consists of grouping together consecutive elements from the aforementioned matrix and assigning to each element the median value of the group. From a practical point of view, a check has to be made of whether there is sufficient vacant space for the robot to pass between the obstacles detected. If not, they are unified as a single obstacle. The problem of obstruction between obstacles is also studied, analysing the distance matrix.
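A minimal Python sketch of this grouping median filter; the group size is an assumed parameter and trailing samples that do not fill a complete group are dropped.

```python
import numpy as np

def median_sectorise(depths, group=5):
    """Group consecutive depth samples and replace each group by its median,
    unifying the values of adjacent points before the sectorisation."""
    n = len(depths) - len(depths) % group          # drop the incomplete tail
    blocks = np.asarray(depths[:n], dtype=float).reshape(-1, group)
    return np.median(blocks, axis=1)
```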

Overlapping between obstacles should also be detected and, in this case, they should be separated. This separation is based on the search for infinite (dZ/dX) or very abrupt slopes, using the definition of a "slope sensitivity vector". The above-mentioned analysis is carried out on each of the obstacle zones, bearing in mind the different cases which may present themselves and which are susceptible to producing errors (Figure 6).
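The slope-based separation could be sketched as follows in Python; the scalar slope limit standing in for the "slope sensitivity vector" is an assumption.

```python
import numpy as np

def split_obstacles(xs, zs, slope_limit=8.0):
    """Cut the (X, Z) point sequence wherever |dZ/dX| is infinite or exceeds
    the slope limit, separating overlapping obstacles seen at different depths."""
    xs = np.asarray(xs, dtype=float)
    zs = np.asarray(zs, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        slopes = np.abs(np.diff(zs) / np.diff(xs))
    cuts = np.where((slopes > slope_limit) | ~np.isfinite(slopes))[0] + 1
    return np.split(np.column_stack([xs, zs]), cuts)
```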

After these steps, the detected outline of the obstacles is generated, modifying it if it presents concavities, since these would guide the robot into a "cul de sac" or cause it to make unnecessary movements. Now the problem is tackled of determining where the robot may be set up in terms of its orientation and dimensions, and how to move it from one position to another without producing collisions. The adopted solution is to widen the intermediate objects in each iteration, in accordance with the size of the robot and the direction it is moving in, so that the robot can be considered as a point.

Figure 7. (a) Obstacle segmented into groups of pairs (X, Z); (b) regression line of each group of points; (c) linearisation and concavity elimination; (d) obstacle reconstruction.

Generation of the visible obstacle outline. Using straight lines, a 2D polygon is generated which represents the visible outline of each of the obstacles, to which the expanding algorithm is applied. Figure 7 illustrates the process followed. Each of the obstacles is segmented into groups of pairs (X, Z), of a size smaller than the sequence which constitutes the obstacle. For each of the groups of points, the regression line is calculated. Next, the "linearisation" and "concavity elimination" processes take place (Figure 7). This process is repeated in a loop until the necessary linearisation is achieved.
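A Python sketch of the per-group regression step of Figure 7(a)-(b); the group size is an assumed parameter, and concavity elimination (for example, by keeping only the convex part of the resulting polyline) is left out for brevity.

```python
import numpy as np

def outline_segments(points, group_size=8):
    """Approximate an obstacle contour by per-group regression lines: split
    the (X, Z) sequence into groups smaller than the obstacle, fit a line to
    each group, and return the segment endpoints (the linearisation step).
    Assumes each group spans a range of X values."""
    pts = np.asarray(points, dtype=float)
    segments = []
    for start in range(0, len(pts), group_size):
        grp = pts[start:start + group_size]
        if len(grp) < 2:
            continue
        # least-squares line Z = m*X + c over the group
        m, c = np.polyfit(grp[:, 0], grp[:, 1], 1)
        x0, x1 = grp[0, 0], grp[-1, 0]
        segments.append(((x0, m * x0 + c), (x1, m * x1 + c)))
    return segments
```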

Intermediate objectives selection algorithm. Once the object outline has been modelled and the expanding algorithm executed, a list of virtual vertices associated with each of the obstacles detected is made available (Figure 8).


Figure 8. Flow diagram of the intermediate goals selection algorithm.

One of the vertices of the virtual obstacles will be selected to determine the intermediate point or objective. In each iteration of the algorithm, a single intermediate objective is generated. If an obstacle is located between the robot and the goal, an intermediate objective is generated and the path is deviated. A vertex of the virtual objects is chosen as the objective, laying down a path that passes through it, so that the objects are skirted according to the criteria of safe movement and the shortest path.


Figure 9. (a) Segment vertices; (b) intersection points between the trajectory and the obstacle.

In order to achieve this, a list of virtual s_i (segments) and r_i (the straight lines to which these belong and which constitute the virtual obstacle) is defined, as shown in Figure 9. Therefore, a check should be carried out to determine whether or not an intersection exists between the trajectory curve and any of the r_i.

Expression (22) gives the equation for the calculation of the points at which the trajectory is cut:

((Z_i − Z_{i+1})/(X_i − X_{i+1})) · (Z − Z_i) + X_i = a_i + b_i·Z + c_i·Z² + d_i·Z³ + · · · .    (22)
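A Python sketch of solving Equation (22), written here with the obstacle edge parameterised as X as a function of Z (the geometric reading of the equation) and the trajectory given by its spline coefficients in increasing powers of Z; the names are illustrative.

```python
import numpy as np

def trajectory_edge_cuts(traj_coeffs, vertex_a, vertex_b):
    """Find where the trajectory X(Z) = a + b*Z + c*Z^2 + ... crosses the
    straight line r_i through two virtual-obstacle vertices, and keep only
    the cuts lying on the segment s_i between them."""
    xa, za = vertex_a
    xb, zb = vertex_b
    m = (xb - xa) / (zb - za)                 # dX/dZ along the edge
    # difference polynomial: trajectory(Z) - edge(Z) = 0 (coefficients low->high)
    diff = np.array(traj_coeffs, dtype=float)
    diff[0] -= xa - m * za
    diff[1] -= m
    roots = np.roots(diff[::-1])              # np.roots expects high->low order
    zcuts = roots[np.isclose(roots.imag, 0.0)].real
    zlo, zhi = sorted((za, zb))
    return [float(z) for z in zcuts if zlo <= z <= zhi]
```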

3. Experimental Results

The equipment outlined below was used in the tests. It is composed of a CCD camera, previously corrected, coupled to a laser plane emitter with an aperture angle of 80°, a Matrox Comet image digitiser board, a PCLTA card, a Neuron Chip motor controller, boards controlling the robot's position, and a platform with two drive wheels and two idle wheels. Once the emitter and the CCD have been fixed and calibrated, the maximum error due to the pixel quantification of the sensor is calculated. A measurement test with a static scene is then carried out to evaluate the errors produced. Figure 10 shows an image used in the tests.

Figure 10. Test platforms.

A host processor performs all the operations and communicates with the motor and position control modules via a PCLTA card over a LonWorks network. The laser diode used has a low power of only 15 mW. The CCD detector (equipped with a Cosmicar lens with a focal length of 4.8 mm) gives a resolution of 604 × 588 pixels. Using an odometric system, position and orientation errors are calculated at each step in order to accomplish the desired path. These errors are used to generate the values of linear and angular speed that are communicated via LonWorks to the low-level control in order to manage the driver of the DC motors.
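As an illustration only, a hypothetical error-to-speed mapping of the kind described (a simple proportional law in Python; the gains and the control structure are assumptions, not taken from the paper):

```python
import math

def velocity_command(x_err, y_err, heading_err, k_rho=0.5, k_alpha=1.5):
    """Turn odometric position/orientation errors into linear and angular
    speed set-points for the differential-drive platform. Gains are
    illustrative placeholders."""
    rho = math.hypot(x_err, y_err)            # distance left to the next point
    v = k_rho * rho                           # linear speed proportional to it
    w = k_alpha * heading_err                 # angular speed corrects heading
    return v, w
```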

Results of the optical system calibration are shown in Table I. These results have been obtained by using the two-dimensional correction method previously described in this paper. The method consists of a few steps: selection of points; obtaining, from the derivative at the origin, the ideal linear response of the lens system used; building error tables from the real and the ideal linear behaviour obtained in the previous step; and, finally, obtaining the correction regression polynomials.

Choosing random points, their real 3D positions have been computed through inverse perspective, and the relative error in the distance measurement of the (X, Y, Z) point coordinates has been obtained. In order to obtain this error, we start from the knowledge of the depth of the points captured in the image and the coordinates (X, Y, Z) given by the designed system. With these data, the difference with respect to the known real coordinates of the points used is computed. The relative mean error in the calculation of the distance from the evaluated points to the origin is reduced to 0.037%.

Table I. Results of the optical system calibration

Real coordinate (X, Y, Z) (mm)    Obtained coordinate (X, Y) (mm)    Relative error (%)
−400, 300, 993       −397.615, 299.301     0.0939
−300, −100, 993      −298.182, −98.204     0.0665
500, 200, 993        497.324, 197.480      0.1439
−500, 400, 1490      −498.213, 401.029     0.0182
700, 600, 1490       702.702, 597.334      0.0097
800, −400, 1490      799.009, −404.494     0.0336
−600, −500, 1994     −600.615, −501.243    0.0216
−800, 500, 1994      −801.435, 499.128     0.0147
700, 300, 1994       699.278, 303.114      0.0095
−800, −300, 2490     −800.604, −302.138    0.0163
0, −900, 2490        0.956, −908.787       0.1133
600, −400, 2490      599.386, −403.782     0.0171
−800, 300, 3002      −803.619, 303.949     0.0420
−700, −500, 3002     −699.316, −503.721    0.0142
500, 500, 3002       503.229, 498.953      0.0115
400, −200, 993       397.990, −200.959     0.0514
100, 300, 993        98.960, 296.167       0.1148
−600, −300, 1490     −599.919, −297.719    0.0274
0, 500, 1490         2.200, 500.387        0.0079
700, 100, 1490       702.885, 98.999       0.0707
−700, −100, 1994     −702.437, −99.547     0.0372
800, −400, 1994      800.257, −395.623     0.0322
−800, 500, 2490      −802.645, 495.406     0.0024
0, 200, 2490         −1.036, 198.096       0.0061
900, 500, 2490       898.096, 500.003      0.0236
−200, −100, 3002     −201.916, −97.890     0.0019
600, −600, 3002      601.919, −602.671     0.0283
800, 200, 3002       803.417, 201.384      0.0311


To evaluate the linearity obtained, points belonging to a horizontal line in the scene are corrected and then tested (Table II). Data for a horizontal line located 70 cm above the origin have been captured. Its points are corrected, and the error has been calculated, giving a maximum of 0.5%.

Table II. Linearity analysis: data for a horizontal line located 70 cm above the origin

Real point (X, Y) (cm)    Detected pixel    Corrected pixel (5th degree)
−100, 70     −263, 236     −291, 261
−90, 70      −240, 238     −262, 260
−80, 70      −215, 241     −232, 260
−70, 70      −190, 243     −203, 260
−60, 70      −164, 244     −174, 259
−50, 70      −138, 246     −145, 259
−40, 70      −111, 247     −116, 259
−30, 70      −84, 249      −88, 260
−20, 70      −56, 249      −58, 259
−10, 70      −28, 250      −29, 259
0, 70        0, 250        0, 259
10, 70       28, 250       29, 259
20, 70       56, 250       58, 260
30, 70       85, 249       89, 260
40, 70       112, 248      117, 260
50, 70       139, 246      147, 261
60, 70       165, 245      175, 260
70, 70       191, 243      205, 260
80, 70       216, 241      234, 261
90, 70       240, 239      262, 261
100, 70      264, 236      292, 261
110, 70      287, 234      323, 263

Parameter calculation to segment the laser beam. In order to use the proposed expression for the segmentation of the laser beam, it is necessary first to calculate the values of the parameters used in it. Therefore, the process described in the paper has been followed, using multiple points, always without saturating the images, and at different distances. Figure 11 shows the representation of the empirically obtained values of the σ parameter, the curves enclosing up to 90% of the points and the curve with the best approximation. As can be observed, the result describes a Gaussian function. The values of σ are in the range between 82 and 92, 87.6 being the optimum value.

Besides, several images with different global threshold values MG·k1 have been evaluated and, for each one, multiple values of the parameter "difference" have been tried.

Figure 11. Empirically obtained values of the σ parameter.

Laser segmentation results and environment space definition. Tests in scenes with polyhedral objects have been carried out, reaching a detection rate of 100% of such objects. In all cases, the most significant vertex representing each object has been identified, through which passes the trajectory that the robot must describe to avoid collisions with obstacles along an optimal path. In the case of curved objects, their model has been obtained by means of rectilinear sections and, since the number of sections is limited (by the programmer), the contour has not been reconstructed as accurately as in the case of the polyhedral objects.

The images of Figure 12 have been captured with the system describing a sudden turning movement. The phases represented are, respectively, the detection process of possible LB points, elimination, and finally the result accomplished with the differential process. With an image size of 280 × 240, seven frames per second (fps) have been processed on a system equipped with a PII 300 MHz processor, while on another system based on a PowerPC the performance has been increased to 18 fps by optimising the computation of the measurement in the algorithm. The first two images of Figure 12 correspond to sequential captures of a concrete scene while in movement, one with the reflection of the LB and another without it. The algorithm described is applied to both, reaching the final solution of the laser beam segmentation.

Figure 12. Differential process result: (a), (b) sequential captures with and without the laser; (c), (d) dynamic thresholding and laser segmentation; (e) final result.

4. Conclusions

We have studied aspects related to the modelling and calibration of detectors with distortion, the use of structured light as a method of modelling the surroundings, the segmentation of the laser beam reflection, the determination of the surroundings while in movement, and path planning. All of these were oriented towards the implementation of an integral system that allows us to obtain a mobile robot which is able to capture its surroundings and to move without colliding.

A method has been presented for the correction of optical aberrations and for the modelling and calibration of the sensor used as a feedback element. The system gives sufficiently high-precision results for later application to the safe movement of the robot, diminishing the errors and reaching great linearity, not obtained in other previously referenced works.


In this work, an algorithm is proposed for the segmentation of the points in the image that correspond to a low-power laser beam, in different types of environment, fundamentally indoors. Thus, searching zones with the form of the laser beam and with a luminous intensity dependent on the environment, together with the utilisation of a cold mirror, is a method that supplies good results in modelling different environments. It provides a solution for the detection of light points emitted in a structured way that permits the laser to be used in crowded environments.

In conclusion, note that good results are obtained even in movement, discriminating the laser beam from the noise with high quality, which gives the robustness necessary to use the system in industrial applications.

An algorithm has been developed for modelling environments, starting from the information supplied by the infrared sensor, for path planning with the robot in movement while avoiding any collision. The environment modelling approximates the outline of the scene objects by straight segments. The different objects in the scene are detected in order to analyse their influence on the robot movement, obtaining thereafter a virtual outline dependent on the geometry and current direction of the robot. This provides a quick and simple solution that determines precisely the location and form of the objects. The planner considers the information of the environment model to decide whether some of the obstacles are interposed in its path. The environment, calculated recursively, constitutes a good method to obtain the space for paths with smooth slope changes and with the possibility of fixing the directions of the origin and destination. This makes the algorithm independent of the time employed in the segmentation of the laser beam points; unreachable cases are avoided and a smooth path is easier to obtain.

Acknowledgements

Previous work in this field has been sponsored by the CICYT (Interministerial Science and Technology Commission). At the moment this work is being financed by the CICYT through the project TER96-1957-CO3-01.

References

Blais, F., Rioux, M., and Beraldin, J. A.: 1988, Practical considerations for a design of a high precision 3-D laser scanner system. Opto-mechanical and electro-optical design of industrial systems, SPIE 958, 225–246.

Brooks, R. A.: 1986, A robust layered control system for a mobile robot, IEEE J. Robotics Automat. 2, 14–23.

Evans, J. M., King, S. J., and Weiman, C. F. R.: 1990, Visual navigation and obstacle avoidance structured light systems, U.S. Patent No. 4-954-962-4.

Fabrizi, F. and Ulivi, G.: 1998, Sensor fusion for a mobile robot with ultrasonic and laser rangefinder, in: 3rd IFAC Symposium on Intell. Comp. and Instrum. for Control Applications SICICA'97, Annecy, France, June 1998, pp. 387–392.

Gustavson, R. L. and Davis, T. E.: 1992, Diode-laser radar for low-cost weapon guidance, SPIE 1633, Laser Radar VII, Los Angeles, CA, 21–32.

Jarvis, R. A.: 1983, A perspective on range-finder techniques for computer vision, IEEE Trans. Pattern Analysis Mach. Intelligence 5(2), 122–139.

Kemmotsu, K. and Kanade, T.: 1995, Uncertainty in object pose determination with three light-stripe range measurements, IEEE Trans. Robotics Automat. 11(5), 741–747.

Khadraoui, D., Motyl, G., Martinet, P., Gallice, J., and Chaumette, F.: 1996, Visual servoing in robotics scheme using a camera/laser-stripe sensor, IEEE Trans. Robotics Automat. 12(5), 743–750.

King, S. J. and Weiman, C. F. R.: 1990, HelpMate autonomous mobile robot navigation system, SPIE 1388, Mobile Robots V, Boston, MA, 90–98.

Koch, E., Teh, C., Hillel, G., Meystel, A., and Isik, C.: 1985, Simulation of path planning for a system with vision and map updating, in: Proc. of IEEE Internat. Conf. on Robotics and Automation, St. Louis, MO.

Krogh, B. H. and Feng, D.: 1989, Dynamic generation of subgoals for autonomous mobile robots using local feedback information, IEEE Trans. Automat. Control 34(5), 483–493.

Latombe, J. C.: 1993, Robot Motion Planning, Kluwer Academic, Dordrecht.

Lázaro, J. L.: 1998, Modelado de entornos mediante infrarrojos. Aplicación al guiado de robots móviles, Tesis doctoral, Escuela Politécnica, Universidad de Alcalá.

Lázaro, J. L., Mazo, M., and Mataix, C.: 1998, Guidance of autonomous vehicles by means of structured light, in: IFAC Seminar on Intelligent Components for Vehicles, Sevilla, Spain.

Loney, G. C. and Beach, M. E.: 1995, Imaging studies of a real-time galvanometer-based point scanning system, Technical Report, General Scanning Inc.

Lozano-Pérez, T.: 1983, Spatial planning: A configuration space approach, IEEE Trans. Computers 32(2), 395–407.

Motyl, G., Martinet, P., and Gallice, J.: 1993, Visual servoing with respect to a target sphere using a camera-laser stripe sensor, in: Internat. Conf. on Advanced Robotics, ICAR'93, Tokyo, November 1993, pp. 591–596.

Nitao, J. J. and Parodi, A. M.: 1986, A real-time reflexive pilot for an autonomous land vehicle, IEEE Control Syst. Mag. 6, 10–14.

Oommen, B. J.: 1987, Robot navigation in unknown terrains using learned visibility graphs, Part 1: The disjoint convex obstacle case, IEEE J. Robotics Automat. 3, 672–681.

Payton, D. W.: 1986, An architecture for reflexive autonomous vehicle control, in: Proc. of IEEE Internat. Conf. on Robotics and Automation, San Francisco, CA, April 1986, pp. 1838–1845.

RIEGL: 1994, Laser distance, level and speed sensor LD90-3, Product Data Sheet 3/94, RIEGL Laser Measurement Systems, Orlando, FL, USA.

Sato, Y. and Otsuki, M.: 1993, Three-dimensional shape reconstruction by active rangefinder, IEEE Trans. Pattern Analysis Mach. Intelligence, 142–147.

Stein, G. P.: 1993, Internal camera calibration using rotation and geometric shapes, Technical Report, Massachusetts Institute of Technology, EECS, Cambridge, MA.

Theodoracatos, V. E. and Calkins, D. E.: 1993, A 3D vision system model for automatic object surface sensing, Internat. J. Computer Vision 11(1), 75–99.

Willson, R. G.: 1994, Modelling and calibration of automated zoom lenses, PhD Thesis, Technical Report CMU-RI-TR-94-03, Robotics Institute, Carnegie Mellon University.

