Mobile Robot with Wide Capture Active Laser Sensor and Environment Definition

Journal of Intelligent and Robotic Systems 30: 227-248, 2001.
© 2001 Kluwer Academic Publishers. Printed in the Netherlands.

J. L. LÁZARO, A. GARDEL, M. MAZO, C. MATAIX, J. C. GARCÍA and R. MATEOS
Electronics Department, Alcalá University, Campus Universitario, s/n. 28805 Alcalá de Henares, Madrid, Spain; e-mail: lazaro@depeca.alcala.es

(Received: 18 May 1999; in final form: 22 May 2000)

Abstract. The sensorial system developed is based on the emission of an infrared beam, recovering the reflected beam and measuring distances to significant points in the environment. The system is able to detect and model obstacles in unknown environments. The features of the capture system allow large fields of view to be captured at short distances, aiding the robot's mobility. Several algorithms are used to find the formation centre of the image and so model the distortion introduced by the wide-angle lens. The parameters of the optical model are inserted into the calibration matrix to obtain the camera model. We also present an algorithm that extracts the points on the image that belong to the laser beam. All of the above works in unknown environments with variable conditions of illumination. The robot's trajectory is obtained and modified in real time, with a spline function, using four different reference systems. Finally, empirical tests have been carried out on a mobile platform, using a CCD camera with wide-angle lenses of 65° and 110°, a 15 mW laser emitter and a frame grabber for image processing.

Key words: robots, laser beam, wide-angle optics, environment modelling, obstacle, telemetry, trajectory planning.

1. Introduction

One of the models for an unknown environment in which a mobile robot is to be guided obtains 3-dimensional coordinates by both emitting and capturing structured light (Jarvis, 1983). However, before this can be achieved, the entire system of detector and emitter must be jointly calibrated and referenced to the same original coordinates.

If the structured light emitted is a light plane, the third coordinate can be deduced for each of the 2-dimensional CCD pixels resulting from the impact of that plane on the environment.
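A minimal sketch of this idea, assuming an ideal pinhole camera with focal length f and a known light plane Ax + By + Cz + 1 = 0 (the function name, focal length, plane coefficients and pixel values below are hypothetical, not taken from the paper), is:

```python
import numpy as np

def pixel_to_3d(u, v, f, plane):
    """Intersect the viewing ray of pixel (u, v) with a light plane.

    Assumes an ideal pinhole camera at the origin with focal length f
    (same units as u and v) and a plane A*x + B*y + C*z + 1 = 0.
    """
    A, B, C = plane
    # Viewing ray: (x, y, z) = t * (u, v, f), with t > 0
    denom = A * u + B * v + C * f
    if abs(denom) < 1e-12:
        raise ValueError("Ray is parallel to the light plane")
    t = -1.0 / denom
    return t * np.array([u, v, f])

# Hypothetical pixel and plane, for illustration only.
point = pixel_to_3d(u=0.002, v=-0.001, f=0.006, plane=(0.0, 0.5, -0.25))
print(point)  # 3-D coordinates of the laser impact seen at that pixel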
Obtaining as many coordinates as possible allows one to calculate the physical limits of the environment. It also makes possible the recognition of the precise location of objects that may hinder the robot's movements, and allows the robot to circumnavigate them without colliding. Related works, making use of similar techniques to determine the position of objects and their orientation with respect to a robot, can be found in (Blais et al., 1988; Sato and Otsuki, 1993; Kemmotsu and Kanade, 1995; Lázaro et al., 1988). All these works were developed using small field depths and homogeneous background images. Motyl et al. (1993) and Khadroui et al. (1995) made use of these techniques to position a robot arm with prior knowledge of the position of polygonal and spherical objects in the environment. Evans et al. (1990) and King (1990) used a structured light plane to detect obstacles encountered by a mobile robot (MOB). Fabrici and Ulivi (1998) used it to create a plan of the robot's workspace by plotting its movements, but only in environments that allow the reflection of the laser beam to be segmented by simple matching.

Other projects use the deflection of a laser beam to generate different light patterns. Gustavson and Davis (1995) and RIEGL (1994) obtained good resolutions within a range of several metres. However, the measurement time becomes too long when the number of points to be converted to 3D coordinates increases. Moreover, in percentage terms the number of mistakes made over short distances is large, and the black zones between samples at long distances are extensive. If the generation pattern is combined with geometric detection techniques (Loney and Beach, 1995), the measurements obtained are very accurate, but this technique can only be applied in small environments with a homogeneous background and well defined objects.

The map of distances obtained may be used to plot a route through the obstacles, updating the information as different obstacles come within the robot's field of vision. Among the many path options available, the one that gives the greatest guarantee of safety should be chosen.

This paper builds paths based on pieces of cubic-spline curves. Latombe (1993) developed a navigation algorithm with which the space configuration of the surrounding environment can be obtained. Koch (1985), Oommen (1987) and Krogh and Feng (1989) developed path planning algorithms to extend the objective point. Payton (1986), Nitao and Parodi (1986), and Brooks (1986) work with dynamic trajectories, which are only recalculated when the surrounding environment changes. Whilst the precise position of the robot must be known at all times, it is also necessary to update the relative positions of the objects that crop up in its path in order to steer it towards its final goal. Finally, control of the robot's steering and movement will enable it to follow a particular path inside the global control loop.
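A minimal sketch of how a path can be pieced together from cubic-spline segments (the waypoints, the chord-length parameterisation and the use of SciPy's CubicSpline are assumptions for illustration, not the formulation used in the paper) is:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical waypoints (x, y) in metres, e.g. produced by a planner;
# these values are illustrative only.
waypoints = np.array([[0.0, 0.0],
                      [1.0, 0.4],
                      [2.0, 0.3],
                      [3.0, 1.2]])

# Parameterise the path by cumulative chord length between waypoints.
deltas = np.diff(waypoints, axis=0)
s = np.concatenate(([0.0], np.cumsum(np.hypot(deltas[:, 0], deltas[:, 1]))))

# One cubic spline per coordinate, joined with continuous first and
# second derivatives at the waypoints (piecewise cubic segments).
path = CubicSpline(s, waypoints, axis=0)

# Sample the smooth trajectory; the derivative gives the heading direction.
s_fine = np.linspace(0.0, s[-1], 50)
positions = path(s_fine)      # (50, 2) array of path points
headings = path(s_fine, 1)    # first derivative along the path
```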
This application involves the need to recognise environments with a large field of vision at short distances, thereby justifying the use of a wide-angle lens. As this type of optic has no linear response, traditional camera calibration methods cannot be used. To correct this, the lens response has been modelled.

In (Theodoracatos and Calkins, 1993; Willson, 1994) the problem of determining the parameters of the system, such as the centre of image formation, has been solved by setting the lens distortion at a long focal length in order to take measurements. Stein (1993) used geometrical spheres to find the aspect relation, and parallel lines to solve for lens distortion, principal point and focal length. This process limits the position of the image centre to a square of 20 × 20 pixels (Lázaro, 1998). Here, the distortion aberration of the wide-angle optics is modelled, taking into consideration its axially symmetric revolution system. The centre of the image can be found within a margin of error of just 8 pixels and a linear error of 1%.

At the present time, there are several research projects attempting to determine the physical boundaries of the robot's workspace by calculating the amount of free space available. Fabrizi and Ulivi (1998) use a CCD sensor and a laser plane. They segment the reflected laser beam according to its brightness, but they work in structured environments. In (Lozano-Perez, 1983; Latombe, 1994; Krogh and Feng, 1989) some projects concerning the generation of virtual objects and their configuration (with prior knowledge of their location), and the reduction of the robot's geometric navigation to the movement of a single point, are demonstrated. They also put forward algorithms for generating an optimum path using criteria such as minimum distance, potential fields and so on.

Tests carried out prove that the system developed is able to follow walls, cross doors and skirt obstacles, redefining its course online and easily and accurately arriving at a given goal. The aim of this work is to set up this system aboard an autonomous mobile robot that will be capable of avoiding head-on crashes and reaching its final objective without colliding.

2. Robot Guidance

2.1. MODELLING WIDE-ANGLE OPTICS

As an alternative, we propose to calculate the parameters of the optical system model based on knowledge of the image formation centre. This approach considers all the errors (e.g., distortion without revolution symmetry), and therefore the lens must be modelled in all directions.

Searching for the sensor centre. The algorithm consists of capturing the image of a scene with parallel lines drawn on it. The Hough transform is applied to each line, analysing its curvature by means of the accumulation of points and the adjacency of large accumulations within the transformed space. The above procedure is repeated, capturing straight lines in different directions (Lázaro, 1998). The image formation centre is a pixel located between the two straightest lines of each capture. The Hough transform is also applied to the lines crossing the intersection zone in different directions.

Bidimensional correction. A relation is made between the real 3-dimensional coordinates and the 2-dimensional CCD camera coordinates (Figure 1, dashed lines). We assume that the lens produces two different errors at each point of the captured image: one corresponds to a phase error and the other to a modulus error. The correction of these errors depends not only on the distance from the image formation centre, but also on the position of the points to be analysed. The angle at which the projection of a real point is received corresponds to the angle of the real coordinates. Data obtained from multiple points chosen at random are used to calculate the correction polynomials in Equations (1) and (2), which are obtained by least-squares regression.

Figure 1. Ideal curve and error extraction.

The following procedure is employed. The derivative of the polynomial at the point (0, 0) is calculated, thus giving the hypothetical linear response (Figure 1, continuous line). Then, tables can be made with the error produced in pixels. By extrapolating the distance in the scene to the difference between the pixels captured on the camera, it is possible to locate the ideal pixel, thereby achieving independence from the field depth. In this way, errors at different points can be obtained. Finally, working with the table of errors obtained, the correction coefficients can be calculated:

\Delta_{\mathrm{phase}} = au + bv + cuv + du^2 + ev^2 + fuv^2 + gu^2v + hu^3 + jv^3 + \cdots,    (1)

\Delta_{\mathrm{modulus}} = au + bv + cuv + du^2 + ev^2 + fuv^2 + gu^2v + hu^3 + jv^3 + \cdots.    (2)
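To make the fitting step concrete, the sketch below builds the monomial terms of Equations (1) and (2) and estimates their coefficients by least squares, as described above; the sample data, variable names and the third-order truncation are assumptions for illustration, not the authors' calibration data.

```python
import numpy as np

def design_matrix(u, v):
    """Monomial terms of the correction polynomials (1)-(2),
    truncated at third order for this sketch."""
    return np.column_stack([u, v, u * v, u**2, v**2,
                            u * v**2, u**2 * v, u**3, v**3])

# Hypothetical calibration data: pixel coordinates (u, v) relative to the
# image formation centre, and the measured phase/modulus errors at them.
u = np.random.uniform(-200, 200, size=500)
v = np.random.uniform(-150, 150, size=500)
err_phase = 1e-4 * u - 2e-4 * v + 1e-7 * u * v      # stand-in measurements
err_modulus = 5e-4 * u + 1e-6 * u**2 - 2e-6 * v**2  # stand-in measurements

M = design_matrix(u, v)
coef_phase, *_ = np.linalg.lstsq(M, err_phase, rcond=None)
coef_modulus, *_ = np.linalg.lstsq(M, err_modulus, rcond=None)

def correction(u, v):
    """Evaluate the fitted phase and modulus corrections at a pixel."""
    m = design_matrix(np.atleast_1d(u), np.atleast_1d(v))
    return m @ coef_phase, m @ coef_modulus
```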
The errors of a sensor-plane pixel are calculated as follows:

1. The phase of the real point is calculated using the centre of the grid (3), and the phase of its corresponding point on the CCD image is referenced to the image formation centre (4). Here p_i (i = u, v) is the coordinate of the pixel inside the image, pc_i is the coordinate of the image-centre pixel, ts_i is the size of the sensor, and pt_i the total number of pixels:

\varphi = \arctan\left(\frac{y}{x}\right),    (3)

\varphi' = \arctan\left(\frac{u}{v}\right) = \arctan\left(\frac{(p_v - pc_v)(ts_v/pt_v)}{(p_u - pc_u)(ts_u/pt_u)}\right).    (4)

2. The modulus error can be calculated in the following way. Bearing in mind that very close to the optical centre of the lens there are almost no errors, the image of a point captured by the sensor at a short distance from the centre can be taken as error-free. The ideal modulus is then deduced proportionately, using the image of any point in the scene. Subtracting this from the real modulus, the errors are

\Delta_{\mathrm{phase}} = \varphi - \varphi',    (5)

\Delta_{\mathrm{modulus}} = \frac{k_1}{k_2}\sqrt{x^2 + y^2} - \sqrt{u^2 + v^2}.    (6)

The phase and modulus of the corrected pixel can be expressed as

\varphi_c = \varphi' + \Delta_{\mathrm{phase}},    (7)

\mathrm{mod}_c = \mathrm{mod} + \Delta_{\mathrm{modulus}}.    (8)

The value of the corrected coordinates can be expressed as

u_c = \mathrm{mod}_c \sin\varphi_c,    (9)

v_c = \mathrm{mod}_c \cos\varphi_c.    (10)

2.2. MATHEMATICAL MODEL OF THE CAMERA LENS WITH 2-DIMENSIONAL CORRECTION

The lens correction can be included in a traditional camera model so that a matrix formulation is also obtained for nonlinear optical systems. A traditional camera model can be expressed in homogeneous coordinates,

u = \frac{U}{t}; \qquad v = \frac{V}{t},    (11)

in the following form:

\begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} \\ C_{21} & C_{22} & C_{23} & C_{24} \\ C_{31} & C_{32} & C_{33} & C_{34} \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix} = \begin{pmatrix} u \\ v \\ t \end{pmatrix}.    (12)

Developing the expressions of U and V (for systems with distortion) in order to model the parameters in accordance with (9) and (10), we have

U' = \left[(U^2 + V^2)^{1/2} + aU + bV + cUV + \cdots\right] \sin\left[\arctan\left(\frac{V}{U}\right) + aU + bV + cUV + \cdots\right],    (13)

V' = \left[(U^2 + V^2)^{1/2} + aU + bV + cUV + \cdots\right] \cos\left[\arctan\left(\frac{V}{U}\right) + aU + bV + cUV + \cdots\right].    (14)

If the coordinates of the points, represented as

U' = \Phi_1(U, V); \qquad V' = \Phi_2(U, V),    (15)

are inserted into the camera calibration matrix, the following system of equations is obtained:

C_{11}X + C_{12}Y + C_{13}Z + C_{14} - \Phi_1(U, V)\,(C_{31}X + C_{32}Y + C_{33}Z + C_{34}) = 0,    (16)

C_{21}X + C_{22}Y + C_{23}Z + C_{24} - \Phi_2(U, V)\,(C_{31}X + C_{32}Y + C_{33}Z + C_{34}) = 0.    (17)

This system of equations, together with the equation of the laser light stripe, gives us the real coordinates (x, y, z) with the optical aberrations corrected.

2.3. CCD-LASER COUPLING. OBTAINING 3D COORDINATES: POSITION AND ORIENTATION

The usual way to obtain 3D coordinates is to solve, for each laser point captured, the equations given by the camera model (18) together with Equation (19) of the laser plane. This gives three equations which, in turn, give the coordinates (x, y, z):

a_1 x + b_1 y + c_1 z + d_1 = 0, \qquad a_2 x + b_2 y + c_2 z + d_2 = 0,    (18)

A x + B y + C z + 1 = 0.    (19)

The aim of the transmitter-camera coupling is to calibrate both systems to the same reference whilst keeping the error to a minimum. Once the optimal position has been analysed, the transmitter is coupled to the global system and calibrated to a known reference (Figure 2).
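A minimal numeric sketch of this triangulation step follows; in the paper the two rows of (18) come from the calibration matrix and the corrected pixel coordinates of Equations (16)-(17), whereas the coefficient values used here are hypothetical.

```python
import numpy as np

def laser_point_3d(row1, row2, laser_plane):
    """Solve Equations (18)-(19): two planes from the camera model for a
    pixel plus the laser plane form a 3x3 linear system in (x, y, z).

    row1, row2:  (a, b, c, d) coefficients of the two camera-model planes.
    laser_plane: (A, B, C) with the plane written as A*x + B*y + C*z + 1 = 0.
    """
    a1, b1, c1, d1 = row1
    a2, b2, c2, d2 = row2
    A, B, C = laser_plane
    M = np.array([[a1, b1, c1],
                  [a2, b2, c2],
                  [A,  B,  C]])
    rhs = np.array([-d1, -d2, -1.0])
    return np.linalg.solve(M, rhs)

# Hypothetical coefficients for one laser pixel, for illustration only.
p = laser_point_3d(row1=(0.9, 0.1, -0.4, 0.2),
                   row2=(0.0, 1.1, -0.7, -0.3),
                   laser_plane=(0.0, 0.5, -0.25))
print(p)  # (x, y, z) of the laser impact in the scene
```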
Before the plane equation can be obtained, all the light that is not emitted by the laser must be eliminated from the image. It is also convenient to reduce the thickness of the laser beam to one pixel. To begin with, an optical filter is used to eliminate all light except the wavelength emitted as structured light. An active search is then made to locate the very bright points belonging to the laser, and a spatial filter is passed over the image. Points belonging to horizontal or near-horizontal lines are enhanced, and the threshold is set as a function of the mean grey level. Thus, all those points that do not belong to narrow beams are eliminated. Afterwards, using the plane equation, which is always the same, and the equations given by the camera model for each laser point, the system of equations is solved and the coordinates of the captured laser points are deduced.

Figure 2. System scheme and obtaining plane equation.

The depth of each pixel is the z-coordinate of the corresponding point in the scene. Using this application, any static object in the environment, the recognition of physical boundaries, and the robot's errors of position or orientation can be processed and used in the control loop.

2.4. SEGMENTATION OF THE REFLECTION OF THE LASER BEAM

A large part of the light captured by the detector is external to the laser beam (LB), usually coming from the sun, lighting or other incidental light sources, and it produces intense spots with random shapes on the image. To find the LB, the different absorption indices of the materials present in the scene must be taken into account. Changes in the intensity level of the LB can be used as an indication of the distance at which the objects are found, since the intensity of the emitted light decreases as the distance from the source increases.

The light emitted by the LB becomes more attenuated as...
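A rough illustration of the stripe-extraction steps described in Section 2.3 (enhancing near-horizontal structure, thresholding relative to the mean grey level and thinning the beam to one pixel per column) might look as follows; the smoothing kernel, the threshold factor and the per-column thinning rule are assumptions, not the authors' exact filters.

```python
import numpy as np

def extract_stripe(image, k=3.0):
    """Return one candidate laser-stripe pixel per image column.

    image: 2-D float array (grey levels), already optically filtered so
    that mostly the laser wavelength remains.  k scales the threshold
    above the mean grey level; the 1x3 horizontal smoothing and the
    per-column brightest-pixel rule are illustrative choices.
    """
    img = image.astype(float)
    # Enhance horizontal / near-horizontal structure with a 1x3 average.
    smoothed = (np.roll(img, 1, axis=1) + img + np.roll(img, -1, axis=1)) / 3.0
    # Threshold as a function of the mean grey level of the image.
    mask = smoothed > (img.mean() + k * img.std())
    rows, cols = [], []
    for c in range(img.shape[1]):
        candidates = np.nonzero(mask[:, c])[0]
        if candidates.size:            # keep only the brightest pixel, so the
            r = candidates[np.argmax(smoothed[candidates, c])]
            rows.append(r)             # stripe is one pixel thick per column
            cols.append(c)
    return np.array(rows), np.array(cols)
```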