

Omni-vision based autonomous mobile robotic platform

Zuoliang Cao*a, Jun Hu*a, Jin Cao**b, Ernest L. Hall***c

aTianjin University of Technology, Tianjin, China; bFatwire Corporation, New York, USA; cUniversity of Cincinnati, Cincinnati, USA

ABSTRACT

As a laboratory demonstration platform, the TUT-I mobile robot provides various experimentation modules that demonstrate robotics technologies involved in remote control, computer programming, and teach-and-playback operation. The teach-and-playback operation, in particular, has proved to be an effective solution, especially in structured environments. Path generation in the teach mode and real-time path correction based on path error detection in the playback mode are demonstrated. A vision-based image database is generated as the representation of the given path during the teaching procedure, and an online image-positioning algorithm is performed for path following. Advanced sensory capability is employed to provide environment perception. A unique omnidirectional vision (omni-vision) system is used for localization and navigation. The omnidirectional vision system involves an extremely wide-angle lens; its dynamic omni-vision image is processed in real time to provide the widest possible view during movement. Beacon guidance is realized by observing the locations of points derived from overhead features, such as predefined light arrays in a building. The navigation approach is based upon these omni-vision characteristics. A group of ultrasonic sensors is employed for obstacle avoidance.

Keywords: Mobile robot, omnidirectional vision, navigation, teach-and-playback, image database.

1. INTRODUCTION

The robotics laboratory is a major research division of Tianjin University of Technology (TUT). It is equipped with an

industrial robot work cell and a mobile robotic platform with a series of lab-demo system modules. The TUT mobile

robotic platform, as a wheel-based demo system, was developed to adapt to current education and research needs. The

emphasis of design concepts is placed on the following considerations.

(1) Mobile robotic devices promise a wide range of applications for industrial automation in which stationary robots cannot produce satisfactory results because of their limited working space. Such applications include material transfer and

*[email protected]; phone +86 22 23688585; fax +86 22 23362948; http://www.tjut.edu.cn; Tianjin University of Technology, Tianjin 300191, China. **[email protected]; phone +1 516 328-9473; fax +1 516 739-5069; http://www.Fatwire.com; Fatwire Co., 330 Old Country Road, Suite 207, Mineola, New York 11501, USA. ***[email protected]; phone +1 513 556-2730; fax +1 513 556-3390; http://www.uc.edu; University of Cincinnati, Cincinnati, OH 45242, USA.


tool handling in factories, or automatically piloted carts for delivering loads such as mail around office buildings and providing services in hospitals. Most advanced manufacturing systems use some kind of computer-controlled transportation system.

(2) The facilities of the laboratory support education in Robotics and Mechatronics, which includes a series of primary and typical robotics courses such as Kinematics, Dynamics, Motion Control, Robot Programming, and Machine Intelligence.

(3) Laboratory experiments are a major resource for studying and researching future technologies. Particular application demonstrations are necessary to establish basic concepts and approaches in new development areas and to meet the need for research experience and capability.

Navigation is one of the most common and important factors in robotics research. The use of a fixed guidance method is currently the most common and reliable technique. One such method uses a signal-carrying wire buried in the floor with an on-board inductive coil, or a reflective stripe painted on the floor with an on-board photoelectric device. In our laboratory, a magnetic strip is laid on the floor and the vehicle is equipped with a detecting sensor that directs it to follow the strip. A recently preferred approach employs laser beam reflection or digital imaging units1 to navigate a free-ranging vehicle.

It is feasible and desirable to use sensor-driven, computer-controlled, software-programmed automatic systems for AGV development. A variety of special sensors may be employed on mobile robots. Various methods have been tested for mobile control using ultrasonic2,3, infrared, optical, and laser-emitting units4. Data fusion, fuzzy logic5,6, neural network7, and machine intelligence technologies8-10 have also been developed.

Machine vision capability may have a significant effect on many robotics applications11,12. An intelligent machine, such as an autonomous mobile robot, must be equipped with a vision system to collect visual information that allows it to adapt to its environment. Because of their limited field of view, conventional imaging systems restrict the performance of the robotic system, so the development of an omnidirectional vision system appears to offer clear advantages. The omnidirectional vision navigation program described here is an example.

The mobile robotic platform, as a lab-demo system, features a hybrid (in the sense of a variety of actuators and sensors), reconfigurable, easily accessible, and changeable controller, together with several advanced capabilities. Both laboratory hardware modules and advanced computer software packages provide an appropriate environment, supplemented by a graphics simulation system. The two laboratory phases form an experimentation base ranging from the fundamentals of robot operation through integration with new control technologies. The advanced experiments are divided into two phases as follows:

Phase one: robotic control, navigation, obstacle avoidance, sensory fusion.

Phase two: computer vision, image processing, path tracking and planning, machine intelligence.

The TUT-I mobile robotic platform can be operated in four modes: human remote control, programmed control,


teach-and-playback, and automatic path planning in structured environments. Typically, the main function is teach-and-playback operation for indoor transportation. An autonomous mobile robot can be considered an automated path-planning, free-ranging vehicle. However, the cooperation of autonomous capability with a supervising operator appears to be the engineering compromise that provides a framework for the applications of mobile robots, especially in structured environments. The operation mode of path teaching and playback is an effective solution not only for manipulator arms but also for vehicles. The vehicle records the beacon data progressively in on-board computer memory during a manually guided teaching trip along a desired course. On subsequent unmanned trips, the vehicle directs itself along the chosen course by observing the beacons and comparing the data. Steering corrections allow the vehicle to follow the taught course automatically. Path generation in the teaching mode and path correction by real-time path error detection in the playback mode are demonstrated by the TUT-I robotic system. The vision-based image database is generated as the representation of the given path in the teaching procedure. An image-processing algorithm is performed for path following, and the path tracking involves an online positioning method.

Advanced sensory capability is employed to provide environment perception. A unique omni-vision system is used for localization and navigation. The omnidirectional vision system combines an extremely wide-angle lens with a CCD camera, so that a dynamic omni-vision image can be processed in real time. Beacon guidance is realized by observing the locations of points derived from overhead features such as predefined light arrays in a building. The navigation approach is based upon these omni-vision characteristics. A group of ultrasonic sensors is employed to avoid obstacles, and other sensors are utilized for system reliability. A multi-sensor data fusion technique is developed for collision-free trajectory piloting.

The TUT-I mobile platform can be decomposed into five distinct subsystems: locomotion with control and a man-machine interface, sensors for obstacle avoidance, an omni-vision module with image processor, a navigator with central controller, and power source units. This paper provides a concise series of descriptions of path planning, obstacle avoidance, control strategies, navigation, vision systems, ranging systems, and various application modules.

2. CONFIGURATION OF THE OMNIDIRECTIONAL VISION NAVIGATION UNITS

The platform comprises an unmanned vehicle consisting of two driven wheels and two free-swiveling front wheels. The chassis contains the elements necessary to power, propel, and steer the other on-board equipment. A number of ultrasonic ranging sensors are mounted near the chassis; these ultrasonic devices sense the presence of objects in the vehicle's path and prevent collisions. As a further precaution, safety switches stop the vehicle if it contacts anything.

Omnidirectional vision means that an entire hemispherical field of view is seen simultaneously. This is realized by means of an extremely wide-angle optical imaging device, called a fisheye lens, together with a CCD camera. Omnidirectional vision guidance is a new and unique navigation technique, and omni-vision appears to have definite significance in navigation applications for various autonomous guided vehicles. The omni-vision automated guiding system, referenced to overhead lights, consists of the five components shown in Fig. 1.


(1) Fisheye lens and CCD camera with an automated electronic shutter. The electronic shutter control system uses a single-chip microcomputer and is suitable for the TK-60 CCD camera; when the illumination changes, it still enables the camera to produce a good output image.

(2) A camera stand with five degrees of freedom.

(3) Beacon tracker: an image-processing computer for real-time data acquisition of the targets.

(4) Omni-image distortion corrector.

(5) Navigator, which includes three function modules: Path Generator, Path Error Detector, and Corrector.

The system performs path planning and tracking in the teach-and-playback mode by referring to overhead lights. It guides itself by referring to overhead visual targets that are universally found in buildings. A group of predefined overhead lights is usually selected as the landmark, so it is not necessary to install any special features. Since two points are sufficient to determine the vehicle's position and orientation, the beacon group should consist of at least two lights as guiding targets in each image frame. However, the algorithms may be adjusted to handle any specified number of targets greater than two, which produces a more accurate and robust system: even if a target is lost from time to time, the vehicle still has at least the minimum number of targets for guidance.
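To make the geometric idea concrete, the following minimal sketch (not the TUT-I implementation; the function and variable names are illustrative) shows how two beacon positions known in the world frame, together with the same two beacons measured in the vehicle frame, determine the vehicle's position and heading:

    import math

    def pose_from_two_beacons(b1_world, b2_world, b1_vehicle, b2_vehicle):
        # b1_world, b2_world: known (x, y) positions of two overhead lights.
        # b1_vehicle, b2_vehicle: the same lights measured in the vehicle frame,
        # e.g. after the omni-image distortion correction of Section 4.
        # Returns (x, y, heading) of the vehicle in the world frame.

        # Heading: difference between the beacon-pair direction in the two frames.
        ang_w = math.atan2(b2_world[1] - b1_world[1], b2_world[0] - b1_world[0])
        ang_v = math.atan2(b2_vehicle[1] - b1_vehicle[1], b2_vehicle[0] - b1_vehicle[0])
        heading = ang_w - ang_v

        # Position: rotate beacon 1's vehicle-frame position into the world frame
        # and subtract it from the beacon's known world position.
        c, s = math.cos(heading), math.sin(heading)
        x = b1_world[0] - (c * b1_vehicle[0] - s * b1_vehicle[1])
        y = b1_world[1] - (s * b1_vehicle[0] + c * b1_vehicle[1])
        return x, y, heading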

The tracker is a real-time digital image processor. Although multiple targets can be multiplexed through a single-target tracker, a multi-target tracker is used to eliminate the need for multiplexing software. Up to four image windows, or gates, can be used to track the apparent movement of the overhead lights simultaneously. The beacon tracker is the major piece of on-board hardware; it outputs the coordinates of a pair of lights, which are shown on the monitor screen with a tracking gate placed around each of them.

3. TEACHING AND PLAYBACK OPERATION MODE BASED ON IMAGE DATABASE

For industrial manipulators, the teach-and-playback operation mode using a teach pendant handled by an operator is a traditional technique. For an automated guided vehicle it is an unusual method, but still an effective solution. The vision-based image database is employed as the desired path generator. In the teaching mode, the operator manually drives the vehicle forward along the desired path at a selected speed. The selected targets, with their gates, move downward on the monitor screen as the vehicle passes underneath the overhead lights, and the data are recorded in the reference database described below. When a target is nearly out of the field of view, the current frame is terminated, a new pair of targets is selected, and the process is repeated. Some combination of


vision frames and position tracking continues until the desired path has been generated and the reference data for that track are recorded in memory. In the playback mode thereafter, the vehicle automatically keeps itself on the desired path by comparing the recorded reference with the vehicle's actual movement; steering corrections are made to drive the path errors (Ex, the lateral error; Ey, the error along the path; and Ea, the angular orientation error) to zero, thereby keeping the vehicle on the intended course. As the value of Ey diminishes and approaches zero, the next reference frame is called up and the cycle is repeated. This procedure causes the vehicle to follow the desired path.
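The playback cycle described above can be summarized by the following sketch (assumed structure only; observe and steer stand in for the error-measurement and steering-correction routines of the actual system):

    def playback(reference_frames, observe, steer, ey_threshold=0.05):
        # Follow the taught path by stepping through the recorded reference frames.
        # observe(frame) -> (Ex, Ey, Ea): path errors against that reference frame.
        # steer(Ex, Ea): issue a steering correction from the lateral/angular errors.
        for frame in reference_frames:
            while True:
                ex, ey, ea = observe(frame)
                steer(ex, ea)
                if abs(ey) < ey_threshold:  # Ey near zero: call up the next frame
                    break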

A database with the tree structure shown in Fig. 2 is built up to record the data stream. The vision-based image database is a three-layer data structure: a group of array pointers in which each array element points to a node in the next layer. The array index is the serial number of the sampling, representing frames, fields, and records respectively. This method has the distinctive property that the desired path is defined by an image database.
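As an illustration of this frame/field/record layering (the exact contents of each record are not specified here, so the attributes below are assumptions):

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Record:                       # bottom layer: one sampled observation
        rx: float                       # reference target x (image coordinates)
        ry: float                       # reference target y (image coordinates)

    @dataclass
    class Field:                        # middle layer: one sampling field
        records: List[Record] = field(default_factory=list)

    @dataclass
    class Frame:                        # top layer: one pair of tracked lights
        fields: List[Field] = field(default_factory=list)

    # The taught path is an ordered array of frames; the array indices serve as
    # the serial numbers of the samplings.
    taught_path: List[Frame] = []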

There are three coordinate systems: the beacon coordinate system, the vehicle coordinate system, and the image coordinate system. On the basis of coordinate conversion, the path errors can easily be calculated when the vehicle departs from the desired path, as detected from the recorded locations of geometric points derived from the various elements of the existing pattern. It is then possible to compare the observed target coordinates (Tx, Ty) with the reference target coordinates (Rx, Ry) and determine the angular error Ea, the lateral position error Ex, and the longitudinal position error Ey.
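A minimal sketch of one way these errors could be computed from a pair of observed and reference targets follows (the exact formulation used by the TUT-I navigator is not given here, so the midpoint/rotation construction is an assumption):

    import math

    def path_errors(observed, reference):
        # observed, reference: each a pair of (x, y) target coordinates expressed
        # in the vehicle coordinate system.
        (ox1, oy1), (ox2, oy2) = observed
        (rx1, ry1), (rx2, ry2) = reference

        # Angular error: rotation between the observed and reference beacon pairs.
        ea = math.atan2(oy2 - oy1, ox2 - ox1) - math.atan2(ry2 - ry1, rx2 - rx1)

        # Position errors: displacement of the pair midpoints, taken here as the
        # lateral (Ex) and longitudinal (Ey) components in the vehicle frame.
        ex = (rx1 + rx2) / 2 - (ox1 + ox2) / 2
        ey = (ry1 + ry2) / 2 - (oy1 + oy2) / 2
        return ex, ey, ea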

4. LENS DISTORTION AND PIXEL CORRECTION

For vision-based measurement, lens distortion and camera-chip pixel distortion must be corrected in order to obtain the real-world position of the vehicle. The target image coordinates supplied by the video tracker require correction for two inherent errors before they can provide accurate input for guiding the vehicle: the lens distortion error and the pixel scale error.

The wide-angle lens used in this system has considerable distortion; consequently, the image coordinates of the overhead lights reported by the tracker are not proportional to the true real-world coordinates of those lights. The lens distortion causes the image range Ir to vary as a function of the zenith angle B; depending on the lens used, this function may be linear, parabolic, or trigonometric.


The distortion is determined experimentally by laboratory measurements, from which the function is identified. For the lens used here, the distortion correction follows the linear formula illustrated in Fig. 3:

Ir = K B

where K is a constant derived from the distortion measurement. The correction factor for a specific lens distortion is provided by the lens correction software.

The pixels in the CCD chip have different scales in the X and Y directions, which causes the target coordinates to be scaled differently on the two axes. It is therefore necessary to divide the Y coordinates by a factor R, the ratio of pixel length to height, which brings the X and Y coordinates to the same scale. The following equations give this conversion:

dx = xi - xc
dy = (yi - yc) / R

where (xc, yc) are the coordinates of the camera center. The image range Ir then follows from (dx, dy). The image coordinates (dx, dy) on the focal plane are converted into the target coordinates (Tx, Ty) at the known height H, as shown in Fig. 3:

Tr = H tan B

Since B = Ir / K, this becomes

Tr = H tan(Ir / K)

where Tr is the target range in the real-world coordinate system. Because the azimuth angle C to a target point is the same in the image and real-world coordinate systems, the target world coordinates (Tx, Ty) can be calculated as

Tx = dx Tr / Ir
Ty = dy Tr / Ir
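The whole correction chain can be written compactly as the following sketch (K, R, and H come from the calibration and distortion measurements described above; the function name and signature are illustrative):

    import math

    def image_to_target(xi, yi, xc, yc, R, K, H):
        # xi, yi: target image coordinates reported by the tracker
        # xc, yc: image coordinates of the camera center
        # R: ratio of pixel length to height;  K: lens constant (Ir = K*B)
        # H: height of the overhead light above the camera
        dx = xi - xc
        dy = (yi - yc) / R              # bring X and Y to the same scale
        ir = math.hypot(dx, dy)         # image range from the camera center
        if ir == 0.0:
            return 0.0, 0.0             # target directly overhead
        tr = H * math.tan(ir / K)       # Tr = H*tan(B) with B = Ir/K
        return dx * tr / ir, dy * tr / ir   # (Tx, Ty); azimuth angle is preserved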

In order to transfer the image coordinates from an origin at the camera centerline to the vehicle centerline, the calibration of the three coordinate systems discussed above is necessary to determine the center point of the coordinates.

5. THE MOTION CONTROL MODULES

A programmable two-axis motor controller is used; a non-programmable model may also be chosen, in which case the necessary programming modules can be incorporated into the vehicle computer. The controller must selectively control the velocity or position of the two motors, using appropriate feedback such as optical encoders and tachogenerators coupled to either the motors or the wheels, which forms a two-degree-of-freedom closed-loop servo system.

The vehicle has two self-aligning front wheels and two driving rear wheels. Velocity-mode control is used when the


vision system is guiding the vehicle. In this mode, steering is accomplished by the two driving motors turning at different speeds according to the inputs VL and VR (the left and right wheel velocities) from the computer. The wheel encoders provide local feedback for the motor controller to maintain the velocity profile programmed during the teaching mode. By speeding up one wheel and slowing the other by an equal amount dV, the motor control strategy steers the vehicle back to its desired path. Since the sampling frequency is high enough, the system can be treated as a continuous feedback system, and in practice a conventional PID compensator can be designed to achieve the desired performance specifications. A simple control formula is used as follows:

dV = k1 Ex + k2 Ea

where Ex and Ea are the outputs of the path error measurement circuit, and k1 and k2 are constants that can be calculated mathematically from the relevant parameters of the vehicle dynamics and kinematics or determined experimentally.

VL = Vm + dV
VR = Vm - dV

where Vm is the velocity at the centerline of the vehicle. The sign of dV determines the turning direction and its magnitude sets the turning radius. The control formula brings the vehicle back onto course in the shortest possible time without overshooting. The block diagram in Fig. 4 represents the closed-loop transfer function; for the output Ex, it involves PD compensation.

In Fig. 4, D is the diameter of the rear drive wheels, w is the distance between the two rear wheels, kn is a constant related to the motor output, and V0 is the velocity of the mass center of the vehicle. The right diagram is a simplified unity-feedback system derived from the left diagram. In practice, an optimal pair of values for k1 and k2 can be selected through calculation or experiment to obtain the desired system performance.
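The steering law of this section amounts to the following sketch (the gains and the saturation limit are placeholders; the actual values would be obtained by the calculation or experiments mentioned above):

    def wheel_velocities(ex, ea, vm, k1=0.8, k2=1.2, dv_max=0.5):
        # ex, ea: lateral and angular path errors from the error detector
        # vm: commanded velocity at the vehicle centerline
        # Returns (VL, VR), the left and right wheel velocity commands.
        dv = k1 * ex + k2 * ea              # dV = k1*Ex + k2*Ea
        dv = max(-dv_max, min(dv_max, dv))  # added safety clamp (not in the paper)
        return vm + dv, vm - dv             # VL = Vm + dV, VR = Vm - dV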

6. CONCLUSION

Photographs of the TUT-I omni-vision based autonomous mobile robotic platform are shown in Fig. 5. The key technical points of the omni-vision based autonomous mobile robotic platform include the following:

(1) Omni-vision provides the entire scene using a fisheye lens and appears useful in a variety of robotics applications. An overall view is always required for safe and reliable operation of a vehicle. Whereas the conventional method of camera scanning is inherently deficient in this respect, dynamic omni-vision offers a definite advantage, particularly for mobile navigation.

(2) The preferred approach is for an unmanned vehicle to guide itself by referring to overhead visual targets such as predefined overhead lights. Overhead lights are universally found in structured environments and are not easily blocked by floor obstacles. The guidance system does not require the installation of any special equipment in the work area, and the point-matrix pattern of the beacons, serving as an environment map, is very simple to process and understand.

(3) The teach-and-playback operation mode is appropriate not only for robotic manipulators but for other vehicles as well. The vision-based image database, as a teaching path record and desired path generator, creates a unique technique for expanding robot capability.


Fig. 5: TUT-I Omni-vision based autonomous mobile robotic platform

ACKNOWLEDGMENTS

The authors gratefully acknowledge the support of the K. C. Wong Education Foundation, Hong Kong.

REFERENCES

1. E. L. Hall, "Fundamental principles of robot vision," Handbook of Pattern Recognition and Image Processing: Computer Vision, Academic Press, New York, pp. 543-575, 1994.

2. Qing-hao Meng, Yicai Sun, and Zuoliang Cao, "Adaptive extended Kalman filter (AEKF)-based mobile robot localization using sonar," Robotica, Vol. 18, pp. 459-473, 2000.

3. Gordon Kao and Penny Probert, "Feature extraction from a broadband sonar sensor for mapping structured environments efficiently," The International Journal of Robotics Research, Vol. 19, No. 10, pp. 895-913, 2000.

4. Xiaoqun Liao, Jin Cao, Ming Cao, Tayib Samu, and Ernest Hall, "Computer vision system for an autonomous mobile robot," Proc. SPIE Intelligent Robots and Computer Vision Conference, Boston, November 1998.

5. T. I. Samu, N. Kelkar, and E. L. Hall, "Fuzzy logic system for three dimensional line following for a mobile robot," Proc. of the Adaptive Distributed Parallel Computing Symposium, Dayton, OH, pp. 137-148, 1996.

6. Hong Xu and Zuoliang Cao, "A three-dimension-fuzzy wall-following controller for a mobile robot," ROBOT, Vol. 18, pp. 548-551, 1996.

7. Minglu Zhang, Shangxian Peng, and Zuoliang Cao, "The artificial neural network and fuzzy logic used for the avoiding of mobile robot," China Mechanical Engineering, Vol. 18, pp. 21-24, 1997.

8. Zuoliang Cao, Yuyu Huang, and E. L. Hall, "Region filling operations with random obstacle avoidance for mobile robot," Journal of Robotic Systems, 5(2), pp. 87-102, 1988.

9. Zvi Shiller, "Online suboptimal obstacle avoidance," The International Journal of Robotics Research, Vol. 19, No. 5, pp. 480-497, 2000.

10. Alain Lambert and Nadine Le Fort-Piat, "Safe task planning integrating uncertainties and local maps federations," The International Journal of Robotics Research, Vol. 19, No. 6, pp. 597-611, 2000.

11. Liming Zhang and Zuoliang Cao, "Teach-playback based beacon guidance for autonomous guided vehicles," Journal of Tianjin Institute of Technology, Vol. 12, No. 1, pp. 28-31, 1996.

12. Liming Zhang and Zuoliang Cao, "Mobile path generating and tracking for beacon guidance," 2nd Asian Conference on Robotics, 1994.
