
Measurement 44 (2011) 620–641


A two-layered subgoal based mobile robot navigation algorithm with vision system and IR sensors

N. Nirmal Singh, Avishek Chatterjee, Amitava Chatterjee*, Anjan Rakshit
Electrical Engineering Department, Jadavpur University, Kolkata 700 032, India

Article info

Article history:
Received 25 March 2010
Received in revised form 1 November 2010
Accepted 17 December 2010
Available online 30 December 2010

Keywords: Navigation; Vision; Subgoal; IR sensor; Shortest path

0263-2241/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.measurement.2010.12.002

* Corresponding author. E-mail address: [email protected] (A. Chatterjee).

Abstract

The present work describes the real-life implementation of a mobile robot navigation scheme where vision sensing is employed as the primary sensor for path planning and IR sensors are employed as secondary sensors for actual navigation of the mobile robot, with obstacle avoidance capability, in a static or dynamic indoor environment. This two-layer based, goal-driven architecture utilizes a wireless camera in the first layer to acquire the image and perform image processing online, to determine the subgoal by employing a shortest path algorithm. The subgoal information is then utilized in the second layer to navigate the robot utilizing IR sensors. Once the subgoal is reached, vision based path planning and IR guided navigation is reactivated. This sequential process is continued in an iterative fashion until the robot reaches the goal. The algorithm has been effectively tested for several real-life environments created in our laboratory and the results are found to be satisfactory.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

Recent advances in technologies in the area of robotics have made enormous contributions in many industrial and social domains. Nowadays numerous applications of robotic systems can be found in factory automation, surveillance systems, quality control systems, AGVs (autonomous guided vehicles), disaster fighting, medical assistance, etc. More and more robotic applications are now aimed at improving our day-to-day lives, and robots can be seen more often than ever before performing various tasks in disguise [1]. For many such applications, autonomous mobility of robots is a mandatory key issue. Many modern robotic applications now employ computer vision as the primary sensing mechanism. A vision system is considered a passive sensor and possesses some fundamental advantages over active sensors such as infrared, laser, and sonar sensors. Passive sensors such as cameras do not alter the environment by emitting light or waves in the process of acquiring data, and the obtained image also contains more information (i.e. substantial, spatial and temporal information) than that from active sensors [2]. Vision is the sense that enables humans to extract relevant information about the physical world, and appropriately it is the sense that we, the humans, rely on most. Computer vision techniques capable of extracting such information are continuously being developed, and more and more real-time vision-based navigation systems for mobile robots are now being implemented.

Vision based robot navigation is defined as the technique that guides a mobile robot to a desired destination, or along a desired path in an environment, by avoiding static (and possibly dynamic) obstacles, primarily using a vision sensor. Navigation in indoor environments and navigation in outdoor environments are the two major research areas that have emerged in the mobile robotics community. The navigation problem of a mobile robot can, most often, be divided into four subproblems [3]:

1. World perception. It senses the world, symbolizing it into features.

2. Path planning. It uses the features to create an ordered sequence of objective points that the robot must attain.


3. Path generation. The goal is to obtain a path through the sequence of objective points.

4. Path tracking. It is the responsibility of the controller to make the mobile robot follow the intended path.

All vision based indoor navigation schemes can be broadly classified into three categories: map based navigation, map-building based navigation, and mapless navigation [4]. In map based navigation the robot has an abstract description of the environment, knows its current position in the environment and knows the location of its goal. For indoor environments, the map-based approach is the most common technique used, due to the structured nature of the environment. The disadvantage of map based navigation is that the model descriptions are not always easy to generate, especially if one has to provide metric information. In addition, these schemes are valid for static environments only. The map-building based navigation systems construct their own environment maps by using sensors and then use these maps for navigation. Popular choices for the environment maps are grid-based maps [5–7], topological maps [8,9], and hybrid maps [10]. The mapless navigation systems are those that use no explicit representation at all of the space in which navigation is to take place; they rather resort to recognizing objects found in the environment, or to tracking or avoiding those objects by generating motion commands based on visual observations [11,12]. Several research works have so far been reported to acquire knowledge about the environment using camera(s) in stereo vision [13,14], trinocular vision [15], omni-directional or panoramic vision [16,17], and monocular vision [18,19]. Each such solution in mobile robot navigation has its own advantages and disadvantages. An important step in navigation, after the construction of the map, is path planning for intelligent control or guidance of the mobile robot. There are three popular general approaches for path planning: roadmap, cell decomposition, and potential field [20]. They differ in how the connectivity graphs are constructed and in their representations. Obviously, without any a priori knowledge of an environment, it is almost impossible to determine the true shortest path for navigation, among all possible paths. It is potentially possible to determine such paths by employing standard graph-search techniques, such as Dijkstra's algorithm [21] and the A* algorithm [22].

Fig. 1a. The KOALA robot, equipped with sonar and IR sensors and integrated with a single-camera based vision system.

Fig. 1b. The block diagram of the vision system.

The present paper describes a goal driven approach for mobile robot navigation, using vision based sensing and IR sensor based navigation. This two-layer based approach attempts to determine the shortest path of navigation between the start point and the known goal point, given a static or dynamic environment, in the presence of obstacles. In the first layer, vision acts as the primary sensing system to acquire an image of the environment, for subsequent path planning. A series of image processing operations is performed on the acquired image and then a gradient descent based algorithm is employed to compute the shortest path between the present position of the robot and the goal, avoiding obstacles [26]. This shortest path is employed to generate a subgoal and this information is then locally utilized to navigate the robot, utilizing IR sensor based guidance. This second layer of IR sensor based robot navigation attempts to guide the robot to the subgoal, even if the environment changes dynamically. Once the robot reaches the subgoal, the two-layer based algorithm is again activated to generate a new subgoal and to navigate the robot till this new subgoal is reached. This process is repeated iteratively until the final goal is reached. This method simultaneously attempts to attain two objectives. Based on vision sensing, it attempts to implement a shortest path planning algorithm in a bid to reach the goal, avoiding obstacles, as fast as it can. Then, if the environment undergoes a change during navigation and the obstacle information gets updated, IR sensor based guidance equips the robot with the capability of handling the changed environment so that the robot can still navigate safely. The periodic usage of vision based updating of the environment, subsequent path planning and then IR based actual navigation helps the robot to adapt its navigation temporally to dynamic variations in the environment and still attempt to reach the goal in the shortest time, as quickly as practicable. This algorithm was implemented in our laboratory, for the KOALA robot [23], creating several real-life like environments. The results showed the usefulness of the proposed algorithm.

Fig. 2. Snapshot of the user-interface developed, that can interact with the user.

The rest of the manuscript is organized as follows: Section 2 describes the hardware setup. Section 3 describes the proposed scheme, in a nutshell. Section 4 describes the computer vision based exploration of the environment in layer 1. Section 5 describes the algorithm implemented for the gradient descent based shortest path planning. Section 6 describes the IR based robot navigation in layer 2. Section 7 describes the performance evaluation. Section 8 concludes our work.

2. The hardware setup

The KOALA is a small (32 cm × 32 cm) six wheeled, differential drive vehicle manufactured by K-Team, Switzerland [23]. The KOALA robot used in our laboratory is equipped with 16 proximity/ambient IR sensors, four ultrasonic sensors and wheel encoders. We have integrated a complete vision system along with the KOALA robot in our laboratory by employing a single wireless camera for monocular vision. In KOALA, the hardware control is performed by an on-board microprocessor (Motorola 68331 @ operating frequency 22 MHz) [23]. Fig. 1a shows a snapshot of the mobile robot with four ultrasonic sensors and the vision system, integrated in our laboratory, employing a single vision sensor. The ultrasonic sensors can detect obstacles over a wide range, from 15 cm to 300 cm, and the IR sensors provide a range of measurements from 5 cm to 20 cm. Our single-vision-sensor system comprises a JMK wireless camera (WS-309AS) with A/V receiver and a Frontech USB (TV Box) frame grabber, which is used for acquiring a running video stream. Fig. 1b shows the vision system in schematic form. The entire system is developed with the objective of providing a low-cost solution which should prove attractive for the industrial community. The vision system is developed with two degrees of freedom to provide pan control and tilt control. To add these two degrees of freedom (DOFs) to the vision system, a PIC (16F876A) microcontroller based system is developed in our laboratory for pan control and tilt control of the single-camera based robot system [24]. Here, the main onboard Motorola microcontroller acts as the master and the PIC microcontroller acts as a slave. The software, developed in interrupt driven mode, communicates with the mobile robot through the RS232C port. Fig. 2 shows a snapshot of the user-interface developed on the PC side that can interact with the user. The main serial mode of communication is handled by passing ASCII message strings between the PC and the Motorola processor in the robot. The RS232C serial link set-up between the PC and the robot is always set at 8 bit data, 1 start bit, 2 stop bits and no parity mode. To give an example, the message string for RC servo action to provide pan or tilt control, sent from the PC end, comprises 'Z' as the identifier character, followed by <i>, which corresponds to the servomotor id for which the position command is prepared (i = 1, 2), followed by the sign, given in <s>, of the position command ('+' or '−'), and then the two digit actual position command, in degrees (this can vary from −90° to +90°). Every ASCII message string is terminated, as usual, by a carriage return, <CR>.
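To make the message format concrete, the sketch below builds and sends such a pan/tilt command string. It only illustrates the format described above; the pyserial package, the port name and the baud rate are assumptions, not details given in the paper.

import serial  # pyserial; an assumed choice of library, not named in the paper

def pan_tilt_command(servo_id, degrees):
    # Build the ASCII message described above: identifier 'Z', servomotor id
    # <i> (1 or 2), sign <s> of the position command, the two-digit position
    # in degrees, and the terminating carriage return <CR>.
    if servo_id not in (1, 2) or not -90 <= degrees <= 90:
        raise ValueError("servo id must be 1 or 2 and the angle within +/-90 degrees")
    sign = '+' if degrees >= 0 else '-'
    return "Z{}{}{:02d}\r".format(servo_id, sign, abs(degrees)).encode('ascii')

# Example: command servomotor 2 (tilt) to -30 degrees over the RS232C link,
# using 8 data bits, 2 stop bits and no parity as stated above.  The port
# name and baud rate below are assumptions.
link = serial.Serial('COM1', 9600, bytesize=8, parity='N', stopbits=2)
link.write(pan_tilt_command(2, -30))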

3. The proposed two-layer, goal oriented navigation scheme

Fig. 3 shows the complete proposed navigation algorithm in flow chart form. A wireless camera, as shown in Fig. 1a, is used to capture a running video stream of the environment in front of the KOALA robot. An image frame can be acquired from this video stream for further processing at any point of time. This acquired image frame is first processed to make the image suitable for further processing, by employing a series of image processing operations like image filtering, edge detection and image segmentation. Then the shortest path generation algorithm is employed on this processed image, using the goal information available a priori. Next the shortest path generated is utilized to determine the next subgoal. This entire procedure constitutes layer 1 of the algorithm and is implemented at high level in a PC using the Visual Basic (VB) platform. This subgoal information is next transferred to layer 2, where the KOALA robot is actually navigated towards the subgoal using obstacle avoidance capability, so that the robot can be useful even in a dynamically changing environment. The navigation in layer 2 is performed using several IR sensors, connected at the front face and side faces of the KOALA robot. Once the subgoal is reached, the control is transferred back to layer 1 so that the next subgoal can be generated and actual navigation can be performed in layer 2. This process of local path planning, followed by actual navigation, is continued in an iterative fashion until the final goal is reached. The algorithm in layer 2 for actual navigation is implemented by developing a C program whose cross-compiled version (a .s37 file) is downloaded into the Motorola processor of the KOALA robot. This .s37 program communicates with the VB program at the PC end, in interrupt driven mode, in real life. The .s37 program generated from the C program is also equipped with the facility of providing support from the VB based PC end for a pool of protocol commands for commanding the KOALA robot. These commands are originally only available for execution from a terminal emulator available with the KOALA robot package. We developed a system where all the KOALA robot protocol commands and our additional navigation algorithms are supported by the C program developed, so that the entire system can be completely controlled from the VB platform at the PC end.

Fig. 3. Flow chart for the proposed navigation algorithm.

Fig. 4. (a) Image acquired by the wireless camera, (b) detected edge image, (c) thickened image, (d) region grown image, (e) image with the obstacle information, and (f) trapezoidal floor image.

Fig. 5. Relationship between the image coordinate and the mobile robot coordinate.

4. Image processing based exploration of the environment in layer 1

Image processing is a form of signal processing where the input signals are images and the output could be a transformed version of the input. The proposed system employs a map building method based on image segmentation, for vision-based navigation of a mobile robot in an indoor environment, with the assumption that the surface is uniform. The following steps are implemented [27]:

A. Acquire the image from the wireless camera.
The camera, mounted at the center of the pan–tilt system of the robot, keeps acquiring a running video stream of the environment ahead of it. From this video stream, an image frame can be acquired for further processing. Fig. 4a shows such an acquired image.

B. Employ low pass filtering on the acquired image.
The acquired image is then low pass filtered to reduce noise. This causes a smoothing or blurring effect on the neighboring pixels. The system is developed using the popular arithmetic mean filter to perform low pass filtering. The arithmetic mean filter is applied using a 5 × 5 mask centered on each pixel, whose intensity is computed as the average value of the pixels under the influence of the filter mask.
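A minimal sketch of this filtering step, assuming the acquired frame is already available as a grayscale NumPy array (the paper does not name the image processing library actually used):

import numpy as np
from scipy.ndimage import uniform_filter  # SciPy used here for illustration only

def mean_filter_5x5(gray):
    # Arithmetic mean filter: every pixel is replaced by the average of the
    # 5 x 5 neighbourhood centred on it, smoothing out pixel-level noise.
    return uniform_filter(gray.astype(np.float32), size=5)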

C. Detect edges in the filtered image by Canny edge detection.
An edge physically signifies a boundary between two regions with relatively distinct gray-level properties. The technique of edge-based segmentation signifies isolation of desired objects from a scene using different types of gradient operators. Edges of the image in our work are detected by using the Canny edge detection method. Fig. 4b shows the edge image of the processed, filtered version of the acquired image.
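A corresponding sketch of the edge detection step; OpenCV is assumed here purely for illustration, and the hysteresis thresholds are illustrative values, not taken from the paper:

import cv2  # OpenCV; assumed for illustration, not named by the authors

def detect_edges(filtered_u8, low=50, high=150):
    # Canny edge detection on the low-pass-filtered 8-bit grayscale image;
    # 'low' and 'high' are the hysteresis thresholds.
    return cv2.Canny(filtered_u8, low, high)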


Fig. 6a. A snapshot of the software developed for creating the map.

Fig. 6b. A snapshot of the map updated with obstacle information.


Fig. 7. A sample shortest path computed for an environment.

Fig. 8a. IR sensor arrangements of the KOALA robot [23].

Fig. 8b. Measured values of the IR sensor readings, with a 1.5 cm wide obstacle placed in front of sensor R0 at a distance of 10 cm.



D. Process the edge image to thicken and link the edges.
The edge image contains many small broken edges. To make an edge image meaningful, one needs to link nearby edges to bridge gaps, and they can be thickened to make their presence distinct. Thickening can be performed by a morphological operation called dilation by a structuring element, which is used to grow selected regions of foreground pixels in images. Dilation is normally applied to binary images, and it produces another binary image as output. This dilation operation "thickens" or "grows" objects in a binary image, and the shape of thickening can be controlled by a suitable choice of the structuring element used to perform the dilation. The concept of linking edges and thickening them by dilation in an edge image can also be realized by a suitable low pass filtering scheme with a suitable choice of the filter mask. This operation is carried out in this work by using geometric mean filtering. The geometric mean filter is a member of the set of nonlinear mean filters, which are more efficient than the arithmetic mean filter in removing Gaussian type noise while preserving edge features. Fig. 4c shows the edge linked and thickened image.
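The thickening step can be sketched as a plain dilation with a square structuring element (again assuming OpenCV; the kernel size is illustrative):

import cv2
import numpy as np

def thicken_edges(edge_img, ksize=5):
    # Dilate the binary edge image with a ksize x ksize structuring element so
    # that nearby edge fragments merge and small gaps are bridged.
    kernel = np.ones((ksize, ksize), np.uint8)
    return cv2.dilate(edge_img, kernel, iterations=1)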


Fig. 9. (a): The image acquired, (b–f): sequence of image processing carried out in layer 1. (b): edge image; (c): thickened edge image; (d): region grown image; (e): image with free, obstacle, and hidden regions and (f): trapezoidal floor image.


Fig. 10. The initial grid map.

Fig. 11. The grid map, when the robot reaches the first subgoal.


Fig. 12. (a–d) Results of image processing at subgoal 1. (a): the acquired image; (b): edge image; (c): thickened edge image, and (d): image with three distinct regions.

1 For interpretation of color in Figs. 4 and 6, the reader is referred to the web version of this article.



E. Perform region growing segmentation on the thickened edge image.
Once the thickening is done, the image is segregated into regions. To find the obstructed zone and the unobstructed zone in the image, region growing based segmentation is performed on the thickened image. Region growing is a simple but efficient region-based image segmentation method, classified as one of the pixel-based image segmentation schemes since it involves the selection of initial seed points. This approach to segmentation examines the neighboring pixels of the initial "seed points" and determines whether each pixel should be added to the region grown from the seed point or not. Region growing is done by examining properties of each such block created and merging it with adjacent blocks that satisfy some criteria (similar gray-level pixel values, texture, etc.). The seed point needed for performing region growing is chosen near the bottom center of the image. This point 'S' is shown in Fig. 4c. Now the image is scanned along all the vertical lines from bottom to top. The point at which the floor area ends is regarded as the obstacle. All regions before the obstacles constitute the free zone. All regions beyond the obstacles are termed the hidden zone. Fig. 4d shows the unobstructed zone (free space) in green¹ color and the hidden zone in yellow color. Next the obstructed zone is marked in red and Fig. 4e shows all these three regions. This entire process is continuous and the obstacle information gets continuously updated.
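The column-wise classification into free, obstacle and hidden zones can be sketched as below, assuming a binary image in which thickened edge pixels are non-zero and the floor is zero; the label codes are illustrative:

import numpy as np

FREE, OBSTACLE, HIDDEN = 1, 2, 3   # illustrative label codes

def label_zones(edge_img):
    # Scan each column from the bottom row upwards: pixels below the first
    # edge pixel belong to the free floor, the edge pixel itself marks the
    # obstacle boundary, and everything beyond (above) it is hidden.
    rows, cols = edge_img.shape
    labels = np.zeros((rows, cols), dtype=np.uint8)
    for c in range(cols):
        blocked = False
        for r in range(rows - 1, -1, -1):        # bottom to top
            if blocked:
                labels[r, c] = HIDDEN
            elif edge_img[r, c]:
                labels[r, c] = OBSTACLE
                blocked = True
            else:
                labels[r, c] = FREE
    return labels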

F. Transform the region grown image to the floor region.


The entire grown region, updated with obstacle information, is now transformed from the image plane to the floor region. In order to calculate a distance in 3D coordinates using a single camera, we assume that all objects have contact with the ground at the bottom and interpret the scene in two dimensional coordinates. Fig. 5 shows the relationship by which, given the elevation of the camera and the elevation angle, any point on the image plane can be directly mapped onto the floor, relative to the position of the camera [25]. Here the robot/camera 3D coordinate frame is assumed with the corresponding notations shown in Fig. 5. This coordinate frame is assumed attached to the present pose of the robot/camera, at any instant of time. This coordinate transformation mechanism allows one to determine the free points and the obstructed points in the world coordinate system (WCS) from the image acquired by the camera. Hence, with reference to Fig. 5, any point with coordinates (u, v) in the image plane can be transformed to the coordinates in two dimensions (xc, yc) in the robot/camera coordinate frame as:

xc = u·h / (f·sin θEL + v·cos θEL)

yc = h·(v·sin θEL − f·cos θEL) / (v·cos θEL + f·sin θEL)

where h is the height of the camera optical center from the base plane, f is the focal length of the camera, θEL is the elevation angle of the camera, yc,min = A (dead zone), yc,max = B, xc,min = C/2, and xc,max = D/2.

Once this transformation is employed, one can obtain the actual position of a point (x, y) on the floor, given (xc, yc) and the present pose of the robot (xR, yR, φR). Fig. 4f shows the floor with obstacle information. The transformed floor region is in trapezoidal form. This floor plane image is then copied into the 500 pixel × 500 pixel map, which represents a 20 m × 20 m working space for the robot. Fig. 6a shows a snapshot of the map created and Fig. 6b shows a snapshot with the floor image in the grid map. In Fig. 6b, the trapezoidal floor region is shown in green color and the obstacle information is shown in red color. This process of transformation continues even when the robot is in motion, and it updates the new obstacle information in the map while the robot moves.
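The two relations above, together with the pose-based mapping onto the 500 × 500 grid (0.04 m per cell for the 20 m × 20 m workspace), can be sketched as follows; the axis convention used in floor_to_world is an assumption, since the paper only states the idea of the transformation:

import math

def image_to_floor(u, v, h, f, theta_el):
    # Map an image-plane point (u, v) to the camera/robot-frame floor point
    # (xc, yc) using the two relations given above.
    xc = u * h / (f * math.sin(theta_el) + v * math.cos(theta_el))
    yc = h * (v * math.sin(theta_el) - f * math.cos(theta_el)) / \
         (v * math.cos(theta_el) + f * math.sin(theta_el))
    return xc, yc

def floor_to_world(xc, yc, x_r, y_r, phi_r):
    # Rotate and translate the camera-frame floor point by the present robot
    # pose (x_r, y_r, phi_r); yc is taken along the robot heading and xc to
    # its right, which is an assumed convention.
    x = x_r + yc * math.cos(phi_r) + xc * math.sin(phi_r)
    y = y_r + yc * math.sin(phi_r) - xc * math.cos(phi_r)
    return x, y

CELL = 20.0 / 500.0   # 20 m x 20 m workspace on a 500 x 500 cell grid

def world_to_cell(x, y):
    # Index of the grid cell holding the world point (x, y).
    return int(round(x / CELL)), int(round(y / CELL))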

5. Steepest descent gradient based shortest path computation and subgoal generation

In mobile robot navigation it is an important objective to determine the optimum path between the present robot location and the goal point, so that the robot can reach the destination in minimum time, avoiding obstacles, as far as practicable. The present work employs a heuristic gradient based method, operating on the grid map, for finding the shortest path [26]. Algorithm 1 shows this algorithm in detail. The initial and the final positions of the robot are known a priori, with the obstacle information determined from the previous steps. Now the coordinates along the shortest path are determined by using the steepest descent method. The steepest descent algorithm uses the gradient of a function to determine the direction in which the function is decreasing most rapidly. Each successive iteration of the algorithm moves along this direction for a specified step size and then recomputes the gradient to determine the new direction of travel. The heuristic approach employed here can be easily understood if a piece of land is considered with known obstacles and the initial point and final point marked on it. The land surface is assumed frictionless such that, say, at the starting point, if we start pouring sand on the ground, it spreads towards all possible paths, similar to dispersion of a fluid in all possible directions. It is obvious that one cannot pass through the obstacles. In each iteration, we assume that a fixed amount of sand is poured and we let it spread. We assign a time index to every point on the ground, equal to the iteration number at which the sand reaches a pre-assumed height. So the earlier the height is reached, the smaller is this index. The pre-assumed height and the fixed amount of sand dispersed are chosen so as to avoid saturation in value within any finite considerable region. Hence a travel time matrix (H) can be calculated employing this finite element diffusion method, and the H matrix is iteratively updated until a termination criterion is met. At the end of this procedure, those entries in H which still contain zeros correspond to the obstacle cells. Next, the gradient descent based procedure is employed to determine the coordinates of the points on the shortest path, by starting from the goal point and finally arriving at the present robot location. For this, the gradient matrices of H in the x- and y-directions, i.e. ∇Hx and ∇Hy, are calculated, and based on them the coordinates of the next point on the shortest path are computed, utilizing the last point obtained on the shortest path. The algorithm always proceeds backwards, starting from the goal point. This method is an efficient one and it operates in an iterative fashion. Fig. 7 shows a sample environment where the shortest path is computed between the initial and the goal point in the map.

Once the shortest path is determined, we need to find the corner points nearer to an obstacle. To find the corner points, we take three consecutive points on the path and find the cosine of the angle between the two line segments joining the first two and the last two points. If this value falls below a given threshold, then the middle of these three points is considered a corner point; otherwise we move to the next subsequent point and again compute the cosine of the new angle. This process is continued until a suitable corner point is obtained. This corner point is stored as the next subgoal point for navigation. For example, in Fig. 7, when A1, A2, A3 are the three points under consideration, the cosine of the angle between the line segments A1A2 and A2A3 is very high (above the chosen threshold), so A2 is not considered a corner point. In this process, we keep moving forward, and when we reach the three consecutive points B1, B2, B3, the angle between the line segments B1B2 and B2B3 is large enough so that the cosine of the angle falls below the chosen threshold. Then B2 is considered a corner point.
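A sketch of this corner test follows; the cosine threshold value is illustrative and not taken from the paper:

import math

def next_subgoal(path, cos_threshold=0.9):
    # Walk triples of consecutive path points; when the cosine of the angle
    # between the segments joining the first two and the last two points
    # drops below the threshold, the middle point is the corner (subgoal).
    for i in range(len(path) - 2):
        (x1, y1), (x2, y2), (x3, y3) = path[i], path[i + 1], path[i + 2]
        ax, ay = x2 - x1, y2 - y1
        bx, by = x3 - x2, y3 - y2
        cos_angle = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        if cos_angle < cos_threshold:
            return (x2, y2)
    return path[-1]   # no corner found: the goal itself becomes the next subgoal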


Algorithm 1. The shortest path generation algorithm employing obstacle avoidance.

BEGIN
1. Obtain the occupancy grid matrix (M), the start point (x_start, y_start), and the goal point (x_goal, y_goal). M(i, j) = 0 denotes a free cell and M(i, j) = 1 denotes an obstacle cell.
2. Create the diffusion matrix (W) and the travel time matrix (H), of the same size as M. Initialize W0 = H0 = 0.
3. Set W0(x_start, y_start) = 1.
4. Set the diffusion constant (diffconst) and the maximum number of iterations without updates (no_update_max). Initialize the number of iterations (iter_count) and the number of iterations without updates (no_update_iter_count).
5. WHILE (no_update_iter_count < no_update_max)
   5.1. iter_count = iter_count + 1.
   5.2. Diffuse cells downwards:
        W_iter_count(i, j) = W_iter_count(i, j) + diffconst · W_iter_count(i+1, j),
        i = 1, 2, ..., (W_ROWS − 1); j = 1, 2, ..., W_COLS.
   5.3. Diffuse cells upwards:
        W_iter_count(i, j) = W_iter_count(i, j) + diffconst · W_iter_count(i−1, j),
        i = 2, 3, ..., W_ROWS; j = 1, 2, ..., W_COLS.
   5.4. Diffuse cells towards the right:
        W_iter_count(i, j) = W_iter_count(i, j) + diffconst · W_iter_count(i, j+1),
        i = 1, 2, ..., W_ROWS; j = 1, 2, ..., (W_COLS − 1).
   5.5. Diffuse cells towards the left:
        W_iter_count(i, j) = W_iter_count(i, j) + diffconst · W_iter_count(i, j−1),
        i = 1, 2, ..., W_ROWS; j = 2, 3, ..., W_COLS.
   5.6. Make W_iter_count(i, j) = 0 if M(i, j) = 1; i = 1, 2, ..., W_ROWS; j = 1, 2, ..., W_COLS.
   5.7. If any W_iter_count(i, j) becomes greater than the pre-assumed height for the first time, then set the corresponding H_iter_count(i, j) = iter_count.
   5.8. Compute sum_count_iter_count as the sum of those entries in the W matrix whose present value is > 1.
   5.9. IF |sum_count_iter_count − sum_count_(iter_count−1)| < 1 THEN
            no_update_iter_count = no_update_iter_count + 1
        ENDIF
   ENDWHILE
6. All H(i, j) points still equal to zero are the obstacle points. Set these points to a high value, i.e. one more than the highest value among their adjacent neighbours (steep gradient for obstacle occupied points).
7. Create the shortest path coordinate vectors sh_path_coord_row and sh_path_coord_col and initialize the first point: sh_path_coord_row(1) = x_goal; sh_path_coord_col(1) = y_goal. Set the step size l.
8. Compute the gradient matrices of the H matrix in the x-direction (∇Hx) and the y-direction (∇Hy).
9. ∇Hx = −∇Hx; ∇Hy = −∇Hy; path_index = 1; path_flag = 1.
10. WHILE (path_flag = 1)
    10.1. Compute del_row by interpolation using the ∇Hy matrix.
    10.2. Compute del_col by interpolation using the ∇Hx matrix.
    10.3. Compute the coordinates of the next point on the shortest path:
          sh_path_coord_row(path_index + 1) = sh_path_coord_row(path_index) + l · del_row / sqrt(del_row² + del_col²)
          sh_path_coord_col(path_index + 1) = sh_path_coord_col(path_index) + l · del_col / sqrt(del_row² + del_col²)
    10.4. IF (the initial point is reached) THEN
              path_flag = 0
          ENDIF
    ENDWHILE
11. Reverse the vectors sh_path_coord_row and sh_path_coord_col.
END
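A compact NumPy sketch of Algorithm 1 is given below. It is an illustration written for this summary, not the authors' implementation; the diffusion constant, pre-assumed height, step size and stopping tolerances are assumed values, and the gradient at each path point is read from the nearest cell rather than interpolated as in steps 10.1–10.2.

import numpy as np

def shortest_path(M, start, goal, diffconst=0.15, height=1.0, step=1.0,
                  no_update_max=20, max_iter=5000):
    # M is the occupancy grid (1 = obstacle, 0 = free); start and goal are
    # (row, col) cell indices.  Returns a list of (row, col) points from the
    # start towards the goal.
    W = np.zeros(M.shape, dtype=float)      # diffusion matrix
    H = np.zeros(M.shape, dtype=float)      # travel time matrix
    W[start] = 1.0
    prev_sum, no_update = None, 0
    for it in range(1, max_iter + 1):
        W[:-1, :] += diffconst * W[1:, :]   # steps 5.2-5.5: diffuse towards
        W[1:, :]  += diffconst * W[:-1, :]  # the four neighbouring cells
        W[:, :-1] += diffconst * W[:, 1:]
        W[:, 1:]  += diffconst * W[:, :-1]
        W[M == 1] = 0.0                     # step 5.6: obstacle cells stay empty
        H[(H == 0) & (W > height)] = it     # step 5.7: first-arrival time
        s = W[W > 1].sum()                  # step 5.8
        if prev_sum is not None and abs(s - prev_sum) < 1:
            no_update += 1                  # step 5.9
        prev_sum = s
        if no_update >= no_update_max:
            break
    H[H == 0] = H.max() + 1                 # step 6: steep gradient at obstacles
    grad_r, grad_c = np.gradient(H)         # step 8
    path = [np.asarray(goal, dtype=float)]  # step 7: start from the goal
    for _ in range(10 * (M.shape[0] + M.shape[1])):
        r = int(np.clip(round(path[-1][0]), 0, M.shape[0] - 1))
        c = int(np.clip(round(path[-1][1]), 0, M.shape[1] - 1))
        d = np.array([-grad_r[r, c], -grad_c[r, c]])   # step 9: descend H
        norm = np.hypot(d[0], d[1])
        if norm == 0:
            break
        path.append(path[-1] + step * d / norm)        # step 10.3
        if np.hypot(*(path[-1] - np.asarray(start, dtype=float))) < 1.5 * step:
            break                                      # step 10.4
    return [tuple(p) for p in path[::-1]]              # step 11: reverse

# Usage sketch: a 100 x 100 free grid with a rectangular obstacle in the middle.
grid = np.zeros((100, 100))
grid[40:60, 30:70] = 1
pts = shortest_path(grid, start=(90, 10), goal=(10, 90))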


6. IR based navigation in layer 2

Once the subgoal point is determined, the control is passed from layer 1 to layer 2. As soon as the new subgoal information is passed, the robot updates its present pose (xR, yR, φR), based on incremental wheel encoder information, and determines the new steering angle, based on its present pose and the subgoal information.


Fig. 13. The grid map, when the robot reaches the final goal point.

Fig. 14. The robot navigation path traversed, from the initial point to the goal point in presence of the obstacle (x-coordinate vs. y-coordinate, in m).


Ideally, this is the angle by which the robot should turn and then proceed at a constant speed to reach the subgoal, in a static scenario. This is because the subgoal belongs to the set of points which were generated from the shortest path generation algorithm, employing obstacle avoidance. However, in a dynamic scenario, after the last time the vision based mapping subroutine was activated, a new obstacle may have arrived or an old obstacle's position may have changed. This may result in obstruction along the ideal path of travel between the robot and the subgoal. To cope with this dynamic environment, the navigation is guided by 16 IR sensors, mounted symmetrically along the periphery of the KOALA robot.
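The turn computation can be sketched as follows (a straightforward geometric relation, written under the assumption that the pose angle φR is measured in the same world frame as the subgoal):

import math

def steering_angle(x_r, y_r, phi_r, x_sub, y_sub):
    # Angle by which the robot should turn, given its odometry pose
    # (x_r, y_r, phi_r) and the subgoal (x_sub, y_sub), wrapped to (-pi, pi].
    bearing = math.atan2(y_sub - y_r, x_sub - x_r)      # direction to the subgoal
    turn = bearing - phi_r
    return math.atan2(math.sin(turn), math.cos(turn))   # normalise the difference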

These IR sensors are densely populated in front and sparsely populated at the two sides of the robot. Fig. 8a shows the sensor arrangement of the mobile robot and Fig. 8b shows a typical situation for the measured values of the sensors, with a 1.5 cm wide obstacle placed in front of the front sensor (R0) at a distance of 10 cm from the robot front face. For navigation, these 16 IR sensors scan the environment. Depending on these sensor readings, the system calculates the obstacle regions and free regions ahead of the robot. From these calculations the traversable area is determined. For determining the traversable area, separate thresholds are set for each of the 16 sensors, with the maximum priority given to the front sensors (R0–R3, L0–L3). For each sensor, if its reading exceeds its threshold, the direction ahead of it is obstructed; else the direction ahead is considered free for traversal. Now, depending on these readings, there can be traversable areas both to the left and to the right of the present pose of the robot. The decision of whether the robot should turn left or right is taken based on which direction means that the robot has to undertake the shorter detour with respect to its ideal direction of travel. Once the detour direction is determined, the speed of the robot is determined based on the IR sensor readings in that direction. When the robot travels a predetermined distance, the entire IR based scanning and determination of the new detour direction of traversal is reactivated, and this procedure is continued until the robot reaches the subgoal or its closest vicinity. Then the robot stops and the control is transferred back to layer 1.
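A much simplified sketch of this layer-2 decision is given below; the sensor bearings, the per-sensor thresholds, the 10-bit reading range and the speed law are all illustrative assumptions rather than the authors' actual values:

def choose_detour(readings, thresholds, bearings, ideal_heading):
    # readings, thresholds and bearings are per-sensor lists (16 entries);
    # a direction is traversable when its IR reading stays below its
    # threshold, and among the traversable directions the one closest to
    # the ideal heading (towards the subgoal) is chosen.
    candidates = [(abs(b - ideal_heading), b, r)
                  for r, t, b in zip(readings, thresholds, bearings) if r < t]
    if not candidates:
        return None, 0.0                      # every direction obstructed: stop
    _, heading, reading = min(candidates)     # smallest detour from the ideal line
    speed = max(0.1, 1.0 - reading / 1023.0)  # slower as the proximity reading grows
    return heading, speed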



7. Performance evaluation

The performance evaluation has been carried out, for vision-based navigation, in our laboratory, utilizing several environments. Here we present the results for four such experiments, two each in static and dynamic environments.


Fig. 15. Variation of: (a) response of L0 & L3 IR sensors with time and (b) response of R0 & R3 IR sensors with time, for case study I.

7.1. Case study – I

The initial pose of the robot is (0, 0, 0) and the goal point is (2, 0). There lies an object between the robot and the goal position. It should be mentioned here that, for the robot system equipped with a pan–tilt mechanism with its corresponding degrees of freedom, in this work the pan angle and the tilt angle are suitably initialized for a particular environment and then kept fixed for all subsequent experiments. Initially these two angles are so chosen for the robot system developed that the monocular camera, in each frame, covers a reasonably large floor and environment area. The system is hence equipped with the flexibility that these angles can be suitably initialized depending on the environment where this navigation system is going to be implemented. Fig. 9a shows the image frame acquired from the video stream of the camera and Fig. 9b–f show the sequence of image processing steps, when the robot is in the initial position. The edge of the face of the obstacle on the ground, viewed by the robot in front of it when the robot is at its initial pose, actually extends from (0.9, 0.3) to (0.9, −0.85). Fig. 10 shows the snapshot of the grid map with the obstacle information and free region. Here the shortest path is calculated and layer 2 of the robot navigation algorithm is updated with the subgoal information. The algorithm calculates subgoal 1 as (0.81, 0.33). When the control is transferred to layer 2, the robot navigates using IR sensor based guidance, up to subgoal 1. The robot actually stops at (0.819, 0.332), which has a very small discrepancy with the calculated subgoal. Fig. 11 shows the snapshot of the grid map when the robot reaches the first subgoal point. This grid map is developed when the control is transferred back to layer 1 and vision based processing is carried out once more. Fig. 12a–d show the results of the image processing steps when the robot is at the first subgoal point. These results are used for IR based navigation once more. This sequential process is continued to reach the final goal point. Fig. 13 shows the grid map when the robot has reached the destination. The robot finally stops at (1.963, 0.024), which is extremely close to the specified goal (2, 0). Fig. 14 shows the complete navigation path traversed by the robot, starting from the initial point and reaching the goal point, in presence of the obstacle, following the shortest possible path. Fig. 15a shows the response of the IR sensors on the right side of the robot (R0, R3) during navigation and Fig. 15b shows the corresponding responses for the IR sensors on the left side of the robot (L0, L3). It can be seen that the reading of the R3 sensor reaches a high value when the robot is in the vicinity of the obstacle. As the robot crosses the obstacle and proceeds towards the goal point, the reading of the R3 sensor gradually decreases.

7.2. Case study – II

Here again the initial pose of the robot is (0, 0, 0) and the new goal point is (3, 0). Now two objects are introduced between the robot and the goal position. Fig. 16a shows the image frame acquired at the initial position of the robot and Fig. 16b–f show the results of the subsequent image processing steps in layer 1. Fig. 17 shows the snapshot of the initial grid map with the obstacle information and free region. Next the shortest path is calculated and layer 2 of the robot navigation algorithm is implemented with this subgoal information. Fig. 18 shows the snapshot of the grid map when the robot reaches the first subgoal point. When the robot reaches subgoal 1, the control is transferred back to layer 1. The system again performs the vision based processing, as shown in Fig. 19, and Fig. 20 shows the grid map when the robot reaches subgoal 2, using IR based navigation in layer 2. This iterative process is continued until the robot reaches the final goal. Fig. 21 shows the grid map when the robot reaches the final goal point. Fig. 22 shows the complete path of traversal of the robot for this static environment and shows that the robot reaches the goal satisfactorily. Fig. 23 shows the variations of the readings of four IR sensors, R0, R3, L0, and L3, as the robot navigates towards its destination.

Fig. 16. (a): The image acquired, (b)–(f): results of image processing steps in layer 1. (b): edge image; (c): thickened edge image; (d): region grown image; (e): image with free, obstacle and hidden regions and (f): trapezoidal floor image.


Fig. 17. The initial grid map.

Fig. 18. The grid map, when the robot reaches the first subgoal.


7.3. Case study – III

In the next two case studies, we demonstrate the utility of the proposed system in the case of a dynamically changing environment. Here, for an environment similar to that considered in case study I, the robot starts from an initial pose (0, 0, 0), with a bid to reach the goal point (2, 0), in presence of an obstacle between the robot and the goal position. However, after the robot starts its IR based navigation towards subgoal 1, determined using vision based image processing in layer 1 at the initial position of the robot, followed by the determination of subgoal 1 utilizing the shortest path algorithm, the position of obstacle 1 is shifted. The new position of the obstacle is shown in Fig. 24, where it is moved nearer to the robot and shifted towards the left of the robot, with reference to its initial pose. Because of this dynamic variation in the environment, the robot takes a detour towards its left but is still able to avoid the obstacle and reach its subgoal. The subsequent activations of the iterative algorithm show that the robot reaches its final goal almost perfectly, once more. Fig. 24 shows this navigation of the robot in the dynamic environment. Fig. 25a and b show the IR sensor readings in front of the robot. It can be seen that the readings of R0 and L0 receive a sudden kick when the obstacle is moved in the dynamic environment.

Fig. 19. (a–d) Results of image processing at subgoal 1. (a): the captured image; (b): edge image; (c): thickened edge image, and (d): image with three distinct regions.

Here it should be mentioned that if there arises an exceptional situation where the dynamically changing object arrives exactly on a subgoal, then, according to the algorithm, the IR sensor based actual navigation guidance mechanism will ensure that the robot stops at the shortest distance from the subgoal, satisfying the obstacle avoidance or collision requirement.

7.4. Case study – IV

This situation is similar to case study II, but with both obstacles being made dynamic in nature. Here also, after the robot starts traversing towards subgoal 1, avoiding obstacle 1 whose position was determined from the vision based image processing in layer 1, the position of obstacle 1 was suddenly changed. It was brought closer to the robot and more towards its left, making a partial dynamic blockage of the free region of traversal. Similarly, when the robot was attempting to traverse a shortest path avoiding obstacle 2, the position of obstacle 2 was suddenly changed by bringing it closer to the robot. However, the robot was able to undertake the required detour in its IR based navigation in each such situation and was able to reach the final goal satisfactorily, as shown in Fig. 26. Fig. 27a and b show the readings of the IR sensors R0, R3, L0, and L3. It can be seen that here also the readings of L0 and R0 receive two sudden kicks, when the two obstacle positions are changed.

Fig. 20. The grid map, when the robot reaches the second subgoal.

Fig. 21. The grid map, when the robot reaches the final goal point.

8. Conclusion

A two-layered, goal oriented, vision based robot navigation scheme is proposed in this work. The system employs vision based analysis of the environment in layer 1, which uses several image processing functions and a shortest path generation algorithm, to determine the next subgoal for navigation, with the objective of reaching the final destination as fast as possible, avoiding obstacles. This subgoal information is utilized by the robot in layer 2 to navigate in dynamic environments, utilizing a set of IR sensors, avoiding obstacles, to reach the subgoal or its closest vicinity. This two-layered algorithm is utilized iteratively to create the next subgoal and navigate up to it, so that the final goal is reached sufficiently quickly.


Fig. 22. The robot navigation path traversed, in case study II, from the initial point to the goal point in presence of obstacles 1 and 2 (x-coordinate vs. y-coordinate, in m).

Fig. 23. Variation of: (a) response of R0 and R3 IR sensors and (b) response of L0 and L3 IR sensors (sensor reading vs. time in s) during navigation, for case study II.

Fig. 24. The robot navigation path traversed in a dynamic environment, showing the initial point, the goal point, and the original and shifted positions of obstacle 1 (x-coordinate vs. y-coordinate, in m).

Fig. 25. Variation of: (a) response of R0 and R3 IR sensors and (b) response of L0 and L3 IR sensors (sensor reading vs. time in s) during navigation in the dynamic environment, for case study III.


Fig. 26. The robot navigation path traversed in a dynamic environment, showing the initial point, the goal point, and the original and shifted positions of obstacles 1 and 2 (x-coordinate vs. y-coordinate, in m).

Fig. 27. Variation of: (a) response of R0 and R3 IR sensors and (b) response of L0 and L3 IR sensors (sensor reading vs. time in s) during navigation in the dynamic environment, for case study IV.


The main contribution of this work is to hybridize the shortest path algorithm with camera based image processing to enhance the quality of vision-based navigation of mobile robots in the real world, so that the robot can reach its goal (known a priori), following the shortest practical path, avoiding obstacles. The robustness of the system is further ensured by the IR-sensor guided navigation, which helps the robot to adapt its navigation to any possible change in obstacle positions in a dynamic environment. The proposed algorithm was implemented for several environments created for indoor navigation in our laboratory. It was demonstrated that the KOALA robot could achieve its task, each time, satisfactorily, for both static and dynamic environments.

The developed programs comprise high-end programs developed on the VB platform, which communicate, in real-time, with the processor of the robot system, where cross-compiled versions of custom-designed C programs are downloaded. However, in the real implementation phase, the entire system is run from the high-end VB platform in a PC through a user-friendly GUI developed, so that it can be easily utilized by common users.

For high illumination situations the algorithm is expected, and has been demonstrated, to provide satisfactory performance. However, for low illumination situations, the reflections of the obstacles on the floor may look dark enough (as is shown in the case of Fig. 4a–e) that the edge image may contain some edges corresponding to reflections on the floor. Hence these reflections may be interpreted as obstacles, and this reduces the free zone computed. However, according to the algorithm, in these exceptional cases, the shortest path computed may be a little longer than the true shortest path, but safe and robust navigation of the robot towards the goal, avoiding obstacles, will still be ensured.

In the present system, the robot pose in the real environment is estimated by odometry using only incremental wheel encoder information. This suffices well for indoor applications with uniform floors, for which the system is primarily developed. The experiments conducted sufficiently demonstrate that the robot reaches the goal in the real world, under these conditions, for a variety of environmental configurations. However, the accuracy of this system may suffer in outdoor environments due to problems like wheel slippage, etc. Future work will concentrate on the adaptation of this system for outdoor environments too, and this may be accomplished by additionally integrating, e.g., extended Kalman filter based algorithms for robot localization, along with the current system developed.

Acknowledgement

This work was supported by the University Grants Commission, India under the Major Research Project Scheme (Grant No. 32-118/2006(SR)).

References

[1] Z. Chen, S.T. Birchfield, Qualitative vision-based mobile robot navigation, in: Proc. of the IEEE International Conference on Robotics and Automation (ICRA), Orlando, Florida, May 2006.
[2] M. Bertozzi, A. Broggi, A. Fascioli, Vision-based intelligent vehicles: state of the art and perspectives, Robotics and Autonomous Systems 32 (2000) 1–16.
[3] D.H. Shin, S. Singh, Path generation for robot vehicles using composite clothoid segments, The Robotics Institute, Internal Report CMU-RI-TR-90-31, Carnegie-Mellon University, 1990.
[4] G.N. DeSouza, A.C. Kak, Vision for mobile robot navigation: a survey, IEEE Transactions on Pattern Analysis and Machine Intelligence 24 (2) (2002) 237–267.
[5] H.P. Moravec, Sensor fusion in certainty grids for mobile robots, Artificial Intelligence Magazine 9 (1988) 61–74.
[6] A. Elfes, Sonar-based real-world mapping and navigation, IEEE Journal of Robotics and Automation 3 (6) (1987) 249–265.
[7] P. Jensfelt, H.I. Christensen, Pose tracking using laser scanning and minimalistic environmental models, IEEE Transactions on Robotics and Automation 17 (2001) 138–147.
[8] B. Yamauchi, R. Beer, Spatial learning for navigation in dynamic environments, IEEE Transactions on Systems, Man, and Cybernetics, Part B 26 (3) (1995) 634–648.
[9] D. Pierce, B. Kuipers, Learning to explore and build maps, in: Proc. of the Twelfth National Conference on Artificial Intelligence, AAAI, AAAI Press/MIT Press, Menlo Park, July 1994, pp. 1264–1271.
[10] S. Thrun, Learning metric-topological maps for indoor mobile robot navigation, Artificial Intelligence 99 (1998) 21–71.
[11] J. Santos-Victor, G. Sandini, F. Curotto, S. Garibaldi, Divergent stereo for robot navigation: learning from bees, in: Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, June 1993, pp. 434–439.
[12] D. Kim, R. Nevatia, Recognition and localization of generic objects for indoor navigation using functionality, Image and Vision Computing 16 (11) (1998) 729–743.
[13] D. Murray, J.J. Little, Using real-time stereo vision for mobile robot navigation, Autonomous Robots 8 (2000) 161–171.
[14] A.J. Davison, Mobile robot navigation using active vision, PhD Thesis, 1998.
[15] N. Ayache, O.D. Faugeras, Maintaining representations of the environment of a mobile robot, IEEE Transactions on Robotics and Automation 5 (6) (1989) 804–819.
[16] M. Fiala, A. Basu, Robot navigation using panoramic tracking, Pattern Recognition 37 (2004) 2195–2215.
[17] J. Gaspar, J. Santos-Victor, Vision-based navigation and environmental representations with an omnidirectional camera, IEEE Transactions on Robotics and Automation 16 (6) (2000) 890–898.
[18] A. Ohya, A. Kosaka, A. Kak, Vision-based navigation by a mobile robot with obstacle avoidance using single-camera vision and ultrasonic sensing, IEEE Transactions on Robotics and Automation 14 (6) (1998) 969–978.
[19] M.H. Li, B.H. Hong, Z.S. Cai, S.H. Piao, Q.C. Huang, Novel indoor mobile robot navigation using monocular vision, Engineering Applications of Artificial Intelligence (2007) 1–18.
[20] J. Latombe, Robot Motion Planning, Kluwer, Norwell, MA, 1991.
[21] E.W. Dijkstra, A note on two problems in connection with graphs, Numerische Mathematik 1 (1959) 269–271.
[22] N.J. Nilsson, Principles of Artificial Intelligence, Tioga Publishing Company, 1980.
[23] KOALA User Manual, Version 2.0 (silver edition), K-Team S.A., Switzerland, 2001.
[24] N.N. Singh, A. Chatterjee, A. Rakshit, A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot, International Journal of Electronics 97 (2) (2010) 139–161.
[25] P.G. Kim, C.G. Park, Y.H. Jong, J.H. Yun, E.J. Mo, C.S. Kim, M.S. Jie, S.C. Hwang, K.W. Lee, Obstacle avoidance of a mobile robot using vision system and ultrasonic sensor, in: D.-S. Huang, L. Heutte, M. Loog (Eds.), ICIC 2007, LNCS 4681, Springer-Verlag, Berlin, Heidelberg, 2007, pp. 545–553, doi:10.1007/978-3-540-74171-8.
[26] http://www.mathworks.com/matlabcentral/fileexchange/8625-shortest-path-with-obstacle-avoidance-ver-1-3?controller=file_infos.
[27] http://atalasoft-imgx-controls-sdk.software.informer.com/.