Autonomous Mobile Robot Navigation using Smartphones

Extended Abstract

Andre Guilherme Nogueira Coelho dos Santos

Instituto Superior Tecnico

November 2008

Abstract

Robots are present everywhere, from lawn mowers and pool cleaners to space vehicles. Within robotics, special attention is given to mobile robots, since they have the capacity to navigate in their environment.

Mobile phones are today another reality that none of us lives without. From immensely limited devices to small, cheap and feature-rich smartphone platforms, mobile phones are today among the most widely used embedded systems in the world. A smartphone is a fully functional computer with added communication capacity, a built-in video camera and media players, just to name a few features.

With the proliferation of embedded systems in our lives, we decided to study how an integration between mobile robots and smartphones could contribute to the tasks that these robots perform today. We studied the current feasibility of a smartphone executing navigation algorithms in order to control mobile robots. We developed such a mobile system, on which we tested the three main navigation problems: Mapping, Localization and Path Planning.

Keywords: Visual landmark recognition, particle filter, potential fields, mobile robotics, smartphone, J2ME.

1. Introduction

This paper presents the development of a navigation system composed of a mobile robot and a smartphone. In this system, the mobile robot is controlled by the smartphone, where the navigation algorithms are processed, generating controls that are transmitted back to the mobile robot over Bluetooth communication (see the system organization presented in Figure 1).

Our work established the following main objectives:

• Develop a prototype considering the system architecture provided in Figure 1;

• Research and develop navigation algorithms for mobile robots considering their implementation on embedded systems;

• Study and analyze smartphone feasibility, capacity and performance when executing computationally expensive algorithms (and also considering image capture as a visual sensor).

Figure 1: System organization.

We believe that the use of a smartphone to control a robot has many real-life applications. In this work we contribute the study of integration solutions for joint systems of mobile robots and smartphones, as well as the implementation of computationally demanding navigation algorithms on limited embedded devices.

This paper is organized as follows: section 2 gives an overview of navigation in mobile robotics; section 3 presents our system, with the definition of the prototype developed and the algorithms implemented; section 4 shows experimental results; and section 5 presents some conclusions and future work.

2. Mobile Robotics Navigation

A robot is an autonomous system that is able to sense its environment and to act on it to achieve goals. A mobile robot adds the fact that it is not confined to one specific location, as it has the capability of moving in its environment. The main characteristic that defines an autonomous robot is the ability to act on the basis of its own decisions, and not through the control of a human [14].

Navigation is defined as the process or activity of accurately ascertaining one's position, and planning and following a route. In robotics, navigation refers to the way a robot finds its way in the environment [14] and is a common necessity and requirement for almost any mobile robot.

Robot navigation is a vast field and can be divided into subcategories for a better understanding of the problems it addresses. Leonard and Durrant-Whyte [12] briefly described the general problem of mobile robot navigation by three questions, each addressed by a subcategory: Localization, Mapping and Path Planning.

2.1. Localization - "Where am I?"

Localization is the process of estimating where the robot is, relative to some model of the environment, using whatever sensor measurements are available. As the robot keeps moving, the estimate of its position drifts and changes, and has to be kept updated through active computation [14]. These updates are performed based on the recognition of special features in landmarks, sensor data and probabilistic models.

Robot localization is a key problem in making truly autonomous robots. If a robot does not know where it is, it can be difficult to determine what to do next. In order to localize itself, a robot has to access relative and absolute measurements, which return feedback about its driving actions and the state of the environment. Given this information, the robot has to determine its location as accurately as possible [15]. Uncertainty arises in the robot's sensing because the estimation process is indirect, the measurements from real-world sensors are noisy and problematic, and measurements may not be available at all times.

Given the uncertainty characteristics of the problem, localization, like other important mobile robotics problems, has been tackled by probabilistic methods [24]. The most commonly used are Markov Localization [4] and Particle Filters [6].

2.2. Mapping - "Where am I going?"

The mapping problem exists when the robot does not have a map of its environment and incrementally builds one as it navigates. While in movement, the robot senses the environment, identifying key features which allow it to register information about its surroundings.

The main concern in the mapping problem is how the mobile robot perceives the environment. To locate obstacles, detect distances, observe landmarks, etc., the mobile robot must have a sensing mechanism that enables measurement collection. The sensors used for mapping depend on the type of mapping that needs to be done; the most common sensors are sonar, digital cameras and range lasers. Mapping approaches have been accomplished considering the extraction of natural features from the environment (see [13]) and through the identification of special artificial landmarks (see, e.g., [19] and [1]).

The complexity of the mapping problem is the result of a number of different factors [24], the most important of which are: the size of the environment; noise in perception and actuation; perceptual ambiguity; and cycles.

2.3. Path Planning - "How do I get there?"

Path Planning is the process of looking ahead at the outcomes of the possible actions, and searching for the sequence of actions that will drive the robot to the desired goal [14]. It involves finding a path from the robot's current location to the destination.

The cost of planning is proportional to the size and complexity of the environment: the greater the distance and the number of obstacles, the higher the overall planning cost. Planning cost is a very important issue for real-time navigation, as the longer it takes to plan, the longer it takes to find a solution. Path Planning techniques for navigation can be divided into two subcategories:

• Local Path Planning comprises solutions that do not imply much complexity, since they use only local information about the environment. They often do not offer optimal solutions and also share the common problem of local minima.

• Global Path Planning takes into account all the information about the environment at the same time. Unfortunately, this type of planning is not appropriate for real-time obstacle avoidance because of the high processing needs for all the environment's data.

There are many different approaches to path planning which try to solve the problem using different techniques. Two relevant Path Planning techniques are the Artificial Potential Field [8] and approaches based on the Ant Colony [3].


2.4. SLAM

SLAM (Simultaneous Localization and Mapping) [21] is a technique that involves both localization and mapping. It is used by robots and autonomous vehicles to build up a map within an unknown environment while at the same time keeping track of their current position.

This approach is complex, since it involves both the localization and mapping processes, each with associated uncertainties. One main concern in SLAM is keeping the uncertainty, for both the robot position and the landmark positions, under control, thus keeping errors as low as possible. To handle this double uncertainty, SLAM usually uses Kalman Filter [7] and Particle Filter [6] methods.

3. Approach

The block diagram of the system is shown in Figure 1. There is a middleware component responsible for the interaction between the smartphones and the mobile robot. The navigation algorithms are executed on the smartphone, and the control orders are passed to the robot via the middleware.

3.1. Prototype

As mentioned before, the prototype developed consists of a mobile robot (Lego Mindstorms NXT kit [11]) able to navigate through the environment with an attached smartphone (Nokia N80 [17] or Nokia N95 [16]). The smartphone is positioned so that its built-in camera faces the front of the robot, enabling it to act as an intelligent image sensor (Figure 2).

Figure 2: Prototype mobile robot with smartphone.

Development for the smartphone was done in Java using its Micro Edition version (J2ME [23]). Development for the mobile robot was also done in Java, with the addition of a custom firmware for the Lego NXT Brick known as leJOS NXJ [22].

3.2. NXT Middleware

In order to provide seamless integration within the system, we developed a middleware component (Figure 3). The main objective of the middleware component is to facilitate application development for the system composed of the smartphone and the NXT-based mobile robot. The core functionality of the middleware consists in providing abstractions for Bluetooth communication, as well as access to the mobile robot's sensors and actuators (see Table 1). The middleware component was developed in the Java programming language and was built on top of the leJOS NXJ firmware [22].

Figure 3: NXT Middleware component schematic.

Methods

connect()                 forward(velocity)
disconnect()              backward(velocity)
getSensors()              stop()
getSensorValue(sensor)    travel(distance)
                          rotate(angle)

Table 1: List of current middleware methods.
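
As a rough illustration, the middleware surface of Table 1 could be captured by a Java interface along the following lines; the interface name and the parameter types are our assumptions, and only the method names come from Table 1.

```java
// A minimal sketch, assuming a Java interface shape for the middleware of
// Table 1; parameter types are assumptions, only method names are the paper's.
public interface NXTMiddleware {
    void connect();                 // open the Bluetooth link to the NXT
    void disconnect();              // close the Bluetooth link
    String[] getSensors();          // enumerate the robot's sensors
    int getSensorValue(int sensor); // read one sensor by identifier
    void forward(int velocity);     // drive forward at the given velocity
    void backward(int velocity);    // drive backward at the given velocity
    void stop();                    // halt both motors
    void travel(int distance);      // advance a fixed distance, then stop
    void rotate(int angle);         // turn in place by the given angle
}
```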

3.3. Visual Landmark Recognition

For real-time mapping we rely on fast feature finding by the visual sensor. With the objective of keeping the detection and recognition of the landmarks as fast as possible, the approach implemented in this work uses solid-color artificial landmarks. The approach developed is similar to the method of [2]. In our approach, the visual system detects one landmark and, by recognizing features of interest, classifies the landmark and calculates its distance and orientation relative to the visual sensor.

In this approach, only one landmark is identified per image, which can become a limitation in more complex environments.

3.3.1. Landmark Design

The design of the artificial landmark defines the quality and difficulty of its detection and recognition. We use cylindrically shaped objects with solid colors as artificial landmarks (Figure 4). Multiple landmarks need to be distinct from one another in order to diminish uncertainty when searching for a specific color, since different colors will have different associations.

Figure 4: Landmark Colors (green, blue and red).

The cylindrical landmarks are made out of A4 colored cardboard paper and are 4 cm in radius and 21 cm in height. The proposed size makes it easy and fast to create landmarks and accomplishes the objective of being easily identifiable in an indoor environment within a considerable range. The cylindrical shape provides a simple visible landmark for the mobile robot, having a rectangular representation from any side viewpoint from which the mobile robot observation can take place.

3.3.2. Color Segmentation

The first step in detecting the landmark consists of performing color segmentation on the captured image. Previously, the landmark color features were gathered and analyzed, providing the means of empirically producing a set of rules in the RGB color space for color detection. These rules detect the presence of a landmark in an image, thus providing the corresponding landmark classification.

For the green landmark, we used the rules presented in (1). R, G and B correspond to the red, green and blue color components of the RGB color space. The value X is an adjustment value that sets the margin by which the green component must exceed the red and blue components.

(G ≥ 130) and (G > R + X) and (G > B + X)    (1)

In Visual Landmark Recognition, the color segmentation process transforms the captured image into a black and white image, as can be seen in Figure 5. White pixels indicate the presence of the green color range and black pixels its absence.
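
A minimal sketch of rule (1) applied as a segmentation pass, assuming the captured image arrives as packed 0xRRGGBB pixels; the concrete value of X below is a placeholder for the empirically chosen adjustment:

```java
// Sketch of the green-landmark segmentation rule (1); the image layout and
// the value of X are assumptions.
public final class ColorSegmentation {
    static final int X = 30; // assumed adjustment value

    /** Returns true where the pixel matches the green-landmark rule. */
    public static boolean[] segment(int[] rgb) {
        boolean[] mask = new boolean[rgb.length];
        for (int i = 0; i < rgb.length; i++) {
            int r = (rgb[i] >> 16) & 0xFF;
            int g = (rgb[i] >> 8) & 0xFF;
            int b = rgb[i] & 0xFF;
            // Rule (1): strong absolute green, and green dominating red/blue.
            mask[i] = (g >= 130) && (g > r + X) && (g > b + X);
        }
        return mask;
    }
}
```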

3.3.3. Image Noise Reduction

Salt-and-pepper image noise may be present because of the color segmentation process. This noise may compromise the results of future steps and therefore needs to be reduced or removed. The solution relies on the application of an image noise reduction filter.

Figure 5: Green Landmark (left image) and Color Segmentation application (right image).

The filter implemented uses a 3 × 3 scanning window that analyzes all the landmark pixels present in the image. The window checks whether the pixels surrounding the currently scanned pixel mostly belong to the landmark or to the background. If they are mostly background (≥ 50%), then the pixel is most likely noise and is erased.
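
A minimal sketch of this 3 × 3 filter, assuming the segmented image is given as a boolean mask; the border handling is our assumption:

```java
// Sketch of the salt-and-pepper filter: a landmark pixel whose 8-neighbourhood
// is mostly background (>= 50%) is treated as noise and erased.
public final class NoiseFilter {
    public static boolean[] filter(boolean[] mask, int width, int height) {
        boolean[] out = mask.clone();
        for (int y = 1; y < height - 1; y++) {
            for (int x = 1; x < width - 1; x++) {
                if (!mask[y * width + x]) continue; // only landmark pixels
                int background = 0;
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                        if ((dx != 0 || dy != 0)
                                && !mask[(y + dy) * width + (x + dx)])
                            background++;
                if (background >= 4) out[y * width + x] = false; // >= 50% of 8
            }
        }
        return out;
    }
}
```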

Figure 6: Green landmark with image noise reduction (left image) and with bounding rectangle (right image).

The application of the noise reduction filter can be seen on the left of Figure 6. The filter removes the sparse noise that is present after the color segmentation stage. On the right of Figure 6, the bounding box and center point are marked in red. With the frontiers recognized, size measurements can be performed more accurately.

3.3.4. Minimum Bounding Rectangle

For more accurate calculation of distance and orientation, the landmark boundaries are identified. A bounding rectangle contains the detected shape and is used to help cope with small variations in the shape's perspective, which can vary according to the viewpoint from which the image was acquired.

3.3.5. Distance and Orientation

For the calculation of the distance (d) and orientation (θ) from the visual sensor to the landmark, we based our equations on one of the methods proposed in [26]. Knowing the width, height and center point of the landmark, and having already performed measurements for camera calibration, the distance and orientation information can be inferred from the landmark's size and position in the image.


The distance between the visual sensor and the landmark is inversely proportional to the landmark's length in the image. For a more accurate distance calculation we use both the width and the height of the landmark in the image. They are computed independently and the final distance is given by the average of the two. Equations (2) present the expressions used.

dy = ky × (1 / y′)

dx = kx × (1 / x′)

d = (dx + dy) / 2    (2)

For the angle orientation of the landmark we used Equation (3). The orientation of the landmark with regard to the visual sensor is computed using the x position of the center point calculated for the landmark. The x coordinate is used to calculate the angle of the landmark position in the captured image using m and l, which are constants derived from the camera calibration.

θ = m × xLandmarkCenter + l    (3)
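
A minimal sketch of equations (2) and (3); the calibration constants kx, ky, m and l are camera-specific, and the values below are placeholders rather than the calibration actually used:

```java
// Sketch of the distance and orientation calculations of equations (2), (3).
public final class DistanceOrientation {
    static final double KX = 5000.0, KY = 9000.0; // assumed calibration values
    static final double M = -0.15, L = 24.0;      // assumed calibration values

    /** Distance from landmark pixel width (xPrime) and height (yPrime), (2). */
    public static double distance(double xPrime, double yPrime) {
        double dx = KX * (1.0 / xPrime);
        double dy = KY * (1.0 / yPrime);
        return (dx + dy) / 2.0;
    }

    /** Orientation from the landmark centre's x coordinate, equation (3). */
    public static double orientation(double xLandmarkCenter) {
        return M * xLandmarkCenter + L;
    }
}
```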

3.4. Particle Filter

The Particle Filter method implemented in this work is based on the approach presented in [20]. The environment is represented as an occupancy grid map, where each grid cell matches an area of the real environment at a specific ratio. An occupancy grid map consists of a 2D representation of the environment where each grid cell can be assigned estimation probabilities for the mobile robot's position or a reference to the presence of an obstacle.

3.4.1. Motion Model

The motion model is the robot's path planner, which is responsible for providing at each step a path for the mobile robot's movement. In order to provide the mobile robot with an appropriate movement, so that the Particle Filter method could be properly tested and executed, two motion models were developed: an explorer type motion model, which visits all free locations in the map, and a point to point motion model, which follows a predefined obstacle-free path from one location in the environment to another.

3.4.2. Noise Model

Odometry noise was added to the robot's motion, modeled as a Gaussian distribution, based on the noise model provided in [20]. The possible odometry error considered for the noise was divided into rotation error and translation error. Both were experimentally established from the real odometry errors of the robotics kit used. Translation and rotation with noise are accomplished using a pseudo-random value, drawn as a sample from the Gaussian distribution.
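
A minimal sketch of such a noise model, assuming a pose stored as [x; y; θ]; the standard deviations below are placeholders for the experimentally measured odometry errors:

```java
// Sketch of Gaussian odometry noise applied to a particle's motion step.
import java.util.Random;

public final class OdometryNoise {
    static final Random RNG = new Random();
    static final double SIGMA_TRANS = 0.5; // assumed translation error (cells)
    static final double SIGMA_ROT = 5.0;   // assumed rotation error (degrees)

    /** Applies one noisy motion step to a particle pose [x, y, theta]. */
    public static void move(double[] pose, double distance, double turn) {
        double noisyTurn = turn + RNG.nextGaussian() * SIGMA_ROT;
        double noisyDist = distance + RNG.nextGaussian() * SIGMA_TRANS;
        pose[2] = (pose[2] + noisyTurn + 360.0) % 360.0;
        pose[0] += noisyDist * Math.cos(Math.toRadians(pose[2]));
        pose[1] += noisyDist * Math.sin(Math.toRadians(pose[2]));
    }
}
```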

3.4.3. Measurement Model

The measurement model provides, on each measurement, the necessary information for the weighting function, which updates the particles' weights. In this implementation a particle's weight is considered to be a numeric value w greater than 0.

A measurement consists of an observation of the environment. This observation can be accomplished by using a single straight-line observation from the information present in the internal map representation or by using the visual landmark recognition method presented earlier.
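
As an illustration, a weighting function of the kind described could compare the expected observation (obtained from the internal map) with the actual one; the Gaussian form and the value of σ below are our assumptions, not the paper's exact function:

```java
// Sketch of a weighting function: weight grows as a particle's expected
// observation approaches the actual one (e.g., a landmark distance).
public final class MeasurementModel {
    static final double SIGMA = 10.0; // assumed measurement noise (cm)

    /** Returns a weight w > 0 comparing expected and observed distances. */
    public static double weight(double expected, double observed) {
        double error = observed - expected;
        return Math.exp(-(error * error) / (2 * SIGMA * SIGMA));
    }
}
```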

3.4.4. Resampling

Resampling occurs when a considerable number of particles within the particle population have weight values below a threshold and therefore contribute little to the overall estimate of the robot's pose.

The resampling process recognizes particles with small weight values (< threshold) and replaces each of them with a random particle whose weight value is at or above the resampling threshold (≥ threshold). This random replacement minimizes the problem of diversity loss.

When all particles have weights below the threshold, a new random set of particles is generated.
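
A minimal sketch of this resampling rule; the particle representation (parallel pose and weight arrays) is our assumption:

```java
// Sketch of threshold resampling: low-weight particles are replaced by copies
// of random surviving ones; if no particle survives, the caller regenerates.
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public final class Resampler {
    static final Random RNG = new Random();

    /** Returns false when all weights are below the threshold, signalling
     *  the caller to generate a fresh random particle set. */
    public static boolean resample(double[][] poses, double[] weights,
                                   double threshold) {
        List<Integer> survivors = new ArrayList<>();
        for (int i = 0; i < weights.length; i++)
            if (weights[i] >= threshold) survivors.add(i);
        if (survivors.isEmpty()) return false;
        for (int i = 0; i < weights.length; i++)
            if (weights[i] < threshold) {
                int pick = survivors.get(RNG.nextInt(survivors.size()));
                poses[i] = poses[pick].clone(); // duplicate a surviving particle
                weights[i] = weights[pick];
            }
        return true;
    }
}
```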

3.4.5. Robot Pose Estimate

In order to estimate the mobile robot's position at a given time t, we chose the best particle approach: the estimate is the particle with the maximum weight value within the current particle set.
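
The best particle selection reduces to a maximum search over the weights; a minimal sketch, assuming the same parallel-array representation as above:

```java
// Sketch of the best-particle pose estimate: the maximum-weight particle.
public final class PoseEstimate {
    public static double[] bestParticle(double[][] poses, double[] weights) {
        int best = 0;
        for (int i = 1; i < weights.length; i++)
            if (weights[i] > weights[best]) best = i;
        return poses[best];
    }
}
```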

3.5. Potential Fields

The Potential Fields approach [8] is widely used for path planning and collision avoidance due to its mathematical simplicity and elegance. It is simple to implement and can easily provide acceptable and quick results [10] in real-time navigation.

This method is based upon the concept of attractive and repulsive forces. Since the objective of any path planning algorithm is to avoid obstacles and move towards a desired goal, in potential fields the goal is seen as a global minimum potential value, and all obstacles as high-valued potential fields. The movement of the robot is then defined by the potential values present in its path, moving ideally from high to low potentials.
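
As an illustration, the resulting force at a given position can be computed as the sum of an attractive and a repulsive component; the quadratic attractive and inverse repulsive forms below are common textbook choices, assumed here rather than taken from [5]:

```java
// Sketch of attractive/repulsive force composition in a potential field.
public final class PotentialField {
    static final double K_ATT = 1.0;   // assumed attractive gain
    static final double K_REP = 100.0; // assumed repulsive gain
    static final double D0 = 5.0;      // assumed obstacle influence radius

    /** Force on the robot at (x, y) for one goal and one obstacle. */
    public static double[] force(double x, double y, double gx, double gy,
                                 double ox, double oy) {
        // Attractive component: pulls linearly towards the goal.
        double fx = K_ATT * (gx - x);
        double fy = K_ATT * (gy - y);
        // Repulsive component: pushes away from obstacles within radius D0.
        double dx = x - ox, dy = y - oy;
        double d = Math.hypot(dx, dy);
        if (d > 0 && d < D0) {
            double mag = K_REP * (1.0 / d - 1.0 / D0) / (d * d);
            fx += mag * dx / d;
            fy += mag * dy / d;
        }
        return new double[] {fx, fy};
    }
}
```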

Our approach uses as its basis the potential field functions presented in [5]. The most difficult problem of the Potential Field method, known as the local minima problem, was addressed using escape techniques (e.g., Random Escape, Perpendicular Vector Escape [25], Virtual Obstacle Concept Escape [18], Simulated Annealing Escape [9]).

In order to provide a smoother robot movement, a lookahead function was implemented, which prevents the mobile robot from falling into local minima locations by detecting them in advance.
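
A minimal sketch of one way such a lookahead could work, simulating a few greedy descent steps on the potential grid and flagging positions where the descent stalls away from the goal; the grid encoding and the stall test are our assumptions:

```java
// Sketch of a lookahead check for local minima on a potential grid.
public final class Lookahead {
    /** Returns true if descending `steps` cells ahead gets stuck off-goal. */
    public static boolean hitsLocalMinimum(double[][] potential, int x, int y,
                                           int gx, int gy, int steps) {
        for (int s = 0; s < steps; s++) {
            int bestX = x, bestY = y;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (ny >= 0 && ny < potential.length
                            && nx >= 0 && nx < potential[0].length
                            && potential[ny][nx] < potential[bestY][bestX]) {
                        bestX = nx; bestY = ny;
                    }
                }
            if (bestX == x && bestY == y)      // no lower neighbour: stuck
                return !(x == gx && y == gy);  // a minimum that is not the goal
            x = bestX; y = bestY;
        }
        return false; // made progress for all lookahead steps
    }
}
```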

4. Experimental Results

In this section, we present and discuss experimental results for the navigation problems: Mapping, Localization and Path Planning. We evaluate the performance of the algorithms developed by comparing executions between the smartphones used and a desktop PC, and by analyzing the feasibility of smartphones for real-time applications. Performance monitoring with profiling results using a PC MIDP emulator, as well as field tests, are also conducted. Figure 7 shows the testing environment used, with green landmarks posing as obstacles, where field testing takes place.

Figure 7: Field environment for testing.

4.1. Mapping

Experiments with mapping test the application of the Visual Landmark Recognition method while trying to map the environment presented in Figure 7.

The tests executed indicated good identification of the landmarks' colors. Illumination changes were identified as the main cause of incorrect identifications.

Using a single captured image, and considering a good landmark detection and color segmentation process, the distance calculation proved quite accurate: the average absolute error was 3.6 cm and the average relative error was 5.715%. The angle orientation measurement also proved quite accurate, with an average absolute error of 2.11° and an average relative error of 10.06%.

Figure 8 presents profiling results for the process of capturing an image and applying the landmark recognition algorithms. Table 2 compares the execution time and memory usage between the PC, the Nokia N80 and the Nokia N95.

Figure 8: Contribution to the overall execution time of each step associated with the Visual Landmark Recognition.

                  PC          Nokia N80    Nokia N95
Time (ms)         453.00      3079.40      5824.30
Memory (bytes)    360368.00   163557.60    157840.80

Table 2: Time and Memory measurements for the Visual Landmark Recognition method.

As expected, the PC is the fastest to execute the application. Comparing the two smartphones, memory use is similar, but execution is slower on the Nokia N95 model than on the N80 model. The N95 has a more complex built-in camera with a higher resolution, making it slower when capturing an image.

Figure 9 shows the mapping accuracy achieved when the Visual Landmark Recognition is executed in the environment presented in Figure 7. The grid presents the obstacles as black cells, the calculated obstacle estimates as blue cells, and the path taken by the mobile robot in orange.

Although the implemented recognition makes use of a simple approach to computer vision, under good lighting conditions the identification of the landmarks was reasonably accurate. In field testing, the method proved less accurate: the robot's movement and variable lighting conditions prevent the method from achieving its best results. Thus, this solution cannot be considered a very reliable method for accurate mapping purposes in real-time mobile robot navigation.

4.2. Localization

Experimental results for Localization were conducted considering only the global localization approach.


Figure 9: Mapping results with estimated landmark positions.

For profiling the Particle Filter implementation, we considered one execution of the method with a total of 1000 particles, using the environment in Figure 10. The robot pose estimate is only performed at the end of the mobile robot's movement. Figure 11 presents the percentages of execution time of the main phases of the Particle Filter method. Table 3 compares the execution time and memory used when running the implemented localization application on a standard desktop PC and on the smartphone models Nokia N80 and Nokia N95.

Figure 10: Occupancy grid map for Particle Filter.

                  PC          Nokia N80    Nokia N95
Time (ms)         78.00       3618.00      1725.00
Memory (bytes)    139200.00   138060.00    78212.00

Table 3: Time and Memory measurements for the Particle Filter method.

Figure 11: Contribution to the overall execution time of each phase of the Particle Filter method.

According to the experiments, the phase responsible for the highest percentage of execution time was the Prediction Phase, with almost 48%. The Update Phase followed with more than 41%. Finally, considering the number of particles used and their distribution within the environment, the Resample Phase took approximately 9% of the execution time.

In the Prediction Phase, the particles' movement and the noise addition are the main time-consuming areas. Within the Update Phase, the execution time is dominated by the measurement of the particles. In the Resample Phase, the execution time is dominated by the selection of particles to remove and particles to duplicate. The remaining 1.69% of the execution time is spent on auxiliary tasks and on obtaining the pose estimate.

We now consider testing the Localization using the Particle Filter in the field. These experiments try to localize the mobile robot in the environment presented in Figure 7. The Localization approach is implemented as a distributed system, where the Particle Filter is executed on the Nokia N95 smartphone, considering 1000 particles (the same number of particles as in the profiling experiment), and the measurement model is a visual sensor implemented with the Visual Landmark Recognition method running on the Nokia N80 model.

Results for five executions of this field experiment are presented in Table 4. The positions are given as an xy position and a θ orientation: [x; y; θ]. The robot's real position at the end of the predefined path is [7; 0; 90°].

By analyzing Table 4, we can observe that only one of the experiments estimated the robot to be at its exact physical location. Of the other four experiments, three were relatively close to the robot's real position, and the last one was very far from it.


Experiment    Best Particle Position
#1            [4; 0; 90°]
#2            [7; 0; 90°]
#3            [9; 1; 90°]
#4            [4; 0; 90°]
#5            [0; 11; 180°]

Table 4: Pose estimate for field testing using the Particle Filter method.

The random initialization of the particles makes the method difficult to predict, providing very different results on different runs of the algorithm (see experiments #1 to #5 in Table 4). For the same map, the same measurement model and the same motion model, the algorithm can provide a very accurate estimate of the mobile robot's position or a very inaccurate one. One possible solution to this problem is to increase the number of particles, but at a high additional computational cost.

The solution implemented proved feasible to execute in real environments, with the limitation of slow movement by the mobile robot. The visual sensor, the particle filter execution and the transmission of information over Bluetooth are time demanding and cannot, without further optimizations, be used to navigate mobile robots at high speed. Nevertheless, considering a slower motion, this solution was able to provide a mechanism for mobile robot localization.

4.3. Path Planning

The next experiments analyze the Potential Fields. Figure 12 illustrates the effect of the lookahead function on the overall execution of the method. A lookahead of 5 positions was used to detect the local minima locations. This preemptive detection consumed the greatest amount of execution time.

Figure 12: Contribution to the overall execution time of each stage of the Potential Fields.

Table 5 shows the execution times for the simulation in Figure 13. The PC presents the lowest execution time; between the smartphones, when considering algorithms that simply need to perform fast calculations, the Nokia N95 has faster execution times than the Nokia N80 model.

Figure 13: Complex environment used for profiling.

                          PC          Nokia N80     Nokia N95
Average Step Time (ms)    11.00       377.00        278.75
Total Time (ms)           7664.60     253442.75     187882.75
Total Memory (bytes)      705572.00   108412.00     109437.00

Table 5: Time and Memory measurements for the Potential Fields algorithm.

A simulation of the execution of the Potential Fields method in the environment presented in Figure 7 is shown in Figure 14. The illustrated movement represents the real movement taken by the mobile robot, successfully escaping the obstacles and reaching the target goal.

Figure 14: Path Planning simulation test.

In the field test, due to the nature of the Potential Fields method, the robot revealed some strange orientation changes when avoiding obstacles. This was never very noticeable in simulation testing. We concluded that, even in the absence of local minima locations, some raw directional vectors cannot be directly applied to the robot's movement. Some of these directional vectors force the robot to perform expensive rotations that need to be smoothed beforehand. For field testing, a preprocessing translation model should be applied in order to better convert the output of the potential fields into the mobile robot's movements.

5. Conclusions

Mobile robot navigation is today a main area in the robotics field. It addresses three of the most important problems for autonomous mobile robots: Mapping, Localization and Path Planning. These problems were addressed in this work to research the use and feasibility of a smartphone for mobile robot navigation.

In Mapping, the artificial landmarks proved easy to detect with the smartphone's built-in camera. Unfortunately, we noticed that image capture on a smartphone is time consuming. The distance and orientation calculations are very dependent on the quality of the color segmentation stage's output; therefore, in conditions where illumination changes often, incorrect measurements of distance and orientation can occur. Overall, the solution presented obtained acceptable results for such a simple method, having its quality bottleneck in the color segmentation process.

Localization using Particle Filters produced worse results than expected, due to the limitations encountered in the measurement model used. With a more correct and precise measurement model, the results would surely have been greatly improved. Nevertheless, in the experiments conducted, in simulation and in the field, the outcome was always in positions similar to the actual mobile robot location.

Path Planning with Potential Fields, although not always ensuring optimal paths for robot motion, still provides possible obstacle-free paths with low execution times and memory usage.

Experiments in the field also allowed us to perceive the odometry errors caused by the mobile robotics kit used. The mobile implementation of the algorithms revealed that, despite all the constraints and migration rules applied, algorithm consistency was kept. Experimental results also show that it is feasible to execute some navigation algorithms in real time for robot motion, under certain limitations. This feasibility for real-time applications testifies to the current processing capacity of high-end mobile phone devices.

Future improvements for this work include:

• Further optimization of the implemented algorithms for faster execution;

• Implementation of other navigation algorithms;

• Further experimental testing in the field, considering more complex environments;

• Identification of more than one landmark per captured image;

• Study and testing of more Particle Filter measurement sensors;

• Potential Fields local minima lookahead study.

References

[1] R. Baczyk, A. Kasinski, and P. Skrzypczynski. Vision-based mobile robot localization with simple artificial landmarks. Preprints of the 7th IFAC Symposium on Robot Control, pages 217–222, 2003.

[2] J. Bruce, T. Balch, and M. Veloso. Fast and inexpensive color image segmentation for interactive robots. In Proceedings of the 2000 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '00), 3:2061–2066, October 2000.

[3] M. Dorigo. Optimization, Learning and Natural Algorithms. PhD thesis, Politecnico di Milano, Italy, 1992.

[4] D. Fox. Markov Localization: A Probabilistic Framework for Mobile Robot Localization and Navigation. PhD thesis, Institute of Computer Science III, University of Bonn, Germany, December 1998.

[5] M. A. Goodrich. Potential fields tutorial. Class notes, 2002.


[6] N. Gordon, D. Salmond, and A. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings F: Radar and Signal Processing, 140(2):107–113, April 1993.

[7] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME–Journal of Basic Engineering, 82(Series D):35–45, 1960.

[8] O. Khatib. Real-time obstacle avoidance for manipulators and mobile robots. The International Journal of Robotics Research, 5(1):90–98, 1986.

[9] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220(4598):671–680, May 1983.

[10] L.-F. Lee. Decentralized motion planning within an artificial potential framework (APF) for cooperative payload transport by multi-robot collectives. Master's thesis, Department of Mechanical and Aerospace Engineering, December 2004.

[11] Lego. http://mindstorms.lego.com/, last visited in February 2008.

[12] J. J. Leonard and H. F. Durrant-Whyte. Mobile robot localization by tracking geometric beacons. IEEE Transactions on Robotics and Automation, 7(3):376–382, 1991.

[13] D. G. Lowe. Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, 2:1150–1157, 1999.

[14] M. J. Mataric. The Robotics Primer. The MIT Press, 1st edition, 2007.

[15] R. Negenborn. Robot localization and Kalman filters. Master's thesis, Utrecht University, Netherlands, September 2003.

[16] Nokia. Nokia Nseries N95 smartphone. http://www.nseries.com/products/n95/, last visited in February 2008.

[17] Nokia. Nokia Nseries N80 smartphone. http://www.nseries.com/products/n80/, last visited in June 2008.

[18] M. G. Park and M. C. Lee. Artificial potential field based path planning for mobile robots using a virtual obstacle concept. In Proceedings of the 2003 IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM 2003), 2:735–740, July 2003.

[19] D. Prasser and G. Wyeth. Probabilistic visual recognition of artificial landmarks for simultaneous localization and mapping. In Proceedings of the 2003 IEEE International Conference on Robotics and Automation, 1:1291–1296, September 2003.

[20] I. Rekleitis. Cooperative Localization and Multi-Robot Exploration. PhD thesis, School of Computer Science, McGill University, Montreal, January 2003.

[21] R. Smith, M. Self, and P. Cheeseman. Estimating uncertain spatial relationships in robotics. In Autonomous Robot Vehicles, pages 167–193. Springer-Verlag New York, Inc., New York, NY, USA, 1990.

[22] J. Solorzano, B. Bagnall, J. Stuber, and P. Andrews. leJOS: Java for Lego Mindstorms. http://lejos.sourceforge.net/, last visited in June 2008.

[23] Sun Microsystems. Java ME technology. http://java.sun.com/javame/technology/, last visited in May 2008.

[24] S. Thrun, W. Burgard, and D. Fox. Probabilistic Robotics. The MIT Press, September 2005.

[25] P. Veelaert and W. Bogaerts. Ultrasonic potential field sensor for obstacle avoidance. IEEE Transactions on Robotics and Automation, 15(4):774–779, 1999.

[26] K.-J. Yoon, G.-J. Jang, S.-H. Kim, and I. S. Kweon. Fast landmark tracking and localization algorithm for the mobile robot self-localization. In IFAC Workshop on Mobile Robot Technology, pages 190–195, 2001.
