

Page 1: ARES-III: A Versatile Multi-Purpose All-Terrain Robot (home.iscte-iul.pt/~pfsao/papers/etfa_2012.pdf)

ARES-III: A Versatile Multi-Purpose All-Terrain Robot

1,2Magno Guedes, 2Pedro Santana, 1Pedro Deusdado, 2Ricardo Mendonça, 2Francisco Marques, 3Nuno Henriques, 2,3André Lourenço

3Luís Correia, 2José Barata, 1Luis Flores

1R&D Division, INTROSYS, SA
2CTS-UNINOVA, New University of Lisbon, Portugal

3LabMAg, University of Lisbon, Portugal

Abstract

This paper presents ARES-III, a multi-purpose service robot for robust operation in all-terrain outdoor environments. Currently in pre-production phase, ARES-III is aimed at fulfilling the requirements of a robotic platform able to support the development of real-world applications in surveillance, agriculture, environmental monitoring, and other related domains. These demanding scenarios motivate a design focused on the reliability of the mechanical platform, the scalability of the control system, and the flexibility of its self-diagnosis and error recovery mechanisms. These are key features of ARES-III often disregarded in current commercial and research platforms. First, a comprehensive set of field trials demonstrated the ability of the ARES-III chassis, made of durable materials and with no-slip quasi-omnidirectional kinematic characteristics, to perform robustly on rough terrain. Second, the scalability of ARES-III is enforced by a control system fully compliant with the widespread Robot Operating System (ROS). Finally, the integration of active self-diagnosis and error recovery mechanisms in the ARES-III control system fosters long-lasting operation.

1. Introduction

Mobile robotics is a field with high potential for social and economic revenue. This has driven a considerable amount of application-driven and research-driven developments, ranging from the implementation of human companion and assistance robots [6] to the exploitation of robotics as a test-bed for artificial intelligence models [12] and perception algorithms [5][10]. Whether the goal is to develop a robotic application or to devise a new model, it is fundamental for research and development teams to have access to an interoperable, robust, and scalable robotic platform.

Although indoor platforms are widely available [1], this is not the case for the outdoor domain. This owes mostly to the inherent complexity of moving a mechatronic platform over uneven terrain under stringent conditions. This paper fills this gap by presenting ARES-III (see Fig. 1), a versatile outdoor robotic platform that is currently in pre-production phase. This robot is expected to be available soon for commercial and research purposes. The high potential of outdoor robots can be seen in recent developments in search & rescue [7], patrol & reconnaissance [2], humanitarian demining [8], environmental monitoring [13], and agriculture [3].

Figure 1. The ARES-III robot.

The most popular approach to the outdoor locomotion problem is to use tracks instead of wheels [11]. However, if on the one hand tracks offer more traction on complex terrain and are less prone to slip, on the other hand the inherent use of skid steering results in unreliable odometry estimation and considerable energy loss. Moreover, using tracks hampers the ability to surpass prominent obstacles, as the height to the ground is significantly reduced. Other solutions have been proposed, such as spherical-shaped robots [4] and legged platforms [14]. However, while the former are inadequate for sloped terrain, the latter require a complex control system and considerable energy to stay balanced, which hampers autonomy.

The solution implemented in ARES-III relies on a wheel-based locomotion mechanism designed to reduce slippage and maximise traction power. This is achieved by ensuring that each wheel's direction accompanies the movement while staying in permanent contact with the ground. To solve this problem and attain quasi-omnidirectional motion, ARES-III is equipped with four wheels independently controlled in both steering and traction. Furthermore, with a longitudinal passive axis, the robot navigates bumpy terrain in a compliant way, i.e., without losing traction on any of the four wheels. These properties are inherited from its ancestor, the Ares-II robot [9]. However, the original bicycle wheels have been replaced with custom spokeless wheels to reduce entrapment areas. To allow operation in explosive atmospheres, the robot's body is made of aluminium, rather than iron, as was the case with Ares-II. The control system has also been improved and is now fully integrated within the Robot Operating System1 (ROS) framework. This provides the features a developer needs to implement an application with all the specific algorithms for the robot's operation, with the advantage of being supported by a reasonably large community drawn from both academia and industry.

This paper is organised as follows: section 2 presents the mechanical structure; the hardware architecture is described in section 3; the software architecture is presented in section 4; followed by the description of the diagnosis and recovery mechanism in section 5; a discussion of the obtained results is carried out in section 6; and finally, section 7 summarises the conclusions and presents some possibilities for future work.

2. The Mechanical Structure

The hardware architecture of ARES-III was designed to provide a solution that is simultaneously cost effective, durable when operating in all-terrain and all-weather environments, and suited to the deployment of semi- and fully-autonomous control systems. Cost effectiveness is attained by means of a simple design, i.e., without spring-damping systems or active longitudinal axes. Operational robustness is attained by means of a weatherproofed aluminium chassis. To meet the demands of sliding autonomy, a no-slip quasi-omnidirectional locomotion structure is in place (see Fig. 2). This provides the additional advantages of reducing mechanical stress and energy consumption, and avoiding high odometry errors.

Intended for outdoor real-world applications, the mechanical structure plays a key role not only in guaranteeing resistance against externally applied forces, but also in weatherproofing the electrical and electronic components, maintaining functionality, reliability, and safety of use. Aluminium was chosen as the chassis material due to its light weight, strength, corrosion resistance, flexibility, thermal conductivity, ductility, and cost efficiency. By using an alloy version of this material, some properties are reinforced while still maintaining light weight, and as a result a highly tough structure is obtained. It also enables the construction of a fully sealed fanless system without incurring overheating issues.

1ROS website: http://www.ros.org/

Figure 2. ARES-III's five locomotion modes: (a) Ackermann, (b) double Ackermann, (c) displacement, (d) lateral motion, (e) turning point.

Locomotion is achieved by a 4x4 independently wheeled system, which provides up to five types of movement (see Fig. 2). A longitudinal passive axis is also present, enabling the robot to navigate in compliance with the topology of bumpy terrain. Combined, these offer the capability to navigate robustly on uneven terrain with limited mechanical stress.
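The double-Ackermann mode can be made concrete with a little geometry. Assuming the four wheels sit at the corners of a rectangle and the instantaneous centre of rotation lies on the lateral axis through the chassis centre, each wheel's steering angle follows from its position and the turn radius. The function and dimensions below are illustrative, not taken from the ARES-III design:

```python
import math

def double_ackermann_angles(wheelbase, track, turn_radius):
    """Steering angle (rad) of each wheel for a left turn of radius
    `turn_radius`, with the instantaneous centre of rotation on the
    lateral axis through the chassis centre (double-Ackermann)."""
    half_l, half_w = wheelbase / 2.0, track / 2.0
    positions = {  # (x forward, y left) of each wheel, metres
        "front_left":  ( half_l,  half_w),
        "front_right": ( half_l, -half_w),
        "rear_left":   (-half_l,  half_w),
        "rear_right":  (-half_l, -half_w),
    }
    # Each wheel must be tangent to its circle around the centre of
    # rotation at (0, turn_radius).
    return {name: math.atan2(x, turn_radius - y)
            for name, (x, y) in positions.items()}
```

Note that the inner wheels steer more sharply than the outer ones, and the rear angles mirror the front ones, which is what lets both axles track the same circle without scrubbing.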

Each motor is directly coupled to its corresponding wheel through a gear. The absence of chains or belts not only reduces energy loss but also the number of failure points. The aggregated power of all motors, exceeding 1 kW maximum, not only allows the robot to navigate at up to 1.5 m/s and surpass slopes of up to 45°, but also enables the robot to transport up to 200 kg of payload on flat ground. The latter aspect enables extended operational capabilities by supporting the addition of heavy payload components such as robotic manipulators, or allowing the robot to transport cargo (e.g., as a robotic mule).
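As a sanity check, the quoted motor power is consistent with the quoted speed and slope figures: at constant speed on a slope, the dominant mechanical load is lifting the robot's weight. A rough estimate, ignoring rolling resistance and drivetrain losses:

```python
import math

def climb_power(mass_kg, speed_ms, slope_deg, g=9.81):
    """Mechanical power (W) to move a mass up a slope at constant
    speed, neglecting rolling resistance and drivetrain losses."""
    return mass_kg * g * speed_ms * math.sin(math.radians(slope_deg))

# The 100 kg platform climbing a 45 degree slope at 1.5 m/s needs
# roughly 1 kW, matching the stated aggregate motor power.
```

With the full 200 kg payload, the same slope at full speed would require about 3 kW by this estimate, which suggests payload, speed, and gradient cannot all be maximised simultaneously.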

An issue often neglected in current robotic platforms is the ease of transportation and storage. Besides its medium-sized footprint (800 mm × 1500 mm × 700 mm) and 100 kg weight, ARES-III includes some paraphernalia to ease its manual transportation. Namely, each wheel supports hand locking and a manual clutch, and the longitudinal passive axis can also be locked in the central position.

3. The Hardware Architecture

To support the control system, high-level applications, and other software modules, ARES-III presents a modular hardware architecture mainly composed of commercial off-the-shelf components (see Fig. 3). This design addresses the need for flexibility, i.e., plugging devices in and out with minimal reconfiguration procedures, reliability, and easy maintenance.

Figure 3. ARES-III hardware architecture.

The main computational unit of the robot is an Intel i7-powered industrial board. A multiplicity of perceptual devices, such as a 2-D laser scanner, a stereoscopic vision sensor, and a set of monocular cameras, feed the computer with the perceptual information required for spatial awareness. Also included in perception is an inertial measurement unit (IMU) with a built-in global positioning system (GPS) device for localisation and navigation purposes. For short-range obstacle detection, the platform is equipped with a ring of ultrasonic sensors covering its periphery. For self-monitoring and diagnosis, the robot integrates several internal sensors responsible for gathering temperature readings, voltage and current drawn, and monitoring the health of the battery. Problems that might occur are detected by these sensors and appropriate measures are taken, which might result in a defective device being shut down and backed up. For device integration, the computational unit is prepared to support a variety of communication standards. While low-level motor control communicates through a CAN bus, high-level inter-module communication is performed via local Ethernet. Additional USB and RS-232 ports are also available for sensory devices.

The power unit is composed of two lithium polymer batteries with an aggregated capacity of 60 Ah. This technology allows lower weight and higher reliability while permitting up to 4 hours of nominal operation and up to 10 hours of contained-motion operation of the ARES robot.

Two types of devices can be used to control the robot: the operator control unit and a radio-frequency controller. The latter is only used for close line-of-sight operations. Conversely, the operator control unit is a fully functional control centre for tele-operation purposes. Intended for Human-Machine Interaction (HMI), it is equipped with a rugged laptop computer, a sun-readable screen, and a pair of rugged joysticks. This allows both direct tele-operation of the robot and mission-driven high-level operation, e.g., by feeding the robot with a pre-defined path drawn on a geo-referenced map. When in tele-operation mode, the control centre sends the commands from the joysticks to the robot's on-board computer for controlling the robot and its pan-tilt-zoom (PTZ) camera. The screen is used for displaying the video feeds provided by the PTZ camera and by the network of small cameras spread across the robot's chassis. This set of cameras enables a thorough inspection of the robot's surroundings.

Figure 4. Communication mediums: (a) wireless mode, (b) infrastructure mode, (c) tether mode, (d) RF mode.

Considering the two control units, the robot supports four communication mediums, as shown in Fig. 4. Although the RF controller only supports short-range wireless communication, the control centre offers three alternatives: 'wireless mode' for communication ranges of up to 1 km over Wi-Fi in the 2.4 GHz band, 'infrastructure mode' for situations in which the robot has access to a local wireless network, and 'tether mode'. This last option relies on a long cable for Ethernet data and power transmission at sites where wireless is not viable, allowed, or secure.

Hardware expansion modules are easily accessible in ARES-III. These allow the addition of new sensors (e.g., temperature, radiation, mine, humidity, metal) through a common power bus and standard communication interfaces, namely RS-232/485, CAN, and Ethernet. Additional computing power, video acquisition devices, and robotic manipulators can also be added with minimal effort.

4. The Software Architecture

A multi-purpose robot is distinguished by its ability not only to operate in distinct scenarios, but also to support the integration of additional sensors and/or computational units, as well as additional software modules (e.g., collision-free navigation and human-robot interaction). This must be reflected in a flexible and modular software architecture capable of abstracting hardware and individual applications in a standardised manner, so that upgrades and downgrades can be done simply by plugging modules with standardised interfaces in and out.

The software architecture of ARES-III (see Fig. 5) was designed to support interactions with logging, diagnostics, and human-machine interfaces, with a proper hierarchy along the abstraction layers and the mesh of network interconnections. The input and output (I/O) of the several built-in and add-on hardware devices is also supported. The feed from the devices is streamed across the network through a low-latency conceptual service bus, which allows a set of nodes to share data efficiently.

Figure 5. Software architecture.

The top layer of the software architecture encompasses the human-robot interface and high-level algorithms, which aggregate data from the on-board sensors and produce information for decision support and navigation aid. Conversely, the bottom layer includes the software components directly connected to the hardware devices, as well as software modules connecting to and giving access to the upper layers. Intermediate layers help build the remaining functionality of a multi-purpose robot.

The main software modules are briefly described next:

Human-Machine Interface This component mediates between human agents and the robot. It provides the human operator with the relevant information about the current robot state required for proper decision making. That is, all relevant sensory and telemetric data is presented, accompanied by the feed from the on-board cameras (see top of Fig. 6). Interaction between the human and the robot is permitted not only to directly tele-operate the robot but also to correct or improve desired behaviour when the latter is in autonomous mode. An additional built-in browser gives access to relevant Web cloud services such as geographical positioning and mapping2 and face detection and recognition of selected characteristics3, the first having been improved with pertinent features such as drawing desired trajectories and overlaying the current robot position, detected obstacles, and planned paths (see bottom of Fig. 6).

Application This component is where the higher-level logic processing algorithms reside. It includes the robot's kinematic modelling, obstacle detection, path planning, self-monitoring, and error recovery. For flexibility, it allows the addition of software modules by maintaining a graph structure interfaced with standardised inter-process communication messages (as will be described below).

2http://maps.google.com/
3http://face.com/

Figure 6. Part of the control centre GUI dedicated to navigation.

Repository This component serves as storage for all persistent data and processed information. It is organised as files in a tree-based file system available across software modules.

Device Drivers These are critical for interfacing the sensors and actuators with the information system inside the robot. They mediate between the hardware-connected replaceable devices producing raw data and the main robot processing centre, using a data format common across modules.

Service Bus Represents a common interface for inter-process communication (services and messages) between all software modules.

In order to comply with the requirements of flexibility and modularity, the ARES-III control system is fully compliant with the Robot Operating System (ROS), a free and open-source software framework maintained by Willow Garage that is becoming a de facto standard in both research-oriented and commercial-oriented robotics. ROS provides standard operating system services such as hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management. It is based on a graph architecture in which processing takes place in nodes that may receive, post, and multiplex sensor, control, state, planning, actuator, and other messages. It also supports distributed computing, including libraries and tools for obtaining, writing, building, and running applications across multiple computers.

Figure 7. Basic control system architecture in ROS.
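The node-and-topic graph can be modelled in miniature. The sketch below is a toy in-process stand-in for ROS-style publish/subscribe (it is not the ROS API): publishers push messages to named topics, and every callback registered on a topic receives them.

```python
from collections import defaultdict

class TopicBus:
    """Minimal publish/subscribe bus: topics are names, subscribers
    are callbacks invoked for every message published on the topic."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback to run for each message on `topic`.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of `topic`.
        for callback in self._subscribers[topic]:
            callback(message)
```

A kinematics node, for instance, would subscribe to a velocity-command topic and publish joint states from its callback; ROS adds inter-process transport, typed messages, and node discovery on top of this basic pattern.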

The control system of ARES-III respects the ROS syntax in terms of nodes and topics for messaging (Fig. 7). Nodes interact through a publish-subscribe messaging mechanism using the topic concept. Nodes, as processes inside ROS, can have several functionalities. ARES-III is equipped with the following custom nodes:

Locomotion Control As part of the device drivers layer, the locomotion control node is responsible for interfacing the system with the motor control devices on the robot. This includes acting according to the motor velocities and positions received from the kinematics node (see below) in the form of the standard ROS message sensor_msgs/JointState. Conversely, encoder data is published by this node using the same messages, which are then consumed by the kinematics node.

Kinematics In the application layer, the kinematics node is responsible for solving the inverse kinematics problem, taking into account the desired heading and velocity coming from the navigation stack (when in autonomous control) or from the teleop node (when in tele-operation). This consists in estimating motor positions and velocities according to the linear and angular velocities (received in the form of ROS geometry_msgs/Twist messages) and based on the robot's modelled kinematics. Afterwards, the estimated values are published as sensor_msgs/JointState messages. Furthermore, encoder data is received in the same sort of messages, translated into pose and displacement information based on the previous robot state and the modelled kinematics, and republished as nav_msgs/Odometry to be consumed by the navigation stack (see below).
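The core of such a node is a pure function from a body twist to per-wheel steering angles and speeds. For a platform with four independently steered wheels, the velocity of each wheel contact point is simply the rigid-body velocity at that point. The sketch below illustrates this with hypothetical wheel positions, not the actual ARES-III dimensions:

```python
import math

WHEELS = {  # hypothetical wheel positions (x forward, y left), metres
    "fl": ( 0.75,  0.40), "fr": ( 0.75, -0.40),
    "rl": (-0.75,  0.40), "rr": (-0.75, -0.40),
}

def inverse_kinematics(vx, vy, wz):
    """Map a body twist (vx, vy in m/s; wz in rad/s) to per-wheel
    (steering angle, speed) pairs."""
    out = {}
    for name, (x, y) in WHEELS.items():
        # Rigid-body velocity at the wheel: v + w x r.
        wvx = vx - wz * y
        wvy = vy + wz * x
        out[name] = (math.atan2(wvy, wvx), math.hypot(wvx, wvy))
    return out
```

The five locomotion modes of Fig. 2 fall out of this single mapping: pure forward motion yields zero steering angles, pure lateral motion yields 90° angles, and pure rotation points every wheel tangentially around the centre.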

Navigation The navigation stack is composed of a group of nodes in charge of both path planning and obstacle avoidance behaviours. By receiving and fusing sensory information (sensor_msgs/LaserScan, sensor_msgs/Imu), position and velocity estimates (nav_msgs/Odometry), and desired goal poses (geometry_msgs/PoseStamped), provided respectively by the perception nodes, the odometry, and the control GUI, it publishes a path to the desired goal as well as the next angular and linear velocities (geometry_msgs/Twist) to the locomotion control node.

TeleOperation This node is responsible for interfacing with the joysticks for tele-operation purposes. It acquires the desired velocity, heading, and locomotion mode from the joystick and produces the corresponding angular and linear velocities, which are then published to the system as standard ROS geometry_msgs/Twist messages.

Control GUI Included in the human-machine interface layer, the control Graphical User Interface (GUI) node has the function of representing sensory data, the current robot state, and diagnostics in a human-readable manner. This information is available in both online and offline modes. Off-line access to information is done through the built-in logging facilities provided by ROS. Also through this node, the human operator is able to change the system's parameters. Finally, this node includes support for visualisation and simulation tools via the built-in Rviz and Gazebo modules, respectively (see Fig. 8).

Augmented Camera Displays the PTZ camera's video feed and processes the acquired video frames for face detection. It also allows the human operator to take a snapshot of a scene, with the option of facial detection and recognition of selected characteristics in the picture. To provide orientation cues to the tele-operator, a bearing pointer is displayed in the lower right corner. The pointer, whose bearing is estimated from the robot's pose estimator, rotates and points towards the waypoint that must be pursued next, according to the off-line specified path. Several optimisations were implemented to avoid unneeded computation, such as in slow robot motion situations.

Map Trail Shows the robot's actual path overlaid on a geo-referenced map using the robot's estimated pose and the Google Maps JavaScript API. This API permits drawing and overlaying several paths on a realistic terrain description. The human operator may interact with the map by marking and linking waypoints, which are directly translated into geographical coordinates. Linked waypoints define off-line specified paths. Grabbing and dragging an inner path point splits the marker into two and creates one more segment whilst changing the path. Locking and unlocking is useful to avoid inadvertently adding one more marker when the intended action is to drag and pan the map. The left path browser panel is used to maintain a list of paths and select the current one.

Additionally, as the device drivers for some of the robot's components were available in the form of ROS packages4, they were easily integrated into the basic control system. Namely:

Laser Scan Available in the LMS1xx package, this device driver interfaces with the SICK LMS111 laser scanner on the robot, publishing the range readings as standard ROS sensor_msgs/LaserScan messages.

IMU From the lse_xsens_mti package, this node gets pose readings from the Xsens MTi-G inertial measurement unit on the robot, publishing them as standard ROS sensor_msgs/Imu messages. Geographical coordinates are also published using sensor_msgs/NavSatFix messages.

Other sensor modalities (e.g., stereo sensors, time-of-flight cameras, sonars) are also available under the same rationale.

5. Diagnosis and Recovery

The diagnostic system is designed to collect information from the ARES-III hardware and software modules for the purposes of analysis, troubleshooting, and logging. This system is fully supported by ROS, which provides a standard diagnostic toolchain for collecting, publishing, analysing, and viewing diagnostic data. In particular, all nodes publish standard ROS diagnostic_msgs/DiagnosticStatus messages on the /diagnostics topic. A diagnostic aggregator node subscribes to the /diagnostics topic, processes and categorises the raw diagnostic data (i.e., grouping it by system component), and republishes it on the /diagnostics_agg topic.

4http://www.ros.org/browse/repo_list.php

Figure 8. The ARES-III in the ROS on-line visualisation (left) and off-line simulation (right) tools.
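The aggregation step can be pictured as a fold over the raw status stream: statuses named with a component prefix (e.g. power/battery) are grouped by component, keeping the worst level seen per group. This is an illustrative reimplementation of the idea, not the actual ROS diagnostic aggregator code:

```python
# Severity ordering used to pick the worst status per component.
LEVELS = {"OK": 0, "WARN": 1, "ERROR": 2, "STALE": 3}

def aggregate(statuses):
    """Group raw (name, level) statuses by their 'component/' prefix,
    keeping the most severe level per component."""
    agg = {}
    for name, level in statuses:
        component = name.split("/")[0]
        if component not in agg or LEVELS[level] > LEVELS[agg[component]]:
            agg[component] = level
    return agg
```

A GUI can then render one icon per component from the aggregate instead of one per raw device channel.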

The aggregated diagnostic output can be displayed by the standard ROS tools for viewing diagnostic data. However, these tools are generic, simple, and offer no user control, which is essential for fixing unrecoverable errors. To overcome this limitation, ARES-III is provided with a custom graphical interface for diagnostic and error recovery purposes (see Fig. 9). In this interface, key components of the robot are represented by square icons overlaid on their corresponding physical locations over the ARES-III background image. The diagnostic status of each component is indicated by the colour of the respective icon border. The set of diagnostic statuses is defined as Ok, Warning, Error, or Stale, with the possible colours being green, orange, red, or blue, respectively. For instance, the battery system shows an orange colour to indicate a warning status. The Stale status is attributed when the respective standard ROS diagnostic_msgs/DiagnosticStatus messages stop being published. When the user clicks on the magnifying glass icon on the diagnostic panel (see Fig. 9), a global and detailed diagnostic report with the option to shut down the system is displayed (see Fig. 10).
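The status-to-colour convention just described is a straightforward lookup; a minimal sketch (the fallback for unknown statuses is an assumption, not specified in the paper):

```python
# Diagnostic panel convention: status -> icon border colour.
STATUS_COLOURS = {"Ok": "green", "Warning": "orange",
                  "Error": "red", "Stale": "blue"}

def border_colour(status):
    """Colour for a component's icon border; unknown statuses are
    rendered as errors (an assumption for robustness)."""
    return STATUS_COLOURS.get(status, "red")
```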

The diagnostic system supports an error recovery mechanism. This process was designed to act every time a system failure is reported, thus reducing human intervention in error recovery. Embedded in the control system architecture as an independent ROS node, the error recoverer gathers data from the diagnostic aggregator and subscribes to the most critical topics, namely the ones sharing data about locomotion commands and device interfacing. This allows it to monitor not only the running state of each node, but also the validity of the orders and commands being passed between nodes. In fact, the error recoverer is aware of the system's safety parameters, which are accessible in the repository module (see Fig. 5). Consequently, it has the ability to act as a filter for invalid or unsafe actuation commands. For the sake of clarity, one can imagine a situation in which a human operator controls the robot for a long period at its maximum speed, causing the motors to overheat. As a result, the error recoverer intervenes by lowering the system-wide parameter specifying the maximum allowed motion speed. Finally, the error recoverer has a built-in service server to respond to fault reports issued by the individual modules. That is, each node is allowed to report local critical failures (e.g., a motor's device driver being unable to communicate with the hardware component) directly to the error recoverer, thus reducing response time. In this situation, the error recoverer acts on the global system (e.g., halting the robot and putting all locomotion commands on hold) and tries to solve the error (e.g., by resetting the device or changing its configuration) before requesting human intervention.

Figure 9. ARES-III diagnostic graphical interface.

Figure 10. ARES-III detailed diagnostic report.
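The overheat example can be sketched as a command filter that enforces a mutable system-wide speed limit. The policy of halving the limit on a fault is a hypothetical illustration, not the documented ARES-III behaviour:

```python
import math

class ErrorRecoverer:
    """Filters actuation commands against a system-wide speed limit
    and tightens that limit when a fault is reported."""

    def __init__(self, max_speed=1.5):   # platform maximum, m/s
        self.max_speed = max_speed

    def report_fault(self, fault):
        # Hypothetical recovery policy: halve the limit on overheat.
        if fault == "motor_overheat":
            self.max_speed *= 0.5

    def filter_command(self, vx, vy):
        """Scale a velocity command down so its magnitude respects
        the current limit; valid commands pass through unchanged."""
        speed = math.hypot(vx, vy)
        if speed <= self.max_speed:
            return vx, vy
        scale = self.max_speed / speed
        return vx * scale, vy * scale
```

Placing the filter between the teleop/navigation outputs and the locomotion control node means neither producer needs to know the current safety limit.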

6. Results

In order to assess the chassis's ability to endure mechanical stress, a finite element analysis was carried out with the ANSYS software. These tests not only guarantee that the robot complies with the initial requirements but also that it is able to cope with tough terrain conditions.

The longitudinal passive joint, the two boxes attached to it, and the transmission systems were the main subjects of the analysis. In a first test, four forces equivalent to the weight of the batteries and electronics present on the actual robot were applied to the boxes. The resulting equivalent stress images show that the maximum value is only reached at infinitesimal points (see Fig. 11(a)). In a second test, the Von Mises criterion, which is based on infinitesimal part rotation, was applied (see Fig. 11(b)-(d)). The platform showed to be far from its mechanical breaking point, as can be seen from the safety factor depicted in Fig. 11(e)-(h). Similar results were obtained for a third run of tests, this time with vertically (see Fig. 11(i)-(k)) and laterally (see Fig. 11(l)) applied forces.

Figure 11. Finite element analysis: (a) equivalent stress; (b) maximum stress; (c) minimum stress; (d) safety factor; (e) maximum stress, vertical; (f) minimum stress, vertical; (g) safety factor, vertical; (h) maximum stress, lateral; (i) maximum stress, vertical; (j) minimum stress, vertical; (k) safety factor, vertical; (l) maximum stress, lateral.
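The safety factors reported above relate the material's yield strength to the Von Mises equivalent stress. As a hedged illustration of how the criterion is evaluated (the yield strength and principal stresses below are made-up values, not the ones from the actual analysis):

```python
import math

def von_mises(s1, s2, s3):
    """Von Mises equivalent stress from the three principal stresses (MPa)."""
    return math.sqrt(((s1 - s2)**2 + (s2 - s3)**2 + (s3 - s1)**2) / 2.0)

def safety_factor(yield_strength, s1, s2, s3):
    """Ratio of yield strength to equivalent stress; values well above 1
    indicate the part is far from its mechanical breaking point."""
    return yield_strength / von_mises(s1, s2, s3)

# Illustrative numbers only: a 6061-T6-like aluminium (yield ~276 MPa)
# under an assumed principal stress state of (60, 20, -10) MPa.
sf = safety_factor(276.0, 60.0, 20.0, -10.0)
print(round(sf, 2))  # prints 4.54 for these assumed values
```

A safety factor of this magnitude is the kind of margin the analysis images express graphically over the whole part, rather than at a single assumed stress state.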

The robot was also tested qualitatively by analysing the system performance in a typical use case scenario. For this purpose, the robot was tele-operated over a wide variety of terrains, including concrete and cobbled pavement, loose gravel, grass, dense vegetation, dirt tracks, and dusty sand. During the field tests, the robot navigated flawlessly over the several types of terrain, and successfully surmounted vertical obstacles up to 30 cm and 45° inclined slopes (see Fig. 12).

7. Conclusions and Future Work

ARES-III, an all-terrain robot in pre-production phase, was presented. To enable the development of real-world applications in surveillance, agriculture, environmental monitoring, and other related domains, the development of ARES-III was focused on the reliability of the mechanical platform, the scalability of the control system, and flexible self-diagnosis and error recovery mechanisms. These features are usually disregarded in robotics research, but are key for the actual fielding of mobile robots. Reliability of the mechanical platform was achieved by means of durable materials, simplicity of the design, and no-slip quasi-omnidirectional kinematic characteristics, which help reduce mechanical stress. Scalability of the control system was achieved by fully complying with


Figure 12. Traction tests: (a) over grass; (b) following a dirt track; (c) over a bumpy segment of loose gravel and rubble; (d) traversing dense vegetation; (e) in dusty sand, surpassing a 30 cm high concrete obstacle; (f) in concrete, climbing a 45° slope.

the widespread Robot Operating System (ROS). Furthermore, several perception, localisation, and navigation ROS-enabled open-source algorithms are integrated in the control system, thus facilitating adaptation to the task at hand. Self-diagnosis and error recovery were included in order to give the operator standardised access to all critical information about the system's health and to allow a systematic handling of exceptions. This model, being built on ROS's native diagnostics platform, is itself highly scalable. Future developments of practical utility will be focused on enabling robot-robot and human-robot teamwork.

8. Acknowledgements

This work was funded by the Regional Operational Programme of Lisbon (POR Lisboa), in the scope of the National Strategic Reference Framework of Portugal (QREN), part of the European Regional Development Fund. We also thank Raquel Caldeira from INTROSYS, SA for her constructive comments and support.

References

[1] G. DeSouza and A. Kak. Vision for mobile robot navigation: a survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(2):237–267, Feb. 2002.

[2] T. Huntsberger, H. Aghazarian, A. Howard, and D. C. Trotz. Stereo vision-based navigation for autonomous surface vessels. Journal of Field Robotics, 28(1):3–18, 2011.

[3] D. A. Johnson, D. J. Naffin, J. S. Puhalla, J. Sanchez, and C. K. Wellington. Development and implementation of a team of robotic tractors for autonomous peat moss harvesting. Journal of Field Robotics, 26(6-7):549–571, 2009.

[4] V. Kaznov and M. Seeman. Outdoor navigation with a spherical amphibious robot. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2010), pages 5113–5118, Oct. 2010.

[5] K. Konolige, M. Agrawal, M. R. Blas, R. C. Bolles, B. Gerkey, J. Solà, and A. Sundaresan. Mapping, navigation, and learning for off-road traversal. Journal of Field Robotics, 26(1):88–113, 2009.

[6] M. M. Loper, N. P. Koenig, S. H. Chernova, C. V. Jones, and O. C. Jenkins. Mobile human-robot teaming with environmental tolerance. In Proceedings of the 4th ACM/IEEE International Conference on Human Robot Interaction (HRI 2009), pages 157–164, New York, 2009.

[7] R. R. Murphy and S. Stover. Rescue robots for mudslides: A descriptive study of the 2005 La Conchita mudslide response. Journal of Field Robotics, 25(1-2):3–16, 2008.

[8] P. Santana, J. Barata, and L. Correia. Sustainable robots for humanitarian demining. International Journal of Advanced Robotic Systems, 4(2):207–218, 2007.

[9] P. Santana, C. Candido, P. Santos, L. Almeida, L. Correia, and J. Barata. The ARES robot: case study of an affordable service robot. In Proceedings of the European Robotics Symposium 2008, pages 33–42. Springer, 2008.

[10] P. Santana, L. Correia, M. Guedes, and J. Barata. Visual attention and swarm cognition towards fast and robust off-road robots. In Proceedings of the IEEE International Symposium on Industrial Electronics (ISIE 2011), pages 2255–2260, 2011.

[11] S. Shoval. Stability of a multi-tracked robot traveling over steep slopes. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2004), volume 5, pages 4701–4706, May 2004.

[12] K. Tahboub. Intelligent human-machine interaction based on dynamic Bayesian networks probabilistic intention recognition. Journal of Intelligent & Robotic Systems, 45:31–52, 2006.

[13] M. Trincavelli, M. Reggente, S. Coradeschi, A. Loutfi, H. Ishida, and A. Lilienthal. Towards environmental monitoring with mobile robots. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2008), pages 2210–2215, Sep. 2008.

[14] D. Wooden, M. Malchano, K. Blankespoor, A. Howard, A. Rizzi, and M. Raibert. Autonomous navigation for BigDog. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA 2010), pages 4736–4741, May 2010.