arXiv:1710.01840v2 [cs.RO] 1 Mar 2018


Perception-Informed Autonomous Environment Augmentation With Modular Robots

Tarik Tosun*, Jonathan Daudelin*, Gangyuan Jing*, Hadas Kress-Gazit, Mark Campbell, and Mark Yim

Abstract— We present a system enabling a modular robot to autonomously build structures in order to accomplish high-level tasks. Building structures allows the robot to surmount large obstacles, expanding the set of tasks it can perform. This addresses a common weakness of modular robot systems, which often struggle to traverse large obstacles.

This paper presents the hardware, perception, and planning tools that comprise our system. An environment characterization algorithm identifies features in the environment that can be augmented to create a path between two disconnected regions of the environment. Specially-designed building blocks enable the robot to create structures that can augment the environment to make obstacles traversable. A high-level planner reasons about the task, robot locomotion capabilities, and environment to decide if and where to augment the environment in order to perform the desired task. We validate our system in hardware experiments.

I. INTRODUCTION

Employing structures to accomplish tasks is a ubiquitous part of the human experience: to reach an object on a high shelf, we place a ladder near the shelf and climb it, and at a larger scale, we construct bridges across wide rivers to make them passable. The fields of collective construction robotics and modular robotics offer examples of systems that can construct and traverse structures out of robotic or passive elements [1], [2], [3], [4], and assembly planning algorithms that allow arbitrary structures to be built under a variety of conditions [5], [6]. This existing body of work provides excellent contributions regarding the generality and completeness of these methods: some algorithms are provably capable of generating assembly plans for arbitrary volumetric structures in 3D, and hardware systems have demonstrated the capability to construct a wide variety of structures.

Less work is available regarding ways that robots could deploy structures as a means of completing an extrinsic task, the way a person might use a ladder to reach a high object. In this paper, we present hardware, perception, and high-level planning tools that allow structure-building to be deployed by a modular robot to address high-level tasks.

Our work uses the SMORES-EP modular robot [7], and introduces novel passive block and wedge modules that SMORES-EP can use to form ramps and bridges in its environment. Building structures allows SMORES-EP to surmount large obstacles that would otherwise be very difficult or impossible to traverse, and therefore expands the set of tasks the robot can perform. This addresses a common weakness of modular robot systems, which often struggle with obstacles much larger than a module.

J. Daudelin, G. Jing, M. Campbell, and H. Kress-Gazit are with the Sibley School of Mechanical and Aerospace Engineering, Cornell University, Ithaca, NY, 14850.

T. Tosun and M. Yim are with the Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania, Philadelphia, PA, 19104.

*T. Tosun, J. Daudelin, and G. Jing contributed equally to this work.

We expand on an existing framework for selecting appropriate robot morphologies and behaviors to address high-level tasks [8]. In this work, the high-level planner not only decides when to reconfigure the robot, but also when to augment the environment by assembling a passive structure. To inform these decisions, we introduce a novel environment characterization algorithm that identifies candidate features where structures can be deployed to advantage. Together, these tools comprise a novel framework to automatically identify when, where, and how the robot can augment its environment with a passive structure to gain advantage in completing a high-level task.

We integrate our tools into an existing system for perception-driven autonomy with modular robots [9], and validate them in two hardware experiments. Based on a high-level specification, a modular robot reactively identifies inaccessible regions and autonomously deploys ramps and bridges to complete locomotion and manipulation tasks in realistic office environments.

II. RELATED WORK

Our work complements the well-established field of collective robotic construction, which focuses on autonomous robot systems for building activity. While we use a modular robot to create and place structures in the environment, our primary concern is not assembly planning or construction of the structure itself, but rather its appropriate placement in the environment to facilitate completion of an extrinsic high-level task.

Petersen et al. present Termes [2], a termite-inspired collective construction robot system that creates structures using blocks co-designed with a legged robot. Similarly, our augmentation modules are designed to be easily carried and traversed by SMORES-EP. Where the TERMES project focused on collective construction of a goal structure, we are less concerned with efficient building of the structure itself and more concerned with the application and placement of the structure in the larger environment as a means of facilitating a task unrelated to the structure itself.

Werfel et al. present algorithms for environmentally-adaptive construction that can build around obstacles in the environment [6]. A team of robots senses obstacles and builds around them, modifying the goal structure if needed to leave room for immovable obstacles. An algorithm to build enclosures around preexisting environment features is also presented. As with Termes, the goal is the structure itself; while the robots do respond to the environment, the structure is not built in response to an extrinsic high-level task.

Napp et al. present hardware and algorithms for building amorphous ramps in unstructured environments by depositing foam with a tracked mobile robot [10], [4]. Amorphous ramps are built in response to the environment to allow a small mobile robot to surmount large, irregularly shaped obstacles. Our work is similar in spirit, but places an emphasis on autonomy and high-level locomotion and manipulation tasks rather than construction.

Modular self-reconfigurable robot (MSRR) systems are composed of simple repeated robot elements (called modules) that connect together to form larger robotic structures. These robots can self-reconfigure, rearranging their constituent modules to form different morphologies, and changing their abilities to match the needs of the task and environment [11]. Our work leverages recent systems that integrate the low-level capabilities of an MSRR system into a design library, accomplish high-level user-specified tasks by synthesizing library elements into a reactive state machine [8], and operate autonomously in unknown environments using perception tools for environment exploration and characterization [9].

Our work extends the SMORES-EP hardware system by introducing passive pieces that are manipulated and traversed by the modules. Terada and Murata [3] present a lattice-style modular system with two parts: structure modules and an assembler robot. Like many lattice-style modular systems, the assembler robot can only move on the structure modules, and not in an unstructured environment. Other lattice-style modular robot systems create structures out of the robots themselves. M-blocks [1] form 3D structures out of robot cubes which rotate over the structure. Paulos et al. present rectangular boat robots that self-assemble into floating structures, like a bridge [12].

Magnenat et al. [13] present a system in which a mobile robot manipulates specially designed cubes to build functional structures. The robot explores an unknown environment, performing 2D SLAM and visually recognizing blocks and gaps in the ground. Blocks are pushed into gaps to create bridges to previously inaccessible areas. In a “real but contrived experimental design” [13], a robot is tasked with building a three-block tower, and autonomously uses two blocks to build a bridge to a region with three blocks, retrieving them to complete its task. Where the Magnenat system is limited to manipulating blocks in a specifically designed environment, our work presents hardware, perception, and high-level planning tools that are more general, providing the ability to complete high-level tasks involving locomotion and manipulation in realistic human environments.

III. APPROACH

A. Environment Characterization

To successfully navigate its environment, a mobile robot must identify traversable areas. One simple method for wheeled robots is to select flat areas large enough for the robot to fit. However, MSRR systems can reconfigure to traverse a larger variety of terrains. The augmentation abilities we introduce extend MSRR navigation even further; the robot can build structures to traverse otherwise-impossible terrains. For autonomous operation, we need an algorithm to locate and label features in the environment that can be augmented. We present a probabilistic, template-based environment characterization algorithm that identifies augmentable features from a 2.5D elevation map of the robot’s environment.

The characterization algorithm searches for a desired feature template F_n, which identifies candidate locations in the environment where useful structures could be built. A template consists of a grid of likelihood functions l_i(h) for 1 ≤ i ≤ M, where M is the number of grid cells in the template, and h is a height value. The size of grid cells in the template is variable and need not correspond to the resolution of the map. In addition, features of different sizes can be searched for by changing the cell size of the template to change the scale. In our system implementation, template parameters and likelihood functions are designed by hand to correspond to each structure in the system’s structure library. However, future implementations could automatically generate these templates offline with an additional algorithm.

Figure 1 shows an example of a template used to characterize a “ledge” feature, consisting of Gaussian and logistic likelihood functions. Any closed-form likelihood function may be used for each grid cell, enabling templates to accommodate noisy data and variability in the possible geometric shapes of the same feature. To determine if the feature exists at a candidate pose X in the map, a grid of height values is taken from the map corresponding to the template grid centered and oriented at the candidate pose, as illustrated in Figure 1. Then, the probability that each grid cell c_i belongs to the feature is evaluated using the cell’s likelihood function from the template:

P(c_i ∈ F_n) = l_i(h_i)    (1)

The likelihood of the feature existing at that location is calculated by finding the total probability that all grid cells belong to the feature. Making the simplifying assumption that grid cells are independent, this probability is the product of the feature likelihoods over all grid cells in the template:

P(X ∈ F_n) = ∏_{i=1}^{M} l_i(h_i)    (2)

The feature is determined to exist if the total probability is higher than a user-defined threshold, P(X ∈ F_n) > α^M, where α represents the minimum average probability of each grid cell forming part of the feature. In our experiments we use α = 0.95. This formulation normalizes the threshold with respect to the number of grid cells in the template.
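As a rough sketch of this evaluation, the following computes the per-cell likelihoods of Eq. (1), their product from Eq. (2), and the α^M threshold test. The Gaussian and logistic cell functions mirror the “ledge” template described for Figure 1, but every parameter value and the toy height samples here are illustrative stand-ins, not the system’s actual templates.

```python
import math

# Illustrative sketch of Eqs. (1)-(2): evaluate a feature template as the
# product of per-cell likelihoods and compare against the normalized
# threshold alpha**M. All cell functions and parameters are hypothetical.

def gaussian(mu, sigma):
    """Likelihood peaked at height mu (equals 1 at the peak)."""
    return lambda h: math.exp(-0.5 * ((h - mu) / sigma) ** 2)

def logistic(h0, k):
    """Likelihood approaching 1 for heights well above h0."""
    return lambda h: 1.0 / (1.0 + math.exp(-k * (h - h0)))

# Toy "ledge" template: two ground-level cells, then two raised cells.
template = [gaussian(0.0, 0.05), gaussian(0.0, 0.05),
            logistic(0.10, 200.0), logistic(0.10, 200.0)]

def feature_probability(template, heights):
    """Eq. (2): product over all cells of l_i(h_i)."""
    p = 1.0
    for likelihood, h in zip(template, heights):
        p *= likelihood(h)           # Eq. (1): P(c_i in F_n) = l_i(h_i)
    return p

def feature_exists(template, heights, alpha=0.95):
    """Detection test: P(X in F_n) > alpha**M."""
    return feature_probability(template, heights) > alpha ** len(template)

ledge_heights = [0.0, 0.01, 0.16, 0.16]   # heights sampled at candidate pose X
flat_heights = [0.0, 0.0, 0.0, 0.0]
print(feature_exists(template, ledge_heights),
      feature_exists(template, flat_heights))   # ledge detected, flat rejected
```

Because each likelihood is at most 1, a single badly mismatched cell drives the product below α^M, which is what makes the normalized threshold robust to template size.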

Fig. 1. Left: Example template used to characterize a “ledge” feature. Right: Example template overlaid on an elevation map (top view) to evaluate a candidate feature pose.

Fig. 2. Characterization of an environment with a “ledge” feature. Red indicates a detected feature; pink indicates the start of the feature, demonstrating orientation.

To characterize an environment, the algorithm takes as inputs an elevation map of the environment and a list of feature templates. Before searching for features, the algorithm preprocesses the elevation map by segmenting it into flat, unobstructed regions that are traversable without augmentation. It then grids the map and exhaustively evaluates each candidate feature pose from the grid, using a grid of orientations for each 2D location. In addition to evaluation with the template, candidate poses are only valid if the ends of the feature connect two traversable regions from the preprocessing step, thereby having the potential to extend the robot’s reachable space. Once the search is complete, the algorithm returns a list of features found in the map, including their locations, orientations, and the two regions they link in the environment. Figure 2 shows an example of a characterized map. Each long red cell represents a detected “ledge-height-2” feature, with a corresponding small pink cell indicating the orientation of the feature (and the bottom of the ledge). Note that, in this example, several features are detected close to one another. Since all of them connect the same regions, any one may be selected for augmentation.

The algorithm scales linearly with the number of grid cells in the 2D environment map, and linearly with the number of features being searched for. Characterization of the environment shown in Figure 2 took approximately 3 seconds to run on a laptop with an Intel Core i7 processor.
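The exhaustive search can be sketched in a deliberately simplified form: a 1-D strip of elevation cells, a single orientation, and no region-connectivity check. The step-shaped template below is hypothetical; the point of the sketch is the sliding-window evaluation, whose cost is linear in the number of map cells times the number of templates, matching the scaling described above.

```python
import math

# Simplified sketch of the exhaustive template search, reduced to a 1-D
# strip and one orientation. The step ("ledge") template is hypothetical.

def step_template(low=0.0, high=0.16, sigma=0.02):
    """Two low cells followed by two high cells, as Gaussian likelihoods."""
    g = lambda mu: (lambda h: math.exp(-0.5 * ((h - mu) / sigma) ** 2))
    return [g(low), g(low), g(high), g(high)]

def search_strip(heights, template, alpha=0.95):
    """Slide the template along the strip; Eq. (2) product per window.
    Returns start indices whose product exceeds alpha**M."""
    M = len(template)
    hits = []
    for start in range(len(heights) - M + 1):
        p = 1.0
        for likelihood, h in zip(template, heights[start:start + M]):
            p *= likelihood(h)
        if p > alpha ** M:            # normalized threshold
            hits.append(start)
    return hits

strip = [0.0, 0.0, 0.0, 0.16, 0.16, 0.16]   # toy elevation strip (meters)
print(search_strip(strip, step_template()))  # index where the step begins
```

The full algorithm additionally iterates this over 2D positions, a grid of orientations, and every template in the library, and then filters hits by whether they bridge two traversable regions.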

Fig. 3. SMORES-EP module (joint labels: left, right, pan, tilt)

B. Hardware: Augmentation Modules

Our system is built around the SMORES-EP modular robot. Each module is the size of an 80mm cube, weighs 473g, and has four actuated joints, including two wheels that can be used for differential drive on flat ground [7], [14]. Electro-permanent (EP) magnets allow any face of one module to connect to any face of another, enabling the robot to self-reconfigure. They are also able to attach to objects made of ferromagnetic materials (e.g. steel). The EP magnets require very little energy to connect and disconnect, and no energy to maintain their attachment force of 90N [7]. Each module has its own battery, microcontroller, and WiFi module for communication. In this work, clusters of modules are controlled by a central computer running a Python program that commands movement and magnets via WiFi. Wireless networking is provided by a standard off-the-shelf router, and commands to a single module can be received at a rate of about 20 Hz. Battery life is about one hour (depending on magnet, motor, and radio usage).

Large obstacles, like tall ledges or wide gaps in the ground, are often problematic for modular robot systems. One might expect that a modular system could scale, addressing a large-length-scale task by using many modules to form a large robot. In reality, modular robots do not scale easily: adding more modules makes the robot bigger, but not stronger. The torque required to lift a long chain of modules grows quadratically with the number of modules, quickly exceeding the maximum torque of the first module in the chain. Consequently, large systems become cumbersome, unable to move their own bodies. Simulated work in reconfiguration and motion planning has demonstrated algorithms that handle hundreds of modules, but in practice, fixed actuator strength has typically limited these robots to configurations with fewer than 40 modules.
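The quadratic growth is easy to see with a back-of-the-envelope calculation. Using the module mass and size given below (473g, 80mm cube), the torque at the base joint of a horizontally cantilevered chain of n modules is m·g·L·n²/2, so it quadruples every time the chain length doubles. The cantilever geometry is our simplification, not an analysis from the paper.

```python
# Back-of-the-envelope sketch: base-joint torque for a horizontal
# cantilevered chain of n modules. Module mass and edge length are the
# SMORES-EP values from the text; the geometry is our simplification.

G = 9.81       # gravitational acceleration, m/s^2
MASS = 0.473   # kg per SMORES-EP module
EDGE = 0.080   # m, module edge length

def base_torque(n):
    """Torque (N*m) to hold n cantilevered modules level; module i's
    center of mass sits at (i - 0.5) * EDGE from the base joint."""
    return sum(MASS * G * (i - 0.5) * EDGE for i in range(1, n + 1))

for n in (2, 4, 8):
    print(n, round(base_torque(n), 2))   # torque quadruples per doubling
```

The closed form is m·g·L·n²/2, so an 8-module chain already demands roughly 16 times the base torque of a 2-module chain, which is why fixed actuator strength caps usable configuration size.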

We address this issue by extending the SMORES-EP hardware system with passive elements called environment augmentation modules. We use the Wedge and Block augmentation modules shown in Figure 4. Wedge and block modules are designed to work synergistically with SMORES-EP, providing features suited to its best modes of locomotion (driving), manipulation (magnetic attachment), and sensing (AprilTags).

Blocks are the same size as a module (80mm cube), and wedges are half the size of a block (an isosceles right triangle with two 80mm sides). Both are made of lightweight laser-cut medium-density fiberboard (blocks are 162g, wedges are 142g) and equipped with a steel attachment point for magnetic grasping. Neodymium magnets on the back faces of wedges, and the front and back faces of blocks, form a strong connection in the horizontal direction. Interlocking features on the top and bottom faces of the blocks, and the bottom faces of the wedges, allow them to be stacked vertically. Wedges provide a 45-degree incline with a high-friction rubber surface, allowing a set of 3 or more modules to drive up them. Side walls on both the wedges and blocks ensure that SMORES-EP modules stay aligned to the structure and cannot fall off while driving over it. The side walls of wedges are tapered to provide a funneling effect as modules drive onto them, making them more tolerant to misalignment. Each wedge and block has unique AprilTag fiducials on its faces, allowing easy identification and localization during construction and placement in the environment.

Fig. 4. Wedge and Block Augmentation Modules

Fig. 5. Bridge and Ramp

Wedges and blocks allow a SMORES-EP cluster to autonomously construct bridges or ramps that allow it to reach greater heights and cross wider gaps than it could with robot modules alone (Figure 5). Provided enough time, space, and augmentation modules are available, there is no limit to the height of a ramp that can be built. Bridges have a maximum length of 480mm (longer bridges cannot support a load of three SMORES-EP modules in the center).

C. High-Level Planner

We utilize a high-level planner that allows users to control low-level robot actions by defining tasks at a high level with a formal language [9]. The high-level planner serves two main functions. First, it acts as a mission planner, automatically synthesizing a robot controller (finite state automaton) from user-given task specifications. Second, it executes the generated controller, commanding the robot to react to the sensed environment and complete the tasks. In this work, the high-level planner integrates with a robot design library of user-created robot configurations and behaviors, as well as a structure library of structures that can be deployed to alter the environment. Users do not explicitly specify configurations and behaviors for each task, but rather define goals and constraints for the robot. Based on the task specifications, the high-level planner chooses robot configurations and behaviors from the design library, and executes them to satisfy the tasks. When necessary, the planner will also choose to build a structure from the structure library to facilitate task execution.

Fig. 6. An example of a synthesized robot controller

Consider the following example task: the robot is asked to look for a pink drawer, open the drawer, and then climb on top of it. The mission planner synthesizes the controller shown in Figure 6. Each state in the controller is labeled with a desired robot action, and each transition is labeled with perceived environment information; for example, the “climb drawer” action is specified to be any behavior from the library with properties climb in a ledge environment. In our previous framework [9], the high-level planner could choose to reconfigure the robot whenever needed to satisfy the required properties of the current action and environment.

In this work, the high-level planner can choose not only to change the abilities of the robot (reconfiguration), but also the properties of the environment (environment augmentation). We expand our framework by introducing a library of structures S = {s_1, s_2, . . . , s_N}, where each structure is defined as s_n = {F_n, A_n}. F_n is an environment feature template that specifies the kind of environment the structure can augment, and which can be identified by the environment characterization algorithm described in Section III-A. The assembly plan A_n is itself a high-level task controller (finite state automaton), specifying the required building blocks needed to create the structure and the order in which they may be assembled. As with other tasks in our framework, construction actions within assembly plans are specified in terms of behavior properties (e.g. pickUpBlock, placeWedge) that the high-level planner maps to appropriate configurations and behaviors from the robot design library.
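A minimal data-structure sketch of this library pairing F_n with A_n might look as follows. The field names, the FSA encoding, and the example entry are all illustrative assumptions, not the system’s actual representation; only the pairing of a feature template with an assembly-plan automaton comes from the text.

```python
from dataclasses import dataclass

# Hedged sketch of the structure library S = {s_1, ..., s_N}: each entry
# pairs a feature template F_n with an assembly plan A_n. All field names
# and the example values below are hypothetical.

@dataclass
class FeatureTemplate:
    name: str            # e.g. "ledge-height-2"
    cell_size: float     # template grid resolution (m)
    # per-cell likelihood functions l_i(h) would live here

@dataclass
class AssemblyPlan:
    # Finite state automaton whose states are construction actions given
    # as behavior properties (e.g. "pickUpWedge", "placeWedge").
    states: list
    transitions: dict    # (state, sensed event) -> next state
    initial: str

@dataclass
class Structure:
    feature: FeatureTemplate   # F_n: where the structure can be deployed
    plan: AssemblyPlan         # A_n: how to build it
    blocks_needed: int = 0
    wedges_needed: int = 0

ramp = Structure(
    feature=FeatureTemplate("ledge-height-2", cell_size=0.08),
    plan=AssemblyPlan(
        states=["pickUpWedge", "placeWedge", "done"],
        transitions={("pickUpWedge", "holding"): "placeWedge",
                     ("placeWedge", "placed"): "done"},
        initial="pickUpWedge"),
    wedges_needed=1)
print(ramp.feature.name)
```

Keeping A_n as an automaton, rather than a fixed action sequence, is what lets the planner execute construction reactively, re-running steps as the sensed environment dictates.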

For the example in Figure 6, if no behavior in the library satisfies the “climb drawer” action, the high-level planner will consider augmenting its environment with a structure. It passes a set of feature templates to the environment characterization subsystem, which returns a list of matched features (if any are found), as well as two lists of regions R1 = {r10, r11, . . .} and R2 = {r20, r21, . . .} that the matched features connect.

Fig. 7. System Overview Flowchart

To decide what structure to build, the high-level planner considers the available augmentation modules in the current environment, the current robot configuration, and the distance from the structure build-point to the robot goal position. After selecting a structure, the high-level planner executes its assembly plan to construct it. Once the structure is built, the high-level planner considers regions r1i and r2i to be connected and traversable by the robot, allowing it to complete its overall task of climbing onto the drawer.
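One way to picture the selection step: filter out structures whose material requirements exceed the available augmentation modules, then rank the remainder by distance from the build-point to the goal. This scoring rule is our illustrative simplification; the paper lists the factors considered but not a specific formula.

```python
# Hedged sketch of structure selection: keep structures buildable with the
# available materials, then prefer the build point nearest the goal.
# The candidate tuples and the ranking rule are hypothetical.

def select_structure(candidates, blocks, wedges):
    """candidates: list of (name, blocks_needed, wedges_needed, dist_to_goal).
    Returns the nearest feasible candidate, or None if none is buildable."""
    feasible = [c for c in candidates
                if c[1] <= blocks and c[2] <= wedges]
    return min(feasible, key=lambda c: c[3], default=None)

candidates = [("ramp-1-wedge", 0, 1, 0.9),
              ("bridge-3-piece", 1, 2, 0.4)]
print(select_structure(candidates, blocks=1, wedges=2))
```

With one block and two wedges available, both candidates are feasible and the bridge wins on distance; with no materials, the function returns None, which corresponds to the planner reporting that augmentation is impossible.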

IV. SYSTEM INTEGRATION

We integrate our environment augmentation tools into the system introduced in [9], as shown in Figure 7. The high-level planner automatically converts user-defined task specifications to controllers from a robot design library. It executes the controller by reacting to the sensed environment, running appropriate behaviors from the design library to control a set of hardware robot modules. Active perception components perform simultaneous localization and mapping (SLAM), and characterize the environment in terms of robot capabilities. Whenever required, the reconfiguration subsystem controls the robot to change configurations.

The system used the Robot Operating System (ROS)^1 for a software framework, networking, and navigation. SLAM was performed using RTAB-Map [15], and color detection was done using CMVision^2.

SMORES-EP modules have no sensors that allow them to gather information about their environment. To enable autonomous operation, we use the sensor module shown in Figure 8. The sensor module has a 90mm × 70mm × 70mm body with thin steel plates on its front and back that allow SMORES-EP modules to connect to it. Computation is provided by an UP computing board with an Intel Atom 1.92 GHz processor, 4 GB memory, and a 64 GB hard drive. A USB WiFi adapter provides network connectivity. A front-facing Orbbec Astra Mini RGB-D camera enables the robot to map and explore its environment and recognize objects of interest. A thin stem extends 40cm above the body, supporting a downward-facing webcam. This camera provides a view of a 1m × 0.75m area in front of the sensor module, and is used to track AprilTag [16] fiducials on modules and augmentation modules for reconfiguration and structure building. A 7.4V, 2200mAh LiPo battery provides about one hour of running time.

^1 http://www.ros.org
^2 CMVision: http://www.cs.cmu.edu/~jbruce/cmvision/

Fig. 8. Sensor Module with labelled components. UP board and battery are inside the body.

V. EXPERIMENT RESULTS

Our system can generalize to arbitrary environment augmentations and high-level tasks. We validate our system in two hardware experiments in which the same system must perform tasks requiring very different environment augmentations for successful completion. In both experiments, the robot autonomously perceives and characterizes each environment, and synthesizes reactive controllers to accomplish the task based on the environment. Videos of the full experiments accompany this paper, and are also available online at https://youtu.be/oo4CWKWyfvQ.

A. Experiment I

For the first experiment, the robot is tasked with inspecting two metal desk drawers in an office. It is asked to locate the set of drawers (identified with a pink label), open the two lowest drawers, and inspect their contents with its camera. The robot can open metal drawers by magnetically attaching to them and pulling backwards, but it is unable to reach the second drawer from the ground. Therefore, it can only open the second drawer if it can first open the bottom drawer and then climb on top of the things inside it.

Figure 9 shows snapshots throughout the robot’s autonomous performance of Experiment I. After recognizing and opening the first drawer, the robot characterizes the environment with the opened drawer and identifies the side of the drawer as a “ledge” feature. The high-level planner recognizes that the ledge is too high for the current configuration to climb, and furthermore that there is no other configuration in the library to which the robot can transform that could climb the ledge, leaving environment augmentation as the only strategy that can complete the task. Observing a ramp structure in the environment, the high-level planner commands the robot to acquire the ramp, place it at the “ledge” feature detected by the characterization algorithm, climb the drawer, and complete the mission.

Fig. 9. Snapshots throughout Experiment I. From left to right, top to bottom: i) Experiment start ii) Opening first drawer iii) Picking up ramp iv) Placing ramp next to open drawer v) Reconfiguring and climbing ramp vi) Opening second drawer

In a second version of the same experiment, the first drawer is empty. When the robot characterizes the environment containing the drawer, it identifies no “ledge” features, since the drawer no longer matches the requirements of the feature. As a result, it recognizes that environment augmentation is not possible, and the mission cannot be completed.

B. Experiment II

The environment for Experiment II consists of two tables separated by a 16 cm gap. The robot begins the experiment on the left table with two wedges and one block. To complete its mission, the robot must cross the gap to reach a pink destination zone on the right table.

Figure 10 shows snapshots throughout Experiment II. This time, characterization of the environment identifies that the pink goal zone is in a separate region from the robot, and also identifies several “gap” features separating the two regions. Recognizing that the gap is too wide for any configuration in the design library to cross unassisted, the high-level planner concludes it must build a bridge across the gap to complete its mission. It begins searching for materials, and quickly identifies the three available augmentation modules, which it autonomously assembles into a bridge. It then places the bridge across the gap and crosses to complete its mission.

VI. DISCUSSION

Block and wedge modules demonstrably expand the physical capabilities of SMORES-EP, allowing the system to climb to a high ledge and cross a wide gap to complete tasks that would have been very difficult with the SMORES-EP modules alone. Perception tools accurately characterize augmentable features in the environment. High-level reasoning tools identify when environment augmentation is necessary to complete a high-level task, and reactively sequence locomotion, manipulation, construction, and reconfiguration actions to accomplish the mission. The presented work represents the first time that modular robots have successfully augmented their environment by deploying passive structures to perform high-level tasks.

Outcome                      Exp 1       Exp 2
Success                      2 (25.0%)   3 (37.5%)
Perception-Related Failure   2 (25.0%)   2 (25.0%)
Navigation Failure           1 (12.5%)   0 (0.0%)
Hardware Failure             3 (37.5%)   2 (25.0%)
Setup Error                  0 (0.0%)    1 (12.5%)

TABLE I. Outcomes for Experiments 1 and 2.

As with our previous work with autonomous modular robots [9], robustness proved challenging. Out of 8 test runs of Experiment I, the robot successfully completed the entire task once. Table I shows the outcomes of 8 runs of each experiment. The largest source of error was hardware failures such as slight encoder mis-calibration or wireless communication failure. Creating more robust hardware for modular robots is challenging due to the constrained size of each module, and due to the increased probability of failure that comes with the large number of components in the system.

Perception-related errors were another frequent cause of failure. These were due in part to mis-detections by the characterization algorithm, and in part to feature position and orientation estimates that were not accurate enough for the robot's margin of error when placing structures. Finally, navigation failures occurred throughout development and experiments due to cumulative SLAM errors as the robot navigated the environment. We found it important to minimize in-place rotation of the robot, and to avoid areas with few features for visual odometry to use.
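The placement-margin failure mode can be made concrete with a simple tolerance check: a structure placement succeeds only when the feature-pose estimate errs by less than the robot can absorb while placing the ramp or bridge. Both tolerance values below are assumptions for illustration, not measured properties of the system.

```python
# Illustration of the perception-margin issue described above: placement
# succeeds only if the feature pose error stays within the robot's placement
# tolerance. Both tolerance values are illustrative assumptions.

POSITION_TOL_CM = 2.0   # assumed translational margin when placing a structure
HEADING_TOL_DEG = 10.0  # assumed angular margin

def placement_ok(position_err_cm, heading_err_deg):
    """True when the feature-pose estimate is accurate enough to place on."""
    return (position_err_cm <= POSITION_TOL_CM
            and abs(heading_err_deg) <= HEADING_TOL_DEG)

print(placement_ok(1.5, 5.0))   # small error: ramp placement succeeds
print(placement_ok(3.0, 5.0))   # position error exceeds margin: failure
```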

A. Future Work

In the interest of establishing the deployment of structures as an effective means to address high-level tasks, this work does not focus on the speed or scale of construction, demonstrating the use of only small structures (with three elements). Future work might accelerate construction, build larger structures, and attempt larger-scale tasks with SMORES-EP. For the purposes of this work, structure assembly plans (A_n) were created manually, but this process could be automated by employing established assembly planning algorithms [5], [6]. Assembly might be significantly accelerated by using multiple builders in parallel, as some other collective robot construction systems have done [2]. To truly scale to large tasks, a large number of block and wedge modules must be available in the environment, or better, autonomously transported into it. Developing mechanisms for transporting building material to a task location remains an open challenge for future work.

Fig. 10. Snapshots throughout Experiment II. From left to right, top to bottom: i) Experiment start; ii) Assembling bridge; iii) Transporting bridge; iv) Placing bridge over gap; v) Reconfiguring and crossing bridge; vi) Arriving at the target zone.

While our system implementation is tightly coupled to the SMORES-EP hardware, the concepts, system architecture, and theoretical frameworks could be applied widely. In particular, most elements of the framework could be directly applied to the Termes [2] or foam-ramp-building robots [4], which have construction and locomotion capabilities similar to SMORES-EP, provided that appropriate sensing and perception capabilities were established.

B. Conclusion

To conclude, this paper presents tools that allow a modular robot to autonomously deploy passive structures as a means to complete high-level tasks involving locomotion, manipulation, and reconfiguration. This work expands the physical capabilities of the SMORES-EP modular robot, and extends our existing frameworks for addressing high-level tasks with modular robots by allowing both the robot morphology and the environment to be altered if doing so allows the task to be completed. We validate our system in two hardware experiments that demonstrate how the hardware, perception tools, and high-level planner work together to complete high-level tasks through environment augmentation.

ACKNOWLEDGMENTS

This work was funded by NSF grant numbers CNS-1329620 and CNS-1329692.

REFERENCES

[1] J. W. Romanishin, K. Gilpin, S. Claici, and D. Rus, "3D M-Blocks: Self-reconfiguring robots capable of locomotion via pivoting in three dimensions," in ICRA. IEEE, 2015, pp. 1925–1932.

[2] K. Petersen, R. Nagpal, and J. Werfel, "Termes: An autonomous robotic system for three-dimensional collective construction," in Proc. Robotics: Science & Systems VII, 2011.

[3] Y. Terada and S. Murata, "Automatic assembly system for a large-scale modular structure: hardware design of module and assembler robot," in IROS, vol. 3. IEEE, 2004, pp. 2349–2355.

[4] N. Napp and R. Nagpal, "Distributed amorphous ramp construction in unstructured environments," Robotica, vol. 32, no. 2, pp. 279–290, 2014.

[5] J. Seo, M. Yim, and V. Kumar, "Assembly planning for planar structures of a brick wall pattern with rectangular modular robots," in IEEE CASE, Aug. 2013, pp. 1016–1021.

[6] J. Werfel, D. Ingber, and R. Nagpal, "Collective construction of environmentally-adaptive structures," in IROS, 2007, pp. 2345–2352.

[7] T. Tosun, J. Davey, C. Liu, and M. Yim, "Design and characterization of the EP-Face connector," in IROS. IEEE, 2016.

[8] G. Jing, T. Tosun, M. Yim, and H. Kress-Gazit, "Accomplishing high-level tasks with modular robots," Autonomous Robots, conditionally accepted. [Online]. Available: https://arxiv.org/pdf/1712.02299.pdf

[9] J. Daudelin*, G. Jing*, T. Tosun*, M. Yim, H. Kress-Gazit, and M. Campbell, "An integrated system for perception-driven autonomy with modular robots," in preparation. [Online]. Available: https://arxiv.org/pdf/1709.05435.pdf

[10] N. Napp and R. Nagpal, "Robotic construction of arbitrary shapes with amorphous materials," in ICRA. IEEE, 2014, pp. 438–444.

[11] M. Yim, W. Shen, and B. Salemi, "Modular self-reconfigurable robot systems: challenges and opportunities for the future," IEEE Robotics & Automation Magazine, Mar. 2007.

[12] J. Paulos et al., "Automated self-assembly of large maritime structures by a team of robotic boats," IEEE Transactions on Automation Science and Engineering, pp. 1–11, 2015.

[13] S. Magnenat, R. Philippsen, and F. Mondada, "Autonomous construction using scarce resources in unknown environments," Autonomous Robots, vol. 33, no. 4, pp. 467–485, 2012.

[14] T. Tosun, D. Edgar, C. Liu, T. Tsabedze, and M. Yim, "PaintPots: Low cost, accurate, highly customizable potentiometers for position sensing," in ICRA. IEEE, 2017.

[15] M. Labbe and F. Michaud, "Online global loop closure detection for large-scale multi-session graph-based SLAM," in IROS, Sept. 2014, pp. 2661–2666.

[16] E. Olson, "AprilTag: A robust and flexible visual fiducial system," in ICRA. IEEE, 2011, pp. 3400–3407.