

QUADCLOUD: A Rapid Response Force with Quadrotor Teams

Kartik Mohta, Matthew Turpin, Alex Kushleyev, Daniel Mellinger, Nathan Michael and Vijay Kumar

Abstract We describe the component technologies, the architecture and system design, and experimentation with a team of flying robots that can respond to emergencies or security threats where there is urgent need for situational awareness. We envision the team being launched either by high-level commands from a dispatcher or automatically triggered by a threat detection system (for example, an alarm). Our first response team consists of autonomous quadrotors with downward-facing cameras that can navigate to a designated location in an urban environment and develop an integrated picture of areas around a building or a city block. We specifically address the design of a platform capable of autonomous navigation at speeds of over 30 mph, the control and estimation software, the algorithms for trajectory planning and allocation of robots to specific tasks, and a user interface that allows the specification of tasks with a situational awareness display.

Keywords Aerial robotics · Multi-robot systems · Field robotics

1 Introduction

Over the last decade, aerial robotics has received a lot of attention and there is extensive literature on both indoor [16, 18] and outdoor platforms. Indeed, by some estimates,¹ the UAV market will exceed $20B in the next 3 years, and

¹ http://www.marketresearchmedia.com/?p=509.

K. Mohta (B) · M. Turpin · V. Kumar
GRASP Laboratory, University of Pennsylvania, Philadelphia, PA 19104, USA
e-mail: [email protected]

A. Kushleyev · D. Mellinger
KMel Robotics, Philadelphia, PA 19146, USA

N. Michael
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA

© Springer International Publishing Switzerland 2016
M.A. Hsieh et al. (eds.), Experimental Robotics, Springer Tracts in Advanced Robotics 109, DOI 10.1007/978-3-319-23778-7_38


this forecast is conservative since it does not account for the thousands of micro-UAVs that are likely to be fielded in the near future. At the smaller (and lighter) end of the spectrum, quadrotors and hexrotors have become a standard platform for robotics research worldwide, with the potential to support many indoor and outdoor applications [14]. In spite of the limitations of battery technologies and their inherent inefficiency, there are a number of short-duration missions that make for very interesting applications. In this paper, we describe a rapid response force consisting of a team of quadrotors that can quickly respond to disasters and emergencies by providing situational awareness before human responders can get to the scene. Our main goal in this paper is to describe the component technologies, our approach to the architecture and system design, and experimental data in support of an integrated system with applications to first response.

Our paper builds on previous work on designing outdoor quadrotor platforms. A tutorial on quadrotors is provided in [14]. In particular, the work by Huang et al. [7] is notable in its study of lift and drag in flight conditions. Commercial platforms for applications such as aerial photography are available from companies like DJI,² Ascending Technologies,³ and Parrot. Multi-vehicle demonstrations have been shown at conferences like ICRA 2013 and shows like the 50-robot outdoor event [2]. However, our main focus is on building a system of platforms that can function as a cohesive unit to perform a range of tasks within the broad scope of emergency response.

There is extensive literature on the control of quadrotors. Backstepping approaches to control system design based on linear control laws are discussed in [3]. A nonlinear controller that incorporates the geometric structure of SE(3) is described in [11, 15], and this approach showed significantly better performance than linear controllers [15]. Similarly, the use of linear filters or extended Kalman filters for state estimation is quite standard [1], although a UKF estimator yields significantly improved performance [17]. Our paper builds on these approaches as described later.

Finally, the study of quadrotor teams is also relevant to this work. Our own previous work is described in [9, 13]. Representative papers from other groups who have done similar work include [6]. The work in this paper primarily addresses a framework for coordination of aerial robots without assigning labels to the individual vehicles and without specificity of the number of robots in the team. Another related body of work is the design of user interfaces for multi-robot control [4]. However, there is relatively little literature on the design of user interfaces for large teams of aerial robots, where the three-dimensional environments and the short duration of the missions impose new constraints.

² http://www.dji.com.
³ http://www.asctec.de.


2 System Design

2.1 Architecture

The goal of QuadCloud is to monitor an area that is the size of a few city blocks, approximately a total area of 400 m × 400 m, while being responsive to the operator's commands and dynamic information collected by one or more robots. We want the response times to these surveillance requests to be less than 30 s. This requires the robots to have speeds of around 10 m/s and accelerations of 3 m/s². We want the size of the robots to be as small as possible for ease of handling and deployment (imagine deploying a team from the back of a pickup truck), while retaining enough payload capacity to carry all the components needed for autonomy and surveillance. Since the system runs outdoors, the robots also need to have enough thrust to be able to cope with moderate winds.
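As a sanity check on these numbers, the 30 s response target can be related to the speed and acceleration limits with a simple trapezoidal velocity profile; the profile and the worst-case distance below are our illustration, not a computation from the paper.

```python
# Minimal check of the response-time requirement, assuming a trapezoidal
# velocity profile at the stated speed and acceleration limits.
import math

V_MAX = 10.0  # m/s, target top speed from the requirements
A_MAX = 3.0   # m/s^2, target acceleration

def travel_time(d):
    """Time to cover distance d: accelerate at A_MAX, cruise at V_MAX, decelerate."""
    d_ramp = V_MAX ** 2 / A_MAX          # distance consumed by the two ramps
    if d <= d_ramp:                      # triangular profile, never reaches V_MAX
        return 2.0 * math.sqrt(d / A_MAX)
    return 2.0 * V_MAX / A_MAX + (d - d_ramp) / V_MAX

# With the base station at the center of a 400 m x 400 m area, the farthest
# point is a corner, sqrt(200^2 + 200^2) ~ 283 m away.
print(round(travel_time(283.0), 1))  # -> 31.6, close to the 30 s target
```

With these limits, a corner of the area is reachable in roughly 30 s, which is consistent with the stated requirement.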

A key requirement for QuadCloud is that a single operator must be able to deploy, control and monitor the team of robots easily. This requires sufficient autonomy on each robot to handle the navigation task, from trajectory generation to position stabilization, on-board. Further, on each robot, we need downward-facing cameras to provide imagery, and the computational resources to allow processing of images at around 5 Hz from the on-board camera. Each robot must have the requisite on-board intelligence to look for salient information. In this paper, we assume that the targets of interest are relatively simple and can be easily identified using regular cameras at heights of around 10–20 m. However, the algorithms must allow for false positives and a probabilistic representation of the belief state of the environment. Finally, in order to send data back from all parts of the monitored area and to respond to commands issued by the operator, each robot must be able to communicate with the base station within a 400 m distance.

Thus the architecture must incorporate some elements of decentralized planning, control, estimation and target detection and localization, while allowing for a centralized, cloud-computing model for command and control by the user and task planning. Further, we want a framework in which the user is agnostic to the number of robots, their identities and what exactly their individual states are. This attribute of anonymity increases the robustness of the system to failures and decreases the overhead on the human user.

The simplest architecture with these attributes is shown in Fig. 1. Robots are able to localize and control their motions to destinations, freeing the operator to work with high-level task specifications such as target destinations or areas for surveillance. A centralized goal assignment and planning module decides which robot responds to what request and when. Individual robots independently decide how to follow these requests. Each robot periodically sends back its position estimates and images from the on-board camera to the base station. This information is presented to the operator on a simple user interface, through which the operator is also able to command goal positions that are sent to the planner.


[Block diagram: on the base station, the operator's input goes through the User Interface to the Goal Assignment and Planning module, which sends waypoints to each robot (an Odroid-U2 paired with a quadrotor); the robots return positions and images, and the user interface forwards goals to the planner.]

Fig. 1 Decentralized robot motion planning and control along with a centralized model for human interaction and task assignment

Fig. 2 The KMel kQuad500 quadrotor equipped with a u-blox NEO-6T GPS module, a Matrix Vision mvBlueFOX camera, a Ubiquiti Networks Bullet M2 and an Odroid-U2 quad-core ARM single-board computer

2.2 The Robots

The robots used for this project, shown in Fig. 2, are quadrotors designed and developed by KMel Robotics. These quadrotors have a tip-to-tip diameter of about 0.54 m and weigh around 0.95 kg in the configuration used for this project. They are equipped with an ARM Cortex-M3 processor, a 3-axis accelerometer, gyroscope, magnetometer and a pressure sensor, while a u-blox NEO-6T GPS module was added in order to get position estimates. A control loop running at 500 Hz on the ARM processor stabilizes the attitude of the quadrotor. Communication with the on-board processor, in order to receive sensor outputs and send thrust and attitude commands, is done via a UART port.

All the high-level computations on the robot are performed on an Odroid-U2 quad-core ARM single-board computer. This compact board has four Cortex-A9 cores each running at up to 1.7 GHz, giving us a powerful processor in a small form factor. The Odroid-U2 comes with a big passive heat-sink which we replaced with a small active heat-sink, cutting its weight from 130 g to around 50 g. Each quadrotor is also equipped with a Matrix Vision mvBlueFOX camera in order to capture images for target detection and surveillance. We wanted each robot to send images to the base station for surveillance purposes, thus we had a requirement of


long-range communications with sufficient bandwidth. Since the Odroid-U2 does not have any built-in wifi, but has an ethernet port, we decided to use a Ubiquiti Networks Bullet M2, which provides the required bandwidth with a range of more than 350 m. This allows us to stream compressed images at more than 40 fps from a single robot, or around 5 fps from each robot when we have a team of 8 quadrotors sending images back. The Bullet M2 comes with a large and heavy antenna connector which we replaced with a smaller one, and we also removed the plastic shell for weight saving, reducing its weight from 180 g to 50 g. The robots use a 3-cell 2.2 Ah LiPo battery which gives a flight time of around 8–10 min with the current configuration of the robot.

The software system on the robot consists of three main components: estimation, control and image processing. The image processing is simple: it detects specified features (that characterize the targets) in the image and compresses the images for transmission from the camera to the base station. The estimation and control subsystems are described in more detail in the next section. We use ROS as the framework for all the software running on the Odroid-U2, because it provides a good inter-process communication framework that allows transparent relocation of processes across machines; during development, this lets us run a particular set of nodes on the robot for testing while running the others on a separate computer, speeding up the development phase.

2.3 Estimation and Control

An overview of the estimation and control systems running on each robot is shown in Fig. 3. We use the GPS, IMU, magnetometer and pressure sensor for state estimation. First, the GPS latitude, longitude and height are transformed to a local Cartesian frame using GeographicLib [8]. We ignore the height from the GPS measurement since it has a large drift, and instead rely on the pressure sensor for the height. This processed output of the GPS, along with the other sensor outputs, is then fed to an

[Block diagram: on the Odroid-U2, a Trajectory Generator supplies position, velocity, acceleration and yaw commands to the Position Controller, which outputs thrust and a desired orientation R_des to the Attitude Controller on the quadrotor; the attitude controller commands motor speeds to the actuators. The on-board filter and sensors (IMU, magnetometer, barometer, GPS) feed the UKF, which returns position, velocity and orientation estimates.]

Fig. 3 The estimation and control systems running on each robot


Unscented Kalman Filter (UKF) in order to generate full 6-DoF pose estimates. The UKF state that we use is

x = [pᵀ ṗᵀ ψ θ φ a_bᵀ]ᵀ

where p is the world-frame position of the robot, ṗ is the world-frame velocity, ψ, θ and φ are the yaw, pitch and roll respectively, and a_b is the accelerometer bias along the three axes.
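The GPS pre-processing step described above transforms latitude and longitude to a local Cartesian frame using GeographicLib; over an area of a few hundred meters it can be approximated by a flat-earth conversion, sketched here (the origin and test point are illustrative, not from the paper).

```python
# A small-area (equirectangular) approximation of the lat/lon -> local
# Cartesian step; adequate over a few hundred meters.
import math

R_EARTH = 6378137.0  # WGS-84 equatorial radius, m

def lla_to_local(lat, lon, lat0, lon0):
    """Approximate (east, north) in meters relative to the origin (lat0, lon0)."""
    east = math.radians(lon - lon0) * R_EARTH * math.cos(math.radians(lat0))
    north = math.radians(lat - lat0) * R_EARTH
    return east, north

# One arc-second of latitude corresponds to roughly 31 m on the ground
e, n = lla_to_local(39.9522 + 1.0 / 3600.0, -75.1932, 39.9522, -75.1932)
```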

Our control architecture has the common cascade structure of backstepping controllers [3, 16], with the attitude controller as the inner loop and the position controller as the outer loop around it. The controller is based on the non-linear controller developed in [11]. The attitude controller runs at a very high rate (600 Hz) on the on-board processor, stabilizing the orientation of the quadrotor and allowing us to run the position controller at a much slower rate (50 Hz) on the Odroid-U2. The position controller takes position commands sent by a trajectory generator and, using the position estimates, converts them into thrust and attitude commands which are sent to the attitude controller running on the on-board processor. Finally, the attitude controller takes the thrust and attitude commands and converts them to commanded motor speeds.
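A minimal sketch of the outer position loop, assuming a near-hover (small-angle) linearization rather than the full SE(3) controller of [11]; the gains and the helper `position_controller` are illustrative, not the paper's implementation.

```python
# Outer position loop sketch: PD feedback on position/velocity error yields a
# desired world-frame acceleration, mapped (near hover) to a collective thrust
# and roll/pitch commands for the inner attitude loop.
import math

G = 9.81     # m/s^2
MASS = 0.95  # kg, platform mass from Sec. 2.2

def position_controller(pos, vel, pos_des, vel_des, yaw, kp=4.0, kd=2.5):
    """Return (thrust [N], roll_cmd [rad], pitch_cmd [rad])."""
    # PD feedback gives a desired world-frame acceleration
    acc = [kp * (pd - p) + kd * (vd - v)
           for p, v, pd, vd in zip(pos, vel, pos_des, vel_des)]
    thrust = MASS * (G + acc[2])               # collective thrust along body z
    c, s = math.cos(yaw), math.sin(yaw)
    pitch = (c * acc[0] + s * acc[1]) / G      # tilt forward for +x acceleration
    roll = (s * acc[0] - c * acc[1]) / G       # tilt sideways for +y acceleration
    return thrust, roll, pitch

# Hovering exactly at the setpoint: thrust balances gravity, zero tilt commands
thrust, roll, pitch = position_controller([0, 0, 1], [0, 0, 0],
                                          [0, 0, 1], [0, 0, 0], 0.0)
```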

2.4 Experimental Benchmarking

We have performed extensive testing to evaluate the performance of our estimation and control algorithms. The estimates of the UKF during a representative hover test are shown in Fig. 4. The plots also show the output from an OptiTrack system which was set up outdoors to provide a ground-truth reference. From the plots we can see that the errors have a standard deviation of around 16 cm in the horizontal plane and 39 cm in the vertical direction.

3 Communication and Supervision

Communication

The base station communicates with each of the robots via wifi through the Bullet M2 high-power long-range wifi modules. We want each robot to send back position estimates at 50 Hz and image data at 5 Hz. The bandwidth requirement is dominated by the transmission of images from the multiple robots to the base station. The camera on each robot has a resolution of 1280 × 960, which leads to a raw gray-scale image size of approximately 1.2 MB, so for raw image transmission at 5 Hz we require a bandwidth of about 48 Mbps. The Bullet M2 claims a maximum bandwidth of around 65 Mbps, but in real-world testing, we got a data rate of about 50 Mbps. Thus



Fig. 4 UKF estimates during a representative hovering experiment in an open area. Ground truth from an OptiTrack motion capture system, which was set up specifically for this experiment, is shown for reference. The position tracking errors had a standard deviation of 0.158 m in the horizontal direction and 0.386 m in the vertical direction. a Position, b orientation

it is not possible to send raw images back from each of the robots at the desired rate. To reduce the bandwidth requirement, we JPEG-compress the images before sending them. This brings down the size of the images from 1.2 MB to about 130 kB, allowing us to stream images at 5 Hz from up to 10 robots. If we want to add more robots, we would need to decrease the frame rate of the image data being sent back from the robots. A frame rate of 2 Hz is sufficient for surveillance purposes and would allow us to scale to around 20 robots.
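The link-budget arithmetic above can be reproduced directly:

```python
# Raw vs. JPEG-compressed image streams over the ~50 Mbps measured throughput.
RES = (1280, 960)               # camera resolution, 8-bit grayscale
RAW_BYTES = RES[0] * RES[1]     # ~1.2 MB per raw frame
JPEG_BYTES = 130e3              # ~130 kB per compressed frame
LINK_MBPS = 50.0                # measured Bullet M2 throughput

def max_robots(frame_bytes, fps):
    """How many robots can stream frames at fps within the measured link rate."""
    per_robot_mbps = frame_bytes * 8 * fps / 1e6
    return int(LINK_MBPS // per_robot_mbps)

print(max_robots(RAW_BYTES, 5))   # -> 1: raw frames saturate the link
print(max_robots(JPEG_BYTES, 5))  # -> 9: JPEG at 5 Hz, consistent with ~10 robots
print(max_robots(JPEG_BYTES, 2))  # -> 24: JPEG at 2 Hz covers the 20-robot target
```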

User Interface

Since all the computations for autonomy are done on the robots themselves, the operator does not need a very powerful base station to control the team; the base station can just be a small laptop. As mentioned earlier, the robots send their position estimates to the base station. This information is presented to the operator in the form of markers on an overhead schematic map of the area. Figure 8 shows some screenshots of the user interface during an experiment. In addition to monitoring the system, the user is able to send goal positions to the system without needing to specify which robot is assigned which goal. Using the algorithm described in the next section, the system assigns the goals to the robots in order to minimize the maximum travel time and plans trajectories for each of them. This reduces the cognitive burden on the operator by allowing the operator to focus on high-level tasks.


4 Combining Assignment and Trajectory Planning

To safely navigate the team of robots to goal locations, a motion planning algorithm is required that computes collision-free trajectories and respects the dynamics of the robots. It is well known that extending single-robot motion planners to plan trajectories for a team of robots implies exponential computational complexity [5]. One attempt to overcome this computational intractability is to use a two-step algorithm that decouples the path from the time parameterization of the trajectories [10]. These decoupled approaches first plan motions for each individual robot while disregarding collisions with other robots. The second step is to specify the time parameterization with which each robot follows its path. Unfortunately, these approaches are not complete and cannot guarantee they will find a solution if one exists.

Fortunately, our team of robots are identical, and we can exploit this interchangeability to generate collision-free, time-parameterized trajectories in a computationally tractable manner using the Gap algorithm.

Goal Assignment and Planning (Gap)

In this paper, we leverage the authors' previous work, Goal Assignment and Planning, or Gap [19, 20], to plan complete, collision-free trajectories with a computational complexity of O(N³). This approach assumes full knowledge of obstacles present in the environment. The robots are modeled as second-order systems with spherical extent. The radius of the robots is taken at a conservative 2 m to ensure that errors in localization will not cause a catastrophic collision.

The Gap algorithm is a decoupled motion planner that maintains completeness by leveraging the interchangeability of the robots. This algorithm begins by finding the cost associated with planning trajectories from the initial state of each robot to every goal location. We use Dijkstra's algorithm to quickly find these N² motion plans, where robot i has cost C_ij to travel to goal j. The next step is the assignment of goals to robots, where each robot is assigned to one goal. This assignment can be represented by a permutation matrix φ, where φ_ij = 1 if and only if robot i is assigned to goal j. The Hungarian algorithm is used to find the assignment which minimizes the maximum cost:

minimize_φ  Σᵢ₌₁ᴺ Σⱼ₌₁ᴺ (φ_ij C_ij)ᵖ

where p is a very large constant; in practice, p = 50 is used. Then, robots are prioritized using simple geometric considerations that are fully detailed in [20]. Finally, robots are assigned their full time parameterization to construct trajectories that guarantee collision avoidance.
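The effect of the large exponent p can be illustrated with a tiny example; we brute-force over permutations instead of running the Hungarian algorithm, and scale costs to keep the powers from overflowing (both simplifications are ours, not part of Gap).

```python
# Raising costs to a large power p turns a minimum-sum assignment into
# (approximately) a minimum of the maximum per-robot cost.
from itertools import permutations

def assign_min_max(C, p=50):
    """Assignment minimizing sum of (C_ij)^p, approximating min-max for large p."""
    n = len(C)
    best, best_cost = None, float("inf")
    for perm in permutations(range(n)):               # perm[i] = goal of robot i
        cost = sum((C[i][perm[i]] / 10.0) ** p for i in range(n))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

# Minimizing the plain sum would pick assignment (0, 1) (total cost 10), but
# its worst single trip costs 10; the p-norm picks (1, 0), whose worst trip
# costs only 6.
C = [[0.0, 6.0],
     [6.0, 10.0]]
print(assign_min_max(C))  # -> (1, 0)
```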

For a team of 6 robots, these plans are generated in under 0.1 s. Additional details of this algorithm, including boundary condition requirements for completeness, are presented in [20].
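The first Gap step, building the N² cost matrix with Dijkstra's algorithm, can be sketched on a toy grid (the grid, obstacle and positions are illustrative):

```python
# Dijkstra on a 4-connected grid gives the cost matrix C, where C[i][j] is
# robot i's travel cost to goal j.
import heapq

def dijkstra(grid, start):
    """Cost-to-reach map from start over free cells (grid[r][c] == 0)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if d > dist.get((r, c), float("inf")):
            continue                      # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(pq, (nd, (nr, nc)))
    return dist

grid = [[0, 0, 0],
        [0, 1, 0],         # 1 marks an obstacle cell
        [0, 0, 0]]
starts = [(0, 0), (2, 0)]  # robot initial cells
goals = [(0, 2), (2, 2)]   # goal cells
C = [[dijkstra(grid, s).get(g, float("inf")) for g in goals] for s in starts]
print(C)  # -> [[2.0, 4.0], [4.0, 2.0]]
```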


5 Experimental Results

5.1 High Speed Tests

Since we are flying outdoors, we can fly along long trajectories, which provide sufficient distance to accelerate to high speeds. We ran some high-speed tests with one of the QuadCloud robots, where we commanded the robot to fly approximately 80–100 m at speeds up to 15 m/s and looked at the effect of drag.

For a quadrotor, if we ignore drag, then since the thrust is only along the body Z-axis, we expect the accelerometer on the quadrotor to measure zero acceleration on the X and Y axes, while the Z axis will measure the effect of thrust [12]. But, as shown in [12], the accelerometer on a quadrotor measures some acceleration in the X and Y axes when moving, due to the drag force acting on the robot.

From Fig. 5, we get

a_x = −(D cos φ)/m,    a_z = (T − D sin φ)/m

where D is the drag force and T is the thrust. Thus, when the robot is flying fast and there is considerable drag, we expect it to

have a significant measurement on the accelerometer X-axis. In our data (Fig. 6), we can see that the accelerometer measures approximately 4 m/s² of acceleration along the X-axis when the quadrotor is flying at speeds of around 15 m/s. Using the above equations, from the accelerometer measurement and orientation estimates, we can estimate the drag force on the robot. The computation gives the magnitude of the drag force to be approximately 4.2 N, which is significant considering that it is about half of the gravitational force acting on the robot. This is a large external force that is not modeled in our estimator and can lead to errors in our position and orientation estimates. We are still looking into the effects of drag on the orientation estimates, and also into ways in which we can use the drag measurements for velocity estimation.
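A back-of-envelope version of this drag computation, assuming an attitude of about 25° toward the direction of travel (our assumed value, consistent with fast forward flight but not quoted in the paper):

```python
# Invert a_x = -(D cos(phi)) / m for the drag magnitude D.
import math

MASS = 0.95               # kg, platform mass from Sec. 2.2
a_x = -4.0                # m/s^2, body-x specific force measured at ~15 m/s
phi = math.radians(25.0)  # assumed tilt toward the direction of travel

D = -a_x * MASS / math.cos(phi)
print(round(D, 2))  # -> 4.19, close to the ~4.2 N quoted in the text
```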

Fig. 5 Forces acting on a quadrotor moving towards the right with velocity v: thrust along the body axis at tilt angle φ, drag opposing v, and gravity



Fig. 6 Measurements during a high-speed test. a Velocity and acceleration, b orientation

Fig. 7 An outdoor experiment with six robots

5.2 Experimental Results with Multiple Robots in an Open Field

Here, we describe an experiment we conducted with six robots in an open field (Fig. 7). Instead of flying around real-world obstacles, we provide virtual obstacles to the planner so that we can perform the experiment in a much safer manner. Figure 8


Fig. 8 A series of snapshots of the user interface while running an experiment with six robots. a Initial positions, b user specifies goal positions, c planner assigns the goals and generates trajectories, d actual trajectories followed by the robots

shows the various steps involved in the experiment. We start the robots from the ground with a separation of about 4 m between each other so that we can take off without worrying about collisions between the robots. Once they take off and reach a specified height, we switch to the trajectory tracker, which takes in inputs from the central planner and sends position commands to the position controller. Once this stage is reached (Fig. 8a), the operator can command the robots from the user interface and send them to the desired goal positions (Fig. 8b). Upon receiving the goal positions, the planner assigns the goals to the robots and plans trajectories for each of them (Fig. 8c), which are then followed by the robots (Fig. 8d).

As mentioned in the previous section, the planner models the robots as circles with a radius of 2 m, even though the actual robot radius is around 0.3 m, in order


Table 1 Mean error between the desired position and estimated position in the horizontal plane for each robot during the six-robot experiment

Robot   XY error (m)
1       0.194
2       0.195
3       0.709
4       0.382
5       0.179
6       0.190

to allow for some localization and control error. Table 1 gives estimates of the controller errors during the six-robot experiment, which shows that most of the robots have errors between 0.2–0.7 m. Adding the localization error of approximately 0.2–0.5 m gives a total error of around 1–1.5 m, justifying the 2 m radius used by the planner.

Aggregation of Visual Imagery for Situational Awareness

The base station receives the images from the quadrotors as well as their pose estimates. Using the pose information, we can correct the perspective distortion of the images and project them onto the ground plane. This allows us to create an overhead map of the environment using the team of quadrotors. An example of this is shown in Fig. 9, where images from three quadrotors are being used.
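A sketch of this projection step, assuming a calibrated pinhole camera: a pixel ray is rotated into the world frame using the robot pose and intersected with the ground plane z = 0. The intrinsics and pose here are illustrative, not the actual calibration.

```python
# Project a pixel onto the ground plane using the camera pose (R, t).
import numpy as np

K = np.array([[500.0, 0.0, 640.0],
              [0.0, 500.0, 480.0],
              [0.0, 0.0, 1.0]])  # assumed pinhole intrinsics for a 1280x960 image

def pixel_to_ground(u, v, R, t):
    """Intersect the ray through pixel (u, v) with the ground plane z = 0."""
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R @ ray_cam                             # rotate into world frame
    s = -t[2] / ray_world[2]                            # scale to reach z = 0
    return t + s * ray_world

# Camera 10 m above the ground, looking straight down
R = np.diag([1.0, -1.0, -1.0])  # simple nadir orientation
t = np.array([0.0, 0.0, 10.0])  # camera position in the world frame
p = pixel_to_ground(640.0, 480.0, R, t)  # principal point maps to directly below
```

Warping each full image with the induced ground-plane homography and compositing by world position yields the mosaic of Fig. 9.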

Fig. 9 A mosaic being created using the images from three robots. The robot positions are shown by the red circles in the image


6 Conclusion and Future Work

We described the component technologies, the architecture and the software for QuadCloud, a prototype of a rapid response team for first response and disaster recovery. The key contributions are (a) the design of robust outdoor platforms with speeds that exceed 30 mph and an absolute positioning accuracy of well under 50 cm; (b) the architecture design that enables the rapid deployment of a small team of quadrotors for surveillance without specifying the roles of individuals; and (c) the design of algorithms for estimation, control and planning for multiple quadrotors. We showed experimental results with a team of six quadrotors. Our analysis suggests that the team size can be scaled up to 20 units without compromising system performance. Our future work will address vision-based stabilization to increase robustness to GPS dropout, and experimentation with larger teams.

Acknowledgments We are grateful for the support of ARL grant W911NF-08-2-0004, ONR grants N00014-07-1-0829, N00014-09-1-1051 and N00014-09-1-103, NSF grants PFI-1113830 and IIS-1138847, and TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA.

References

1. Achtelik, M., Achtelik, M., Weiss, S., Siegwart, R.: Onboard IMU and monocular vision based control for MAVs in unknown in- and outdoor environments. In: IEEE International Conference on Robotics and Automation (ICRA) (2011)
2. Ars Electronica: Spaxels—Ars Electronica Quadcopter Swarm. http://www.aec.at/spaxels/en
3. Bouabdallah, S., Siegwart, R.: Full control of a quadrotor. In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 153–158 (2007)
4. Envarli, I.C., Adams, J.A.: Task lists for human-multiple robot interaction. In: IEEE International Workshop on Robot and Human Interactive Communication, pp. 119–124. IEEE (2005)
5. Erdmann, M., Lozano-Perez, T.: On multiple moving objects. In: 1986 IEEE International Conference on Robotics and Automation, vol. 3, pp. 1419–1424 (1986)
6. Hoffmann, G.M., Rajnarayan, D.G., Waslander, S.L., Dostal, D., Jang, J.S., Tomlin, C.J.: The Stanford testbed of autonomous rotorcraft for multi agent control (STARMAC). In: The 23rd Digital Avionics Systems Conference, pp. 12.E.4/1–10. IEEE (2004)
7. Huang, H., Hoffmann, G., Waslander, S., Tomlin, C.: Aerodynamics and control of autonomous quadrotor helicopters in aggressive maneuvering. In: 2009 IEEE International Conference on Robotics and Automation (ICRA), pp. 3277–3282 (2009)
8. Karney, C.F.F.: GeographicLib library. http://geographiclib.sf.net/
9. Kushleyev, A., Mellinger, D., Kumar, V.: Towards a swarm of agile micro quadrotors. In: Robotics: Science and Systems (2012)
10. Latombe, J.C.: Robot Motion Planning (1991)
11. Lee, T., Leok, M., McClamroch, N.H.: Geometric tracking control of a quadrotor UAV on SE(3). In: 49th IEEE Conference on Decision and Control (CDC), pp. 5420–5425 (2010)
12. Leishman, R.C., Macdonald, J.C., Beard, R.W., McLain, T.W.: Quadrotors and accelerometers: state estimation with an improved dynamic model. IEEE Control Syst. Mag. 34(1), 28–41 (2014)
13. Lindsey, Q., Mellinger, D., Kumar, V.: Construction with quadrotor teams. Auton. Robots 33(3), 323–336 (2012)
14. Mahony, R., Kumar, V., Corke, P.: Multirotor aerial vehicles: modeling, estimation, and control of quadrotor. IEEE Robot. Autom. Mag. 19(3), 20–32 (2012)
15. Mellinger, D., Kumar, V.: Minimum snap trajectory generation and control for quadrotors. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 2520–2525, Shanghai (2011)
16. Michael, N., Mellinger, D., Lindsey, Q., Kumar, V.: The GRASP multiple micro-UAV testbed. IEEE Robot. Autom. Mag. 17(3), 56–65 (2010)
17. Shen, S., Mulgaonkar, Y., Michael, N., Kumar, V.: Multi-sensor fusion for robust autonomous flight in indoor and outdoor environments with a rotorcraft MAV. In: IEEE International Conference on Robotics and Automation, Hong Kong (2014)
18. Shen, S., Michael, N., Kumar, V.: Autonomous multi-floor indoor navigation with a computationally constrained MAV. In: 2011 IEEE International Conference on Robotics and Automation (ICRA), pp. 20–25 (2011)
19. Turpin, M., Michael, N., Kumar, V.: Concurrent assignment and planning of trajectories for large teams of interchangeable robots. In: 2013 IEEE International Conference on Robotics and Automation (ICRA), pp. 842–848, Karlsruhe (2013)
20. Turpin, M., Mohta, K., Michael, N., Kumar, V.: Goal assignment and trajectory planning for large teams of aerial robots. In: Proceedings of Robotics: Science and Systems, Berlin (2013)