
Guidance and Nonlinear Control System for Autonomous Flight of Minirotorcraft Unmanned Aerial Vehicles

Farid Kendoul
Robotics and Control Laboratory, Department of Electronics and Mechanical Engineering, Chiba University, Chiba City 263-8522, Japan
e-mail: [email protected]

Zhenyu Yu
State Key Laboratory of Rail Traffic Control and Safety, Beijing JiaoTong University, Beijing 100044, China
e-mail: [email protected]

Kenzo Nonami
Robotics and Control Laboratory, Department of Electronics and Mechanical Engineering, Chiba University, Chiba City 263-8522, Japan
e-mail: [email protected]

Received 18 March 2009; accepted 18 September 2009

Small unmanned aerial vehicles (UAVs) are becoming popular among researchers and vital platforms for several autonomous mission systems. In this paper, we present the design and development of a miniature autonomous rotorcraft weighing less than 700 g and capable of waypoint navigation, trajectory tracking, visual navigation, precise hovering, and automatic takeoff and landing. In an effort to make advanced autonomous behaviors available to mini- and microrotorcraft, an embedded and inexpensive autopilot was developed. To compensate for the weaknesses of the low-cost equipment, we put our efforts into designing a reliable model-based nonlinear controller that uses an inner-loop outer-loop control scheme. The developed flight controller considers the system's nonlinearities, guarantees the stability of the closed-loop system, and results in a practical controller that is easy to implement and to tune. In addition to controller design and stability analysis, the paper provides information about the overall control architecture and the UAV system integration, including guidance laws, navigation algorithms, control system implementation, and autopilot hardware. The guidance, navigation, and control (GN&C) algorithms were implemented on a miniature quadrotor UAV that has undergone an extensive program of flight tests, resulting in various flight behaviors under autonomous control from takeoff to landing. Experimental results that demonstrate the operation of the GN&C algorithms and the capabilities of our autonomous micro air vehicle are presented. © 2009 Wiley Periodicals, Inc.

Journal of Field Robotics 27(3), 311–334 (2010). Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20327

1. INTRODUCTION

Rotorcraft unmanned aerial vehicles (RUAVs) are being widely used in a multitude of applications, mostly military but also civilian. In recent years, there has been a growing interest in developing autonomous miniature RUAVs and micro air vehicles (MAVs) capable of achieving more complex missions, and increasing demands are being placed on the hardware and software that comprise their guidance, navigation, and control (GN&C) systems. Miniature RUAVs offer major advantages when used for aerial surveillance, reconnaissance, and inspection in cluttered environments and small spaces in which larger RUAVs and fixed-wing aircraft are not well suited. Their usefulness results from their low cost, small size, VTOL (vertical takeoff and landing) capabilities, and ability to fly at very low altitudes, hover, cruise, and achieve aggressive maneuvers.

(Additional Supporting Information may be found in the online version of this article.)

The design of autopilots for miniRUAVs has many theoretical and technical challenges. Indeed, the limited payload of miniature vehicles imposes severe constraints on the selection of navigation sensors and onboard electronics. Furthermore, due to their complex dynamics, nonlinearities, and high degree of coupling among control inputs and state variables, the design of reliable and robust controllers is a challenge. During the past decade, most research activities on the control of miniature RUAVs have been devoted to performing hovering in constrained environments. However, little work has been done on the development of fully autonomous miniRUAVs that can achieve real-world applications. In fact, milestones in UAVs and most reported experimental results about advanced flight behaviors have been achieved using fixed-wing aircraft and small-scale helicopters that weigh several kilograms, such as the CMU robotic helicopter (La Civita, Papageorgiou, Messner, & Kanade, 2006; Scherer, Singh, Chamberlain, & Elgersma, 2008), the Berkeley BEAR project (Kim & Shim, 2003), the USC AVATAR helicopter (Saripalli, Montgomery, & Sukhatme, 2003), the GTMax Unmanned Helicopter (Johnson & Kannan, 2005), and the MIT acrobatic helicopter (Gavrilets, Mettler, & Feron, 2004).

These robotic helicopters have relied on various linear and nonlinear control techniques to perform autonomous flight. Conventional approaches to flight control and most initial attempts to achieve autonomous helicopter flight have been based on linear controller design. These approaches involve simplified model derivation and model linearization about a set of preselected equilibrium conditions or trim points. Linear controllers are then designed, such as decentralized single-input single-output (SISO) proportional integral derivative (PID) controllers (Kim & Shim, 2003), multiple-input multiple-output (MIMO) linear quadratic regulator (LQR) controllers (Shin, Fujiwara, Nonami, & Hazawa, 2005), and robust H∞ controllers (La Civita et al., 2006). The main drawback of linear controllers relates to overlooking coupling terms and nonlinearities, resulting in performance degradation when the aircraft moves away from a design trim point or hovering condition.

To overcome some of the limitations and drawbacks of linear approaches, a variety of nonlinear flight controllers have been developed and applied to helicopter UAV control. Among these, feedback linearization (Koo & Sastry, 1998), model predictive control (Kim & Shim, 2003), dynamic inversion (Reiner, Balas, & Garrard, 1995), adaptive control (Johnson & Kannan, 2005), and backstepping methodology (Mahony & Hamel, 2004; Olfati-Saber, 2001) have received much of the attention and showed great promise. For aggressive flight control, human-inspired and learning-based strategies have been developed and successfully tested using acrobatic helicopters (Abbeel, Coates, Quigley, & Ng, 2007; Gavrilets et al., 2004).

Among noted and recent research contributions to the miniature RUAV control problem, we find many on mini-quadrotor platforms. Indeed, quadrotor helicopters are emerging as a popular platform for UAV research, and many research groups are now working on quadrotors as UAV test beds for autonomous control and sensing. In several projects, quadrotor vehicles achieved autonomous flight using linear controllers on linearized dynamic models. The most applied and successful linear controllers are PID controllers (Hoffmann, Huang, Waslander, & Tomlin, 2007) (STARMAC quadrotor), proportional-derivative (PD) controllers (Bouabdallah, Murrieri, & Siegwart, 2005) (OS4 quadrotor), and LQR controllers (How, Bethke, Frank, Dale, & Vian, 2008) (RAVEN test bed). Many researchers from the control community have also developed nonlinear control systems for mini-quadrotor vehicles. One such system, based on the Draganflyer quadrotor airframe, has demonstrated successful hovering using nested saturation-based nonlinear controllers (Castillo, Dzul, & Lozano, 2004; Kendoul, Lara, Fantoni, & Lozano, 2007). In the X4-flyer project (Guenard, Hamel, & Moreau, 2005), a backstepping technique was applied to design a nonlinear controller for a miniature quadrotor vehicle. Attitude control was also demonstrated on tethered quadrotor test beds using sliding mode control (Madani & Benallegue, 2006). Another tethered test bed (Tayebi & McGilvray, 2006) used a quaternion-based nonlinear control scheme to perform attitude stabilization. This list is not exhaustive, and there are many other interesting works on the control, navigation, and visual servoing of mini-quadrotor UAVs (e.g., Altug, Ostrowski, & Taylor, 2005; He, Prentice, & Roy, 2008).

Although significant progress has been made on the control and development of miniature RUAVs, the achieved flight performances are modest compared to those of larger UAVs. From a control systems perspective, most of the control designs cited above present a trade-off between flight performance and control law complexity. Indeed, linear controllers that are designed for near-hovering flight fail to provide good flight performance at nonnominal operating conditions in which the attitude angles are large, such as during high-speed and aggressive flights. On the other hand, nonlinear controllers are often difficult to implement on small microprocessors and to tune online. From an experimental and achieved-performance perspective, the available experimental results are generally limited to hovering flight in structured and indoor environments.

The aim of our research is to demonstrate the feasibility of using autonomous mini- and microRUAVs for search and rescue missions by developing and validating a low-cost aerial platform capable of achieving various mission scenarios autonomously (Figure 1). This paper presents the development of an embedded lightweight autopilot that strikes a balance between simplicity and performance. A detailed description of the autopilot hardware and GN&C algorithms and their evaluation through real-time experiments is also provided.

Although there have been good examples of autonomous control of rotorcraft UAVs and of mini-quadrotor helicopters in particular, our work and the obtained results represent significant advances for autonomous MAVs:

1. From a control systems perspective, we have designed a hierarchical model-based nonlinear controller that uses an inner–outer-loop control scheme and has the following benefits:

• It considers system nonlinearities and couplings while guaranteeing the asymptotic stability of the closed-loop system.

• It is a multipurpose controller that can handle different flight modes such as hovering, flying forward, flying sideward, takeoff and landing, and trajectory tracking.

• It is simple to implement and easy to tune online even when the plant parameters are not well known.

2. From a hardware perspective, we have carefully selected lightweight avionics components that fit the limited payload of miniature vehicles, and we have designed an embedded real-time architecture that includes navigation sensors [inertial measurement unit (IMU), global positioning system (GPS)], a pressure sensor (PS), mission sensors (camera), a flight control computer (FCC), and a wi-fi communication module. In contrast to other experiments reported in the literature, we do not rely on any accurate IMU or GPS whose cost and weight are significantly higher. We believe that cost reduction will yield a substantial speedup of the use of MAVs for civilian applications.

3. From a UAV system integration perspective, we have developed and implemented guidance, navigation, and vision algorithms that offer advanced autonomous behaviors to MAVs and miniature rotorcraft that weigh less than 0.7 kg.

4. From an experimental perspective, we have performed several flight tests in outdoor and natural environments using a mini-quadrotor helicopter (size 53 cm and total weight 700 g). The quadrotor MAV, equipped with the developed autopilot, has undergone an extensive program of flight tests, resulting in various autonomous flight behaviors (automatic takeoff and landing, accurate hovering, long-distance flight, waypoint navigation, trajectory tracking, and vision-based target tracking).

Figure 1. Our mini-quadrotor UAV in fully autonomous flight at the playground of the Chiba University campus.

Roughly speaking, the main achievement of this work is to provide miniature rotorcraft and MAVs with advanced flight behaviors and autonomous capabilities despite their limited payload and onboard computational resources.

In the next section, the problem statement is discussed. The air vehicle and the autopilot hardware are described in Section 3. Section 4 gives an overview of the GN&C systems. The theory of the designed nonlinear controller, including the analysis of the closed-loop system stability, is presented in Section 5. The flight results, presented in Section 6, are a clear illustration that waypoint navigation, trajectory tracking, and visual navigation capabilities can be extended to smaller platforms such as MAVs.

2. PROBLEM STATEMENT THROUGH A TYPICAL MISSION SCENARIO

An autonomous aerial vehicle, equipped with an onboard camera, can be considered as an eye in the sky and thus can be used for many applications. In this section, we motivate the use of autonomous UAVs/MAVs for search and rescue applications and we present the main issues to be addressed for achieving these kinds of missions autonomously.

Let us consider a mission scenario in which MAVs are used to assist rescue teams during a natural disaster such as an earthquake. Many difficulties make search and rescue missions challenging. Among the problems that should be addressed by rescue teams, one can cite the following:

1. Determination of priority areas: When an earthquake occurs, many areas and cities are generally damaged. Simultaneous intervention in all inhabited areas may not be possible. Therefore, the most heavily damaged areas where help and assistance are most urgently needed should be determined quickly.

2. Damage evaluation: Once a high-priority area is identified, rescue teams need to have an idea about the nature and degree of damage. A priori knowledge about damage is a key determinant for mission success. This information will help to define the appropriate size of rescue teams, the needed equipment and material, etc.

3. Identification of safe access roads: Another common and important problem during rescue missions is the identification of access roads to the area of interest. Bridges and roads may be destroyed by earthquakes.

4. Search for survivors and injured persons: Injured persons need quick help and immediate medical assistance. Finding survivors in a short time is a challenging task because of the large search area and building debris.

Many lives can be saved if the above issues can be effectively addressed. We believe that the use of advanced technologies such as autonomous rotorcraft MAVs will increase mission success and facilitate search and rescue operations. Miniature RUAVs present many advantages for search and rescue applications because of their maneuverability and their ability to perform stationary flight. Despite their low cost, rotorcraft MAVs can be deployed quickly and operated easily to achieve the previously listed tasks autonomously. Furthermore, their small size allows them to fly between buildings at low altitudes for accurate data collection without risk or danger to people.

To develop a fully autonomous rotorcraft MAV that can achieve the previously listed tasks autonomously, some flight capabilities and behaviors are required:

• A robust platform with onboard navigation and mission sensors that can send images in real time to the operation station.

• Autonomous GPS-based waypoint navigation capability in order to fly to the area of interest and provide information (images) about access roads and the nature of damage.

• When the MAV arrives at the target zone, it may execute some motion patterns to explore the area and to avoid obstacles. Therefore, accurate trajectory tracking capability is necessary, especially during search operations.

• The GPS signal may be weak or unavailable in some places. An alternative option for increasing the navigation accuracy is to use vision systems. Cameras are small and lightweight and provide rich information not only for mission requirements but also for MAV motion estimation. A visual navigation system will significantly increase the autonomous capabilities of the rotorcraft.

This paper addresses the above issues and describes the design of an autonomous rotorcraft MAV with onboard intelligent capabilities. Furthermore, the paper presents experimental results showing that our quadrotor MAV can autonomously achieve all the navigation tasks listed. With these flight capabilities, the developed platform will be ready to embark on many of the applications envisaged for it, including search and rescue missions.

3. AIR VEHICLE AND AUTOPILOT HARDWARE

To demonstrate autonomous flight of a rotorcraft MAV (RMAV), the vehicle platform should be suitably integrated with subsystems as well as hardware and software. The choice of the air vehicle and avionics is a crucial step toward the development of an autonomous platform. That choice is mainly dependent on the intended application as well as on performance, cost, and weight. In this section, we describe the air vehicle and introduce the major subsystems implemented in our RMAV system, shown in Figure 2.

3.1. Air Vehicle Description

Safety and performance requirements drove the selection of the quadrotor vehicle as a safe, easy-to-use platform with very limited maintenance requirements. A versatile hobby quadrotor, the X-3D-BL produced by Ascending Technologies GmbH in Germany, is adopted as a vehicle platform for our research. The vehicle design consists of a carbon-fiber airframe with two pairs of counterrotating, fixed-pitch blades as shown in Figure 2. The vehicle is 53 cm rotor tip to rotor tip and weighs 400 g, including the battery. Its payload capacity is about 300 g. The propulsion system consists of four brushless motors, powered by a 2,100-mAh three-cell lithium polymer battery. The flight time in hovering is about 12 min with full payload.

Figure 2. The quadrotor micro air vehicle equipped with the developed autopilot (GPS and wi-fi antennas, propellers, carbon fiber frame, landing gear, battery, camera, video transmitter, brushless motors, MNAV sensor (IMU, GPS, PS), flight control computer, electronics interface, and electronic speed controllers). The total weight of the aerial platform is about 650 g.

The original hobby quadrotor includes a controller board called X-3D that runs three independent PD loops at 1 kHz, one for each rotational axis (roll, pitch, and yaw). Angular velocities are obtained from three gyroscopes (Murata ENC-03R), and angles are computed by integrating the gyroscope measurements. More details about the X-3D-BL platform and its internal controller can be found in Gurdan et al. (2007). In our implementation, the internal attitude controller that comes with the original platform has been disabled except for gyro feedback stabilization and yaw angle control, which cannot be disabled.

3.2. Navigation Sensors and Embedded Architecture

The hardware components that constitute the basic flight avionics of our platform include a small microcontroller from Gumstix and the MNAV100CA sensor from Crossbow.

3.2.1. FCC

The embedded architecture is built around the Gumstix (connex 400) microcontroller. A Gumstix motherboard is a very small Linux OpenEmbedded computer that features a Marvell PXA255 processor. This computing unit presents many advantages for MAV applications because it weighs only 8 g despite its good performance. Indeed, it has a CPU clock of 400 MHz, a flash memory of 16 MB, and an SDRAM of 64 MB. Two expansion boards (console-st and wifistix) have been mounted on the motherboard, thereby providing two RS-232 ports and wi-fi communication. The total weight of the Gumstix-based FCC is about 30 g, including the wi-fi module and antenna.

3.2.2. Navigation and Control Sensors

After studying and comparing different sensors, we have selected the MNAV100CA sensor from Crossbow due to its low cost and light weight (about 35 g without the GPS antenna). The MNAV100CA is a calibrated digital sensor system with servo drivers, designed for miniature ground and air robotic vehicle navigation and control. All sensors required for complete airframe stabilization and navigation are integrated into one compact module. It includes an IMU that outputs the raw data of three gyroscopes, three accelerometers, and three magnetometers at 50 Hz. It also includes a static PS (Motorola/Freescale MPXH6115A6U with a sensitivity of 46 mV/kPa), which is used to determine the height above the ground. Position and velocity measurements are provided at 4 Hz using an integrated GPS receiver module (u-blox TIM-LP). The position accuracy is about ±2 m for the horizontal position and about ±5 m for altitude. The PPM interface allows for software interpretation of radio control (RC) receiver commands (PPM) and switching between autonomous and manual flight modes.
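As a rough illustration of how such a pressure reading can be turned into a height estimate, the sketch below combines the nominal MPXH6115A transfer function from the device datasheet (Vout ≈ Vs(0.009·P − 0.095), P in kPa) with the standard barometric formula. The supply voltage, function names, and reference-pressure handling are assumptions added for illustration; they are not taken from the paper.

```c
/* Sketch: convert the MPXH6115A analog output to height above a reference point.
 * Assumptions (not from the paper): 5 V supply, nominal datasheet transfer
 * function Vout = Vs*(0.009*P - 0.095) with P in kPa, and the standard
 * barometric formula for relative height. */
#include <math.h>

#define VSUPPLY 5.0          /* sensor supply voltage [V] */

/* Raw ADC voltage -> static pressure [kPa] */
static double pressure_kPa(double v_out)
{
    return (v_out / VSUPPLY + 0.095) / 0.009;
}

/* Height [m] above the point where the reference pressure p_ref was recorded
 * (e.g., sampled on the ground before takeoff). */
static double baro_height_m(double p_kPa, double p_ref_kPa)
{
    return 44330.0 * (1.0 - pow(p_kPa / p_ref_kPa, 0.1903));
}
```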

3.2.3. Autopilot Architecture

When designing the embedded system, much attention has been paid to simplicity, modularity, and safety. A diagram showing the interaction between the different components onboard the vehicle is shown in Figure 3. All sensor data, including IMU measurements (three accelerations, three angular rates, and three magnetometer readings), GPS data [latitude, longitude, altitude, and three translational velocities in the north-east-down (NED) frame], PS data (height), and the RC receiver decoded signals (throttle, pitching/rolling/yawing torques, switch, communication status), are sent from the MNAV to the Gumstix FCC through the serial port RS-232. For wireless communication with the ground control station (GCS), we have mounted the wifistix card from Gumstix on the 92-pin connector of the Gumstix motherboard, thereby providing a communication module with high bandwidth (about 50 Mbits/s). The communication range is about 500 m, but it can be increased up to 800 m by reducing the communication bandwidth to 2 Mbits/s.

The FCC has three main tasks: (1) read data from the MNAV sensor and estimate the three-dimensional (3D) pose (position and orientation) of the vehicle, (2) implement the guidance and control algorithms, and (3) manage the wi-fi communication (uplink and downlink, or telemetry) with the GCS. The X-3D-BL hobby quadrotor is designed to be controlled manually through an RC transmitter–receiver. It has no high-level command and data interface. Thus, in order to interface our autopilot with the X-3D-BL board, we have added a small AVR microcontroller (Atmega32) that receives the control inputs from the FCC and generates a PPM signal. In fact, the four control inputs (total thrust, pitching torque, rolling torque, and yawing torque), computed by the control system, are sent from the main FCC to the AVR microcontroller through the serial port RS-232. They are then encoded into a PPM signal, which is sent to the X-base board, thereby simulating an RC receiver signal. From Figure 3, we can see that the sixth channel of the RC receiver is also connected to the auxiliary AVR microcontroller. This channel is used to terminate the flight at any moment if the embedded system is not behaving correctly.
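For readers unfamiliar with PPM, the following sketch shows one common way such an encoder can be written on an AVR-class microcontroller. The frame layout (eight channels, 1,000–2,000-μs pulses, roughly 20-ms frame), the helper routines ppm_pin_set() and delay_us(), and the channel assignment are illustrative assumptions; this is not the firmware used on the vehicle.

```c
/* Sketch of the PPM encoding step on the auxiliary AVR (illustrative only).
 * A PPM frame is a train of fixed-width low pulses whose spacing encodes each
 * channel as a 1000-2000 us value; the remainder of the ~20 ms frame is a sync gap. */
#include <stdint.h>

#define N_CH        8
#define FRAME_US    20000u
#define PULSE_US    400u     /* fixed separator pulse width */

/* Provided elsewhere (hypothetical): timer delay and pin toggling on the PPM line. */
void ppm_pin_set(int level);
void delay_us(uint16_t us);

/* ch[i] in microseconds, already clamped to [1000, 2000].
 * Channels 0..3 could carry thrust and the three torque commands from the FCC;
 * a later channel could mirror the RC flight-termination switch. */
void ppm_send_frame(const uint16_t ch[N_CH])
{
    uint32_t used = 0;
    for (int i = 0; i < N_CH; i++) {
        ppm_pin_set(0); delay_us(PULSE_US);                    /* separator pulse   */
        ppm_pin_set(1); delay_us((uint16_t)(ch[i] - PULSE_US)); /* channel value gap */
        used += ch[i];
    }
    ppm_pin_set(0); delay_us(PULSE_US);                         /* closing pulse     */
    ppm_pin_set(1); delay_us((uint16_t)(FRAME_US - used - PULSE_US)); /* sync gap    */
}
```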

Our platform also includes an analog wireless camera that is used as the main mission sensor. The vision module is composed of a camera (KX171 from Range Video), a 1.3-GHz video transmitter and receiver, and a tilt system that is driven by the FCC through the MNAV servo driver interface.

Figure 3. The hardware components of the autopilot and their interaction onboard the vehicle. The flight avionics of our platform include a Gumstix microcontroller, an IMU, a GPS, static and dynamic PSs, and a wireless camera. (Diagram blocks: the Gumstix connex flight computer (400 MHz) with a 2.4-GHz wi-fi telemetry card; the Crossbow MNAV sensor, whose Atmel 128 processor and 16-bit A/D collect 2 pressure sensors, 3 accelerometers, 3 gyros, 3 magnetometers, and 3 temperature sensors and provide an 8-channel servo PWM output, connected to the FCC over RS-232; the 40-MHz RC receiver, whose sixth channel is wired to the flight termination input; the AVR microcontroller generating the PPM signal for the original X-3D-BL electronics (X-3D internal controller and X-Base interface card, driving the four brushless motor controllers over I2C); and the tilt camera with its 1.3-GHz analog video transmitter.)


Table I. Platform components.

Part                  Description                                                  Weight (g)   Cost ($)
Airframe              Carbon fiber frame + CSM-core                                95           –
Propulsion            4 brushless motors + 4 X-BLDC controllers + 4 propellers     156          ≈1,400
X-3D-BL electronics   X-3D + X-base                                                23           –
Battery               Thunder Power Li-polymer, 2,100 mAh                          143          –
RC receiver           Futaba, 40 MHz (6 channels)                                  10           50
Sensors               Crossbow MNAV100CA                                           35           1,500
Flight computer       Gumstix connex 400 MHz + wi-fi module                        28           230
Antennas              GPS antenna + wi-fi antenna                                  38           0
PPM generator         Atmega32 microcontroller                                     3            10
Video system          Kx-171 camera + 1.3-GHz video transmitter + tilt system      77           410
Other                 Pad + autopilot board/support + ...                          50           20
Total                                                                              658          3,620

Table I shows the distribution of mass and cost among the different subsystems of the vehicle.

4. GN&C SYSTEMS

GN&C algorithms are the core of the flight software of the MAV to successfully complete the assigned mission through autonomous flight. They offer the MAV the ability to follow waypoints¹ and to execute other preprogrammed maneuvers such as automatic takeoff and landing, hovering, and trajectory tracking. As shown in Figure 4, the overall system architecture considered in this paper consists of six layers: (1) the GCS for mission definition and high-level decision making, (2) guidance for path planning and trajectory generation, (3) navigation for vehicle state vector estimation, (4) the nonlinear controller for stabilization and trajectory tracking, (5) communication with the GCS and the interface between the autopilot and the vehicle, and (6) the MAV platform. All GN&C algorithms have been implemented in Visual C using multithread programming. In our design, the autopilot software is implemented as a process within the Linux operating system. It is composed of smaller units of computation, known as tasks (or threads). By using POSIX-style semantics, these tasks are called and scheduled separately (Jang & Liccardo, 2006) while sharing CPU and memory resources.

¹Here, waypoint navigation is defined as the process of automatically following a predetermined path defined by a set of geodetic (GPS) coordinates.
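As a minimal sketch (not the actual flight software), the following shows how such a task structure can be laid out with POSIX threads; the task names, rates, and loop bodies are placeholders chosen for illustration.

```c
/* Minimal sketch of a multithreaded autopilot task layout (illustrative rates/names). */
#include <pthread.h>
#include <unistd.h>

static void *navigation_task(void *arg)   /* e.g., 50 Hz sensor read + state estimation */
{
    (void)arg;
    for (;;) { /* read MNAV data, run AHRS/GPS/INS filters */ usleep(20000); }
    return NULL;
}

static void *control_task(void *arg)      /* e.g., 50 Hz guidance + nonlinear controller */
{
    (void)arg;
    for (;;) { /* generate references, compute thrust and torques */ usleep(20000); }
    return NULL;
}

static void *telemetry_task(void *arg)    /* e.g., 10 Hz wi-fi uplink/downlink with the GCS */
{
    (void)arg;
    for (;;) { /* exchange packets with the GCS */ usleep(100000); }
    return NULL;
}

int main(void)
{
    pthread_t nav, ctl, tlm;
    pthread_create(&nav, NULL, navigation_task, NULL);
    pthread_create(&ctl, NULL, control_task, NULL);
    pthread_create(&tlm, NULL, telemetry_task, NULL);
    pthread_join(nav, NULL);   /* tasks run until the process is terminated */
    return 0;
}
```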

Figure 4. Overall software architecture of the autopilot. GN&C algorithms are implemented onboard the Gumstix microcontroller using multithread programming. (Diagram blocks: GCS – mission planning and high-level decision making, virtual cockpit and flight data display, image processing algorithms for the vision system; Guidance – reception of GCS commands, homing procedure, fault detection and safety procedures, mode transition management, path planning, trajectory generation; Navigation/state estimation – AHRS, GPS/INS fusion, PS/INS fusion; Hierarchical nonlinear controller – altitude control, horizontal motion control, nonlinear coupling, orientation control; Interface – autonomous/manual flight mode, RC commands, PPM signal generation and flight termination system, sensor data acquisition, telemetry, UAV and sensors.)

4.1. GCS

The GCS shown later in Figure 17 provides several critical functionalities. The main role of the GCS is to display the flight information downloaded from the MAV in real time and to upload the various operating commands such as the operation mode, guidance commands, and controller gains. Furthermore, images from the onboard camera are displayed in real time at the GCS, which also hosts image processing algorithms for visual navigation. The developed GCS interface allows the operator to switch between different flight modes (GPS-based flight, optic flow–based flight, visual navigation, and target tracking) and to select appropriate navigation tasks (takeoff, landing, hovering, waypoint navigation, trajectory tracking, flight termination, etc.).

The GCS software is implemented on a standard laptop using C++ with the MFC, OpenGL, and OpenCV libraries. It includes three different threads that run independently while sharing memory resources. The first thread runs at 10 Hz and handles wireless communication with the onboard FCC, receiving and decoding flight data (telemetry) and sending visual estimates and other navigation commands. The communication program uses sockets and the UDP protocol over a wi-fi network. The second thread runs at 5 Hz and implements the GCS interface, which allows the display of flight data in real time and the sending of flight commands by clicking the appropriate buttons. The third thread, running at 12 Hz, implements the image processing algorithms for optic flow computation and feature tracking.
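A minimal sketch of what the telemetry receive path of such a UDP-based communication thread can look like is given below; the port number, packet size, and decoding step are illustrative assumptions, not the actual GCS code.

```c
/* Sketch of a GCS telemetry receive loop over UDP (port and payload handling are assumptions). */
#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

#define TELEMETRY_PORT 5005   /* hypothetical port */

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family      = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port        = htons(TELEMETRY_PORT);
    bind(fd, (struct sockaddr *)&addr, sizeof addr);

    char buf[256];
    for (;;) {
        ssize_t n = recvfrom(fd, buf, sizeof buf, 0, NULL, NULL);
        if (n > 0) {
            /* decode the flight-data packet and refresh the virtual cockpit display */
            printf("received %zd-byte telemetry packet\n", n);
        }
    }
    close(fd);  /* not reached */
    return 0;
}
```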

4.2. Guidance Strategy

The proposed guidance system takes inputs from the navigation system and uses targeting information to send signals to the flight control system that will allow the vehicle to reach its destination within the operating constraints of the vehicle. In the current system, the flight plan or mission is defined by a human operator using the GCS interface. This plan is expressed in terms of desired 3D waypoints and other maneuvers such as hovering, landing, trajectory tracking, and target tracking. During mission execution, the flight plan can be modified or updated in real time. These commands are transmitted from the GCS to the processor of the onboard avionics unit via a 2.4-GHz wi-fi link.

The embedded guidance system has the role of selecting the appropriate maneuver and generating the corresponding reference trajectories. It is composed of three main parts: (1) path planning, (2) mode transition management, and (3) trajectory generation. Although this guidance system is simple, it provides sufficient capabilities for UAVs operating in obstacle-free environments for many applications, such as reconnaissance and exploration. Advanced guidance and navigation capabilities are necessary to achieve more complex tasks and missions.

4.2.1. Path Planning

The transmitted waypoints from the GCS are expressed in geodetic LLA (latitude–longitude–altitude) coordinates. They are then converted into the NED local frame, where they are expressed in meters relative to the starting point. A path planning routine constructs a path, which is defined by a sequence of N desired waypoints and desired speeds of travel along the path segment S_i connecting waypoint wpt_i to wpt_{i+1}. The path planner is also required to have some strategy to detect when waypoint wpt_i has been reached by the rotorcraft in order to switch to the next one (wpt_{i+1}). The proposed strategy attempts to achieve the desired position initially with 10-m accuracy in both the lateral and longitudinal directions. It then takes an additional 20 s to achieve the position with 2-m accuracy and finally takes an additional 5 s to achieve higher accuracy. If the waypoint is not achieved to within 2 m in 20 s, the path planner assumes that external factors (i.e., wind) are interfering and ends the attempt. Flight paths that passed within these thresholds are considered to have reached the waypoint, so the guidance law then directs the UAV to reach the next waypoint. In this way, we are guaranteed some baseline level of performance (10 m), and the vehicle will attempt to achieve a higher level of accuracy without excessive time delays.
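The acceptance logic described above can be read as a small state machine. The sketch below is one illustrative transcription of that strategy; the state names and call interface are ours, not the paper's implementation.

```c
/* Sketch of the waypoint acceptance logic (10 m, then 2 m within 20 s, then 5 s settling).
 * Initialize the monitor with state = WP_COARSE before each new waypoint. */
#include <stdbool.h>

enum wp_state { WP_COARSE, WP_REFINE, WP_SETTLE, WP_REACHED };

struct wp_monitor {
    enum wp_state state;
    double t_enter;          /* time when the current state was entered [s] */
};

/* Call at the guidance rate with the horizontal distance to the active waypoint. */
bool waypoint_reached(struct wp_monitor *m, double dist_m, double t_now)
{
    switch (m->state) {
    case WP_COARSE:                         /* baseline 10-m accuracy */
        if (dist_m < 10.0) { m->state = WP_REFINE; m->t_enter = t_now; }
        break;
    case WP_REFINE:                         /* up to 20 s to get within 2 m */
        if (dist_m < 2.0)  { m->state = WP_SETTLE; m->t_enter = t_now; }
        else if (t_now - m->t_enter > 20.0) m->state = WP_REACHED;  /* wind: end the attempt */
        break;
    case WP_SETTLE:                         /* 5 more seconds for higher accuracy */
        if (t_now - m->t_enter > 5.0) m->state = WP_REACHED;
        break;
    case WP_REACHED:
        break;
    }
    return m->state == WP_REACHED;
}
```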

4.2.2. Mode Transition Management

The mode transition manager has the role of managing three autonomous flight modes (GPS-based flight, vision-based flight, and target tracking mode) and various flight behaviors (landing, hovering, trajectory tracking, etc.). These different modes and maneuvers are either selected by the human operator through the GCS interface or by the Fail-Safe system, described in Subsection 4.5. To minimize errors and reduce risk, each flight mode and maneuver has preconditions that must be satisfied; otherwise the task will be rejected. For example, when the GPS signal is weak or lost, the system switches automatically to vision-based flight, and the landing procedure is activated after some waiting time to achieve a safe vision-based landing.

4.2.3. Trajectory Generation

The goal of this routine is to generate a physically feasible trajectory according to the flight behavior selected by the mode transition manager. In the case of waypoint navigation, for example, the trajectory generation routine takes straight-line path segments and produces feasible time-parameterized desired trajectories, namely the desired 3D position [x_d(t), y_d(t), z_d(t)], the desired heading ψ_d(t), and their time derivatives [ẋ_d(t), ẏ_d(t), ż_d(t), ψ̇_d(t)]. The kinematic model used for trajectory generation uses specifiable limits on the maximum speed and acceleration the rotorcraft may have during a maneuver. In fact, reference trajectories are generated in two steps: first, the desired velocity trajectory is computed from the specified accelerations and the desired maximum speed during forward flight. Second, the position reference trajectory is obtained by integrating the velocity trajectory with respect to time. When the vehicle approaches a desired waypoint, it starts decelerating in order to reach the waypoint with zero velocity. This simple but effective strategy allows an accurate and robust transition from one waypoint to the next without overshoot on sharp corners. Furthermore, the obtained position and velocity trajectories are differentiable with respect to time, which is needed for some control designs.

To achieve nonaggressive takeoff and soft landing, smooth reference trajectories z_d(t) along the vertical axis are generated in real time based on the desired ascent/descent velocity and the actual height of the vehicle. For search operations, a spiral trajectory is generated in real time, thereby allowing the rotorcraft to explore some target area. Furthermore, this spiral trajectory allows evaluation of the performance of the developed flight controller for tracking nonstraight and arbitrary trajectories. Our implementation also allows the easy incorporation of many other trajectories.

Although the resulting trajectories are not necessarily optimal, the low computational burden of this trajectory generator ensures its implementation on small onboard microprocessors.
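The two-step velocity/position scheme described above is essentially a speed-limited profile with a braking constraint so that the waypoint is reached with zero velocity. A minimal one-dimensional sketch is shown below, assuming illustrative limits and interfaces; this is not the onboard generator itself.

```c
/* Sketch of a 1-D reference-trajectory step along a path segment:
 * accelerate at a_max up to v_max, cruise, then decelerate so the waypoint is
 * reached with zero velocity; the desired position is the integral of the
 * desired velocity.  The same routine can be applied per axis or along the
 * path abscissa. */
#include <math.h>

typedef struct { double pos, vel; } ref_state_t;

void trajectory_step(ref_state_t *r, double dist_to_go, double v_max,
                     double a_max, double dt)
{
    /* speed that still allows stopping at the waypoint with deceleration a_max */
    double v_brake = sqrt(2.0 * a_max * (dist_to_go > 0.0 ? dist_to_go : 0.0));
    double v_des   = fmin(v_max, v_brake);

    /* rate-limit the commanded velocity by the acceleration bound */
    if (r->vel < v_des) r->vel = fmin(r->vel + a_max * dt, v_des);
    else                r->vel = fmax(r->vel - a_max * dt, v_des);

    /* integrate the desired velocity to obtain the desired position */
    r->pos += r->vel * dt;
}
```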

4.3. Navigation System

The main objective of a navigation system is to provide information about the vehicle's self-motion (orientation, position, etc.) and its surrounding environment (obstacles, target, etc.). This information is then exploited by the flight controller for motion control and by the guidance system for mission achievement. The navigation system that we have developed and implemented on our platform can be split into two main parts: (1) a conventional navigation system for full state estimation, which includes an attitude heading reference system (AHRS), GPS/INS, and a PS/inertial navigation system (PS/INS); and (2) an advanced visual navigation system, which includes an optic flow–based localization module and a visual odometer for flight control and target tracking.

a. AHRS: An extended Kalman filter (EKF) is used to fuse the IMU raw data (3 gyros, 3 accelerometers, 3 magnetometers) in order to provide estimates of the vehicle's attitude, heading, and angular rates at an updating rate of 50 Hz.

b. GPS/INS: GPS measurements (3D position in LLA coordinates and the 3D velocity vector in the NED frame) are fused with the INS data using an EKF. The filtered and propagated position and velocity estimates are provided at an updating rate of 10 Hz.

c. PS/INS: The height estimated by the GPS/INS is not accurate enough (5–10-m error) to allow good altitude control. The altitude controller is thus based on the height estimated by the static PS after fusing it with the vertical acceleration using a kinematic Kalman filter. We have thereby improved the height estimation accuracy to ±1.5 m in normal flight conditions. However, the low-cost barometric PS used is sensitive to weather conditions such as wind and temperature. (A minimal sketch of such a filter is given at the end of this section.)

d. Optic flow–based localization system: The standard navigation sensors and capabilities onboard the rotorcraft have been enhanced to include vision-based navigation. Details about this system as well as experimental results can be found in our previous paper (Kendoul, Fantoni, & Nonami, 2009). Functionally, the proposed vision system is based on optic flow computation (offboard processing) and solving the structure-from-motion problem (onboard processing) for vehicle ego-motion estimation and obstacle detection. Because it is necessary for the rotorcraft to fly between buildings and indoors during search missions, a strong GPS signal will probably not be available. To combat drift in the navigation solution, position and velocity updates are based on the vision system.

e. Visual relative navigation system: Because many real-world applications may require relative navigation capability, we have augmented our navigation system with a second vision algorithm that provides the relative position and velocity to some selected target. The proposed vision system determines the vehicle's relative position, velocity, and height by extracting and tracking visual features in the video sequence of the onboard camera. The target can be chosen either by the operator, by selecting the desired target on the image window displayed at the GCS, or by another high-level recognition algorithm. In the present configuration, only the first solution is available. This vision system is of great interest and importance for search and rescue missions because it allows the MAV to fly in close proximity to objects of interest (windows, etc.) and to track targets (survivors, etc.) in order to collect more accurate information. A detailed description of this vision system and its experimental validation can be found in Kendoul, Nonami, Fantoni, and Lozano (2009). In Section 6, we also present experimental results from a real-time flight test in which the rotorcraft tracks a moving ground target using visual estimates.

It is not currently possible to connect a camera directly to the FCC; therefore, it is necessary to do the image processing offboard. The present configuration transmits the video signal to the GCS for all image processing (optic flow computation and feature tracking), and then the results are relayed back to the onboard FCC as inputs to the visual navigation system running on the FCC at 10 Hz.
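For concreteness, the PS/INS height fusion mentioned in item (c) above can be realized with a two-state kinematic Kalman filter of the following form. The structure (constant-acceleration prediction, barometric height update) is standard; the noise parameters and interfaces below are illustrative assumptions rather than the values used on the vehicle.

```c
/* Sketch of a kinematic Kalman filter fusing barometric height with the
 * vertical accelerometer; state x = [height, vertical speed]. */
typedef struct { double z, vz, P[2][2]; } alt_kf_t;

void alt_kf_predict(alt_kf_t *kf, double az, double dt, double q)
{
    /* x <- F x + B az, with F = [[1, dt],[0, 1]], B = [dt^2/2, dt] */
    kf->z  += kf->vz * dt + 0.5 * az * dt * dt;
    kf->vz += az * dt;
    /* P <- F P F' + Q, with process noise q on the acceleration input */
    double p00 = kf->P[0][0], p01 = kf->P[0][1], p10 = kf->P[1][0], p11 = kf->P[1][1];
    kf->P[0][0] = p00 + dt*(p01 + p10) + dt*dt*p11 + q*dt*dt*dt*dt/4.0;
    kf->P[0][1] = p01 + dt*p11 + q*dt*dt*dt/2.0;
    kf->P[1][0] = p10 + dt*p11 + q*dt*dt*dt/2.0;
    kf->P[1][1] = p11 + q*dt*dt;
}

void alt_kf_update(alt_kf_t *kf, double z_baro, double r)
{
    /* measurement z_baro = H x + v, with H = [1 0] and measurement noise r */
    double s  = kf->P[0][0] + r;
    double k0 = kf->P[0][0] / s, k1 = kf->P[1][0] / s;
    double innov = z_baro - kf->z;
    kf->z  += k0 * innov;
    kf->vz += k1 * innov;
    double p00 = kf->P[0][0], p01 = kf->P[0][1];
    kf->P[0][0] -= k0 * p00;  kf->P[0][1] -= k0 * p01;   /* P <- (I - K H) P */
    kf->P[1][0] -= k1 * p00;  kf->P[1][1] -= k1 * p01;
}
```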

4.4. Nonlinear Hierarchical Controller

To enable more complex missions for autonomous RUAVs, and for quadrotor MAVs in particular, we have developed a multipurpose nonlinear controller that allows accurate hovering and precise trajectory tracking but also waypoint navigation and moving target tracking. It takes as inputs reference trajectories (guidance system outputs) and state estimates (navigation system outputs) and produces the forces and torques that are required to control the vehicle's motion. The theory of the designed nonlinear controller and its stability analysis are provided in Section 5.

4.5. Safety Procedures and Flight Termination System

When developing autonomous UAVs and MAVs for civilian applications, some level of safety must be guaranteed. Therefore, we have implemented several safety procedures on our rotorcraft to keep single failures from having catastrophic effects on both people and the vehicle itself.

4.5.1. Preprogrammed Fail-Safes

The developed autopilot has a number of preprogrammed fail-safes instructing the vehicle to perform a certain task when some unexpected problem occurs. Some of those are discussed here:

• Wi-fi communication: If the vehicle loses its signal with the GCS for more than 5 s, the hovering mode is automatically enabled. If communication is not reestablished within 20 s, then the vehicle will switch to the "home mode," and it will navigate to a preassigned location (home); see the sketch after this list.

• Fly zone: When the vehicle travels out of the fly zone, the hovering mode is activated, waiting for new commands from the GCS to correct the trajectory or to recover the vehicle.

• GPS unit failure: In the case of GPS outages or failure, the rotorcraft will achieve hovering using visual estimates. Depending on the situation, the operator may continue the mission using the vision system or activate the safe landing procedure.

• Undesired behavior: If the aircraft experiences some undesired behavior, then the safety pilot can switch to manual flight or terminate the flight.
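As a rough illustration of the communication fail-safe timing in the first bullet, the sketch below monitors the time since the last GCS packet and escalates from the current mode to hovering and then to the home mode. The mode names and the call interface are illustrative; only the 5-s and 20-s thresholds come from the text.

```c
/* Sketch of the wi-fi link fail-safe timing (hover after 5 s of silence, home after 20 s). */
#include <stdbool.h>

enum flight_mode { MODE_MISSION, MODE_HOVER, MODE_HOME };

enum flight_mode comm_failsafe(bool link_ok, double t_now,
                               double *t_last_packet, enum flight_mode current)
{
    if (link_ok) { *t_last_packet = t_now; return current; }
    double silence = t_now - *t_last_packet;
    if (silence > 20.0) return MODE_HOME;    /* navigate to the preassigned home location */
    if (silence > 5.0)  return MODE_HOVER;   /* hold position and wait for the link */
    return current;
}
```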

4.5.2. Flight Termination System

The safety procedures discussed above are implemented in the main FCC. A malfunction of the flight computer may occur due to some software bug or hardware problem; in that case, none of the programmed safety procedures can be executed. To provide more safety, a simple independent flight termination system (FTS) is used to reduce and decrement the thrust from its nominal value to zero, thereby resulting in a soft emergency landing.

As discussed in Subsection 3.2, a secondary microcontroller is used as the primary interface to the physical vehicle. Moreover, it forms an FTS because it implements a flight termination program that can be activated in three different ways:

1. At any point in the mission, the operator can terminate the flight by pushing the "emergency" button on the GCS interface.

2. If the main FCC stops working for any reason, then the flight termination program is automatically activated.

3. The sixth channel of the RC receiver is directly connected to the FTS. Therefore, the safety pilot can use this channel to terminate the flight at any time, provided that the vehicle is within radio communication range.

These safety precautions are especially important during the autopilot development and testing stages, when the system is vulnerable due to programming errors, bad controller tuning, etc. We have successfully tested all the listed safety procedures without damaging the vehicle.

5. NONLINEAR FLIGHT CONTROLLER: MODELING, DESIGN, AND STABILITY

Generally, it is difficult to design a reliable 3D flight controller for rotorcraft UAVs/MAVs, because helicopter dynamics is inherently unstable, highly nonlinear, and coupled in its inner axes. In this research, we designed a controller based on the nonlinear model of the rotorcraft. Our objective is to design a 3D flight controller that performs well in practice as well as in theory. This is accomplished by deriving a mathematical model for miniRUAV dynamics and exploiting its structural properties to transform it into two cascaded subsystems coupled by a nonlinear interconnection term. A partial passivation design has been used to synthesize control laws for each subsystem, thereby resulting in a hierarchical and nonlinear inner–outer-loop controller. The inner loop, with fast dynamics, performs attitude tracking and generates the required torques. The outer loop, with slow dynamics, is used to generate the thrust and the reference angles required to follow a commanded translational trajectory. The asymptotic stability of the entire connected system is proven by exploiting the theories of systems in cascade. The resulting nonlinear controller is thus simple to implement and easy to tune, and it results in good flight performance.

5.1. Rotorcraft Dynamics Model

Controller design for rotorcraft UAVs and MAVs is a complex matter and typically requires the availability of a mathematical model of the vehicle dynamics. A rotorcraft can be considered a rigid body that incorporates a mechanism for generating the required forces and torques. The derivation of the nonlinear dynamics is first performed in the body-fixed coordinates B and then transformed into the NED inertial frame I. Let {e1, e2, e3} denote unit vectors along the respective inertial axes and {xb, yb, zb} denote unit vectors along the respective body axes, as defined in Figure 5.

Figure 5. The quadrotor body diagram with the associated forces and frames.

The equations of motion for a rigid body of mass m ∈ R and inertia J ∈ R^{3×3}, subject to an external force F_ext ∈ R^3 and torque τ ∈ R^3, are given by the following Newton–Euler equations, expressed in the body frame B:

    m \dot{V} + \Omega \times m V = F_{ext},
    J \dot{\Omega} + \Omega \times J \Omega = \tau,                                   (1)

where V = (u, v, w) and Ω = (p, q, r) are, respectively, the linear and angular velocities in the body-fixed reference frame. The translational force F_ext combines gravity, main thrust, and other body force components.

Using Euler angle parameterization and the aeronautical convention "ZYX," the airframe orientation in space is given by a rotation matrix R from B to I, where R ∈ SO(3) is expressed as follows:

    R = R_\psi R_\theta R_\phi =
    \begin{pmatrix}
      c_\theta c_\psi & s_\phi s_\theta c_\psi - c_\phi s_\psi & c_\phi s_\theta c_\psi + s_\phi s_\psi \\
      c_\theta s_\psi & s_\phi s_\theta s_\psi + c_\phi c_\psi & c_\phi s_\theta s_\psi - s_\phi c_\psi \\
      -s_\theta       & s_\phi c_\theta                        & c_\phi c_\theta
    \end{pmatrix},                                                                     (2)

where η = (φ, θ, ψ) denotes the vector of the three Euler angles and s and c are abbreviations for sin(·) and cos(·).

By considering this transformation between the body-fixed reference frame and the inertial reference frame, it is possible to separate the gravitational force from the other forces and write the translational dynamics in I as follows:

    \dot{\xi} = \upsilon,
    m \dot{\upsilon} = -R F + m g e_3,                                                 (3)

where ξ = (x, y, z) and υ = (ẋ, ẏ, ż) are the rotorcraft position and velocity in I. In the above notation, g is the gravitational acceleration and F is the resulting force vector in B (excluding the gravity force) acting on the airframe.

For control design purposes, we will transform the attitude dynamics into an appropriate form using Euler angle parameterization. Let us first recall the kinematic relation between Ω and η:

    \dot{\eta} = \Phi(\eta)\,\Omega,                                                   (4)

where the Euler matrix Φ(η) is given by

    \Phi(\eta) = \begin{pmatrix}
      1 & \sin\phi \tan\theta & \cos\phi \tan\theta \\
      0 & \cos\phi            & -\sin\phi \\
      0 & \sin\phi \sec\theta & \cos\phi \sec\theta
    \end{pmatrix}.                                                                     (5)

It is important to note that the matrix Φ has a singularity at θ = ±π/2, and its inverse matrix Ψ(η) = Φ^{-1}(η) is given by

    \Psi(\eta) = \begin{pmatrix}
      1 & 0         & -\sin\theta \\
      0 & \cos\phi  & \cos\theta \sin\phi \\
      0 & -\sin\phi & \cos\theta \cos\phi
    \end{pmatrix}.                                                                     (6)

By differentiating Eq. (4) with respect to time and recalling the second equation of (1), we write

    \ddot{\eta} = \dot{\Phi}(\eta)\Omega + \Phi(\eta)\dot{\Omega}
                = \dot{\Phi}(\eta)\Psi(\eta)\dot{\eta} - \Phi(\eta)J^{-1}\mathrm{sk}(\Omega)J\Omega + \Phi(\eta)J^{-1}\tau.

The sk operation is defined here from R^3 to R^{3×3} such that sk(x) is the skew-symmetric matrix associated with the vector product, sk(x)y := x × y for any vector y ∈ R^3.

By multiplying both sides of the last equation by Ψ(η)^T J Ψ(η), the attitude dynamics of the quadrotor can be expressed in the form (Olfati-Saber, 2001)

    M(\eta)\ddot{\eta} + C(\eta,\dot{\eta})\dot{\eta} = \Psi(\eta)^T \tau,             (7)

with a well-defined positive definite inertia matrix M(η) = Ψ(η)^T J Ψ(η). The Coriolis and centrifugal matrix C(η, η̇) is given by²

    C(\eta,\dot{\eta}) = -\Psi(\eta)^T J\, \Psi(\eta)\dot{\Phi}(\eta)\Psi(\eta)
                         + \Psi(\eta)^T \mathrm{sk}[\Psi(\eta)\dot{\eta}]\, J\, \Psi(\eta).   (8)

By considering the symmetry of the quadrotor vehicle, the inertia matrix J can be approximated by J = diag(J_1, J_2, J_3). Therefore, the matrix M(η) is given by

    M(\eta) = \begin{pmatrix}
      J_1           & 0                                  & -J_1 s_\theta \\
      0             & J_2 c_\phi^2 + J_3 s_\phi^2        & (J_2 - J_3) c_\theta c_\phi s_\phi \\
      -J_1 s_\theta & (J_2 - J_3) c_\theta c_\phi s_\phi & J_1 s_\theta^2 + c_\theta^2 (J_2 s_\phi^2 + J_3 c_\phi^2)
    \end{pmatrix}.                                                                     (9)

Recalling Eqs. (3) and (7), the nonlinear model of the rotorcraft can be expressed in the following form:

    m \ddot{\xi} = -R F + m g e_3,
    M(\eta)\ddot{\eta} + C(\eta,\dot{\eta})\dot{\eta} = \Psi(\eta)^T \tau.             (10)

The expressions of the forces and moments in Eqs. (10) depend on the rotorcraft configuration.

A quadrotor helicopter is an underactuated mechanical system with six degrees of freedom (DOF) and four control inputs, which are the total thrust u, produced by the four rotors, and the torques (τ_φ, τ_θ, τ_ψ) obtained by varying the rotor speeds. Therefore, the force and torque vectors in Eqs. (10) can be expressed as F = (0, 0, u)^T and τ = (τ_φ, τ_θ, τ_ψ)^T. Aerodynamic effects, rotor dynamics, and gyroscopic effects are ignored. This is justified by the fact that mini-quadrotor platforms have a small airframe, fly at relatively low speeds, and have small propellers. For a more accurate model of quadrotor helicopters, one can refer to Bouabdallah et al. (2005) and Hoffmann et al. (2007) and the references therein.

²We have exploited the kinematic relation Ω = Ψ(η)η̇ of Eq. (4).

A common simplified model used to represent the relation between the control inputs (u, τ_φ, τ_θ, τ_ψ) and the rotor speeds (w_1, w_2, w_3, w_4) is given by

    \begin{pmatrix} u \\ \tau_\phi \\ \tau_\theta \\ \tau_\psi \end{pmatrix} =
    \begin{pmatrix}
      \rho   & \rho    & \rho   & \rho \\
      0      & -l\rho  & 0      & l\rho \\
      -l\rho & 0       & l\rho  & 0 \\
      \kappa & -\kappa & \kappa & -\kappa
    \end{pmatrix}
    \begin{pmatrix} w_1^2 \\ w_2^2 \\ w_3^2 \\ w_4^2 \end{pmatrix},                    (11)

where (ρ, κ) are positive constants characterizing the propeller aerodynamics and l denotes the distance from the rotors to the center of mass.
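For implementation, the mixing relation (11) is typically inverted so that the controller's outputs (u, τ_φ, τ_θ, τ_ψ) can be mapped to individual rotor commands. The sketch below performs that inversion of Eq. (11); the function name, clipping policy, and interfaces are illustrative assumptions, not the vehicle's firmware.

```c
/* Sketch: invert the mixing relation (11) to obtain the squared rotor speeds
 * from the commanded thrust and torques (rho, kappa, l as defined in the text). */
void mix_inverse(double u, double tau_phi, double tau_theta, double tau_psi,
                 double rho, double kappa, double l, double w_sq[4])
{
    double a = u         / (4.0 * rho);
    double b = tau_phi   / (2.0 * l * rho);
    double c = tau_theta / (2.0 * l * rho);
    double d = tau_psi   / (4.0 * kappa);

    w_sq[0] = a - c + d;   /* w1^2 */
    w_sq[1] = a - b - d;   /* w2^2 */
    w_sq[2] = a + c + d;   /* w3^2 */
    w_sq[3] = a + b - d;   /* w4^2 */

    /* negative values (infeasible commands) should be clipped before taking square roots */
    for (int i = 0; i < 4; i++)
        if (w_sq[i] < 0.0) w_sq[i] = 0.0;
}
```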

The dynamical model considered for control design, and that can represent a wide range of RUAV configurations including the quadrotor helicopter, is given by

    m \ddot{\xi} = -u R(\eta) e_3 + m g e_3,
    M(\eta)\ddot{\eta} + C(\eta,\dot{\eta})\dot{\eta} = \Psi(\eta)^T \tau.             (12)

The rotorcraft dynamics described by system (12) can be considered as two connected subsystems in which the translational and rotational dynamics are coupled through the rotation matrix R(η). It is also important to note that the rotational dynamics do not depend on the translational components; on the other hand, the translational dynamics do depend on the angles via the rotation matrix R(η). In fact, the rotorcraft translation is a direct result of its attitude change, and it is controlled by directing the force vector in the appropriate direction. This structural property of rotorcraft UAVs is exploited here to design a nonlinear hierarchical controller.

Remark 1. The flight controller, designed in the next section, is based on nonlinear model (12). Therefore, it can be applied to control rotorcraft UAVs whose dynamics can be represented by system (12). However, for larger RUAVs, we believe that the control design should consider aerodynamic effects and rotor dynamics for more effective and accurate control (Huang, Hoffmann, Waslander, & Tomlin, 2009; Pounds, Mahony, & Corke, 2006).

5.2. Flight Controller Design

Controller design for nonlinear systems subject to strong coupling offers both practical significance and theoretical challenges. In this paper, the control design for small RUAVs, and the quadrotor MAV in particular, is addressed in three steps:

1. Decouple the translational and attitude dynamics by transforming nonlinear model (12) into two linear subsystems coupled by a nonlinear interconnection term.

2. Synthesize two independent controllers for the translation and rotation subsystems.

3. Prove the asymptotic stability of the entire connected closed-loop system by exploiting cascaded systems theories.

Because the attitude dynamics in Eqs. (12) is a fully actuated mechanical system for θ ≠ ±π/2, it is exact feedback linearizable. In fact, by applying the change of variables

    \tau = J \Psi(\eta)\tilde{\tau} + \Phi(\eta)^T C(\eta,\dot{\eta})\dot{\eta},       (13)

we obtain a 3D double integrator where τ̃ is the new control input. Hence, the dynamics of the rotorcraft in Eqs. (12) transforms into

    \ddot{x} = -\frac{1}{m} u (\cos\phi \sin\theta \cos\psi + \sin\phi \sin\psi),   \quad \ddot{\phi} = \tilde{\tau}_\phi,
    \ddot{y} = -\frac{1}{m} u (\cos\phi \sin\theta \sin\psi - \sin\phi \cos\psi),   \quad \ddot{\theta} = \tilde{\tau}_\theta,      (14)
    \ddot{z} = -\frac{1}{m} u \cos\theta \cos\phi + g,                              \quad \ddot{\psi} = \tilde{\tau}_\psi.

In contrast to previous works on quadrotors, our objectivehere is to design a multipurpose controller that can per-form autonomous stabilization of the vehicle but also accu-rate trajectory tracking for both position and attitude. Thesecapabilities are required to achieve many realistic applica-tions.

Let ξd(t), υd(t) = ξ̇d(t), ηd(t), and η̇d(t) be the desired position, velocity, attitude, and angular velocity vectors, respectively. The control objective is then to find control laws u = α(ξ, ξd, υ, υd) and τ̃ = β(η, ηd, η̇, η̇d) such that the tracking errors χ = (ξ − ξd, υ − υd)^T ∈ R^6 and e = (η − ηd, η̇ − η̇d)^T ∈ R^6 converge asymptotically to zero.

Remark 2. In fully autonomous mode, only the reference trajectories [ξd(t), υd(t), ψd(t), ψ̇d(t)] are given by the operator or computed by some high-level guidance system. The reference angles [φd(t), θd(t)] and their derivatives are computed by the outer-loop controller. The proposed control architecture also allows semiautonomous flight (attitude tracking control), in which the reference angles [φd(t), θd(t), ψd(t)] are directly given by the operator through the GCS software or the RC transmitter.

Proposition 1. The trajectory tracking control problem for rotorcraft UAVs represented by dynamical model (14) can be formulated as the control of two linear subsystems that are coupled by a nonlinear term Δ(u, ηd, eη), that is,

\dot{\chi} = \underbrace{A_1\chi + B_1(\mu - \ddot{\xi}_d)}_{f(\chi,\,\mu,\,\ddot{\xi}_d)} + \underbrace{\frac{1}{m}\,u\,H(\eta_d, e_\eta)}_{\Delta(u,\,\eta_d,\,e_\eta)},
\dot{e} = A_2 e + B_2(\tilde{\tau} - \ddot{\eta}_d), \qquad (15)

where the vector H(ηd, eη) ∈ R^6 represents dynamic inversion errors and the matrices A1 ∈ R^{6×6}, B1 ∈ R^{6×3}, A2 ∈ R^{6×6}, and B2 ∈ R^{6×3} are defined as

A_1 = A_2 = \begin{bmatrix} 0_{3\times 3} & I_3 \\ 0_{3\times 3} & 0_{3\times 3} \end{bmatrix}, \qquad B_1 = B_2 = \begin{bmatrix} 0_{3\times 3} \\ I_3 \end{bmatrix}. \qquad (16)

Proof. Let us define a virtual or intermediary control vector μ ∈ R^3 as follows:

\mu = f(u, \phi_d, \theta_d, \psi_d) = -\frac{1}{m}\,u\,R(\phi_d, \theta_d, \psi_d)\,e_3 + g\,e_3, \qquad (17)

where f(·) : R^3 → R^3 is a continuous invertible function. Physically, the control vector μ corresponds to the desired force vector. Its magnitude is the total thrust u produced by the propellers, and its orientation is defined by the body attitude (φ, θ, ψ). φd, θd, and ψd in Eq. (17) are thus the desired roll, pitch, and yaw angles.

By recalling Eq. (17), the components of μ are given by

\mu_x = -\frac{1}{m}\,u\,(\cos\phi_d\sin\theta_d\cos\psi_d + \sin\phi_d\sin\psi_d),
\mu_y = -\frac{1}{m}\,u\,(\cos\phi_d\sin\theta_d\sin\psi_d - \sin\phi_d\cos\psi_d), \qquad (18)
\mu_z = -\frac{1}{m}\,u\,\cos\theta_d\cos\phi_d + g.

(μx, μy, μz) are the force vector components along the X, Y, and Z axes that are needed for tracking some reference trajectory. These desired control inputs are computed by the outer-loop controller. They are then used to compute the desired force vector magnitude and the desired attitude angles, (u, φd, θd) = f^{-1}(μx, μy, μz). From Eqs. (18), one can solve for u and (φd, θd) as

u = m\sqrt{\mu_x^2 + \mu_y^2 + (g - \mu_z)^2},
\phi_d = \sin^{-1}\!\left(-\,m\,\frac{\mu_x\sin\psi_d - \mu_y\cos\psi_d}{u}\right), \qquad (19)
\theta_d = \tan^{-1}\!\left(-\,\frac{\mu_x\cos\psi_d + \mu_y\sin\psi_d}{g - \mu_z}\right).
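The inversion of Eqs. (17)–(19) is straightforward to code. The sketch below is a hedged illustration: the function name, the assumed mass, and the clamp on the arcsine argument (a guard against numerical round-off) are ours and not part of the paper.

```python
import numpy as np

def thrust_and_attitude_refs(mu, psi_d, m=0.65, g=9.81):
    """Invert Eqs. (17)-(19): desired force vector mu = (mu_x, mu_y, mu_z)
    and desired yaw psi_d -> total thrust u and desired roll/pitch angles."""
    mu_x, mu_y, mu_z = mu
    u = m * np.sqrt(mu_x**2 + mu_y**2 + (g - mu_z)**2)
    s = -m * (mu_x*np.sin(psi_d) - mu_y*np.cos(psi_d)) / max(u, 1e-6)
    phi_d = np.arcsin(np.clip(s, -1.0, 1.0))
    theta_d = np.arctan2(-(mu_x*np.cos(psi_d) + mu_y*np.sin(psi_d)), g - mu_z)
    return u, phi_d, theta_d

# Hover check: mu = 0 gives u = m*g and level attitude (phi_d = theta_d = 0).
print(thrust_and_attitude_refs((0.0, 0.0, 0.0), psi_d=0.0))
```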

Because the desired angles (φd, θd, ψd) are the outputs of the orientation subsystem, they cannot be assigned or provided instantaneously. They are thus considered as reference trajectories for the inner-loop controller. By replacing (φ, θ, ψ) in Eqs. (14) by (φd + eφ, θd + eθ, ψd + eψ) and recalling Eqs. (18), the translational dynamics can be expressed as follows:

\ddot{\xi} = \mu + \frac{1}{m}\,u\,h(\eta_d, e_\eta), \qquad (20)

where eη = η − ηd is the attitude tracking error vector. The components of the coupling term h = (hx, hy, hz)^T can be computed in the same manner as in Kendoul, Fantoni, and Lozano (2008).

Now, the cascaded system (15) is obtained by computing the time derivative of both the position and attitude tracking errors (χ, e) and recalling Eq. (20).

To synthesize the control laws μ = α(χ, ξ̈d) and τ̃ = β(e, η̈d) for the connected system (15), we will use the following theorem from Sontag (1988):

Theorem 1. If there is a feedback μ = α(χ, ξ̈d) such that χ = 0 is an asymptotically stable equilibrium of χ̇ = f(χ, α(χ, ξ̈d), ξ̈d), then any partial-state feedback control τ̃ = β(e, η̈d) that renders the e-subsystem equilibrium e = 0 asymptotically stable also achieves asymptotic stability of (χ, e) = (0, 0). Furthermore, if the two subsystems are both globally asymptotically stable (GAS), then, as t → ∞, every solution (χ(t), e(t)) either converges to (χ, e) = (0, 0) (GAS) or is unbounded.

This theorem states that partial-state feedback design can be applied to control system (15) by synthesizing two independent controllers μ = α(χ, ξ̈d) and τ̃ = β(e, η̈d). In this control design, the interconnection term Δ(u, ηd, eη) acts as a disturbance on the χ-subsystem, which must be driven to zero.

Because the χ- and e-subsystems in system (15) are linear, we can use simple linear controllers such as PD or PID. Therefore, we choose

\mu = -K_\chi\,\chi + \ddot{\xi}_d, \qquad K_\chi \in R^{3\times 6},
\tilde{\tau} = -K_e\,e + \ddot{\eta}_d, \qquad K_e \in R^{3\times 6}, \qquad (21)

such that the matrices Aχ = A1 − B1Kχ and Ae = A2 − B2Ke are Hurwitz.
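In code, Eq. (21) reduces to static gain feedback on the stacked error vectors. A minimal sketch, assuming a diagonal gain structure consistent with the proportional and derivative entries of Table II (the function names and the PD-only form are our simplifications; the integral terms of Remark 4 are omitted):

```python
import numpy as np

def outer_loop(chi, xi_dd_d, kp=0.8, kd=1.0):
    """mu = -K_chi * chi + desired acceleration, Eq. (21).
    chi = (position error (3), velocity error (3))."""
    K_chi = np.hstack([kp * np.eye(3), kd * np.eye(3)])   # K_chi in R^{3x6}
    return -K_chi @ chi + xi_dd_d

def inner_loop(e, eta_dd_d, kp=(28.0, 28.0, 3.0), kd=(1.0, 1.0, 0.2)):
    """tau_tilde = -K_e * e + desired angular acceleration, Eq. (21).
    e = (attitude error (3), angular-rate error (3))."""
    K_e = np.hstack([np.diag(kp), np.diag(kd)])            # K_e in R^{3x6}
    return -K_e @ e + eta_dd_d
```

With Kχ and Ke chosen this way, Aχ = A1 − B1Kχ and Ae = A2 − B2Ke are Hurwitz for any positive gains, which is exactly the property required above.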

By substituting Eqs. (21) into Eqs. (15), the closed-loop system dynamics are given by

\dot{\chi} = A_\chi\,\chi + \Delta(\chi, e_\eta),
\dot{e} = A_e\,e. \qquad (22)

Although Aχ and Ae are Hurwitz, the global asymptotic stability of closed-loop system (22) cannot be directly deduced because of the interconnection term Δ(χ, eη). According to Theorem 1, one should prove that the trajectories [χ(t), e(t)] are bounded in order to guarantee the GAS of closed-loop system (22).

5.3. Stability Analysis of the Complete Closed-Loop System

One of the major tools usually used to show the boundedness of connected system trajectories is the input-to-state stability (ISS) property (Sontag, 1988). The ISS property is a strong condition that is often difficult to verify. In Kendoul et al. (2008) and Kendoul, Zhenyu, and Nonami (2009), we have proposed a theorem that renders the stability analysis of a class of connected systems less complex. That theorem is recalled below, and its proof can be found in Kendoul et al. (2008).

Theorem 2. Let τ̃ = β(e, η̈d) be any C¹ partial-state feedback such that the equilibrium point e = 0 is GAS and locally exponentially stable (LES). Suppose that there exist a positive constant c1 and a class-K function γ(‖eη‖), differentiable at ‖eη‖ = 0, such that

\|\chi\| \ge c_1 \;\Rightarrow\; \|\Delta(\chi, e_\eta)\| \le \gamma(\|e_\eta\|)\,\|\chi\|. \qquad (23)

If there exist a positive semidefinite, radially unbounded function V(χ) and positive constants c2 and c3 such that for ‖χ‖ ≥ c2

\frac{\partial V}{\partial \chi}\,f(\chi, \alpha(\chi, \ddot{\xi}_d), \ddot{\xi}_d) \le 0, \qquad \left\|\frac{\partial V}{\partial \chi}\right\|\,\|\chi\| \le c_3\,V(\chi), \qquad (24)

then the feedback τ̃ = β(e, η̈d) guarantees the boundedness of all the solutions of Eqs. (22). Furthermore, if χ̇ = f(χ, α(χ, ξ̈d), ξ̈d) is GAS, then the equilibrium point (χ, e) = (0, 0) is GAS.

Here, we apply Theorem 2 in order to prove the global asymptotic stability of the connected closed-loop system (22).

Proposition 2. Closed-loop system (22) satisfies all the conditions of Theorem 2, which implies that all the trajectories [χ(t), e(t)] are bounded and the equilibrium point (χ, e) = (0, 0) is GAS.

Proof. Because Aχ and Ae are Hurwitz, the χ-subsystem (without the interconnection term) and the e-subsystem are globally exponentially stable (GES), which is stronger than the GAS property. The GES of the χ-subsystem implies that there exist a positive definite, radially unbounded function V(χ) and positive constants c2 and c3 such that, for ‖χ‖ ≥ c2, (∂V/∂χ) Aχ χ ≤ 0 and ‖∂V/∂χ‖ ‖χ‖ ≤ c3 V(χ). Therefore, condition (24) of Theorem 2 is satisfied. Now, it remains to show that the interconnection term Δ(χ, eη) satisfies growth restriction (23) of Theorem 2, and this is done in Kendoul et al. (2008) and Kendoul, Zhenyu, et al. (2009).

Remark 3. The final control inputs (u, τφ, τθ, τψ) are computed using Eqs. (13) and (19), which are nonlinear and consider the system's nonlinearities and the coupling between state variables:

u = m\,\|\mu(\chi, \ddot{\xi}_d) - g\,e_3\| = m\,\|-K_\chi\chi + \ddot{\xi}_d - g\,e_3\|,
\tau = J\Psi(\eta)\tilde{\tau} + \Psi(\eta)^{-T} C(\eta,\dot{\eta})\dot{\eta} = J\Psi(\eta)(-K_e e + \ddot{\eta}_d) + \Psi(\eta)^{-T} C(\eta,\dot{\eta})\dot{\eta}. \qquad (25)

Note that the nonlinear attitude controller in Eqs. (25) acts as a linear controller near hovering, where the attitude angles are small. However, in contrast to linear controllers, which fail to provide good performance away from nominal conditions, the nonlinear terms in Eqs. (25) become significant when the angles are large and contribute positively to attitude control, yielding higher tracking accuracy and robustness even at relatively large angles (see Figure 6).

Remark 4. In practice, the implementation of controller (25) requires some care. First, the intermediary control laws (21) have been slightly modified to include an integral term, thereby increasing the tracking accuracy without destroying the GAS property of the closed-loop system. Second, the reference angles [φd(t), θd(t), ψd(t)] are processed using second-order low-pass digital filters to reduce noise and to compute both their first and second derivatives. For effective and robust control, one also has to pay attention to several other details, such as resetting the integral terms when the vehicle is on the ground (before takeoff and after landing) or when a waypoint is reached (for the outer-loop controller).
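One simple realization of the reference-filtering step in Remark 4 is a second-order low-pass filter whose states directly provide the filtered angle and its first two derivatives. The sketch below is only illustrative; the cut-off frequency, damping, and sample time are assumptions rather than the values used on the vehicle.

```python
class RefFilter2:
    """Discrete second-order low-pass filter: returns the filtered reference
    and its first and second derivatives (used here for phi_d, theta_d, psi_d)."""

    def __init__(self, wn=15.0, zeta=1.0, dt=0.01):
        self.wn, self.zeta, self.dt = wn, zeta, dt  # assumed tuning and sample time
        self.r = 0.0      # filtered reference
        self.rdot = 0.0   # first derivative of the filtered reference

    def update(self, r_raw):
        # Critically damped (zeta = 1) second-order dynamics toward the raw reference.
        rddot = self.wn**2 * (r_raw - self.r) - 2.0*self.zeta*self.wn*self.rdot
        self.r += self.dt * self.rdot
        self.rdot += self.dt * rddot
        return self.r, self.rdot, rddot
```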

Remark 5. The gain matrices (Kχ, Ke) in Eqs. (25) were tuned empirically by observing flight properties during experimental tests. Because the controller gains appear in the linear part of the controller, it was easy to adjust the gains despite the absence of accurate values for the plant parameters. Table II shows the controller gains used during flight tests.

The block diagram of the overall controller is shown in Figure 7.

6. FLIGHT TESTS AND EXPERIMENTAL RESULTS

To evaluate the performance of our autonomous rotorcraft MAV, we have performed real-time flight tests with various mission scenarios. Here, we present experimental results from six flight tests that demonstrate the autonomous capabilities of our vehicle, including accurate attitude tracking, automatic takeoff and landing, long-distance flight, waypoint navigation, trajectory tracking, and vision-based flight. The objective of these flight tests is to show that the developed rotorcraft, equipped with the designed autopilot, is capable of autonomously achieving the search and rescue mission described in Section 2.



Figure 6. Performance of the inner-loop nonlinear controller during attitude trajectory tracking (pitch and roll angles [deg] vs. time [s], command and response, with zoomed views). The vehicle is able to track attitude commands with high accuracy even at relatively large and rapidly varying angles. In translational flight (70–100 s), the tracking error increased slightly because the pitch and roll dynamics are sensitive to aerodynamic effects such as blade flapping.

[Figure 7 block diagram: the navigation system (3D pose estimation) and the guidance system (trajectory generation) feed the outer-loop PID controllers, which produce (μx, μy, μz); the nonlinear coupling of Eq. (18) converts these into (u, φd, θd); digital filters and the inner-loop PID controllers, together with the nonlinear coupling of Eq. (13), produce the control torques applied to the rotorcraft nonlinear dynamic model.]

Figure 7. Structure of the inner–outer-loop nonlinear controller.


Table II. Controller gains.

Parameter        Value
kpx, kpy         0.8
kix, kiy         0.02
kdx, kdy         1
kpz              0.8
kiz              0.02
kdz              1
kpφ, kpθ         28
kiφ, kiθ         0.5
kdφ, kdθ         1
kpψ              3
kiψ              0.05
kdψ              0.2

6.1. Attitude Trajectory Tracking

In this flight test, we conducted attitude flight control in order to explore the effectiveness and robustness of the inner-loop nonlinear controller. To best evaluate the attitude controller's performance, we performed an outdoor test in which the reference trajectories are generated in the following manner:

1. Between 0 and 25 s, preprogrammed sinusoidal trajectories are generated with a 0.5-Hz frequency and a time-varying magnitude (a short sketch of this reference generation is given after this list). The pitch angle magnitude increases from 0 to 45 deg, and the roll angle magnitude is set to 0 deg (0–22 s), then to 30 deg (22–24 s), and finally to 0 deg (24–25 s).

2. Between 25 and 110 s, the reference trajectories are sent by the operator via the RC transmitter (semiautonomous control), such that the induced forward velocities are relatively high (about 4 m/s).
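For reference, the preprogrammed part of this test (item 1) can be reproduced in a few lines. The 0.5-Hz frequency and the 45-deg peak come from the description above; the linear ramp of the magnitude and the function name are our assumptions.

```python
import numpy as np

def pitch_reference(t):
    """Sinusoidal pitch command: 0.5 Hz, magnitude ramping from 0 to 45 deg over 0-25 s."""
    amplitude = np.deg2rad(45.0) * min(t / 25.0, 1.0)  # assumed linear ramp of the magnitude
    return amplitude * np.sin(2.0 * np.pi * 0.5 * t)

# The roll reference is built the same way, with its 30-deg magnitude gated to the 22-24 s window.
```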

The attitude control results are shown in Figure 6, where we can see the rotorcraft accurately tracking the reference commands. Good tracking was obtained even at relatively large and rapidly varying angles. Furthermore, the controller handles the coupling between the pitch and roll axes even when the angles are large.

At relatively high forward speeds, the controller is also able to track the reference commands without apparent degradation in performance, as shown in Figure 6 (70–100 s). The small tracking errors are due to aerodynamic effects, such as blade flapping and the pitching-up phenomenon, that occur at relatively high speeds and significant angles of attack.

6.2. Automatic Takeoff, Hovering, and Landing

The nonlinear controller given by Eq. (25) is used in this flight test to achieve automatic takeoff, accurate hovering, and precise autolanding. The experimental results, shown in Figure 8, demonstrate accurate tracking of the height reference command, yielding effective altitude control and automatic takeoff and landing. The rotorcraft also achieved stable hovering flight and was able to stay inside a 50-cm-radius circle. The horizontal motion is also accurately controlled during the takeoff and landing maneuvers, with less than 1-m error, which is good performance for a rotorcraft of this scale flying outdoors and subject to external disturbances such as wind.

6.3. Long-Distance Autonomous Flight

This flight test was performed in March 2008 at Agra, India, during the U.S.–Asian MAV competition. The objectives of this test were as follows:

1. Demonstrate the capability of our MAV to fly autonomously to the zone of interest, located about 1 km from the launch point.

2. Check the quality and range of the wireless communication as well as the video transmission.

The rotorcraft was thus tasked to achieve autonomous forward flight at a translational velocity³ of 4.25 m/s while transmitting images to the GCS.

The obtained results are shown in Figure 9, where we can see that the velocity and attitude reference trajectories are well tracked. The position trajectories show that the MAV flew a relatively long distance autonomously, about 1.2 km (√(x² + y²)). When the MAV reached the limits of the fly zone, the safety pilot switched to manual flight and recovered the vehicle. The test also showed that the range of the Wi-Fi wireless communication was about 600 m, whereas the quality of the video transmission was acceptable up to 1,000 m. In this test, the safety procedure related to communication loss was disabled, thereby allowing the vehicle to continue mission execution even when the communication link is lost.

6.4. Fully Autonomous Waypoint Navigation

Here, we demonstrate the ability of the GN&C system to achieve accurate waypoint navigation and to perform hovering and automatic takeoff and landing. In this flight test, a set of four waypoints⁴ was chosen by simply clicking the desired locations on the two-dimensional (2D) map of the GCS interface (see Figure 10). The MAV should then pass the assigned waypoints in a given sequence. This flight test simulates a reconnaissance mission in which the UAV is tasked to fly over target areas for information collection.

³ V = √(Vx² + Vy²) = √(3² + 3²) ≈ 4.25 m/s.

⁴ The current GN&C system allows an unlimited number of desired waypoints; more than 1,000 waypoints can be sent at one time from the GCS.


Figure 8. Experimental results from an outdoor flight test in which the quadrotor achieved fully autonomous takeoff, hovering, and landing (north/east position and altitude [m], and pitch/roll angles [deg] vs. time [s]). The rotorcraft was able to stay inside a 50-cm-radius circle during hovering. The attitude reference trajectories were accurately tracked with an error of less than 0.5 deg during hovering.

The mission is achieved autonomously; the operator only has to send high-level commands through the GCS interactive interface. The mission is started by clicking the “takeoff” button. The MAV then performs an automatic takeoff and hovers at a 10-m height. The altitude can be changed at any time from the GCS. When the “upload waypoint” button is pushed, the MAV starts waypoint navigation.⁵ When the mission is finished, the “land” button can be pushed for automatic landing.

In Figure 10, we can see the selected waypoints as well as the MAV trajectory plotted in real time. We can clearly observe that the MAV passed successfully through all the waypoints. Position and velocity reference trajectories are tracked with high accuracy, as shown in Figures 11 and 12. The observed small tracking errors may be attributed to wind gust disturbances and to GPS data errors (about ±2 m) and latency (about 0.8 s).

⁵ Waypoints can be updated and/or modified in real time.

The automatic takeoff and landing capabilities are also demonstrated in this flight test, with accurate altitude control even at relatively high horizontal speeds of 5 m/s. It is interesting, however, to note that altitude control is more accurate at low forward speeds, as shown in Figure 8 and later in Figure 18 (about 50-cm maximum error). Indeed, the thrust variation created by different angles of attack at varying forward speeds and wind conditions causes a disturbance that pushes the vehicle above the reference height. The controller takes some time to reject these disturbances because of the time delay in thrust.⁶

Figure 13 shows that the inner-loop controller performs well and that the desired angles are tracked with small errors. As discussed in Subsection 6.1, degradation in the attitude tracking capability as the forward speed increases is also observed in this flight test.

⁶ We believe that this time delay is introduced deliberately in the original platform to facilitate manual flight.


Figure 9. Autonomous forward flight results for the 700-g quadrotor using GPS measurements (north/east position [m], north/east velocity [m/s], and pitch/roll angles [deg] vs. time [s]). The developed autopilot allowed the MAV to fly a relatively long distance (1.2 km) autonomously at a translational speed of about 4.25 m/s. The vehicle's angle of attack was about 20 deg, and nominal winds were measured at 5–10 mph (2.2–4.4 m/s).

A video clip of this flight test can be found at http://www.youtube.com/watch?v=Lo9qJz69uuQ&feature=channel page.

6.5. Arbitrary Trajectory Tracking

The performance of the autopilot in trajectory tracking is an important part of its evaluation. The final objective of the rotorcraft MAV is to autonomously perform search, rescue, and surveillance missions. Trajectory tracking capability is thus very useful: following a spiral trajectory, for example, allows the MAV to explore an area of interest. Furthermore, accurate trajectory tracking is required for precise maneuvers in cluttered environments, for example, to avoid obstacles.

In this flight test, a spiral trajectory was implemented to demonstrate the tracking performance of the nonlinear controller. The reference trajectory is generated using a kinematic model of a modified Archimedean spiral in order to obtain a spiral with a constant separation distance (10 m) between successive turnings but also with a constant tangential speed (3 m/s).
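For an Archimedean spiral r = bθ, the separation between successive turnings is 2πb and the arc-length rate is ds/dθ = √(r² + b²), so a constant tangential speed v is obtained by integrating θ̇ = v/√(r² + b²). The Python sketch below generates such a reference; it is our reconstruction of this idea, not necessarily the authors' exact kinematic model.

```python
import numpy as np

def spiral_reference(duration=200.0, dt=0.02, v=3.0, separation=10.0):
    """Archimedean spiral r = b*theta with constant tangential speed v.
    Returns arrays of reference positions (x, y) and velocities (vx, vy)."""
    b = separation / (2.0 * np.pi)   # 10-m spacing between successive turnings
    theta = 0.0
    xs, ys, vxs, vys = [], [], [], []
    for _ in range(int(duration / dt)):
        r = b * theta
        theta_dot = v / np.sqrt(r**2 + b**2)   # keeps |d(position)/dt| equal to v
        x, y = r * np.cos(theta), r * np.sin(theta)
        # Velocity = time derivative of (r cos(theta), r sin(theta)) with r = b*theta.
        vx = (b*np.cos(theta) - r*np.sin(theta)) * theta_dot
        vy = (b*np.sin(theta) + r*np.cos(theta)) * theta_dot
        xs.append(x); ys.append(y); vxs.append(vx); vys.append(vy)
        theta += theta_dot * dt
    return np.array(xs), np.array(ys), np.array(vxs), np.array(vys)
```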

Results from a flight test in which the MAV autonomously executes spiral trajectory tracking are shown in Figures 14 and 15 (see also Figure 10). One can see that the MAV successfully tracked the reference trajectories, including the height trajectory for automatic takeoff and altitude holding. Figure 14 also shows the effectiveness and importance of relying on PS/INS rather than GPS/INS for height control, especially for MAVs flying at low altitude, where accurate height control is necessary.

Attitude control results, shown in Figure 16, confirm the good performance of the inner-loop controller even in this complicated motion pattern. One thing to note in the attitude control results, shown in Figures 13 and 16, is the larger error in yaw control compared to pitch and roll control. This is mainly due to the internal yaw controller implemented in the X-3D board, which affects the performance of our yaw controller (see Subsection 3.1).

In this flight test, we have also demonstrated the flight termination system. The MAV achieved a soft emergency landing without damaging the vehicle, as can be seen in Figure 14 and the associated video clip, http://www.youtube.com/watch?v=r4eOUDA3JJo&feature=channel page.


Figure 10. Part of the GCS interface showing a 2D map of the test field and the MAV trajectories during waypoint navigation and spiral trajectory tracking.


Figure 11. Autonomous waypoint navigation experiment (north/east position and altitude [m] vs. time [s], and the horizontal flight path). In this test, the quadrotor vehicle is commanded to perform automatic takeoff, waypoint navigation, stationary flight at a 10-m height, and automatic landing. The flight path is defined by four waypoints, and the reference trajectories are generated in real time by the guidance system. These results show that the rotorcraft passed successfully through all the waypoints and that the reference trajectories are accurately tracked along the 350-m flight path.


Figure 12. MAV translational velocities during waypoint navigation (north/east/upward velocity [m/s] and total thrust [N] vs. time [s]). The velocity reference trajectories are well tracked at low and relatively high speeds (5 m/s). Tracking errors in the vertical velocity may be attributed to the fast dynamics of the vertical motion and to noisy measurements, which are obtained by differentiating the PS signal.


Figure 13. MAV attitude during waypoint navigation (pitch/roll/yaw angles [deg] and control torques [N·m] vs. time [s]). Good tracking of the commanded attitude is obtained even in the presence of wind gust disturbances. At high translational speeds, aerodynamic disturbances become significant, giving tracking errors on the order of 3–5 deg.


Figure 14. Tracking a spiral trajectory outdoors at a constant forward speed of 3 m/s (north/east position and altitude [m] vs. time [s], and the horizontal flight path). In this test, the quadrotor MAV is commanded to take off and fly a spiral trajectory at a constant speed while maintaining a constant height of 10 m and a constant separation distance (10 m) between successive turnings. Notice that the controller relies on the PS estimates for height control, which are more accurate than the GPS measurements. These results demonstrate that the designed nonlinear controller enables miniature rotorcraft to accurately track nonstraight trajectories and to execute different flight patterns.


Figure 15. MAV translational velocities during spiral trajectory tracking (north/east/upward velocity [m/s] and total thrust [N] vs. time [s]). The time-parameterized desired velocities are generated by the guidance system in such a way as to maintain a constant speed of 3 m/s along a spiral pattern.


Figure 16. MAV attitude during spiral trajectory tracking (pitch/roll/yaw angles [deg] and control torques [N·m] vs. time [s]). Yaw control is less accurate than pitch and roll control because of the internal yaw controller implemented in the X-3D board, which affects the performance of our inner-loop controller.

6.6. Vision-Based Flight and Autonomous Target Tracking

This flight test demonstrates the ability of the developed platform to lock onto and track a moving ground target using vision. We placed a box of about 70 × 50 cm at approximately 16 m from the GCS. This box serves as the target that must be tracked by the MAV.

In the first phase of the experiment, the rotorcraft relied on GPS measurements to control its motion, simulating a search maneuver. When the target appears in the field of view (FOV) of the camera, the operator selects that target on the image window displayed in real time on the GCS screen. By pushing the “TGT mode” button (see Figure 17), the flight mode is switched to vision-based flight. The vision system then provides real-time estimates of the vehicle's position, velocity, and height relative to the selected target.

Figure 17. Interface of the GCS during vision-based target tracking.


Figure 18. Autonomous vision-based flight for moving target tracking (x, y, and z position [m] vs. time [s]; GPS-based hovering followed by vision-based target tracking). In the first phase, the vehicle is commanded to take off and hover at a 6-m height using GPS measurements. When the target appears in the camera FOV, the flight mode is switched to vision-based flight, and the vehicle is commanded to track the target by regulating the relative distance (black line) to zero. The vehicle flew about 16 m along the X axis, as shown in the first graph (blue line), tracking a moving ground target before landing on it using vision. For better comparison of the GPS measurements and the visual estimates along the Y axis, the GPS drift was compensated at t = 180 s by adding 2 m to its Y measurement.

The tracking process is performed by controlling this relative pose in order to regulate it to zero.
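Conceptually, the target-tracking mode can reuse the outer-loop structure of Eq. (21), with the vision-based relative position and velocity playing the role of the error vector and a zero reference. The sketch below is a hedged illustration with our names and gains; the estimator and gains actually used on the vehicle are those described earlier in the paper.

```python
import numpy as np

def target_tracking_mu(rel_pos, rel_vel, kp=0.8, kd=1.0):
    """Outer-loop command regulating the vision-estimated relative position
    and velocity to the target toward zero (hover above / land on the target)."""
    chi = np.hstack([np.asarray(rel_pos), np.asarray(rel_vel)])  # stacked relative-pose error
    K = np.hstack([kp * np.eye(3), kd * np.eye(3)])              # gain matrix in R^{3x6}
    return -K @ chi   # desired force components (mu_x, mu_y, mu_z), fed to Eq. (19)
```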

The target was displaced by pulling a wire attached to its center. To demonstrate the performance of the tracking system, the target was moved for some time, then stopped, and then moved again. Finally, the MAV was commanded to achieve a precise autolanding on the target using visual estimates.

As we can see in Figure 18, the transition from the GPS mode to the vision mode was performed stably and smoothly, without any oscillation or aggressive maneuver. The GPS ground track (blue line) in the first graph of Figure 18 shows that the MAV flew a 16-m distance while keeping the relative position (black line), estimated by the vision system, near zero. This means that the MAV effectively tracked the moving target and kept it in its FOV, ideally at the image center.

The height was also controlled accurately by holding a desired altitude of 6 m. The takeoff and landing maneuvers were performed autonomously, resulting in a precise landing⁷ on the target using visual measurements (see Figure 19).

A video clip of this flight test can be found at http://www.youtube.com/watch?v=-IpbOd-UuG4. Other videos about vision-based flights are available at http://www.youtube.com/fkendoul.

The flight results presented in this paper demonstrate a significant improvement in capability over previous minirotorcraft, and quadrotor MAVs in particular. From the experimental results obtained in six different mission scenarios, we conclude that the implemented autopilot and GN&C algorithms provide good performance for autonomous flight. In contrast to standard linear controllers, which fail to provide good performance away from nominal conditions, the designed nonlinear control system allowed a mini-quadrotor to accurately track reference trajectories even at relatively high speeds, large angles, coupled motions, and in windy conditions.

⁷ The rotorcraft landed on the edge of the 70 × 50 cm box, which is very good performance for vision-based landing using a mini-quadrotor UAV.


Figure 19. Vision-based target tracking. (a) MAV during vision-based tracking of a moving target; (b) MAV during precise autolanding on the target using vision.

These results also demonstrate that rotorcraft MAVs weighing less than 0.7 kg can achieve advanced autonomous flight despite their small size and the weaknesses of their low-cost sensors.

7. CONCLUSIONS

In this paper, we have described the design of a minirotorcraft UAV with onboard intelligence capabilities that can be used for search and rescue missions. This research dealt with both theoretical and practical issues of autonomous RMAVs, ranging from controller stability analysis to experimental flight tests. Indeed, we have proposed a nonlinear flight controller that is practical for real-time implementation while guaranteeing the global asymptotic stability of the closed-loop system. We have also demonstrated the ability to provide effective waypoint navigation, accurate trajectory tracking, and visual navigation capabilities to small, low-cost systems such as the mini-quadrotor helicopter. This proves the relevance of the GN&C system, which relies solely on lightweight and low-cost sensors. From the flight test results, we conclude that the developed rotorcraft MAV shows good and reliable performance for autonomous flight, which makes it ready to take on many applications, including search and rescue missions. This constitutes an important step toward our goal of developing fully autonomous MAVs capable of achieving real-world applications.

In future work, it would be interesting to improve the control system by considering the aerodynamic effects at higher speeds and during aggressive maneuvers. It may also be useful to increase the vehicle's autonomy by developing guidance systems that are able to handle a number of operational events without operator intervention, such as trajectory replanning following the detection of an obstacle or changes in the mission or environment configuration. Currently, we are developing a GPS-aided visual system for cooperative navigation and control of several minirotorcraft UAVs. We are also investigating different technologies for obstacle and collision avoidance in single- and multi-UAV platforms.

ACKNOWLEDGMENTS

This work was supported by the Japan Society for the Promotion of Science (JSPS).

REFERENCES

Abbeel, P., Coates, A., Quigley, M., & Ng, A. (2007, December). An application of reinforcement learning to aerobatic helicopter flight. In Proceedings of the Neural Information Processing Systems Conference, Vancouver, Canada.

Altug, E., Ostrowski, J. P., & Taylor, C. J. (2005). Control of a quadrotor helicopter using dual camera visual feedback. International Journal of Robotics Research, 24(5), 329–341.

Bouabdallah, S., Murrieri, P., & Siegwart, R. (2005). Towards autonomous indoor micro VTOL. Autonomous Robots, 18(2), 171–183.

Castillo, P., Dzul, A., & Lozano, R. (2004). Real-time stabilization and tracking of a four rotor mini-rotorcraft. IEEE Transactions on Control Systems Technology, 12(4), 510–516.

Gavrilets, V., Mettler, B., & Feron, E. (2004). Human-inspired control logic for automated maneuvering of miniature helicopter. Journal of Guidance, Control, and Dynamics, 27(5), 752–759.

Guenard, N., Hamel, T., & Moreau, V. (2005, June). Dynamic modeling and intuitive control strategy for an “X4-flyer.” In Proceedings of the 5th International Conference on Control and Automation, Budapest, Hungary (vol. 1, pp. 141–146).

Gurdan, D., Stumpf, J., Achtelik, M., Doth, K., Hirzinger, G., & Rus, D. (2007, April). Energy-efficient autonomous four-rotor flying robot controlled at 1 kHz. In Proceedings of the IEEE International Conference on Robotics and Automation, Rome, Italy (pp. 361–366).

He, R., Prentice, S., & Roy, N. (2008, May). Planning in information space for a quadrotor helicopter in a GPS-denied environment. In Proceedings of the IEEE International Conference on Robotics and Automation, Pasadena, CA (pp. 1814–1820).

Hoffmann, G. M., Huang, H., Waslander, S. L., & Tomlin, C. J. (2007, August). Quadrotor helicopter flight dynamics and control: Theory and experiment. In Proceedings of the AIAA Guidance, Navigation, and Control Conference and Exhibit (AIAA 2007-6461).

How, J. P., Bethke, B., Frank, A., Dale, D., & Vian, J. (2008). Real-time indoor autonomous vehicle test environment. IEEE Control Systems Magazine, 28(2), 51–64.

Huang, H., Hoffmann, G. M., Waslander, S. L., & Tomlin, C. J. (2009, May). Aerodynamics and control of autonomous quadrotor helicopters in aggressive maneuvering. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan (pp. 3277–3282).

Jang, J. S., & Liccardo, D. (2006, October). Automation of small UAVs using a low cost MEMS sensor and embedded computing platform. In Proceedings of the IEEE/AIAA 25th Digital Avionics Systems Conference, Portland, OR (pp. 1–9).

Johnson, E., & Kannan, S. (2005). Adaptive trajectory control for autonomous helicopters. Journal of Guidance, Control, and Dynamics, 28(3), 524–538.

Kendoul, F., Fantoni, I., & Lozano, R. (2008, July). Asymptotic stability of hierarchical inner-outer loop-based flight controllers. In Proceedings of the 17th IFAC World Congress, Seoul, Korea (pp. 1741–1746).

Kendoul, F., Fantoni, I., & Nonami, K. (2009). Optic flow-based vision system for autonomous 3D localization and control of small aerial vehicles. Robotics and Autonomous Systems, 57, 591–602.

Kendoul, F., Lara, D., Fantoni, I., & Lozano, R. (2007). Real-time nonlinear embedded control for an autonomous quadrotor helicopter. Journal of Guidance, Control, and Dynamics, 30(4), 1049–1061.

Kendoul, F., Nonami, K., Fantoni, I., & Lozano, R. (2009). An adaptive vision-based autopilot for mini flying machines guidance, navigation and control. Autonomous Robots, 27(3), 165–188.

Kendoul, F., Zhenyu, Y., & Nonami, K. (2009, May). Embedded autopilot for accurate waypoint navigation and trajectory tracking: Application to miniature rotorcraft UAVs. In Proceedings of the IEEE International Conference on Robotics and Automation, Kobe, Japan (pp. 2884–2890).

Kim, H. J., & Shim, D. H. (2003). A flight control system for aerial robots: Algorithms and experiments. Control Engineering Practice, 11(2), 1389–1400.

Koo, T., & Sastry, S. (1998, December). Output tracking control design of a helicopter model based on approximate linearization. In Proceedings of the IEEE Conference on Decision and Control (pp. 3635–3640).

La Civita, M., Papageorgiou, G., Messner, W. C., & Kanade, T. (2006). Design and flight testing of an H∞ controller for a robotic helicopter. Journal of Guidance, Control, and Dynamics, 29(2), 485–494.

Madani, T., & Benallegue, A. (2006, November). Backstepping sliding mode control applied to a miniature quadrotor flying robot. In Proceedings of the 32nd Annual Conference of the IEEE Industrial Electronics Society, Paris, France (pp. 700–705).

Mahony, R., & Hamel, T. (2004). Robust trajectory tracking for a scale model autonomous helicopter. International Journal of Robust and Nonlinear Control, 14(12), 1035–1059.

Olfati-Saber, R. (2001). Nonlinear control of underactuated mechanical systems with application to robotics and aerospace vehicles. Ph.D. thesis, Department of Electrical Engineering and Computer Science, MIT, Cambridge, MA.

Pounds, P., Mahony, R., & Corke, P. (2006, December). Modelling and control of a quad-rotor robot. In Proceedings of the Australian Conference on Robotics and Automation, Auckland, New Zealand.

Reiner, J., Balas, G., & Garrard, W. (1995). Robust dynamic inversion for control of highly maneuverable aircraft. Journal of Guidance, Control, and Dynamics, 18(1), 18–24.

Saripalli, S., Montgomery, J., & Sukhatme, G. (2003). Visually-guided landing of an unmanned aerial vehicle. IEEE Transactions on Robotics and Automation, 19(3), 371–381.

Scherer, S., Singh, S., Chamberlain, L., & Elgersma, M. (2008). Flying fast and low among obstacles: Methodology and experiments. International Journal of Robotics Research, 27(5), 549–574.

Shin, J., Fujiwara, D., Nonami, K., & Hazawa, K. (2005). Model-based optimal attitude and positioning control of small-scale unmanned helicopter. Robotica, 23, 51–63.

Sontag, E. (1988). Smooth stabilization implies coprime factorization. IEEE Transactions on Automatic Control, 34(4), 435–443.

Tayebi, A., & McGilvray, S. (2006). Attitude stabilization of a VTOL quadrotor aircraft. IEEE Transactions on Control Systems Technology, 14(3), 562–571.
