
Research Article

Autonomous Landing of a Micro Aerial Vehicle on a Moving Platform Using a Composite Landmark

Bo-Yang Xing,1 Feng Pan,2 Xiao-Xue Feng,1 Wei-Xing Li,1 and Qi Gao1

1Beijing Institute of Technology, School of Automation, Beijing 100081, China
2Kunming-BIT Industry Technology Research Institute INC, Kunming 6501064, China

Correspondence should be addressed to Xiao-Xue Feng; [email protected]

Received 6 December 2018; Revised 12 March 2019; Accepted 24 March 2019; Published 5 May 2019

Academic Editor: Paul Williams

Copyright © 2019 Bo-Yang Xing et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

In the existing vision-based autonomous landing systems for micro aerial vehicles (MAVs) on moving platforms, the limited range of landmark localization, the unknown measurement bias of the moving platform (such as wheel-slip or inaccurate calibration of encoders), and landing trajectory knotting seriously affect system performance. To overcome the above shortcomings, an autonomous landing system using a composite landmark is proposed in this paper. In the proposed system, a notched ring landmark and a two-dimensional landmark are combined as an R2D landmark to provide visual localization over a wide range. In addition, the wheel-slip and imprecise calibration of encoders are modeled as the unknown measurement bias of the encoders and estimated online via an extended Kalman filter. The landing trajectory is planned by a solver as a convex quadratic programming problem in each control cycle. Meanwhile, an iterative algorithm for adding equality constraints is proposed and used to verify whether the planned trajectory is feasible or not. The simulation and actual landing experiment results verify the following: the visual localization with the R2D landmark has the advantages of a wide localization range and high localization accuracy, the pose estimation result of the moving platform with unknown encoder measurement bias is continuous and accurate, and the proposed landing trajectory planning algorithm provides a continuous trajectory for reliable landing.

1. Introduction

Micro aerial vehicles (MAVs) are highly agile and versatile flying robots. Recent work has demonstrated their capabilities in many different applications including but not limited to surveillance, object transportation, agriculture, and aerial photography. This work is focused on the specific task of a fully autonomous landing on a moving platform. Some work has focused on autonomous landing on stationary platforms (SPL systems), such as reference [1], presenting a Global Positioning System- (GPS-) based landing system, which used the GPS position of the landing zone to guide a MAV to land. Several similar systems are presented in references [2–4]. Vision-based landing for MAVs has been an actively studied field in recent years [5]. Some examples are the work presented in [6], where the visual system is used to estimate a vehicle position relative to a ring landmark. Reference [7] presents a method for MAV autonomous takeoff, tracking, and landing on a 2D landmark array. Compared with the SPL systems, it is meaningful to research the moving platform landing systems (MPL systems), which are more applicable and can use vehicles or ships as landing targets. Generally, the vision-based MPL systems include the following three features.

(1) 2D landmark localization: to detect the landing zone, most state-of-the-art works exploit computer vision from onboard cameras. In those studies, 2D landmark localization is the most common approach for providing the camera position relative to the landing zone. Generally, the 2D landmark is arranged on the top of the moving platform, so its size is often strictly

Hindawi International Journal of Aerospace Engineering, Volume 2019, Article ID 4723869, 15 pages. https://doi.org/10.1155/2019/4723869


limited. To provide a large range of localization data under this limitation, researchers have adopted landmarks such as 2D codes, ring landmarks, or character landmarks [8–10]. Although interesting results have been achieved, they are not necessarily applicable to dynamically moving targets in an open outdoor environment.

(2) Pose estimation of the moving platform: it is difficult to guarantee that the moving platform will be visible throughout the entire duration of the landing. To address missing visual information, state estimation and multisensor fusion methods have been introduced to predict the motion of the moving platform [11, 12]. The most commonly used scheme is fusion of the encoder data and 2D landmark data [13]. Alternative solutions are realized with the use of additional sensors attached to the moving target; these sensors include inertial measurement units [14], GPS receivers [15], or infrared markers [16]. For reliable landing on a moving platform, it is very necessary to consider wheel-slip or obstruction of the moving platform during its movement. This is by no means standard in the literature, since all the MPL systems mentioned before directly estimate the state of the moving platform without considering the measurement bias of the sensors.

(3) Online landing trajectory planning: for the MAV to be truly autonomous, the landing trajectory planning must be performed on the onboard processor in real time [17]. In fact, the desired landing trajectory must not only pass all the waypoints accurately and continuously but also reduce energy consumption and meet the dynamic constraints of the MAV. Therefore, optimal control theory is introduced to solve the quadratic programming (QP) problem of landing trajectory planning. Examples include minimum snap [18], minimum time [19], or shortest path under uncertain conditions [20]. A common property of these methods is that the planned trajectory may be nonconvex or even knotted when the number of waypoint constraints is too small. Furthermore, although algorithms with inequality constraints can obtain better trajectories, the computational complexity increases significantly [21]. Therefore, those algorithms always rely on external computation for trajectory planning.

In this paper, a MAV system capable of autonomously landing on a moving target using mainly onboard sensing and computing is presented. The only prior knowledge about the moving landing target is its real-time encoder measurements. The proposed R2D-MPL system uses a composite landmark comprising a notched ring (NR) landmark and a 2D landmark to provide accurate visual localization data on the moving platform over a wide spatial range. To deal with temporarily missing visual information, the unknown measurement bias of the encoders caused by wheel-slip and imprecise calibration is taken into consideration in the target's dynamical model, and the state of the moving platform is estimated online based on an extended Kalman filter (EKF). Meanwhile, the R2D-MPL system computes trajectories based on the energy necessary to execute an efficient landing, which takes into account the dynamic constraints of the MAV and optimizes the equality constraints of waypoints through an iterative algorithm. The proposed system is validated by simulation as well as in real-world experiments using low-cost, lightweight consumer hardware. The experimental results show that the approach can realize the autonomous and reliable landing of the MAV when the encoders have measurement bias.

The rest of this paper is organized as follows. Section 2 presents the overview of the proposed R2D-MPL system. Section 3 introduces the monocular localization algorithm based on the proposed R2D landmark. Section 4 introduces the pose estimation algorithm based on the extended Kalman filter. Section 5 introduces the landing trajectory planning algorithm based on the minimum jerk rule. Section 6 introduces the design of the hardware and the experimental results: the experimental results of the proposed visual localization algorithm are given in Section 6.1, the result of the simulation landing experiment is given in Section 6.2, and the result of the actual landing experiment is given in Section 6.3. Section 7 presents the conclusions of this paper.

2. Overview of the R2D-MPL System

Landing on a moving platform is more complex than landing on a stationary platform. Recognition errors for a 2D landmark, inaccurate motion models, and maneuvering by moving platforms will seriously affect landing results. To overcome the above problems, a general and hierarchical R2D-MPL system is proposed in this paper, which is shown in Figure 1.

The R2D-MPL system shown in Figure 1 contains the following four layers. The sensor layer consists of a variety of heterogeneous sensors, including the R2D landmark, encoders, an inertial measurement unit (IMU), and GPS; it provides the measurements of the moving platform and the MAV. The data fusion layer uses the measurement data from the sensor layer to obtain the real-time pose estimation of the MAV and the moving platform. In addition, the proposed system can further introduce other sensors, such as a laser range finder, UWB, or a visual odometer, to improve the estimation performance. The decision layer generates the desired waypoint and trajectory online based on the real-time pose estimation from the data fusion layer. Finally, the control layer realizes the visual tracking of the moving platform and the pose control of the MAV.

3. Visual Localization Method Based on a Composite R2D Landmark

Ring landmarks and 2D landmarks are the most commonly used landmarks in visual localization systems. A ring landmark has the advantage of providing good estimation results from larger distances since it uses the projection information of the whole contour. However, it has the problems of a lack


of freedom and singular solutions. A 2D landmark allows estimation of the camera's 6Dof pose, but its localization accuracy is affected by the vertex detection result; therefore, it is only suitable for close-range localization. To accurately estimate the MAV's pose, a composite R2D landmark is designed and placed on the top of the moving platform. The R2D landmark comprises a notched ring (NR) landmark and a 2D landmark; thus, the R2D landmark has the potential to realize precise and continuous localization in a wide-range space. As shown in Figure 2, the NR landmark has a notch, with outer ring diameter do and inner ring diameter di; the 2D landmark is located at the center of the NR landmark, and its size is da.

For R2D landmark recognition, the original image is processed with a 6×6 Gaussian filter template and an adaptive binarization algorithm. The serial number and the corners' 2D-3D matching relationship are verified based on the coding information in the contours. Then, the ring contour is segmented, sampled, and fitted to find the inner and outer ring projections. Finally, the notch is identified with the binary image established by combining the ring contour with the ellipse projection. After recognizing the R2D landmark, the 5Dof pose of the camera can be calculated from the outer ring projection. Furthermore, the complete 6Dof pose can be calculated by considering the pixel position of the notch. First, the outer ring ellipse projection is transformed to the camera coordinate system, and the ellipse parameter equation can be given as follows.

L1 x^2 + 2 L2 xy + L3 y^2 + 2 L4 x + 2 L5 y + L6 = 0.  (1)

The elliptical oblique cone [22] projection matrix Q can be calculated as follows.

Q = [ L1, L2, L4/f; L2, L3, L5/f; L4/f, L5/f, L6/f^2 ],  (2)

where f is the camera focal length. λ1, λ2, and λ3 are the eigenvalues of Q, and ε2 and ε3 are the eigenvectors of λ2 and λ3, respectively. Assuming λ1 > λ2 > 0 > λ3, the position and attitude of the ellipse plane in the canonical camera coordinate system can be calculated as follows.

n = (n1, n2, n3)^T = S1 √((λ1 − λ2)/(λ2 − λ3)) ε2 + S2 √((λ1 − λ2)/(λ2 − λ3)) ε3,  (3)

t = (t1, t2, t3)^T = (S3 do / √(−λ2 λ3)) [ S1 λ3 √((λ1 − λ2)/(λ2 − λ3)) ε2 + S2 λ2 √((λ1 − λ2)/(λ2 − λ3)) ε3 ].  (4)

Figure 1: Flow chart of the R2D-MPL system. (Sensor layer: R2D landmark, encoders, IMU, and GPS; data fusion layer: pose estimation of the MAV and of the moving platform; decision layer: landing strategy, waypoint, and trajectory planning; control layer: position controller and moving platform visual tracking.)


In equations (3) and (4), the vector n represents the pitch, roll, and yaw angles of the ellipse plane, and t represents the location of the ellipse center in the canonical camera coordinate system. Because the coefficients S1, S2, and S3 are unknown, only a 5Dof singular pose (including the 2Dof rotation matrix rr and the 3Dof transform vector tr) of the camera can be obtained.

rr = [ cos n1, 0, sin n1; 0, 1, 0; −sin n1, 0, cos n1 ] · [ 1, 0, 0; 0, cos n2, −sin n2; 0, sin n2, cos n2 ],  (5)

tr = t.  (6)

To obtain the nonsingular solutions, the following two kinds of constraints are added. First, two simple constraints (n3 < 0 and t3 > 0) can be established based on the fact that the NR landmark is in front of the camera and its z-axis faces the camera. Second, assuming that the slope of the NR landmark can be ignored, the pitch and roll angles calculated based on the rotation matrix should be close to the results provided by the IMU or the 2D landmark. Therefore, define [r*r | t*r] as the regular 5Dof pose solution provided by the NR landmark, which can be calculated based on equations (5) and (6) and the above constraints. The last Dof is approximately calculated based on the positions of the NR landmark center pixel and the notch center pixel. Thus, the 6Dof pose [Rr | Tr] provided by the NR landmark is given as follows.

Rr = [ r11, r12, r13; r21, r22, r23; r31, r32, r33 ] = [ cos ρ, sin ρ, 0; −sin ρ, cos ρ, 0; 0, 0, 1 ] · r*r,  Tr = Rr t*r,  (7)

ρ = arctan(Hy − Scy, Hx − Scx),  (8)

where (Scx, Scy)^T is the NR landmark center pixel and (Hx, Hy)^T is the notch center pixel. To simplify, [Rr | Tr] is converted to the 3D position and Euler angles in the Euler coordinate system {E}. Define ζr = (xr, yr, zr, θr, βr, φr)^T as the 6Dof camera pose in {E}; it can be calculated as follows.

Pr = (xr, yr, zr)^T = Rr^T Tr,  ξr = (θr, βr, φr)^T = ( arctan(r32, r33), arctan(−r31, √(r32^2 + r33^2)), arctan(r21, r11) )^T.  (9)
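As a concrete illustration of equation (9), the conversion from the rotation matrix and translation vector to the 3D position and Euler angles can be sketched as follows. This is a minimal NumPy sketch, not the paper's implementation; the function name `pose_from_rt` is ours, and the two-argument `arctan(a, b)` of the paper is read as the standard `atan2`.

```python
import numpy as np

def pose_from_rt(R, T):
    """Convert a rotation matrix R and translation T into the 3D position
    and the (roll, pitch, yaw) Euler angles, following equation (9)."""
    P = R.T @ T                                                # Pr = Rr^T Tr
    theta = np.arctan2(R[2, 1], R[2, 2])                       # roll  (r32, r33)
    beta = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))    # pitch (-r31, sqrt)
    phi = np.arctan2(R[1, 0], R[0, 0])                         # yaw   (r21, r11)
    return P, np.array([theta, beta, phi])
```

This is the standard ZYX Euler-angle extraction, so composing Rz(yaw)·Ry(pitch)·Rx(roll) and feeding it back through the function recovers the original angles.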

Ideally, the 2D landmark and the NR landmark can simultaneously provide the 6Dof camera pose. The localization of the NR landmark is obtained based on equations (1)–(9). The localization of the 2D landmark is obtained based on the commonly used EPnP algorithm [23], which is fast and simple. However, the EPnP algorithm is seriously affected by the accuracy of the corners' 2D-3D matching relationship; thus, the 2D landmark is only suitable for close-range localization. To synthesize the localization results of the two landmarks, a weighted fusion algorithm based on the corners' projection error of the 2D landmark is proposed in this paper, and the 6Dof camera pose provided by the R2D landmark in {E} is given as follows.

ζR2D = α ζr + (1 − α) ζd,  α = ed / emax,  0 < α < 1,  (10)

where ζd is the 6Dof camera pose provided by the 2D landmark, ed is the average projection error of the 2D landmark's corners, and emax is the maximum projection error.
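The weighted fusion of equation (10) can be sketched as below. The helper name `fuse_poses` and the clamping of α are our own additions; a real system would also need wrap-around handling for the angular components, which is omitted here.

```python
import numpy as np

def fuse_poses(zeta_r, zeta_d, e_d, e_max):
    """Weighted fusion of the NR-landmark pose zeta_r and the 2D-landmark
    pose zeta_d, per equation (10). e_d is the average reprojection error
    of the 2D landmark's corners, e_max the maximum projection error."""
    # Clamp alpha into (0, 1): a large 2D reprojection error shifts the
    # weight toward the ring-landmark solution.
    alpha = np.clip(e_d / e_max, 1e-6, 1.0 - 1e-6)
    return alpha * np.asarray(zeta_r) + (1.0 - alpha) * np.asarray(zeta_d)
```

With e_d = e_max/2 the two poses are simply averaged; as e_d approaches e_max the result approaches the NR-landmark pose.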

4. Pose Estimation of the Moving Platform

Pose estimation of the moving platform can improve the robustness of the MPL system since the visual localization results are not always available. To obtain continuous and accurate pose estimation of the moving platform, a precise motion model and an appropriate multisensor fusion algorithm are necessary. In this section, a general motion model for the MPL system is proposed, which considers the unknown encoder measurement bias. Meanwhile, the precise and continuous position, velocity, and orientation of the moving platform are estimated online based on the extended Kalman filter (EKF). Here, a three-wheeled omnidirectional car is used as the moving platform, and its coordinate system is shown in Figure 3.

Figure 2: R2D landmark. (The notched ring (NR) landmark has outer diameter do and inner diameter di; the 2D landmark, of size da, sits at its center.)


As seen from Figure 3, the body coordinate system of the moving platform is defined as {m}; its origin is located at the center of the moving platform, and the ym-axis points to the head. The xm-axis and zm-axis accord with the right-hand rule. The inverse kinematics model [11] of the three-wheeled omnidirectional car is given as follows.

(w1, w2, w3)^T = F (vmx, vmy, ω)^T = [ −1, 0, L; cos(π/3), −sin(π/3), L; cos(π/3), sin(π/3), L ] (vmx, vmy, ω)^T,  (11)

where L is the radius of the moving platform (its height is H), wi (i = 1, 2, 3) is the linear velocity of each wheel, vmx and vmy are the linear velocities in {m}, ω is the angular velocity in {m}, and F is the kinematics transformation matrix. The global coordinate system is defined as {n}, the 2D position of the moving platform in {n} is defined as (x, y)^T, and its head orientation is represented as φ. Thus, the 2D motion model of the moving platform can be described as follows.

ẋ = vmx cos φ + vmy sin φ,  (12)

ẏ = vmy cos φ − vmx sin φ,  (13)

φ̇ = ω.  (14)
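Equations (11)–(14) can be sketched numerically as follows. This is a minimal NumPy sketch; the platform radius value is illustrative (not from the paper), and `step_pose` performs one explicit-Euler step of the motion model.

```python
import numpy as np

L_RADIUS = 0.2  # platform radius L in metres (illustrative value, not from the paper)

# Kinematics transformation F of equation (11): body velocities -> wheel speeds
F = np.array([
    [-1.0,              0.0,              L_RADIUS],
    [np.cos(np.pi/3), -np.sin(np.pi/3),   L_RADIUS],
    [np.cos(np.pi/3),  np.sin(np.pi/3),   L_RADIUS],
])

def step_pose(x, y, phi, v_mx, v_my, omega, Ts):
    """One explicit-Euler step of the 2D motion model, equations (12)-(14)."""
    x += (v_mx * np.cos(phi) + v_my * np.sin(phi)) * Ts
    y += (v_my * np.cos(phi) - v_mx * np.sin(phi)) * Ts
    phi += omega * Ts
    return x, y, phi
```

For a pure rotation (vmx = vmy = 0), all three wheels run at the same speed L·ω and the position stays fixed, as the model predicts.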

Based on equations (12), (13), and (14), the pose estimation of the moving platform can be achieved via extended Kalman filter (EKF) methods. However, the cumulative error is unsatisfactory due to imperfect calibration of the encoders. Moreover, the moving platform commonly drives on rough ground, so the system faces the problem of wheel-slip. For example, when the moving platform passes over a road with a gap, or when it is blocked by obstacles, the encoder measurement definitely contains bias. Encoder measurements without calibration will cause incorrect pose estimation. To solve this problem, the unknown measurement bias of the encoders is considered in the motion model. The state vector of the moving platform is defined as X = (x, y, φ, vmx, vmy, ω, e1, e2, e3)^T, including the position and orientation in {n}, the linear and angular velocities in {m}, and the measurement bias of each encoder. In addition, considering the body linear velocity in the state vector has the following advantages:

(1) It makes the proposed R2D-MPL system more general for introducing other sensors to measure (vmx, vmy, ω)^T. For example, a gyroscope can be added to measure the angular velocity, and an optical flow sensor can be added to measure the linear velocity of the moving platform.

(2) It makes the proposed R2D-MPL system more general for handling the transmission delay of encoder data with the ring queue and timestamp methods [24].

A nonlinear function is used to describe the motion model of the moving platform in this paper.

X(k+1) = f(X(k), η(k)) = ( x(k) + (vmx(k) cos φ(k) + vmy(k) sin φ(k)) Ts, y(k) + (vmy(k) cos φ(k) − vmx(k) sin φ(k)) Ts, φ(k) + ω(k) Ts, vmx(k), vmy(k), ω(k), e1(k), e2(k), e3(k) )^T + η(k),  (15)

where Ts is the sampling period and η(k) is the process noise, which is described as Gaussian noise with covariance matrix Q(k). Thus, the predicted value of X(k+1) at the k+1 moment is given below:

X⁻(k+1|k) = f(X(k)).  (16)

Define P⁻(k+1|k) as the covariance matrix of the state prediction; it can be calculated as follows, where ∇fx is the Jacobian matrix of f(·).

P⁻(k+1|k) = ∇fx P(k) ∇fx^T + Q(k).  (17)

The R2D landmark localization result and the encoder data are utilized to perform the measurement update here. Define ζR2D(k) = (XR2D(k), YR2D(k), φR2D(k))^T as the camera pose in {m}; ζp(k) = (xp(k), yp(k), φp(k))^T is the estimated MAV pose in {n}, which is calculated based on the

Figure 3: Description of the moving platform coordinate system. (Wheels w1, w2, and w3; platform pose (x, y, φ) in the global frame {n}; MAV pose (xp, yp, φp); body frame {m}.)


fusion algorithm proposed in reference [7], and thus, the moving platform pose zc(k+1) = (Xc(k+1), Yc(k+1), φc(k+1))^T in {n} can be calculated as follows.

zc(k+1) = ζp(k+1) − [ cos φcp(k+1), sin φcp(k+1), 0; −sin φcp(k+1), cos φcp(k+1), 0; 0, 0, 1 ] ζR2D(k+1),  (18)

where φcp(k+1) = φp(k+1) − φR2D(k+1). Define the encoder measurement vector as ze(k+1) = (w1(k+1), w2(k+1), w3(k+1))^T, which is augmented with zc(k+1) to obtain the final measurement vector z(k+1) = (zc(k+1)^T, ze(k+1)^T)^T. The nonlinear function between the measurement vector z(k+1) and X(k+1) is given as follows.

z(k+1) = h(X(k+1), σ(k+1)) = ( x(k+1); y(k+1); φ(k+1); F (vmx(k+1), vmy(k+1), ω(k+1))^T + (e1(k+1), e2(k+1), e3(k+1))^T ) + σ(k+1),  (19)

where σ(k+1) is the Gaussian measurement noise, with covariance matrix R(k+1). The measurement prediction of z(k+1) is given as follows.

ẑ(k+1) = h(X⁻(k+1|k)).  (20)

The residual vector ν(k+1) and its approximate covariance matrix are given as follows, where ∇hx is the Jacobian matrix of h(·).

ν(k+1) = z(k+1) − ẑ(k+1),  (21)

S(k+1) ≈ ∇hx P⁻(k+1|k) ∇hx^T + R(k+1).  (22)

The Kalman filter gain can be calculated as follows.

K(k+1) = P⁻(k+1|k) ∇hx^T S(k+1)^−1.  (23)

Finally, the state estimation and covariance matrix of the moving platform are given as follows.

X(k+1) = X⁻(k+1|k) + K(k+1) ν(k+1),  (24)

P(k+1) = (I − K(k+1) ∇hx) P⁻(k+1|k),  (25)

where I is the identity matrix. Above all, the real-time pose estimation of the moving platform can be realized based on equations (11)–(18) and (20)–(25), which can be used as the feedback data in the decision layer and the control layer.
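The estimator of equations (15)–(25) can be sketched as follows. This is a minimal NumPy sketch under our own naming, not the paper's implementation; it uses numerical Jacobians in place of the hand-derived ∇fx and ∇hx.

```python
import numpy as np

def f(X, Ts):
    """Process model of equation (15): constant body velocity plus constant
    encoder bias; 9 states (x, y, phi, vmx, vmy, w, e1, e2, e3)."""
    x, y, phi, vmx, vmy, w, e1, e2, e3 = X
    return np.array([
        x + (vmx * np.cos(phi) + vmy * np.sin(phi)) * Ts,
        y + (vmy * np.cos(phi) - vmx * np.sin(phi)) * Ts,
        phi + w * Ts, vmx, vmy, w, e1, e2, e3])

def h(X, F):
    """Measurement model of equation (19): platform pose from the R2D
    landmark plus biased encoder readings F @ (vmx, vmy, w) + (e1, e2, e3)."""
    return np.concatenate([X[:3], F @ X[3:6] + X[6:9]])

def ekf_step(X, P, z, Ts, F, Q, R, eps=1e-6):
    """One predict/update cycle, equations (16)-(25), numerical Jacobians."""
    def jac(fun, x):
        J = np.zeros((len(fun(x)), len(x)))
        for i in range(len(x)):
            d = np.zeros(len(x)); d[i] = eps
            J[:, i] = (fun(x + d) - fun(x - d)) / (2 * eps)
        return J
    Fx = jac(lambda s: f(s, Ts), X)
    Xp = f(X, Ts)                                # (16) state prediction
    Pp = Fx @ P @ Fx.T + Q                       # (17) covariance prediction
    Hx = jac(lambda s: h(s, F), Xp)
    nu = z - h(Xp, F)                            # (20)-(21) residual
    S = Hx @ Pp @ Hx.T + R                       # (22) residual covariance
    K = Pp @ Hx.T @ np.linalg.inv(S)             # (23) Kalman gain
    return Xp + K @ nu, (np.eye(len(X)) - K @ Hx) @ Pp   # (24)-(25)
```

Because the bias states enter the encoder measurement additively, they become observable only in combination with the pose measurements over time, which is exactly why the fusion of the R2D landmark and the encoders is needed.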

5. Landing Trajectory Planning Algorithm Based on the Minimum Jerk Rule

To achieve a precise, continuous, and minimum-energy-cost landing process, an online two-stage landing trajectory planning algorithm based on the minimum jerk rule is proposed in this section, which includes a visual tracking stage and a planning landing stage. In the visual tracking stage, the MAV approaches the circular boundary B1(t) based on the pixel position of the moving platform. In the planning landing stage, real-time waypoint planning based on the pose change of the moving platform is performed in each control period. Meanwhile, a continuous and accurate trajectory passing through the waypoints is generated based on the minimum energy criterion. The landing process of the proposed system is shown in Figure 4, in which the black dotted line is the moving platform's trajectory, the black arrowhead line is the MAV's trajectory during the visual tracking stage, the red arrowhead line is the MAV's trajectory during the planning landing stage, and the blue arrows are the moving platform's velocity vectors at each moment.

5.1. Visual Tracking Stage. The visual tracking method is utilized to guide the MAV to approach the boundary B1(t) (a circle with a radius of r1, centered at the center of the moving platform). To obtain a stationary image, a two-axis pan is installed in front of the MAV in the proposed system. The x-axis pixel error between the R2D landmark central pixel and the camera center is used to control

Figure 4: Top view of the landing process. (Visual tracking stage up to the boundary B1, followed by trajectory-planning landing via the waypoints P1 and P2 on the circle C1.)


the heading of the MAV, and the y-axis pixel error is used to control the pitch angle of the pan. During this stage, the MAV remains at a fixed altitude h1 and a fixed speed v1 to approach the moving platform until their distance is sufficiently close. Define μ(t) as the distance between the MAV and the moving platform, which can be calculated as follows.

μ(t) = β d1(t) + (1 − β) d2(t),  0 < β < 1,

d1(t) = zp(t) / tan(θpan(t)),

d2(t) = √((xp(t) − x(t))^2 + (yp(t) − y(t))^2),  (26)

where (xp(t), yp(t), zp(t))^T is the estimated position of the MAV in {n}, (x(t), y(t))^T is the estimated position of the moving platform in {n}, θpan(t) is the pitch angle of the pan, and β is the weighting parameter. Thus, the landing process switches to the planning landing stage once μ(t) is smaller than r1.

5.2. Planning Landing Stage. As shown in Figure 4, the landing strategy of the proposed algorithm is to chase the moving platform, approach it from behind, and finally land. Therefore, to obtain a dynamic landing trajectory that meets the above requirements, the waypoints P1(t) and P2(t) are replanned in each control cycle. The waypoint P1(t) = (x1(t), y1(t), z1(t))^T is selected as a point on the reverse extension line of the velocity vector V(t), which ensures that the MAV approaches the moving platform from behind. Define the radius of C1(t) as r2, with its center located at the center of the moving platform; thus, the parameters of waypoint P1(t) are given as follows:

( x1(t), y1(t), z1(t), ψ1(t) )^T = ( x(t) − r2 sin ψ1(t), y(t) − r2 cos ψ1(t), h2 + √(v̇mx(t)^2 + v̇my(t)^2) / (1 + e^(−r2 b1)) + ω̇(t) / (1 + e^(−r2 b2)), arctan(vmx(t), vmy(t)) )^T,  (27)

where (vmx(t), vmy(t), ω(t))^T is the estimated linear and angular velocity of the moving platform in {m}, (v̇mx(t), v̇my(t), ω̇(t))^T is its first-order differential, h2 is the flight altitude during this stage, and (b1, b2)^T are parameters ensuring that the MAV can automatically change altitude during turning or maneuvering of the moving platform.

The waypoint P2(t) = (x2(t), y2(t), z2(t))^T is the final landing point. At each control cycle, the planner selects K prediction times ts by uniformly sampling a fixed-duration prediction horizon. For each time ts, the planner predicts the future state X⁻(k+i·ts) (i = 1, 2, …, K) that the moving platform will reach, which is calculated using its dynamical model starting from the last estimate available from the estimator designed in Section 4. Each X⁻(k+i·ts) is used as the candidate final waypoint P2*(i) for a candidate trajectory. Out of all candidate trajectories, the one requiring the minimum amount of energy [12] for execution is selected; meanwhile, the waypoint P2(t) is determined.

Input: A, b: initial constraint parameters; ds: width of the feasible corridor; vmax: maximum velocity constraint of the MAV; γ: constraint relaxation coefficient; N: maximum number of iterations
Output: A*, b*: final constraint parameters; q*: final landing trajectory parameters
1: Initialize: Ai = A, bi = b
2: for i = 0 to N do
3:   Calculate q* with the equality constraints Ai q* = bi and the OOQP package
4:   for j = 1 to size(bi) do
5:     Define P(j) as the corresponding point on the feasible corridor of bi(j)
6:     if |bi(j) − P(j)| > ds then
7:       Add a new position equality constraint: bi(j)* = P(j) + γ·ds
8:     end if
9:     if |ḃi(j)| > vmax then
10:      Add a new velocity equality constraint: ḃi(j)* = vmax
11:    end if
12:    if new constraints are added then
13:      Relax the adjacent waypoint constraints: bi(j−1)* = bi(j−1) − (1−γ)·ds, bi(j+1)* = bi(j+1) − (1−γ)·ds
14:      Extend the matrix Ai
15:    end if
16:  end for
17: end for

Algorithm 1: IAEC algorithm.

International Journal of Aerospace Engineering


Above all, to generate a smooth and minimum-energy landing trajectory, an online trajectory planning algorithm is proposed here. For simplicity, the three dimensions of the trajectory are planned independently. A fifth-order polynomial is used to describe the landing trajectory with the polynomial parameter vector q = [q_0, q_1, ..., q_5]^T. The minimum-jerk trajectory is planned at each control period, and the cost function is given as follows.

\min f(q) = \min \sum_{i=1}^{2} \int_{t_{i-1}}^{t_i} \left( [0, 0, 0, 6, 24\tau, 60\tau^2]\, q \right)^T \left( [0, 0, 0, 6, 24\tau, 60\tau^2]\, q \right) \mathrm{d}\tau,    (28)

where t_0 = 0.
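Because the cost in equation (28) is quadratic in q, it can be precomputed as a Gram matrix Q so that the per-segment cost is q^T Q q. The sketch below (the function name and the one-segment simplification are ours, not the paper's) builds Q for a quintic over [0, T]:

```python
def jerk_cost_matrix(T):
    """Gram matrix Q of the quintic jerk basis over [0, T]: one segment of the
    cost in equation (28) equals q^T Q q (a sketch; the paper stacks two
    such segments, one per trajectory piece)."""
    Q = [[0.0] * 6 for _ in range(6)]
    for i in range(3, 6):
        for j in range(3, 6):
            # third derivative of t^i has coefficient i(i-1)(i-2) on t^(i-3)
            ci = i * (i - 1) * (i - 2)
            cj = j * (j - 1) * (j - 2)
            p = i + j - 6  # power of tau in the integrand product
            Q[i][j] = ci * cj * T ** (p + 1) / (p + 1)
    return Q
```

Only the three highest-order coefficients contribute, since the jerk of a quintic involves q_3, q_4, and q_5 alone.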

Several equality constraints can be established considering the waypoint constraints and the continuity of the position, velocity, and acceleration at the waypoint P_1(t). Taking the x-axis as an example, its constraint equation is given as follows.

\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & t_1 & t_1^2 & t_1^3 & t_1^4 & t_1^5 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 2t_1 & 3t_1^2 & 4t_1^3 & 5t_1^4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 2 & 6t_1 & 12t_1^2 & 20t_1^3 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & t_2 & t_2^2 & t_2^3 & t_2^4 & t_2^5 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 2t_2 & 3t_2^2 & 4t_2^3 & 5t_2^4 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 2 & 6t_2 & 12t_2^2 & 20t_2^3 \\
1 & t_1 & t_1^2 & t_1^3 & t_1^4 & t_1^5 & -1 & -t_1 & -t_1^2 & -t_1^3 & -t_1^4 & -t_1^5 \\
0 & 1 & 2t_1 & 3t_1^2 & 4t_1^3 & 5t_1^4 & 0 & -1 & -2t_1 & -3t_1^2 & -4t_1^3 & -5t_1^4 \\
0 & 0 & 2 & 6t_1 & 12t_1^2 & 20t_1^3 & 0 & 0 & -2 & -6t_1 & -12t_1^2 & -20t_1^3
\end{bmatrix} q =
\begin{bmatrix}
x_p(t) \\ \dot{x}_p(t) \\ \ddot{x}_p(t) \\ x_1(t) \\ \dot{x}_p(t) + v_x^n(t) \\ 0 \\ x_2(t) \\ v_x^n(t) \\ 0 \\ 0 \\ 0 \\ 0
\end{bmatrix},    (29)

where v_x^n(t) = \sqrt{v_x^m(t)^2 + v_y^m(t)^2} is the estimated linear velocity of the moving platform in frame {n}, t_1 is the time at which the MAV reaches the waypoint P_1(t), and t_2 is the time at which the MAV reaches the waypoint P_2(t); both are determined by the distance μ(t) and the maximum velocity constraint of the MAV. As shown in equation (29), the trajectory starts at the MAV's current position x_p(t); meanwhile, its velocity and acceleration must equal the current velocity \dot{x}_p(t) and acceleration \ddot{x}_p(t) of the MAV. The middle of the trajectory must pass through the waypoint P_1(t), where the MAV accelerates so that its velocity approaches \dot{x}_p(t) + v_x^n(t). Finally, the trajectory ends at P_2(t) with the same velocity v_x^n(t) as the moving platform. The above equality constraints Aq = b only ensure that the trajectory passes through the desired waypoints; the planned trajectory may still exceed the feasible corridor or the physical actuation constraints of the MAV. These problems could be handled with inequality constraints Aq < b, but the large amount of computation required cannot meet the requirements of real-time planning. Thus, an iterative algorithm for adding equality constraints (IAEC) is proposed and used to verify whether the planned trajectory is feasible.

Therefore, the proposed IAEC algorithm first plans the landing trajectory based on equations (28) and (29). It then looks for points on the trajectory that exceed the feasible corridor or the physical actuation constraints of the MAV, adds new equality constraints at those points, and carries out the trajectory planning again. The expected trajectory is obtained by repeating this process until the iteration condition is reached, and the convex quadratic programming (QP) problem in equation (28) is solved by the OOQP package [25]. The proposed IAEC method is given in Algorithm 1.
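The control flow of Algorithm 1 can be sketched as follows; the callback interface (solve_qp, sample, corridor) is a hypothetical abstraction of the OOQP solve and the planner geometry, not the paper's API:

```python
import math

def iaec(solve_qp, sample, corridor, ds, vmax, gamma=0.6, max_iter=10):
    """Iteratively add equality constraints until the trajectory is feasible.

    solve_qp(constraints) -> trajectory parameters q (replan under constraints)
    sample(q)             -> list of (position, velocity) samples along q
    corridor(p)           -> nearest point of the feasible corridor to p
    """
    constraints = []
    q = solve_qp(constraints)
    for _ in range(max_iter):
        added = False
        for idx, (p, v) in enumerate(sample(q)):
            ref = corridor(p)
            if abs(p - ref) > ds:  # position leaves the feasible corridor
                constraints.append(("pos", idx, ref + math.copysign(gamma * ds, p - ref)))
                added = True
            if abs(v) > vmax:      # velocity exceeds the actuation limit
                constraints.append(("vel", idx, math.copysign(vmax, v)))
                added = True
        if not added:              # trajectory is feasible: stop iterating
            break
        q = solve_qp(constraints)  # replan with the added equality constraints
    return q
```

Note how infeasible samples are pinned with equalities rather than inequalities, which keeps each replan a small equality-constrained QP.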

6. Experiment and Analysis

To verify the proposed R2D-MPL system, an X450 quadrotor is used in this section. The quadrotor uses 8-inch propellers and four 980 kV brushless motors as the drive system. Its weight is 1.35 kg, and its flight time is 15 minutes. The proposed R2D landmark localization algorithm, pose estimation algorithm, and trajectory planning algorithm are tested on the onboard ODROID-XU4 processor. The height H of the three-wheeled omnidirectional car is 15 cm, and its radius L is 42 cm. The R2D landmark parameters are d_o = 70 cm, d_i = 60 cm, and d_a = 54 cm. The communication link between the quadrotor and the moving platform is a 2.4 GHz wireless radio.

6.1. R2D Landmark Localization Experiment. The localization performance of the R2D landmark is verified, and the result is shown in Figure 5. The blue dotted line in Figure 5(a) is the localization result for the 2D landmark; the 2D landmark cannot be recognized when the z-axis of the camera exceeds 3 m. The red dotted line is the localization result based on the NR landmark, which can only be recognized in the range of 1.5 m to 5 m. The black line is the localization result based on the R2D landmark, which obtains a continuous localization result in the range of 0.5 m to 6 m by fusing the results of the 2D landmark and the NR landmark. Figure 5(b) shows the localization error of each landmark; it can be concluded that the proposed algorithm effectively integrates the results of the two sub-landmarks and realizes continuous and complete localization.

In addition, an experiment comparing the localization performance of the 2D, NR, and R2D landmarks of the same size is performed; the results are given in Table 1. In this experiment, the outermost width of each landmark's contour is taken as its size. It can be seen from Table 1 that the R2D landmark has a larger localization range and higher localization accuracy for the same size.

Figure 5: Localization results of the NR landmark, 2D landmark, and R2D landmark. (a) Localization result; (b) localization error.

Table 1: Landmark comparison experiment results (landmark size is 0.7 m).

Landmarks    | Localization range (m) | Localization error (m)
2D landmark  | 1.2–4.5                | 0.35–1.2
NR landmark  | 1.5–6.0                | 0.5–0.6
R2D landmark | 0.5–6.0                | 0.1–0.6



6.2. Moving Platform Pose Estimation Experiment. To verify the pose estimation performance of the proposed system (method 1), the following simulation experiments are designed. The Gazebo and ROS simulation software are used to build the experimental environment. The quadrotor hovers at the coordinate point (0, 0, 3). Meanwhile, the moving platform runs along a circular trajectory centered at (0, 3, 0) with a radius of 3 m at a speed of 1 m/s. In addition, the pose estimation algorithm (method 2) proposed in reference [12] is adopted for comparison; it also utilizes the EKF method but does not consider the encoder measurement bias. The simulation results are given in Figure 6.

Figure 6(a) shows the position estimate results for the moving platform. The red line is the ground truth of the moving platform, the green dots are the visual measurement results, the blue dotted line is the estimated result for method 2, and the black dash-dot line is the result for method 1. It can be seen that the result of method 2 has a large deviation since it does not consider the encoder measurement bias. Figure 6(b) shows the velocity estimate results for the moving platform. The red line is the true velocity of the moving platform. The blue dotted line is the estimated result for method 2, which shows an obvious deviation. The black solid line is the estimated result for method 1, which obtains a satisfactory estimate.

In addition, the position estimate error is shown in Figure 7. As seen from this figure, the position estimate error of the proposed algorithm converges rapidly, while the position estimate error of method 2 fluctuates greatly. The maximum position estimate error of method 1 is smaller than 0.25 m, which is much better than the result for method 2. Thus, the proposed algorithm obtains accurate real-time position estimates of the moving platform.

To simulate wheel-slip of the moving platform, random-walk noise is added to the encoders' measurements. The estimated encoder measurement bias is given in Figure 8. In this experiment, the moving platform is blocked and stopped after 13 s while the wheels are still spinning, causing wheel-slip. In Figure 8, the blue dotted line represents the true encoder data, the black solid line represents the encoder measurements, and the red dotted line represents the bias estimated by the proposed algorithm. It can be seen that the true encoder data are equal to zero since the moving platform is blocked; however, the encoder measurements are not equal to zero due to the wheel-slip. The estimated encoder measurement biases gradually approach the encoders' measurements after the wheel-slip occurs, which proves that the proposed algorithm can effectively estimate the encoder measurement bias.
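The wheel-slip simulation described above (white noise integrated into a random-walk bias) can be sketched as follows; the function name and the step size sigma_rw are assumptions for illustration, not values from the paper:

```python
import random

def biased_encoder(true_speeds, sigma_rw=0.02, seed=0):
    """Corrupt encoder readings with a random-walk bias, mimicking wheel-slip:
    the bias integrates zero-mean Gaussian steps over time."""
    rng = random.Random(seed)
    bias, out = 0.0, []
    for v in true_speeds:
        bias += rng.gauss(0.0, sigma_rw)  # random walk: integrate white noise
        out.append(v + bias)
    return out
```

When the platform is blocked (true speed zero), the measurement equals the accumulated bias, which is exactly what the estimator in Figure 8 has to track.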

6.3. Landing Experiments. First, the following experiment is used to verify the IAEC algorithm. Five desired waypoints, [2, 1, 2], [3, 5, 3], [5, 2, 5], [10, 8, 4], and [12, 2, 5], are given in this experiment. In addition, the velocities and accelerations at the start and end points are zero. The proposed IAEC algorithm is compared with the algorithm using only the P_1(t), P_2(t) waypoint constraints (method 1) and the algorithm using inequality constraints (method 2) [26]. The result is shown in Figure 9.

As seen in Figure 9, method 2 gives the optimal trajectory planning result, method 1 has obvious knots at the waypoints, and the proposed IAEC algorithm achieves a compromise between the two. In terms of computation time on the ODROID-XU4, method 1 required 10.52 ms to complete trajectory planning, method 2 required 380.3 ms, and the IAEC algorithm required 60.35 ms after 6 iterations. Although the IAEC algorithm reduces the waypoint passing accuracy by relaxing the waypoint constraints, it effectively eliminates trajectory knotting and requires far less computation. For the MPL system considered in this paper, only the accuracy of the final landing waypoint P_2(t) must be ensured.

Figure 6: Estimate results of the moving platform. (a) Position estimate result; (b) velocity estimate result.

To verify the performance of the proposed R2D-MPL system, several simulated landing experiments are performed and compared with the system proposed in reference [12] (method 3). To our knowledge, the system proposed in method 3 is currently the most reliable MPL system using only onboard sensing and computation. The main improvements of the proposed R2D-MPL system are the following: (1) the R2D-MPL system accounts for the measurement bias of the encoders caused by wheel-slip and imprecise calibration when estimating the pose of the moving platform, and (2) the R2D-MPL system accounts for the trajectory knotting problem caused by the dynamic constraints of the MAV. Therefore, the following experiments were carried out with the moving platform under different velocities and motion models; meanwhile, different fixed measurement biases were added artificially to encoder 1. Define AVT as the average time used for a successful landing, LR as the average position error of successful landings, and SR as the landing success rate. The results of 20 experiments under different situations are given in Table 2.

Figure 7: Position estimate error.

Figure 8: Encoder measurement bias estimate result (encoders 1–3; wheel-slip occurs after 13 s).

As seen in Table 2, both the R2D-MPL system and method 3 have high landing success rates when the moving platform moves slowly and the measurement bias is small. However, the performance of the R2D-MPL system is better than that of method 3, especially when encoder 1 has a large measurement bias. The R2D-MPL system achieves an acceptable landing success rate and high landing accuracy thanks to its accurate modeling; meanwhile, it consumes less flight time thanks to its efficient and continuous landing trajectory.

Furthermore, actual landing experiments for linear motion and circular motion are conducted; a video of the experiments can be found at https://www.youtube.com/watch?v=ZljQ1Ng-EIQ. The linear motion velocity V_L of the moving platform is 0.25 m/s. The circular motion velocity V_C is 0.5 m/s, and the radius R_C is 1 m. The parameters of the landing trajectory planning algorithm are as follows: h_1 = 1.8 m, r_1 = 1.68 m, h_2 = 1.2 m, r_2 = 0.7 m, b_1 = 0.012, b_2 = 0.003, v_1 = 0.7 m/s, and β = 0.6. The experimental results are shown as follows.

Figure 10(a) shows the result for the moving platform in linear motion. In Figure 10(a, 1), the quadrotor detects the moving platform for the first time. In Figure 10(a, 2), the quadrotor switches from vision tracking to landing trajectory planning. The landing trajectory planning is performed in Figure 10(a, 3). The quadrotor continues trajectory planning until it successfully lands on the moving platform in Figure 10(a, 4). Figure 10(b) shows the 3D landing trajectory of the linear motion experiment, and Figure 11 shows the landing result for the moving platform under circular motion.

Table 2: Landing comparison experiment results (20 experiments).

Motion model                        | Bias (encoder 1) | AVT (R2D-MPL) | AVT (method 3) | LR/SR (R2D-MPL) | LR/SR (method 3)
Line, V_L = 0.25 m/s                | 0.00 m/s         | 12.4 s        | 14.2 s         | 0.18 m/90%      | 0.25 m/85%
Line, V_L = 0.50 m/s                | 0.00 m/s         | 13.2 s        | 15.7 s         | 0.21 m/85%      | 0.27 m/75%
Line, V_L = 0.50 m/s                | 0.15 m/s         | 13.6 s        | 22.4 s         | 0.22 m/80%      | 0.38 m/25%
Line, V_L = 0.50 m/s                | 0.30 m/s         | 14.1 s        | 25.1 s         | 0.25 m/85%      | 0.40 m/5%
Circle, V_C = 0.25 m/s, R_C = 1 m   | 0.00 m/s         | 15.7 s        | 23.2 s         | 0.21 m/80%      | 0.42 m/40%
Circle, V_C = 0.50 m/s, R_C = 1 m   | 0.00 m/s         | 17.4 s        | 27.4 s         | 0.24 m/85%      | 0.41 m/30%
Circle, V_C = 0.25 m/s, R_C = 2 m   | 0.15 m/s         | 18.1 s        | Fail           | 0.28 m/75%      | Fail
Circle, V_C = 0.25 m/s, R_C = 2 m   | 0.30 m/s         | 19.5 s        | Fail           | 0.32 m/80%      | Fail

Figure 9: Trajectory planning experiment.

Figure 10: Linear motion landing experiment. (a) Landing experiment; (b) 3D trajectory.

Figure 11: Circular motion landing experiment. (a) Landing experiment; (b) 3D trajectory.

7. Conclusions

In this paper, a MAV composite landmark guidance system capable of autonomously landing on a moving platform using only onboard sensing and computing is presented. This system relies on state-of-the-art computer vision algorithms, detection and motion estimation of the moving platform, and path planning for fully autonomous landing. No external infrastructure, such as a motion-capture system or an ultra-wideband system, is needed. The only prior knowledge about the moving platform is the real-time measurements of its encoders; meanwhile, the unknown measurement biases of the encoders are considered in the dynamical model of the moving platform. The proposed system is validated by simulation as well as real-world experiments using low-cost and lightweight consumer hardware. Finally, the proposed approach achieves a fully autonomous MAV system capable of landing on a moving target with wheel-slip and bias in the encoder measurements, using only onboard sensing and computing and without relying on any external infrastructure.

Data Availability

The data used to support the findings of this study are available from the corresponding author upon request.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work has been supported by the National Natural Science Foundation (NNSF) of China under Grants 61603040 and 61433003, the Yunnan Applied Basic Research Projects of China under Grant 201701CF00037, and the Yunnan Provincial Science and Technology Department Key Research Program (Engineering) under Grant 2018BA070.

References

[1] B. W. Parkinson, M. L. O'Connor, and K. T. Fitzgibbon, "Aircraft automatic approach and landing using GPS," in Global Positioning System: Theory and Applications, Volume II, Progress in Astronautics and Aeronautics, pp. 397–425, American Institute of Aeronautics and Astronautics, 1996.

[2] Y. H. Shin, S. Lee, and J. Seo, "Autonomous safe landing-area determination for rotorcraft UAVs using multiple IR-UWB radars," Aerospace Science and Technology, vol. 69, pp. 617–624, 2017.

[3] H. Duan, S. Shao, B. Su, and L. Zhang, "New development thoughts on the bio-inspired intelligence based control for unmanned combat aerial vehicle," Science China Technological Sciences, vol. 53, no. 8, pp. 2025–2031, 2010.

[4] H. Duan, Y. Zhang, and S. Liu, "Multiple UAVs/UGVs heterogeneous coordinated technique based on receding horizon control (RHC) and velocity vector control," Science China Technological Sciences, vol. 54, no. 4, pp. 869–876, 2011.

[5] M. Trittler, T. Rothermel, and W. Fichter, "Autopilot for landing small fixed-wing unmanned aerial vehicles with optical sensors," Journal of Guidance, Control, and Dynamics, vol. 39, no. 9, pp. 2011–2021, 2016.

[6] T. Krajník, M. Nitsche, J. Faigl et al., "A practical multirobot localization system," Journal of Intelligent & Robotic Systems, vol. 76, no. 3-4, pp. 539–562, 2014.

[7] S. Yang, S. A. Scherer, and A. Zell, "An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle," Journal of Intelligent & Robotic Systems, vol. 69, no. 1-4, pp. 499–515, 2013.

[8] Y. Bi and H. Duan, "Implementation of autonomous visual tracking and landing for a low-cost quadrotor," Optik - International Journal for Light and Electron Optics, vol. 124, no. 18, pp. 3296–3300, 2013.

[9] A. Borowczyk, D.-T. Nguyen, A. P.-V. Nguyen, D. Q. Nguyen, D. Saussié, and J. Le Ny, "Autonomous landing of a quadcopter on a high-speed ground vehicle," Journal of Guidance, Control, and Dynamics, vol. 40, no. 9, pp. 2378–2385, 2017.

[10] V. Sudevan, A. Shukla, and H. Karki, "Vision based autonomous landing of an unmanned aerial vehicle on a stationary target," in 2017 17th International Conference on Control, Automation and Systems (ICCAS), pp. 362–367, Jeju, South Korea, October 2017.

[11] T. D. Larsen, K. L. Hansen, N. A. Andersen, and O. Ravn, "Design of Kalman filters for mobile robots; evaluation of the kinematic and odometric approach," in Proceedings of the 1999 IEEE International Conference on Control Applications, pp. 1021–1026, Kohala Coast, HI, USA, 1999.

[12] D. Falanga, A. Zanchettin, A. Simovic, J. Delmerico, and D. Scaramuzza, "Vision-based autonomous quadrotor landing on a moving platform," in 2017 IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR), pp. 200–207, Shanghai, China, 2017.

[13] H. H. Helgesen, F. S. Leira, T. A. Johansen, and T. I. Fossen, "Tracking of marine surface objects from unmanned aerial vehicles with a pan/tilt unit using a thermal camera and optical flow," in 2016 International Conference on Unmanned Aircraft Systems (ICUAS), pp. 107–117, Arlington, VA, USA, June 2016.

[14] T. Muskardin, G. Balmer, S. Wlach, K. Kondak, M. Laiacker, and A. Ollero, "Landing of a fixed-wing UAV on a mobile ground vehicle," in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1237–1242, Stockholm, Sweden, 2016.

[15] A. Borowczyk, D.-T. Nguyen, A. P.-V. Nguyen, D. Q. Nguyen, D. Saussié, and J. Le Ny, "Autonomous landing of a multirotor micro air vehicle on a high velocity ground vehicle," IFAC-PapersOnLine, vol. 50, pp. 10488–10494, 2017.

[16] K. E. Wenzel, A. Masselli, and A. Zell, "Automatic take off, tracking and landing of a miniature UAV on a moving carrier vehicle," Journal of Intelligent & Robotic Systems, vol. 61, no. 1-4, pp. 221–238, 2011.

[17] X. Wang, G. Lu, Z. Shi, and Y. Zhong, "Robust LQR controller for landing unmanned helicopters on a slope," in 2016 35th Chinese Control Conference (CCC), pp. 10639–10644, Chengdu, China, 2016.

[18] M. W. Mueller, M. Hehn, and R. D'Andrea, "A computationally efficient motion primitive for quadrocopter trajectory generation," IEEE Transactions on Robotics, vol. 31, no. 6, pp. 1294–1310, 2015.

[19] M. Hehn and R. D'Andrea, "Quadrocopter trajectory generation and control," IFAC Proceedings Volumes, vol. 44, pp. 1485–1491, 2011.

[20] M. P. Vitus, W. Zhang, and C. J. Tomlin, "A hierarchical method for stochastic motion planning in uncertain environments," in 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2263–2268, Vilamoura, Portugal, 2012.

[21] C. Richter, A. Bry, and N. Roy, "Polynomial trajectory planning for aggressive quadrotor flight in dense indoor environments," in Robotics Research, pp. 649–666, Springer, 2016.

[22] J. Gonçalves, J. Lima, and P. Costa, "Real time tracking of an omnidirectional robot - an extended Kalman filter approach," in Proceedings of the Fifth International Conference on Informatics in Control, Automation and Robotics - Volume 4: ICINCO, pp. 5–10, Funchal, Madeira, Portugal, 2008.

[23] V. Lepetit, F. Moreno-Noguer, and P. Fua, "EPnP: an accurate O(n) solution to the PnP problem," International Journal of Computer Vision, vol. 81, no. 2, pp. 155–166, 2009.

[24] S. Lynen, M. W. Achtelik, S. Weiss, and M. Chli, "A robust and modular multi-sensor fusion approach applied to MAV navigation," in 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 3923–3929, Tokyo, Japan, 2013.

[25] E. M. Gertz and S. J. Wright, "Object-oriented software for quadratic programming," ACM Transactions on Mathematical Software, vol. 29, no. 1, pp. 58–81, 2003.

[26] J. Chen, T. Liu, and S. Shen, "Online generation of collision-free trajectories for quadrotor flight in unknown cluttered environments," in 2016 IEEE International Conference on Robotics and Automation (ICRA), pp. 1476–1483, Stockholm, Sweden, 2016.
