

IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART C: APPLICATIONS AND REVIEWS, VOL. 38, NO. 6, NOVEMBER 2008 791

Machine Vision/GPS Integration Using EKF for the UAV Aerial Refueling Problem

Marco Mammarella, Giampiero Campa, Member, IEEE, Marcello R. Napolitano, Mario L. Fravolini, Yu Gu, and Mario G. Perhinschi

Abstract—The purpose of this paper is to propose the application of an extended Kalman filter (EKF) for the sensor fusion task within the problem of aerial refueling for unmanned aerial vehicles (UAVs). Specifically, the EKF is used to combine the position data from a global positioning system (GPS) and a machine vision (MV)-based system to provide a reliable estimate of the tanker–UAV relative position throughout the docking and refueling phases. The performance of the scheme has been evaluated using a virtual environment specifically developed for the study of the UAV aerial refueling problem. In particular, the EKF-based sensor fusion scheme integrates GPS data with MV-based estimates of the tanker–UAV position derived through a combination of feature extraction, feature classification, and pose estimation algorithms. The achieved results indicate that the accuracy of the relative position obtained from GPS or MV estimates can be improved by at least one order of magnitude with the use of the EKF in lieu of other sensor fusion techniques.

Index Terms—Aerial refueling, extended Kalman filter (EKF), machine vision (MV), sensor fusion, unmanned aerial vehicles (UAVs), visual assisted control.

NOMENCLATURE

3DW 3-D window.
AR Aerial refueling.
CG Center of gravity.
EKF Extended Kalman filter.
FE Feature extraction.
GPS Global positioning system.
IMU Inertial measurement unit.
INS Inertial navigation system.
LHM Lu–Hager–Mjolsness.
MNP Mutual nearest point.
MV Machine vision.
PE Pose estimation.
PM Point matching.
PSD Power spectral density.
UAV Unmanned aerial vehicle.
VR Virtual reality.
WGN White Gaussian noise.

Manuscript received March 29, 2007; revised October 17, 2007, January 7, 2008, and March 18, 2008. First published September 26, 2008; current version published October 20, 2008. This paper was recommended by Editor V. Marik.

M. Mammarella, G. Campa, M. R. Napolitano, Y. Gu, and M. G. Perhinschi are with the Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26505 USA (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]).

M. L. Fravolini is with the Department of Electronics and Information Engineering, University of Perugia, Perugia 06010, Italy (e-mail: [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TSMCC.2008.2001693

I. INTRODUCTION

ONE OF THE biggest limitations for the current use of unmanned aerial vehicles (UAVs) is their lack of aerial refueling (AR) capabilities. There are currently two hardware configurations used for the AR of manned aircraft. The first configuration is used by the U.S. Air Force and features a refueling boom maneuvered by a boom operator to connect with the fuel receptacle of the aircraft to be refueled. The second configuration is used by the U.S. Navy and is based on the use of a flexible hose with an aerodynamically stabilized perforated cone; this system is known as the "Probe and Drogue" system. The effort described in this paper relates to the U.S. Air Force refueling boom system, with the general goal of extending the use of this system to the refueling of UAVs. For this purpose, a key problem is the need for accurate measurements of the tanker–UAV relative position and orientation from the "precontact" to the "contact" position and during the refueling phase.

Although sensors based on laser, infrared radar, and global positioning system (GPS) technologies are suitable for autonomous docking [1], [2], there might be limitations associated with their use. For example, GPS coverage might not be reliable in every region of the world. Additionally, the use of UAV GPS signals might not always be possible, since the GPS signals could be distorted by the tanker airframe [3]; this condition is likely to occur when the aircraft are in close proximity for refueling purposes. The use of machine vision (MV) technology has recently been proposed in addition—or as an alternative—to these technologies [4]–[6]. An MV-based system has been investigated for close-proximity operations of aerospace vehicles [7] and for UAV navigation purposes [8].

From a control point of view, the objective is to guide the UAV within a defined 3-D window (3DW, also called the "refueling box") below the tanker, where the boom operator can then manually proceed to the docking of the refueling boom, followed by the refueling phase. An MV-based approach assumes the availability of a digital camera installed on the UAV providing the images of the target (i.e., the refueling tanker), which are then processed to solve a pose estimation problem, leading to the real-time measurement of the relative position and orientation vectors.

This MV-based position measurement is then integrated with the position measurements provided by the GPS systems on both the tanker and UAV aircraft. The overall relative position estimate is then used by the docking control laws to guide the UAV from a "precontact" to a "contact" position.

1094-6977/$25.00 © 2008 IEEE


Fig. 1. MV-based control scheme for UAV AR.

Within previous efforts [5], [13], the authors used a simple linear interpolation method to perform sensor fusion. However, due to its simplicity, the linear interpolation method does not include information about the different noise characteristics of the sensors to be integrated. Furthermore, since the method relies on a simple static input–output equation, it does not use any information—such as bandwidth, for example—about the dynamics of the measured signals.

Sensor fusion techniques based on extended Kalman filter (EKF) algorithms have been extensively investigated over the last decades [29]–[31], [34]–[36], and have become a standard tool for performing sensor fusion due to their flexibility and their ability to account for the dynamics of the system under observation.

The goal of this effort is the development of an EKF-based method to perform sensor fusion between the position measurements supplied by both the MV and the GPS subsystems, for specific use within autonomous AR systems.

The study has been performed using a simulation environment developed by the authors for the analysis of the UAV AR [5], [9], [13], [17], [18]. This environment features detailed mathematical models for the tanker, the UAV, the refueling boom, the wake effects, the atmospheric turbulence, and the sensor noise. The simulation interacts with a virtual reality (VR) environment by moving visual 3-D models of the aircraft in a virtual world and by acquiring a stream of images from the environment. These images are then processed by an MV sensor block, which includes the algorithms for feature extraction (FE), point matching (PM), and pose estimation (PE). The position and orientation data coming from the MV and GPS sensors are then combined by the sensor fusion system and supplied to the UAV docking control laws. These control laws guide the aircraft during the docking maneuver and maintain the UAV within the specified 3-D window during the refueling phase. The general block diagram of the MV scheme is shown in Fig. 1.

The paper is organized as follows. The AR problem is described in the next section. The following sections focus on the description of the UAV aircraft as well as that of the GPS, inertial navigation system (INS), and other sensors used, along with an overview of the main components of the MV system, which are the FE, PM, and PE algorithms, respectively. Next, the EKF equations are introduced and their use within the sensor fusion application is described. Simulation results and robustness analysis are shown and discussed in the final section.

Fig. 2. Reference frames for the AR problem.

II. MV-BASED AR PROBLEM

The tanker–UAV system is shown in Fig. 2 along with the relevant geometric distances and the associated reference frames.

A. Reference Frames and Notation

The study of the AR problem requires the definition of the following reference frames (RFs):

1) ERF, or E: earth-fixed reference frame.
2) PRF, or P: ERF having the x-axis aligned with the planar component of the tanker velocity vector.
3) TRF, or T: body-fixed tanker reference frame located at the tanker center of gravity (CG).
4) URF, or U: body-fixed UAV reference frame located at the UAV CG.
5) CRF, or C: body-fixed UAV camera reference frame.

Geometric points are expressed here using homogeneous (4-D) coordinates and are indicated with a capital letter and a left superscript denoting the associated reference frame. For example, a point P expressed in the F reference frame has coordinates FP = [x, y, z, 1]^T, where the right "T" superscript indicates transposition. Vectors are defined as differences between points; therefore, their fourth coordinate is always "0." In addition, vectors are denoted by two uppercase letters, indicating the two points at the extremes of the vector. For example, EBR = EB − ER is the vector from the point R to the point B expressed in the ERF. The transformation matrices are 4 × 4 matrices relating points and vectors expressed in an initial reference frame to points and vectors expressed in a final reference frame. They are denoted with a capital T, with a right subscript indicating the "initial" reference frame and a left superscript indicating the "final" reference frame. For example, the matrix ETT represents the homogeneous transformation matrix that transforms a vector/point expressed in TRF to a vector/point expressed in ERF.
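As a concrete illustration of these conventions, the sketch below (Python/NumPy, not part of the paper) builds a 4 × 4 homogeneous transformation from Euler angles and a translation, then applies it to a point (fourth coordinate 1) and to a vector (fourth coordinate 0). The axis ordering and all numeric values are illustrative assumptions.

```python
import numpy as np

def homogeneous_transform(yaw, pitch, roll, translation):
    """Illustrative 4x4 homogeneous matrix (rotation + translation).

    The yaw-pitch-roll axis ordering used here is an assumption for
    demonstration; it is not necessarily the paper's exact convention.
    """
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx          # rotation block
    T[:3, 3] = translation            # translation block
    return T

# An ETT-like transform expressing TRF quantities in the ERF
# (placeholder angles and offsets).
E_T_T = homogeneous_transform(0.1, 0.02, 0.0, [100.0, 50.0, -6000.0])
T_point = np.array([5.0, 0.0, 0.0, 1.0])   # a point: fourth coordinate 1
E_point = E_T_T @ T_point                  # rotated and translated
T_vector = np.array([5.0, 0.0, 0.0, 0.0])  # a vector: fourth coordinate 0
E_vector = E_T_T @ T_vector                # rotated, but not translated
```

Note how the fourth coordinate automatically distinguishes points (which are translated) from vectors (which are only rotated), which is exactly why the difference of two homogeneous points has a fourth coordinate of 0.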


B. Geometric Formulation of the AR Problem

The general objective is to guide the UAV such that its fuel receptacle (point R in Fig. 2) is "transferred" to the center of the 3DW, under the tanker (point B in Fig. 2). As previously stated, it is assumed that the boom operator can take control of the refueling operations once the UAV fuel receptacle reaches and remains within this 3DW. It should be emphasized that point B is fixed within the TRF. Since the dimensions of the 3DW as specified by the U.S. Air Force are not available in the technical literature, a cubic 3DW with a 2-m side was arbitrarily selected. Additionally, it is assumed that the tanker and the UAV can share a short-range data communication link during the docking maneuver. Furthermore, the UAV is assumed to be equipped with a digital camera and an onboard computer hosting the MV algorithms acquiring the images of the tanker. Finally, the 2-D image plane of the MV is defined as the "y–z" plane of the CRF.

C. Receptacle-3DW-Center Vector

The reliability of the AR docking maneuver is based on the accuracy of the measurement of the vector TRB, which is the distance between the UAV fuel receptacle and the center of the 3-D refueling window, expressed in TRF:

TRB = TB − TTU · UR    (1)

where TTU = (UTC · CTT)^−1 and CTT = CTU · (ETU)^−1 · ETT. Since the fuel receptacle and the 3DW center are located at fixed and known positions with respect to the CG of the UAV and tanker, respectively, both UR and TB are known and constant. The matrix CTU expresses the position and attitude of the CRF with respect to the URF, and is therefore known and generally constant. The transformation matrix CTT can be evaluated either "directly"—i.e., using the relative position and orientation information provided by the MV system—or "indirectly"—i.e., by using the matrices ETU and ETT, which, in turn, can be evaluated using information from the position and attitude sensors of the UAV and tanker, respectively.
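Equation (1) can be exercised numerically. The sketch below (NumPy, not from the paper) uses pure-translation stand-ins for the transformation matrices and placeholder offsets for UR and TB; it only illustrates the composition TTU = (UTC · CTT)^−1 and the fact that the difference of two homogeneous points is a vector with fourth coordinate 0.

```python
import numpy as np

# Hypothetical fixed offsets (metres): receptacle w.r.t. the UAV CG and
# 3DW centre w.r.t. the tanker CG -- placeholders, not the paper's values.
U_R = np.array([1.5, 0.0, -0.4, 1.0])     # UR, a point in the URF
T_B = np.array([-20.0, 0.0, 8.0, 1.0])    # TB, a point in the TRF

def translation(t):
    """A pure-translation 4x4 homogeneous transform (no rotation)."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

# Pure-translation stand-ins for the transforms appearing in (1).
C_T_U = translation([0.5, 0.0, 0.2])      # camera pose in URF (known, constant)
C_T_T = translation([-40.0, 1.0, 10.0])   # from MV (direct) or GPS/INS (indirect)

U_T_C = np.linalg.inv(C_T_U)              # invert to swap frames
T_T_U = np.linalg.inv(U_T_C @ C_T_T)      # TTU = (UTC * CTT)^-1
T_RB = T_B - T_T_U @ U_R                  # (1): a vector, fourth coordinate 0
```

With these placeholder numbers the receptacle-to-3DW vector evaluates to `[-62.0, 1.0, 18.2, 0.0]`; the docking control laws would drive the first three components to zero.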

III. AIRCRAFT MODELING

The UAV model used in the AR simulation is representative of the ICE-101 UAV [32]. The model has been developed using the conventional modeling approach outlined in [33]. The resulting UAV model is described by a 12-state state-space model

ẋ(t) = f(x, u).    (2)

The state vector is given by

x = [V, α, β, p, q, r, ψ, θ, ϕ, x, y, H]    (3)

where V represents the magnitude of the aircraft velocity; α and β are, respectively, the longitudinal and lateral aerodynamic flow angles; and p, q, and r are the components of the angular velocity in the body reference frame. The yaw, pitch, and roll angles ψ, θ, and ϕ represent the aircraft orientation—following the x–y–z Euler convention—with respect to the ERF, and x, y, and H give the position of the aircraft center of mass with respect to the ERF [33].

Fig. 3. Position of the control surfaces on the UAV.

The input vector u is given by

u = [δThrottle, δAMT,R, δAMT,L, δTEF,R, δTEF,L, δLEF,R, δLEF,L, δPF, δSSD,R, δSSD,L]    (4)

where AMT denotes "all moving tips," TEF denotes "trailing edge flaps," LEF denotes "leading edge flaps," PF denotes "pitch flaps," and SSD denotes "spoiler slot deflectors." The locations of the aforesaid control surfaces on the aircraft are shown in Fig. 3 [32].

A first-order model has been used for the modeling of the actuators of the different control surfaces. Finally, for simulation purposes, it is assumed that the refueling is performed at steady-state rectilinear conditions (Mach = 0.65, H = 6000 m).
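A first-order actuator lag of the kind mentioned above can be sketched as follows; the time constant and step size are assumed values, not the paper's.

```python
# Minimal sketch of a first-order actuator model, x' = (u - x) / tau,
# discretized with forward Euler.  tau (s) and dt (s) are assumptions.
def actuator_step(x, u, tau=0.05, dt=0.01):
    """One integration step of the actuator state x toward the command u."""
    return x + dt * (u - x) / tau

# Hold a constant deflection command; the state converges exponentially.
x = 0.0
for _ in range(1000):
    x = actuator_step(x, 1.0)
```

After many steps the state settles on the commanded value, which is the defining behavior of a first-order lag.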

The atmospheric turbulence acting on the probe system and on both the tanker and the UAV aircraft has been modeled using the Dryden wind turbulence model at light/moderate conditions [13], [37], [38], [41]. The wake effects induced by the tanker on the UAV were modeled as perturbations to the UAV aerodynamic coefficients CD, CL, Cm, CY, Cl, Cn, applied as a function of the tanker–UAV distance and of the UAV angle of attack, using the interpolation approach outlined in [13]. It should be noted that these disturbances are normally handled by the UAV control laws, and do not directly affect any sensor system such as, for example, the one proposed within this paper.

The design of the UAV docking control laws was performed using a linear quadratic regulator (LQR) approach applied to the system resulting from the linearization of the UAV dynamics—augmented to include the integral of the UAV position—about a reference trajectory, as described in [13] and [40]. The weight matrices were selected to achieve a tradeoff between tracking error, disturbance rejection, and high-frequency bandwidth attenuation.
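The LQR design step can be illustrated on a toy system. The sketch below (NumPy only, not the paper's design) iterates the discrete-time Riccati recursion on a double integrator standing in for the linearized, integral-augmented UAV dynamics; the A, B, Q, R matrices are illustrative assumptions.

```python
import numpy as np

# Toy double integrator (position, velocity), discretized at dt = 0.1 s.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])   # state (tracking-error) weight -- assumed
R = np.array([[1.0]])      # control-effort weight -- assumed

# Value-iterate the discrete Riccati recursion to a fixed point,
# then form the state-feedback gain K for u = -K x.
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# The closed-loop matrix A - B K should have all eigenvalues inside
# the unit circle (discrete-time stability).
eigs = np.linalg.eigvals(A - B @ K)
```

Raising the tracking-error weights in Q relative to R makes the regulator more aggressive, which is the tradeoff (tracking error vs. control bandwidth) mentioned in the text.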

The nonlinear model of a KC-135 aircraft [39] with linearized aerodynamics was used to model the tanker. The boom was modeled as a system consisting of two rigid elements. The first element is connected to the tanker through two joints allowing vertical and lateral relative rotations. The second element is connected to the first by a prismatic joint allowing telescopic extension [13].


IV. MV SYSTEM

A. Corner Detection Algorithm

The Harris corner detector [10]—more specifically, the version revised by Noble [11]—was implemented within this effort. A brief review of the method is provided next.

The method is based on the assumption that corner points are associated with the maximum values of the local intensity autocorrelation function. Let GL be the gray-level intensity of the image, with GLX, GLY, GLXY, and GLYX being the corresponding directional derivatives. The matrix of the intensity derivatives can then be defined as follows:

M = [ GLX²   GLXY
      GLYX   GLY² ].    (5)

The derivatives of the intensity of the image are determined by convolution with the derivative of a 2-D Gaussian function. If at a certain point the eigenvalues of the matrix M are large, then a small change in any direction will cause a substantial change in the gray level. This indicates that the point is a corner. Hence, a "cornerness" value C for each pixel of the image is calculated, as proposed by Noble [11]:

C = det(M) / (Tr(M) + ε).    (6)

The small constant ε is used to avoid a singular denominator in the case of a zero-rank autocorrelation matrix M. A local maxima search is performed as a final step of the algorithm, retaining only those points at which the value of C is locally maximal. A point is considered a corner when its value of C is greater than a threshold, set here to 250.
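A minimal sketch of the Noble cornerness measure of (5) and (6) is given below (Python/NumPy, not the paper's implementation). A uniform 3 × 3 window stands in for the Gaussian weighting, and any threshold (250 in the paper) depends on image scaling.

```python
import numpy as np

def noble_cornerness(img, eps=1e-6):
    """Cornerness C = det(M) / (Tr(M) + eps) per pixel -- a sketch.

    M is the windowed matrix of intensity-derivative products; here the
    window is a uniform 3x3 sum instead of the paper's Gaussian.
    """
    gy, gx = np.gradient(img.astype(float))   # directional derivatives

    def window(a):
        """Sum each pixel's 3x3 neighbourhood (crude local integration)."""
        s = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                s += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
        return s

    mxx, myy, mxy = window(gx * gx), window(gy * gy), window(gx * gy)
    det = mxx * myy - mxy * mxy               # det(M)
    tr = mxx + myy                            # Tr(M)
    return det / (tr + eps)

# A white square on black: corners respond, edges and flat areas do not.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
C = noble_cornerness(img)
```

On this test image the response at a square corner such as pixel (5, 5) is large, while it vanishes on a straight edge (only one large eigenvalue, so det(M) ≈ 0) and in flat regions.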

B. PM Problem and Solutions

Once the 2-D coordinates of the detected corners on the image plane have been evaluated, the problem is to correctly associate each detected corner with its physical corner point on the tanker aircraft, whose position in the TRF (3-D coordinates) is assumed to be known. The general approach is to identify a set of detected corners [uj, vj] to be matched to a subset of estimated corner positions [ûj, v̂j], as described in [12].

1) Projection Equations: The subset [ûj, v̂j] is simply the projection into the camera plane of the corner P(j), using the standard "pinhole" projection model [5], [12], [14]. Specifically, according to the "pinhole" model, given a corner "j" with coordinates CP(j) = [Cxj, Cyj, Czj, 1]^T in the CRF, its projection into the image plane can be calculated using the projection equation

[ûj, v̂j]^T = (f / Cxp,j) · [Cyp,j, Czp,j]^T = g(f, CTT(X) · TP(j))    (7)

where f is the camera focal length, TP(j) are the components of the corner P(j) in TRF, which are fixed and known "a priori," and CTT(X) is the transformation matrix between the camera and the tanker reference frames, which is a function of the current position and orientation vector X:

X = [CxT, CyT, CzT, CψT, CθT, CϕT]^T.    (8)

For PM purposes, X is assumed to be known. In fact, the camera–tanker distance—i.e., the first three elements of X—can be provided by the tanker and UAV GPS measurements, if GPS coverage is available. Alternatively, the MV-based estimate of the camera–tanker distance at previous time instants can be used as a good approximation of the current distance (assuming a fast sampling rate for the MV system). The relative orientation between the camera and tanker—i.e., the last three elements of X—can be obtained from the yaw, pitch, and roll angle measurements of both the UAV and tanker, which are provided by conventional gyros. As with the linear distance, if the sampling rate of the MV system is sufficiently high, the last MV estimate of the camera–tanker relative orientation can be used as a good approximation of the current orientation. The position and orientation of the camera in the UAV body frame are assumed to be constant and known.

2) PM Problem: Once the subset [ûj, v̂j] is available, the problem of relating the points extracted from the camera measurements to the actual features on the tanker can be formalized as matching the set of points P = {p1, p2, ..., pm}—where pj = [uj, vj] is the generic "to be matched" point from the camera—to the set of points P̂ = {p̂1, p̂2, ..., p̂n}, where p̂j = [ûj, v̂j] is the generic point obtained by projecting the known nominal corners into the camera plane. In general, a degree of similarity between two data sets is defined in terms of a cost function or a distance function derived from general principles such as geometric proximity, rigidity, and exclusion. The best matching is then evaluated as the result of an optimization process exploring the space of the potential solutions [15], [16]. A definition of the PM problem as an assignment problem, along with an extensive analysis of different matching algorithms, was performed by some of the authors in a previous effort [17], [18]. The algorithm implemented within this effort solves the problem using a heuristic "mutual nearest point" procedure [19] that uses differences in point positions and differences in feature color (hue) and "area" characteristics. The algorithm is implemented as a Matlab S-function written in C, and allows the definition of a maximum range of variation for each dimension; these ranges define a hypercube around each corner of the set P̂. The distance is actually computed only if the point pj—along with its area and hue values—lies in one of the hypercubes defined around the points of the set P̂; otherwise, it is automatically set to infinity. The four dimensions are usually weighted before calculating the Euclidean 4-D distance between P and P̂. The final choice of "matched" points is then based on a simple proximity criterion. The interested reader can find more details in [19].
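The gated, weighted mutual-nearest-point idea can be sketched as follows (Python/NumPy, not the paper's C implementation). The feature layout [u, v, area, hue], the gate sizes, and the weights are invented for illustration; they are not the values used in [19].

```python
import numpy as np

def mutual_nearest_match(detected, predicted, weights, gates):
    """Sketch of gated mutual-nearest-point matching.

    Each row of `detected`/`predicted` is a feature [u, v, area, hue].
    A pair is comparable only if the detected point lies inside the
    hypercube (`gates`) around the predicted one; otherwise its distance
    is infinite.  A pair is kept only when the two points are each
    other's nearest neighbour (the "mutual" criterion).
    """
    D = np.full((len(detected), len(predicted)), np.inf)
    for i, p in enumerate(detected):
        for j, q in enumerate(predicted):
            diff = np.abs(p - q)
            if np.all(diff <= gates):                  # hypercube gating
                D[i, j] = np.linalg.norm(weights * diff)  # weighted 4-D distance
    matches = []
    for i in range(len(detected)):
        j = int(np.argmin(D[i]))
        if np.isfinite(D[i, j]) and int(np.argmin(D[:, j])) == i:
            matches.append((i, j))
    return matches

# Two noisy detections against two predicted projections (made-up data).
predicted = np.array([[0.0, 0.0, 10.0, 0.1], [50.0, 50.0, 12.0, 0.5]])
detected = np.array([[1.0, -1.0, 10.0, 0.1], [49.0, 51.0, 12.0, 0.5]])
gates = np.array([5.0, 5.0, 3.0, 0.2])
weights = np.array([1.0, 1.0, 0.5, 10.0])
matches = mutual_nearest_match(detected, predicted, weights, gates)  # [(0, 0), (1, 1)]
```

The gating keeps the cost of the search low and rejects gross outliers before any distance is even computed, which matches the hypercube description above.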

C. PE Algorithm

Following the solution of the matching problem, the information in the set of points P must be used to derive the rigid transformation relating the CRF to the TRF [21]. Within this study, the Lu–Hager–Mjolsness (LHM) PE algorithm was implemented [23], [24].


The LHM algorithm formulates the PE problem in terms of the minimization of an object–space collinearity error. Specifically, given the "observed, detected, and correctly matched" point "j" on the camera plane at the time instant k, with coordinates [uj, vj], let hj(k) be

hj(k) = [uj, vj, 1]^T.    (9)

Then, an "object–space collinearity error" vector ej—at the time instant k—can be defined as

ej(k) = (I − Vj(k)) · CTT(X(k)) · TP(j)    (10)

where

Vj(k) = [ hj(k)hj(k)^T / (hj(k)^T hj(k))   0
          0                                1 ]    (11)

i.e., the 3 × 3 line-of-sight projection matrix hj hj^T/(hj^T hj) augmented to a 4 × 4 homogeneous matrix.

The PE problem is then formulated as the problem of minimizing the sum of the squared errors

E(X(k)) = Σ_{j=1..m} ‖ej(k)‖².    (12)

The algorithm proceeds by iteratively improving an estimate of the rotation portion of the pose. Next, the algorithm estimates the associated translation, only once a satisfactory estimate of the rotation is found. This is achieved by using the collinearity equations

[ h̄j h̄j^T / (h̄j^T h̄j) − I   0
  0                           0 ] · CTT · TP(j) = 0    (13)

where

h̄j = [ūj, v̄j, 1]^T    (14)

and [ūj, v̄j] is the projection in the camera plane of the point TP(j). It has been shown [23] that the LHM algorithm is globally convergent. Furthermore, empirical results suggest that the algorithm is also very efficient, usually converging within five to ten iterations from any range of initial conditions.
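The object-space error of (10)–(12) can be sketched numerically. The snippet below (NumPy, illustrative only) uses the equivalent 3-D form of Vj rather than the 4-D homogeneous form above, and assumes for simplicity that the third camera coordinate is the optical axis (the paper instead takes the image plane as the y–z plane of the CRF). A full LHM solver would iterate on the rotation to drive this error to a minimum.

```python
import numpy as np

def collinearity_error(h, pts_T, C_T_T):
    """Summed squared object-space collinearity error, as in (10)-(12).

    h:      (m, 3) normalized image points [u, v, 1]
    pts_T:  (m, 4) corner coordinates in TRF (homogeneous)
    C_T_T:  candidate 4x4 camera-from-tanker transform
    """
    E = 0.0
    for hj, Pj in zip(h, pts_T):
        Vj = np.outer(hj, hj) / (hj @ hj)   # line-of-sight projection matrix
        p = (C_T_T @ Pj)[:3]                # corner expressed in the camera frame
        ej = (np.eye(3) - Vj) @ p           # residual orthogonal to the LOS, (10)
        E += ej @ ej                        # accumulate (12)
    return E

# With exact correspondences under the true pose, the error is ~0.
pts_T = np.array([[0.2, -0.1, 2.0, 1.0],
                  [-0.3, 0.4, 3.0, 1.0],
                  [0.1, 0.2, 2.5, 1.0]])
true_pose = np.eye(4)                        # stand-in camera-from-tanker pose
h = np.array([p[:3] / p[2] for p in pts_T])  # perfect normalized image rays
E0 = collinearity_error(h, pts_T, true_pose)

# A perturbed pose produces a strictly larger error.
perturbed = np.eye(4)
perturbed[0, 3] = 0.1
E1 = collinearity_error(h, pts_T, perturbed)
```

The key property exploited by LHM is visible here: the residual of each point is its component orthogonal to the measured line of sight, so the error vanishes exactly when all transformed corners are collinear with their image rays.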

V. SENSOR MODELING

A. Modeling of the MV Sensor

The MV system can be considered as a smart sensor providing the relative distance between a known object and the camera. Therefore, a detailed description of the characteristics of its output signals is critical for the use of this sensor. The measurements provided by the MV are affected by a Gaussian white noise with a nonzero mean, as demonstrated in [18]. A summary of the output characteristics is provided in Table I. Since the noise is white and Gaussian, only the means (µ) and the standard deviations (σ) of the errors in the CRF directions (x, y, z) are required for their complete statistical description.

TABLE I
STATISTICAL PARAMETERS OF THE MV-BASED POSITION SENSOR

Fig. 4. Normal probability and PSD of the pitch rate (q) in real and simulated INS.

B. Modeling of the INS Sensor

Both aircraft are assumed to be equipped with INSs capable of providing the velocities and attitudes of the aircraft by measuring its linear accelerations and angular rates. Within the developed simulation environment, "realistic" INS outputs are simulated by adding a white Gaussian noise (WGN) to the corresponding entries of the aircraft state vector. To validate this type of modeling, the noise within the signals acquired by the INS has been analyzed using normal probability analysis and the power spectral density (PSD). This allowed assessing whether such noise could be modeled as white and Gaussian.

The flight data used to validate the modeling of the INS noise were taken from a recent experimental project involving the flight testing of multiple YF-22 research aircraft models [9]. The analysis was performed with a sampling frequency of 10 Hz for all the aircraft sensors. The results for the pitch rate q are shown in Fig. 4.

The upper portion of Fig. 4 shows the normal probability plot—plotted using the Matlab "normplot" command—of the simulated noise and of the noise provided by the real sensor. The purpose of this plot is to assess whether the data could come from a normal distribution; in such a case, the plot is perfectly linear. For the noise related to the pitch rate channel, the part of the noise close to zero follows a linear trend, implying a normal distribution. Note that, due to some outliers, the tails of the curve corresponding to the real sensor do not follow this trend. However, the fact that the trend is followed within the central part of the plot—which represents the majority of the data—validates that this noise can be modeled as a Gaussian process in a certain neighborhood of zero.

TABLE II
VARIANCE OF THE NOISE OF THE SENSORS

A PSD analysis also confirms the hypothesis of white noise. In fact, the lower portion of Fig. 4 shows that the spectrum of the noise from the real sensor, although not as flat as the spectrum of the simulated noise (shown as a dotted line), is still fairly well distributed throughout the frequency range. Thus, both the normal probability and PSD analyses confirm that the noise on the IMU q channel measurement can be modeled as a white Gaussian random vector. Similar conclusions can be drawn for the p and r inertial measurement unit (IMU) channels.
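The two checks described above can be approximated numerically. The sketch below (NumPy, not the paper's Matlab procedure) uses sample skewness and excess kurtosis as a crude stand-in for a normal probability plot, and compares the power in the low- and high-frequency halves of a periodogram as a crude whiteness check; the noise level and record length are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096
noise = rng.normal(0.0, 0.02, n)    # simulated gyro noise (std assumed)

# Gaussianity proxy: standardized sample skewness ~ 0 and
# excess kurtosis ~ 0 for normally distributed data.
z = (noise - noise.mean()) / noise.std()
skew = np.mean(z**3)
ex_kurt = np.mean(z**4) - 3.0

# Whiteness proxy: a white-noise periodogram is flat on average, so the
# low- and high-frequency halves carry roughly equal total power.
psd = np.abs(np.fft.rfft(noise))**2 / n
half = len(psd) // 2
ratio = psd[1:half].sum() / psd[half:].sum()
```

Colored noise (e.g., a random walk) would drive `ratio` far above 1, while heavy-tailed outliers of the kind visible in the real sensor's normal probability plot would inflate the excess kurtosis.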

C. Modeling of the Pressure, Nose Probe, Gyro, and Heading Sensors

An air-data nose probe—for measuring flow angles and pressure data—was installed on the UAV. This sensor provides the measurements of the velocity (V), the angle of attack (α), and the sideslip angle (β), while the vertical gyro provides measurements for the aircraft pitch and roll angles (θ and ϕ). Within this analysis, the heading was approximated with the angle of the planar velocity in the ERF, that is, ψ = atan2(Vy, Vx), where atan2 is the four-quadrant arctangent function and the velocities are supplied by the GPS unit and are based on carrier-phase wave information. However, the heading can also be calculated by gyros, magnetic sensors, or by a filtered combination of all the aforesaid methods.
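The heading approximation above reduces to a one-line computation:

```python
import math

# Heading approximated by the direction of the planar ERF velocity,
# psi = atan2(Vy, Vx); the velocities would come from the GPS unit.
def heading(vx, vy):
    """Four-quadrant heading angle in radians, in (-pi, pi]."""
    return math.atan2(vy, vx)
```

The four-quadrant form is what makes this usable for arbitrary flight directions: a plain arctangent of Vy/Vx cannot distinguish, for example, northeast from southwest.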

Following an analysis similar to the one performed for the INS, the noise on the measurements from the aforesaid sensors was modeled as white Gaussian noise (WGN). Table II summarizes the results in terms of noise variances for the different aircraft dynamic variables.

D. Modeling of the GPS Position Sensor

The GPS sensor provides its position (x, y, z) with respect to the ERF. A composition of four different band-limited white noises was used to simulate the GPS noise. Specifically, the four noises have different powers and sample times. Three of these noise signals are added and filtered with a low-pass filter, and the resulting signal is added to the fourth noise and sampled with a zero-order hold. In fact, GPS measurements—in case more than four satellite signals are received—normally exhibit a "short-term" noise with amplitude within 2 to 3 m, as well as "long-term" trend deviations and "jumps" due to satellite motion and occlusions. Therefore, while the "short-term" noise has been modeled as a white Gaussian noise, the trend deviations and jumps have been modeled using the other three lower-frequency, filtered noises.
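The four-noise composition described above can be sketched as follows. All powers, sample times, the filter cutoff, and the 1-Hz GPS output rate are assumptions chosen for illustration; the paper does not state its exact values.

```python
import numpy as np
from scipy import signal

def simulate_gps_noise(duration=60.0, dt=0.01, seed=0):
    """Sketch of the paper's GPS error model: three slow band-limited
    noises are summed and low-pass filtered to mimic long-term trend
    deviations and jumps; a fourth fast noise adds the short-term
    scatter; the sum is sampled with a zero-order hold."""
    rng = np.random.default_rng(seed)
    n = int(duration / dt)
    t = np.arange(n) * dt

    def bl_noise(std, ts):
        # Band-limited white noise: redrawn every ts seconds, held at dt.
        hold = max(1, int(ts / dt))
        draws = rng.normal(0.0, std, n // hold + 1)
        return np.repeat(draws, hold)[:n]

    slow = bl_noise(2.0, 5.0) + bl_noise(1.5, 10.0) + bl_noise(1.0, 20.0)
    b, a = signal.butter(1, 0.02, fs=1.0 / dt)   # low-pass for the trend
    trend = signal.lfilter(b, a, slow)
    fast = bl_noise(0.8, dt)                     # short-term WGN scatter
    gps_hold = max(1, int(1.0 / dt))             # assumed 1-Hz GPS output
    total = trend + fast
    return t, np.repeat(total[::gps_hold], gps_hold)[:n]

t, err = simulate_gps_noise()
```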

Fig. 5. Comparison between real and simulated GPS signals.

Fig. 5 shows both the signal from a real GPS receiver (Novatel OEM4) and the simulated GPS signal.

VI. SENSORS FUSION USING THE EKF

A. EKF Background Theory

The main purpose of the Kalman filter algorithm [27] is to provide optimal estimates of the system dynamics through available measurements, assuming "a priori" known statistical models for the system and measurement noises.

The discrete-time Kalman filter [27] involves two basic steps. The first step consists in using the system dynamic model to predict the evolution of the state between consecutive measurement instances. The second step consists in the use of the measurements, along with the system dynamic model, for evaluating the optimal (Newton-like) correction of the estimated values at the time of the measurements. The filter characterizes the stochastic disturbance input through its spectral density matrix and the measurement error through its covariance.

In many applications, the measurement model, the system dynamics, or both are potentially nonlinear. In these cases, the KF may not be an optimal estimator. Nonlinear estimation methods are discussed in [25] and [26]. The EKF retains the KF calculations of the covariance and gain matrices, and it updates the state estimate using a linear function of the filter residual [27]. However, it uses the original nonlinear equations of the system dynamics for state propagation and output vector calculation. The EKF equations are briefly reviewed next.

Given a generic discrete dynamic system

$$x_{k+1} = f(x_k, u_k, w_k)$$
$$y_k = h(x_k, v_k) \quad (15)$$


where u_k, x_k, and y_k are, respectively, the input, state, and output vectors of the dynamic system, while w_k and v_k are white Gaussian noises with the following statistical properties:

$$E[w_k] = 0, \quad E[w_k w_k^T] = W_k$$
$$E[v_k] = 0, \quad E[v_k v_k^T] = V_k$$
$$E[w_j v_k^T] = 0 \quad (16)$$

Furthermore, the initial state x_0 is a random variable with the following mean value and covariance matrix:

$$E[x_0] = \bar{x}_0$$
$$E[(x_0 - \bar{x}_0)(x_0 - \bar{x}_0)^T] = P_0 \quad (17)$$

Assuming that f and h are locally differentiable, the following Jacobian matrices are calculated:

$$F_k = \frac{\partial f(\cdot)}{\partial x_k}, \quad H_k = \frac{\partial h(\cdot)}{\partial x_k} \quad (18)$$

Under these assumptions, an EKF can be implemented using the following equations.

State Estimate Propagation:
$$\hat{x}^-_{k+1} = f(\hat{x}_k, u_k, 0) \quad (19)$$

Covariance Estimate Propagation:
$$P^-_{k+1} = F_k P_k F_k^T + W_k \quad (20)$$

Filter Gain Computation:
$$K_k = P^-_k H_k^T \left( H_k P^-_k H_k^T + V_k \right)^{-1} \quad (21)$$

State Estimate Update:
$$\hat{x}_k = \hat{x}^-_k + K_k \left( y_k - h(\hat{x}^-_k, 0) \right) \quad (22)$$

Covariance Estimate Update:
$$P_k = (I - K_k H_k) P^-_k \quad (23)$$

Note that the EKF does not preserve the optimality properties of the linear Kalman filter (LKF). However, its simplicity and robustness are very appealing.
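The predict/update cycle of (19)–(23) can be written compactly in NumPy. This is a generic sketch—not the paper's 12-state aircraft implementation—and the function names and signatures are illustrative.

```python
import numpy as np

def ekf_step(x_hat, P, u, y, f, h, F, H, W, V):
    """One EKF predict/update cycle following (19)-(23).

    f(x, u) and h(x) are the nonlinear state and output maps; F(x, u)
    and H(x) return the Jacobians of (18); W and V are the process and
    measurement noise covariance matrices.
    """
    # (19)-(20): propagate the state estimate and its covariance.
    Fk = F(x_hat, u)
    x_pred = f(x_hat, u)
    P_pred = Fk @ P @ Fk.T + W
    # (21): compute the filter gain.
    Hk = H(x_pred)
    K = P_pred @ Hk.T @ np.linalg.inv(Hk @ P_pred @ Hk.T + V)
    # (22)-(23): correct the estimate with the measurement residual.
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x_hat)) - K @ Hk) @ P_pred
    return x_new, P_new
```

On a linear system, f and h collapse to matrix products and the routine reduces to the ordinary discrete-time Kalman filter, which is a convenient way to sanity-check the implementation.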

B. Sensors Fusion Using the EKF

The use of the EKF for sensor fusion is well documented in robotics applications for the fusion of inertial, GPS, and odometer sensors, as described in [28]–[31]. Within this effort, emphasis was placed on the fusion between data from an MV-based sensor system and data from the INS/GPS system. In general, sensor fusion applications require the output function y_k = h(x_k, v_k) of the dynamic system to be adapted to the number of sensors that the filter has to combine.

In this case, the output function contains the following variables:

$$y_k = [V \; \alpha \; \beta \; p \; q \; r \; \psi \; \theta \; \varphi \; x_{GPS} \; y_{GPS} \; z_{GPS} \; x_{MV} \; y_{MV} \; z_{MV}] \quad (24)$$

Fig. 6. Scheme of EKF for sensor fusion.

where the subscript GPS indicates measurements from the GPS system, while the subscript MV indicates measurements from the MV system.

The EKF formulation assumes that the measurements are affected by white Gaussian noise (16). Therefore, the noises affecting the variables x_GPS, y_GPS, and z_GPS were considered to be white and Gaussian, with variances of 0.014, 0.013, and 0.022 m², respectively. These values were calculated using the MATLAB "var" command on a large set of data from the GPS sensor, simulated as described in Section IV-D. The MATLAB "mean" command, applied to the same set of data, provided results under 2% of the range, which validated the zero-mean assumption. Similarly, for the MV-based position sensor, Table I in Section IV-A indicates that the mean values of the MV position measurements can be approximated to be zero.

The EKF scheme requires three specific inputs. The first input is the UAV command vector u_k, containing the throttle level and the deflections of the control surfaces. The second input is the complete system output vector defined in (24), which includes data from the INS/GPS and the MV sensors. The third and last input is the number of corners used by the PE algorithm, which is critical since the MV system provides reliable estimates of the relative position vector only if a sufficient number of corners (greater than six) are properly detected by the "mutual nearest point" algorithm. Specifically, the entries of V_k relative to the MV position measurements are multiplied by a factor of 1000 when the number of detected corners is lower than the required amount. Essentially, this causes the exclusion of the MV information from the sensor fusion process.
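The variance-inflation mechanism can be sketched as follows. The function name, the `mv_idx` index list, and the default threshold are illustrative; the factor of 1000 and the more-than-six-corners requirement follow the text.

```python
import numpy as np

def gate_mv_variances(V, mv_idx, n_corners, min_corners=7, factor=1000.0):
    """Inflate the measurement-noise entries of V associated with the MV
    position channels when too few corners are detected, so the filter
    effectively discards the MV measurements."""
    V = V.copy()                     # leave the caller's matrix untouched
    if n_corners < min_corners:      # fewer than the required > 6 corners
        for i in mv_idx:
            V[i, i] *= factor
    return V
```

Because the Kalman gain weights each measurement by the inverse of its noise covariance, inflating the MV variances drives the corresponding gain entries toward zero without changing the filter structure.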

The output of the EKF is the estimate x̂_k of the system's state vector x_k, which contains the 12 aircraft state variables. Specifically, the last three variables of the EKF output are the estimates of the aircraft position in the ERF. The values of these variables are the result of the sensor fusion between the data supplied by the two different position sensor systems.

According to the selected state and output variables, the matrix H_k in (18) becomes a matrix of dimension 15 × 12, containing the derivatives of the outputs with respect to the states. Similarly, the matrix V_k is a matrix of dimension 15 × 15, containing all the noise covariances, including the ones from the GPS and the MV systems.

Fig. 6 shows the general Simulink scheme of the EKF, including its different component blocks, such as the "Linearization" block—which performs the calculations in (18)—the "gain computation" block—which calculates (20), (21), and (23)—and the "output update" block, which calculates (22). The tuning of the EKF is performed as follows. First, the initial state of the filter is set equal to the state of the UAV system at the time instant when the EKF is switched on (Table III shows typical values of such an initial state). The matrix P_0 is then set to zero. Next, the matrix W_k is kept constant and equal to the identity matrix of dimension 12 × 12. As previously mentioned, the matrix V_k varies as a function of the number of corners detected by the MV system. Specifically, if the number of corners is greater than six, the matrix V_k is a diagonal matrix containing the nine values provided in Table II, the three variances of the GPS measurements (0.014, 0.013, and 0.022 m² for the x, y, and z directions, respectively), and the three variances related to the distances measured by the MV system, provided in Table I. Whenever six or fewer corners are detected, the three variances related to the MV system are multiplied by 1000, so that these measurements are practically discarded.

TABLE III: TYPICAL INITIAL STATE VECTOR

VII. PERFORMANCE ANALYSIS

The analysis of the closed-loop simulations was performed to validate the performance of the EKF. In this study, the UAV acquires data from all its onboard sensors, which are modeled as described in Section III, and receives data from the tanker, which are prefiltered for noise reduction purposes. The EKF output—i.e., the result of the sensor fusion between the MV and GPS data—is used in the docking control laws for guiding the UAV from the "precontact" position to the "contact" position and for holding the position in the defined 3DW once the contact position has been reached.

Without any loss of generality, the "precontact" position was assumed to be located 50 m behind and 10 m below the tanker aircraft, while the "contact" position, i.e., the 3DW position, was assumed to be right below the tanker, within the reach of the telescopic portion of the refueling boom.

Note that, due to the finite camera resolution and the fact that objects appear smaller at larger distances, an MV-based system cannot provide reliable results when the tanker–UAV distance is too large [5], [12], [14]. Thus, the MV-based results are inaccurate until approximately 30 s into the simulation.

In Figs. 7–9, the logarithm of the EKF error—defined as the absolute value of the difference between the actual position and the EKF-based position—is compared with the MV and GPS noises for the x, y, and z axes, respectively. It can be noted that the error of the EKF output is approximately two orders of magnitude smaller than the noises of both the GPS and MV systems.

The accuracy of the EKF-based estimations is particularly evident in Fig. 10, which shows the components of the EKF error along the three axes.

Fig. 7. Comparison of errors along the x-axis between the EKF, GPS, and MV systems.

Fig. 8. Comparison of errors along the y-axis between the EKF, GPS, and MV systems.

In the following figures, the performance of the EKF-based sensor fusion scheme is compared with the performance of another "baseline" sensor fusion scheme previously developed by the authors [13], [17], [18]. Specifically, the old "baseline" sensor fusion scheme consisted in using a linear interpolation between the distances supplied by the GPS and the MV systems when the relative aircraft distance was between two predefined values d2 and d1. Particularly, within the baseline scheme, the GPS measurements were used when the distance was greater than d1, while the MV distance estimation was used when the distance was lower than d2.


Fig. 9. Comparison of errors along the z-axis between the EKF, GPS, and MV systems.

Fig. 10. Errors in data using the EKF.

The rationale behind the "baseline" sensor fusion scheme is that, for distances larger than d1, the MV system could not yield accurate estimations; therefore, only GPS data were used. On the other hand, for distances lower than d2, the tanker itself could act as a metal shield, leading to potentially inaccurate GPS measurements; thus, only data from the MV system could be used. Without any loss of generality, the values of 40 and 23 m were used for the distances d1 and d2, respectively, in the previous studies.
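The baseline interpolation scheme can be sketched per axis as below. The function name is illustrative; the thresholds follow the stated values d1 = 40 m and d2 = 23 m, and the interpolation weight is one plausible linear blend consistent with the description.

```python
def baseline_fusion(d_gps, d_mv, distance, d1=40.0, d2=23.0):
    """Baseline scheme: pure GPS beyond d1, pure MV below d2,
    and a linear interpolation of the two estimates in between."""
    if distance >= d1:
        return d_gps
    if distance <= d2:
        return d_mv
    w = (distance - d2) / (d1 - d2)   # w -> 1 at d1 (GPS), w -> 0 at d2 (MV)
    return w * d_gps + (1.0 - w) * d_mv
```

Unlike the EKF, this blend ignores the dynamics and the relative noise levels of the two sensors, which is why it yields coarser estimates during the docking phase.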

Figs. 11 and 12 show the UAV tracking error during the approach and docking phases. It can be seen that the EKF-based sensor fusion scheme provides a substantial improvement in terms of tracking performance during the UAV docking phase.

Fig. 11. Comparison of the tracking error between the EKF sensor fusion method and the sensor linear interpolation method.

Fig. 12. Errors in the components of the tracking error.

VIII. ROBUSTNESS ANALYSIS

A robustness study was performed to assess the robustness of the filter to perturbations, such as biases in the θ and ϕ signals from the gyro, variations of the initial UAV position x0, y0, z0, and variations in the entries of the V-matrix associated with the MV and GPS sensor systems. Specifically, for each of the aforesaid conditions, a simulation was performed in which the parameter was changed, while all the other parameters retained their values.

The average errors for the EKF position output—for each coordinate as well as in magnitude—are reported in Table IV for each simulation. Note that the first row represents the baseline case, in which no parameter was changed, while the last two rows represent the cases in which the three tuning variances for the MV and GPS sensors were doubled.

Overall, the table indicates that the EKF possesses desirable robustness characteristics with respect to both limited biases and variations in some of the tuning parameters.


TABLE IV: ROBUSTNESS RESULTS

IX. CONCLUSION

This paper describes the use of an "ad hoc" sensor fusion system within the problem of MV-based autonomous AR for UAVs. Specifically, the sensor fusion system is based on the use of the EKF, and it has been designed to provide reliable position information through the integration of the measurements coming from the GPS system and from the MV system, as well as from other aircraft sensors. The sensor fusion system has been described in detail. A closed-loop simulation study, using a simulation environment specifically designed for the analysis of the MV-based AR problem, was performed. The results show that the proposed sensor fusion system allows an improvement of more than one order of magnitude in the precision of the position estimates when compared to a previously used interpolation-based sensor fusion system. Furthermore, the results from simulation studies performed by changing some key tuning parameters suggest that the filter also enjoys desirable robustness characteristics.

REFERENCES

[1] R. Korbly and L. Sensong, "Relative altitudes for automatic docking," AIAA J. Guid. Control Dyn., vol. 6, no. 3, pp. 213–215, 1983.

[2] A. Dogan, E. Kim, and W. Blake, "Control and simulation of relative motion for aerial refueling in racetrack maneuvers," J. Guid. Control Dyn., vol. 30, no. 5, pp. 1551–1557, Sep./Oct. 2007.

[3] S. M. Khanafseh and B. Pervan, "Autonomous airborne refueling of unmanned air vehicles using the global positioning system," J. Aircraft, vol. 44, no. 5, pp. 1670–1682, Sep./Oct. 2007.

[4] J. Valasek, D. Hughes, J. Kimmett, K. Gunnam, and J. L. Junkins, "Vision based sensor and navigation system for autonomous aerial refueling," J. Guid. Control Dyn., vol. 28, no. 5, pp. 979–989, Sep./Oct. 2005.

[5] M. L. Fravolini, A. Ficola, G. Campa, M. R. Napolitano, and B. Seanor, "Modeling and control issues for autonomous aerial refueling for UAVs using a probe-drogue refueling system," J. Aerosp. Sci. Technol., vol. 8, no. 7, pp. 611–618, 2004.

[6] J. Doebbler, T. Spaeth, J. Valasek, M. J. Monda, and H. Schaub, "Boom and receptacle autonomous air refueling using a visual pressure snake optical sensor," J. Guid. Control Dyn., vol. 30, no. 6, pp. 1753–1769, Nov./Dec. 2007.

[7] N. K. Philip and M. R. Ananthasayanam, "Relative position and altitude estimation and control schemes for the final phase of an autonomous docking mission of spacecraft," Acta Astron., vol. 52, pp. 511–522, 2003.

[8] B. Sinopoli, M. Micheli, G. Donato, and T. J. Koo, "Vision based navigation for an unmanned aerial vehicle," in Proc. 2001 IEEE Int. Conf. Robot. Autom., Seoul, Korea, May, vol. 2, pp. 1757–1764.

[9] G. Campa, Y. Gu, B. Seanor, M. R. Napolitano, L. Pollini, and M. L. Fravolini, "Design and flight testing of nonlinear formation control laws," Control Eng. Pract., vol. 15, no. 9, pp. 1077–1092, Sep. 2007.

[10] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. 4th Alvey Vis. Conf., Manchester, U.K., 1988, pp. 147–151.

[11] A. Noble, "Finding corners," Image Vis. Comput. J., vol. 6, no. 2, pp. 121–128, 1988.

[12] C. S. Kenney, B. S. Manjunath, M. Zuliani, G. A. Hewer, and A. Van Nevel, "A condition number for point matching with application to registration and postregistration error estimation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 25, no. 11, pp. 1437–1454, Nov. 2003.

[13] G. Campa, M. R. Napolitano, and M. L. Fravolini, "A simulation environment for machine vision based aerial refueling for UAV," IEEE Trans. Aerosp. Electron. Syst., vol. 44, no. 4, Oct. 2008.

[14] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Trans. Robot. Autom., vol. 12, no. 5, pp. 651–670, Oct. 1996.

[15] F. Pla and J. A. Marchant, "Matching feature points in image sequences through a region-based method," Comput. Vis. Image Understanding, vol. 66, no. 3, pp. 271–285, Jun. 1997.

[16] A. Branca, E. Stella, and A. Distante, "Feature matching by searching maximum clique on high order association graph," in Proc. 10th Int. Conf. Image Anal. Process. (ICIAP 1999), Venice, Italy, pp. 642–658.

[17] M. L. Fravolini, G. Campa, M. R. Napolitano, and A. Ficola, "Evaluation of machine vision algorithms for autonomous aerial refueling for unmanned aerial vehicles," AIAA J. Aerosp. Comput. Inf. Commun., vol. 4, no. 9, Sep. 2007.

[18] G. Campa, M. Mammarella, M. R. Napolitano, M. L. Fravolini, L. Pollini, and B. Stolarik, "A comparison of pose estimation algorithms for machine vision based aerial refueling for UAV," in Proc. Mediterranean Control Conf. 2006, Ancona, Italy, Jun. 28–30, pp. 1–6.

[19] M. Mammarella, G. Campa, M. R. Napolitano, M. L. Fravolini, R. Dell'Aquila, V. Brunori, and M. G. Perhinschi. (2008, May 19). Comparison of point matching algorithms for the UAV aerial refueling problem [Online].

[20] The Mathworks. (2006, Mar.). Image Processing Toolbox version 5.2 [Online]. Available: http://www.mathworks.com/access/helpdesk/help/toolbox/images/

[21] R. M. Haralick, H. Joo, C.-N. Lee, X. Zhuang, V. G. Baidya, and M. B. Kim, "Pose estimation from corresponding point data," IEEE Trans. Syst., Man, Cybern., vol. 19, no. 6, pp. 1426–1446, Nov./Dec. 1989.

[22] W. Wilson, "Visual servo control of robots using Kalman filter estimates of robot pose relative to work-pieces," in Visual Servoing, K. Hashimoto, Ed. Singapore: World Scientific, 1994, pp. 71–104.

[23] C. P. Lu, G. D. Hager, and E. Mjolsness, "Fast and globally convergent pose estimation from video images," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 6, pp. 610–622, Jun. 2000.

[24] A. Ansar and K. Daniilidis, "Linear pose estimation from points or lines," in Proc. Eur. Conf. Comput. Vis. (ECCV), vol. 4, A. Heyden et al., Eds. Copenhagen, Denmark, May 2002, pp. 282–296.

[25] G. J. S. Ross, Nonlinear Estimation. New York: Springer-Verlag, 1990.

[26] D. D. Denison, Nonlinear Estimation and Classification. New York: Springer-Verlag, 2003.

[27] R. F. Stengel, Optimal Control and Estimation. New York: Dover, 1994.

[28] E. J. Krakiwsky, C. B. Harris, and R. V. C. Wong, "A Kalman filter for integrating dead reckoning, map matching and GPS positioning," in Proc. IEEE PLANS 1988 Position Location Navig. Symp. Rec. Navig. 21st Century, Orlando, FL, Nov. 29–Dec. 2, pp. 39–46.


[29] S. Cooper and H. Durrant-Whyte, "A Kalman filter model for GPS navigation of land vehicles," in Proc. IEEE/RSJ/GI Int. Conf. Intell. Robots Syst., Munich, Germany, Sep. 12–16, 1994, vol. 1, pp. 157–163.

[30] E. Abbott and D. Powell, "Land-vehicle navigation using GPS," Proc. IEEE, vol. 87, no. 1, pp. 145–162, Jan. 1999.

[31] S. Panzieri, F. Pascucci, and G. Ulivi, "An outdoor navigation system using GPS and inertial platform," IEEE/ASME Trans. Mechatron., vol. 7, no. 2, pp. 134–142, Jun. 2002.

[32] G. A. Addington and J. H. Myatt, "Control-surface deflection effects on the innovative control effectors (ICE 101) design," Air Force Recruiting Serv., Randolph Air Force Base, TX, Air Force Rep. AFRL-VA-WP-TR-2000-3027, Jun. 2000.

[33] B. L. Stevens and F. L. Lewis, Aircraft Control and Simulation. New York: Wiley, 1987.

[34] A. R. Albayrak, M. Zupanski, and D. Zupanski, "Maximum likelihood ensemble filter applied to multisensor systems," Proc. SPIE, vol. 6571, p. 65710N, Apr. 2007.

[35] D. Loebis, W. Naeem, R. Sutton, J. Chudley, and S. Tetlow, "Soft computing techniques in the design of a navigation, guidance and control system for an autonomous underwater vehicle," Int. J. Adapt. Control Signal Process., vol. 21, pp. 205–236, 2007.

[36] V. Subramanian and T. F. Burks, "Sensor fusion using fuzzy Kalman filter for autonomous vehicle guidance," presented at the ASAE Annu. Meeting, Gainesville, FL, 2006, Paper 063031.

[37] W. Blake and D. R. Gingras, "Comparison of predicted and measured formation flight interference effect," presented at the 2001 AIAA Atmos. Flight Mech. Conf., Montreal, QC, Canada, Aug. 2001, AIAA Paper 2001-4136.

[38] D. R. Gingras, J. L. Player, and W. Blake, "Static and dynamic wind tunnel testing of air vehicles in close proximity," presented at the 2001 AIAA Atmos. Flight Mech. Conf., Montreal, QC, Canada, Aug. 2001, Paper 2001-4137.

[39] G. Campa. (2003). Airlib, The Aircraft Library [Online]. Available: http://www.mathworks.com/matlabcentral/

[40] M. D. Tandale, R. Bowers, and J. Valasek, "Robust trajectory tracking controller for vision based autonomous aerial refueling of unmanned aircraft," J. Guid. Control Dyn., vol. 29, no. 4, pp. 846–857, Jul./Aug. 2006.

[41] A. Dogan, S. Venkataramanan, and W. Blake, "Modeling of aerodynamic coupling between aircraft in close proximity," J. Aircraft, vol. 42, no. 4, pp. 941–955, Jul./Aug. 2005.

Marco Mammarella was born in Milan, Italy. He received the M.S. degree in automation and robotic engineering from the University of Pisa, Pisa, Italy, in 2005. He is currently working toward the Ph.D. degree in the Department of Mechanical and Aerospace Engineering, West Virginia University (WVU), Morgantown.

In 2004–2005, he was a visiting student at WVU. His current research interests include machine vision systems for unmanned aerial vehicles, nonlinear and hybrid control systems, neural networks, and real-time embedded computing.

Giampiero Campa (M'99) was born in Taranto, Italy. He received the M.S. degree in electronic engineering and the Ph.D. degree in robotics and automation from the University of Pisa, Pisa, Italy, in 1996 and 2000, respectively.

He was a visiting student at the Industrial Control Centre, Strathclyde University, and the Department of Mechanical and Aerospace Engineering, Georgia Institute of Technology. In 2000, he joined the Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, where he is currently a Research Assistant Professor. His current research interests include systems modeling, identification and simulation, nonlinear and hybrid control systems, and real-time embedded computing.

Dr. Campa is a member of the IEEE Control Systems Society and the Mathematical Association of America.

Marcello R. Napolitano was born in Pomigliano D'Arco, Italy. He received the M.S. degree from the University of Naples, Naples, Italy, in 1985, and the Ph.D. degree from Oklahoma University, Norman, in 1989, both in aeronautical engineering.

In 1990, he joined the Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, where he is currently a Full Professor and the Director of the Center of Advanced Research in Autonomous Technologies. His current research interests include flight control systems, unmanned aerial vehicles, fault tolerance, and neural networks.

Mario L. Fravolini was born in Perugia, Italy. He received the Ph.D. degree in electronic engineering from the University of Perugia, Perugia, Italy, in 2000.

In 1999, he was with the Control Group, School of Aerospace Engineering, Georgia Institute of Technology, Atlanta. He has been a Visiting Research Assistant Professor in the Department of Mechanical and Aerospace Engineering, West Virginia University, for several years. He is currently a Research Assistant in the Department of Electronics and Information Engineering, University of Perugia, where he teaches courses in the area of feedback control systems. His current research interests include fault diagnosis, intelligent and adaptive control, predictive control, optical feedback, and active control of structures.

Yu Gu was born in Anhui, China. He received the Ph.D. degree in aerospace engineering from West Virginia University, Morgantown, in 2004.

After graduation, he joined the Department of Mechanical and Aerospace Engineering, West Virginia University, where he is currently a Research Assistant Professor. His research interests include adaptive control, fault tolerance, unmanned aerial vehicles, and multiple agent coordination.

Mario G. Perhinschi received the M.S. degree from the Georgia Institute of Technology, Atlanta, in 1994, and the Ph.D. degree from the Polytechnic University of Bucharest, Bucharest, Romania, in 1999, both in aerospace engineering.

He is currently an Assistant Professor in the Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, where he teaches courses on flight modeling and simulation, artificial intelligence, feedback control, and mechatronics. His current research interests include modeling and simulation of aerospace systems, fault-tolerant control systems, parameter identification, artificial intelligence techniques (genetic algorithms, fuzzy control, neural networks), unmanned autonomous air vehicles, and handling qualities of fixed and rotary wing aircraft.