
Comparison of Nonlinear Estimation for Ballistic Missile Tracking

Brian Saulson and K.C. Chang

Dept. of Systems Engineering & Operations Research School of Information Technology and Engineering

George Mason University Fairfax, VA 22030, USA

ABSTRACT

Approaches to nonlinear state estimation have recently been advanced to include more accurate and stable alternatives. The Extended Kalman Filter (EKF), the first and most widely used approach (applied as early as the late 1960's and developed into the early 1980's), uses potentially unstable derivative-based linearization of nonlinear process and/or measurement dynamics. The Unscented Kalman Filter (UKF), developed after around 1994, approximates a distribution about the mean using a set of calculated sigma points. The Central Difference Filter (CDF), or Divided Difference Filter (DDF), developed after around 1997, uses divided difference approximations of derivatives based on Stirling's interpolation formula; it produces a mean similar to, but a covariance different from, the EKF, using techniques based on principles similar to those of the UKF. This paper compares the performance of these three approaches applied to the problem of ballistic missile tracking under various sensor configurations, target dynamics, measurement update / sensor communication rates and measurement noise. The importance of filter stability is emphasized in some cases: the EKF shows possible divergence due to linearization errors and an overconfident state covariance, while the UKF shows possibly slow convergence due to overly large state covariance approximations. The CDF demonstrates relatively consistent stability, despite its similarities to the UKF. The requirement that the CDF and UKF expected state covariances be positive definite is demonstrated to be unrealistic in a case involving multi-sensor fusion, indicating the necessity for the reportedly more robust and efficient square-root implementations. Strategies for taking advantage of the strengths (and avoiding the weaknesses) of each filter are proposed.

Keywords: nonlinear state estimation, extended Kalman filter, unscented Kalman filter, central difference filter, ballistic missile tracking

1. INTRODUCTION

Tracking ballistic missiles using infrared (bearing only) and radar (range and bearing) sensors requires a nonlinear filtering approach. The most significant nonlinearity results from the conversion of sensor measurements of range, azimuth and elevation into Cartesian coordinate estimates of the target state. This paper compares the relative performance of various nonlinear filtering methods applied to tracking a single ballistic missile target in cases involving infrared and/or radar measurements. For the purposes of this study, detailed discussion, application and performance comparison are limited to the Extended Kalman Filter (EKF), the Unscented Kalman Filter (UKF) ([7][5]) and the Central Difference Filter (CDF) ([6][21]). The EKF is included because of its wide acceptance and frequent application. The UKF is included because of the rapidly growing support for it over the EKF in potentially all applications of nonlinear state estimation. The CDF is included because of its similarity to, and reported improvement over, the UKF in at least some cases. Each filter is applied and assessed under varying sensor configurations, process dynamics, measurement update / sensor communication rates and measurement noise. The variety of case studies included in this paper is intended to provide a broader and more realistic performance comparison than a single theoretical example. In the cases involving multisensor fusion, the significance of each filter with respect to the fusion process is discussed.


2. BACKGROUND

The following provides an overview of the EKF, UKF and CDF, which are the main filter methods studied in this paper.

2.1 Extended Kalman Filter (EKF)
The EKF linearizes nonlinear dynamic models using partial derivatives evaluated at the expected state vector in order to perform Kalman filtering with a Taylor series approximation of the time and/or measurement updates. As a result of the functional linearization, the EKF provides the optimal linear, or Linear Minimum Mean Square Error (LMMSE), solution. It always approximates the probability density as Gaussian and is therefore not well suited to handle non-Gaussian distributions such as bi-modal or heavily skewed cases. Application of the EKF to any specific problem often requires adaptive filtering to take linearization errors into account and obtain consistent and informative estimation [2].

2.2 Unscented Kalman Filter (UKF)
The UKF uses the unscented transformation, or the "scaled unscented transformation" implemented in this study, to generate deterministic sigma points that approximate a random variable's probability distribution given a true (or assumed) nonlinear function and the prior mean and covariance. The sigma points are chosen based on the prior mean and the respective columns of the matrix square root of the prior covariance. The Cholesky factorization used to solve for the upper triangular matrix square root of the expected state covariance requires that the covariance matrix be positive definite. Square-root implementations of the UKF perform more efficiently and without the positive definite requirement ([4][21]). The UKF is similar to Particle Filters in that it generates points about the mean estimate, but it requires less computational cost because the sigma points are sampled deterministically rather than drawn randomly like the Particle Filter's "particles." Unlike Particle Filters, the UKF assumes a Gaussian distribution, but it is able to approximate heavy-tailed distributions better than the EKF. For the second-order unscented filter, the estimates of the mean and covariance are accurate to the 2nd order in general and to the 3rd order for Gaussian priors. The second-order unscented filter generally requires at least 2n + 1 sigma points, while the fourth-order unscented filter requires at least 2n² + 1 sigma points and is able to estimate the mean to the 4th order in general. A reduced sigma point implementation has only the minimum requirement that n + 1 sigma points be used [8].

2.3 Central Difference Filter (CDF) or Divided Difference Filter (DDF)
The CDF (or DDF) is similar in concept to the UKF (both are considered to be among the class of Linear Regression Kalman Filters, or LRKF [2]), but it is based on a multivariable extension of Stirling's interpolation formula. The CDF produces a better covariance estimate than the UKF by approximating the covariance based on the prior covariance rather than depending on the covariance of the set of sampled points. Some studies have shown [6] that the first order version (DD1) performs with expected error similar to the EKF, while the second order version (DD2) performs with expected error similar to the UKF.

3. TECHNICAL APPROACH

The following discussion covers the variations in the sensor configurations, target process dynamics, and measurement/sensor communication parameters used in this study. Explanations of the applied estimation/fusion algorithms as well as the filter performance measures are also included.

3.1 Sensor Configurations
The sensor configuration in each case is comprised of radar and/or infrared sensors. The first case involves a single radar sensor on a medium Earth orbiting satellite platform. The second case involves a single ground-radar sensor. The third case involves two infrared sensors on separate geosynchronous/geostationary orbiting satellite platforms. The fourth case involves one infrared sensor on a geosynchronous orbiting satellite platform, one infrared sensor on a low Earth orbiting satellite platform (both of which are not used after the boost phase ends at 120 seconds into the simulation), and one ground radar sensor, which begins measuring 60 seconds into the simulation.

3.2 Target Process Dynamics
The ballistic missile target process dynamics include a simplified boost phase and a simplified coast phase. In the cases that include the boost through coast phases, a time flag is used to switch from boost to coast at 120 seconds into the flight. The boost phase is represented as a constant acceleration object with zero-mean random Gaussian noise on the state acceleration representing disturbances. The coast phase is represented as essentially a constant acceleration case with the acceleration due to the simplified force of gravity. Because the acceleration vector in the three-dimensional scenario is constantly changing direction (always pointing to the center of the Earth), there is a continuous jerk that may be considered a slight model mismatch, as the filter is set to assume a constant acceleration process.

3.3 Measurement / Sensor Communication
Case studies 2, 3 and 4 each include varying a parameter with respect to sensor measurement and fusion. Both fast and slow measurement update rates are presented in Case 2, with Cases 2.1 and 2.2 having 1Hz and 0.25Hz measurement update rates respectively. A similar variation is presented in Case 4 with respect to the sensor fusion rate, with Cases 4.1 and 4.2 having full (1Hz) and quarter (0.25Hz) fusion rates respectively. The measurements are corrupted with additive zero-mean Gaussian noise, which is simulated at both low (Case 3.1) and high (Case 3.2) measurement noise values. Case 3 involves only infrared sensors, with the low and high noise cases represented respectively by (0.00005)² and (0.00050)² square-radian variances on both the azimuth and elevation measurements. The purpose of varying the above parameters is to demonstrate the sensitivity of the alternative filters to different conditions. For the purposes of this study, the following simple sensor dynamic models were adequately nonlinear and representative.

3.3.1 Radar Model
The radar measures the range, azimuth and elevation of the ballistic missile with respect to its position and its coordinate frame, parallel to the ECI (Earth-Centered-Inertial) coordinate frame. Each element of the measurement vector is corrupted by additive zero-mean Gaussian noise with variances represented by Rvar (range variance), Avar (azimuth variance), and Evar (elevation variance) respectively. Radar measurements are represented using the following model:

Dx = (Xtrue(1) - Radar(1)); Dy = (Xtrue(4) - Radar(2)); Dz = (Xtrue(7) - Radar(3));

Zr = [ √(Dx² + Dy² + Dz²);  atan2(Dy, Dx);  atan2(Dz, √(Dx² + Dy²)) ] + [ randn·√Rvar;  randn·√Avar;  randn·√Evar ]
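For illustration, the following is a minimal MATLAB sketch of the radar measurement model above (the function wrapper and names such as radarMeasurement are illustrative and not taken from the original implementation; Xtrue is the 9-element true state and Radar the sensor ECI position, as in the equations):

% Simulate one radar measurement of range, azimuth and elevation,
% corrupted by additive zero-mean Gaussian noise (sketch; names assumed).
function Zr = radarMeasurement(Xtrue, Radar, Rvar, Avar, Evar)
    Dx = Xtrue(1) - Radar(1);
    Dy = Xtrue(4) - Radar(2);
    Dz = Xtrue(7) - Radar(3);
    range     = sqrt(Dx^2 + Dy^2 + Dz^2);
    azimuth   = atan2(Dy, Dx);
    elevation = atan2(Dz, sqrt(Dx^2 + Dy^2));
    Zr = [range; azimuth; elevation] + ...
         [randn*sqrt(Rvar); randn*sqrt(Avar); randn*sqrt(Evar)];
end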

The magnitudes of the range, azimuth and elevation variances for the additive zero-mean random noise were roughly based on examples of existing radar sensor technology and given 0.01 km, 0.0001 rad, and 0.0001 rad standard deviations about the range, azimuth, and elevation respectively.

3.3.2 Infrared Model
The infrared sensor is modeled similarly to the radar, but without range measurements, using the following model:

Dx = (Xtrue(1) - IR(1)); Dy = (Xtrue(4) - IR(2)); Dz = (Xtrue(7) - IR(3));

Zir = [ atan2(Dy, Dx);  atan2(Dz, √(Dx² + Dy²)) ] + [ randn·√Avar;  randn·√Evar ]

As mentioned above, it is Avar and Evar that are varied from Case 3.1 (low) to Case 3.2 (high).

3.4 Estimation / Fusion Algorithms
At the single sensor level, the EKF, UKF, and CDF are applied and compared. The trivial converted measurement with no filtering is also applied to demonstrate the improvement of each filter over the measurement. The trivial converted measurement is available in the four cases included in this study because there is always at least a radar or more than one infrared sensor; otherwise, some cases might not have a trivial converted measurement available. At the multi-sensor level, hierarchical Information Matrix fusion with partial feedback (Case 3) and full feedback (Case 4) is applied, assuming a common prior but negligible process noise between the estimates. A centralized super-measurement using intersection of planes (Cases 3 and 4) is used for the trivial converted measurement when there are only two infrared satellites available. Whenever the radar measurement is available (as in Case 4), only the radar measurements are used.

3.4.1 Extended Kalman Filter (EKF)
See section 2.1 for a brief description of the EKF. The steps below describe the EKF as implemented for this study.

3.4.1.1 EKF Initialization
1. The radar measurement of range, azimuth, and elevation with additive zero-mean Gaussian noise, Z, is observed.


2. The estimate based on the first radar measurement without filtering (converting from spherical to Cartesian coordinates and taking into consideration the position of the radar) is X = [x, ẋ, ẍ, y, ẏ, ÿ, z, ż, z̈]^T.
3. The span of the time interval, t, between each time update is set.

4. The assumed process dynamics used for the filter time update, F, represent a constant acceleration object, in this case the same as the true initial process. The assumed dynamics are slightly mismatched with the coast phase process dynamics (due to the slightly changing gravity vector).
5. The expected x-, y-, and z-coordinate acceleration process noise variances, qẍẍ, qÿÿ, and qz̈z̈ respectively, are set.

6. The expected process noise covariance, Q, is set.
7. The expected range, azimuth, and elevation measurement noise variances (Rvar, Avar, and Evar respectively) are set.
8. The expected measurement noise covariance, R, is set.
9. The expected state covariance is initialized for this study as P = diag[5].

3.4.1.2 EKF Time Update
1(A). The Standard Kalman Filter (SKF) state prediction (time update) is applied due to the assumed linear process dynamics: X(k+1|k) = F·X(k|k).

*1(B). Evaluate the Jacobian: F(k) = ∂f(k)/∂X, evaluated at X = X(k|k).
The state prediction is applied using the assumed nonlinear function: X(k+1|k) = f[k, X(k|k), u(k)], where u(k) is the kth input. If the process dynamics are linear, step 1(B) is identical to step 1(A) above.
2. The state prediction covariance is identical for the SKF and the EKF due to the assumed linear process dynamics: P(k+1|k) = F(k)·P(k|k)·F(k)^T + Q(k).

3.4.1.3 EKF Measurement Update
1. The linearized measurement dynamics, H, are based on the partial derivatives of the true measurement dynamics with respect to the state estimate, similar to time update step 1(B): H(k+1) = ∂h(k+1)/∂X, evaluated at X = X(k+1|k).

2. The updated expected residual covariance is calculated identically to the SKF: S(k+1) = R(k+1) + H(k+1)·P(k+1|k)·H(k+1)^T.
3. The updated filter gain is also calculated identically to the SKF: W(k+1) = P(k+1|k)·H(k+1)^T·S(k+1)^-1.

4. Another radar measurement, Z(k+1), is observed.
5. The measurement prediction is based on plugging the state prediction into the measurement dynamic equations, similar to the second part of time update step 1(B): Z(k+1|k) = h[k+1, X(k+1|k)].

6. The measurement residual: V(k+1) = Z(k+1) − Z(k+1|k).
7. The state estimate is updated identically to the SKF: X(k+1|k+1) = X(k+1|k) + W(k+1)·V(k+1).
8. The state covariance is updated identically to the SKF: P(k+1|k+1) = P(k+1|k) − W(k+1)·S(k+1)·W(k+1)^T.
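As a concrete illustration, the following is a minimal MATLAB sketch of the measurement update above for the radar model of section 3.3.1 (Xp and Pp stand for X(k+1|k) and P(k+1|k); the Jacobian is nonzero only with respect to the position elements of the 9-element state; all variable names are illustrative rather than taken from the original implementation):

% EKF measurement update with analytically linearized radar dynamics (sketch).
Dx = Xp(1) - Radar(1);  Dy = Xp(4) - Radar(2);  Dz = Xp(7) - Radar(3);
rho2 = Dx^2 + Dy^2;  rho = sqrt(rho2);  r2 = rho2 + Dz^2;  r = sqrt(r2);

H = zeros(3, 9);                               % Jacobian; nonzero only in position columns
H(1, [1 4 7]) = [ Dx/r,             Dy/r,             Dz/r  ];    % d(range)/d(x,y,z)
H(2, [1 4 7]) = [-Dy/rho2,          Dx/rho2,          0     ];    % d(azimuth)/d(x,y,z)
H(3, [1 4 7]) = [-Dx*Dz/(rho*r2),  -Dy*Dz/(rho*r2),   rho/r2];    % d(elevation)/d(x,y,z)

Zpred = [r; atan2(Dy, Dx); atan2(Dz, rho)];    % measurement prediction h(Xp)
S = R + H*Pp*H';                               % residual covariance
W = Pp*H'/S;                                   % filter gain (solves W*S = Pp*H')
V = Z - Zpred;                                 % innovation
Xu = Xp + W*V;                                 % updated state estimate
Pu = Pp - W*S*W';                              % updated state covariance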

3.4.2 Unscented Kalman Filter (UKF)
See section 2.2 for a brief description of the UKF. The steps below describe the UKF as implemented for this study.

3.4.2.1 UKF Initialization
Same as the EKF except for the following steps:
1. The dimension of the state vector, L, is set: L = length(X). In this case X = [x, ẋ, ẍ, y, ẏ, ÿ, z, ż, z̈] and L = 9.
2. The spread of the sigma points around the mean is controlled by the scaling parameter α: α = 1 (for the default implementation; smaller values may be used depending on the magnitude of the higher order nonlinearities).
3. The prior knowledge of the distribution of Xtrue is incorporated using a higher order scaling parameter β: β = 2 (optimal for Gaussian distributions).
4. A scalar tuning parameter, κ, is set: κ = 0 (for the default implementation).
5. The scaling parameter λ is based on the other scaling parameters: λ = α²·(L + κ) − L.


3.4.2.2 UKF Time Update
1. The state prediction of the point based on the mean is: Xa(:,1) = F·X(k|k).
2. The transposed Cholesky factorization (assuming a positive definite estimated state covariance matrix) is: Psqrtm = (chol((L+λ)·P))^T.
3. The additional 2·L (18 for a state dimension of 9) sigma point vectors:
Xa(:, i+1) = F·(X(k|k) + Psqrtm(:, i)), for i = 1, 2, …, L
Xa(:, i+1) = F·(X(k|k) − Psqrtm(:, i−L)), for i = L+1, L+2, …, 2L
4(A). The Standard Kalman Filter state prediction for linear process dynamics: X(k+1|k) = F·X(k|k).
*4(B). X(k+1|k) = [λ/(L+λ)]·Xa(:,1) + Σ_{i=2}^{2L+1} [1/(2·(L+λ))]·Xa(:,i).
5(A). The SKF state covariance prediction due to the assumed linear process dynamics: P(k+1|k) = F·P(k|k)·F^T + Q(k).
*5(B). P(k+1|k) = [λ/(L+λ) + (1 − α² + β)]·[Xa(:,1) − X(k+1|k)]·[Xa(:,1) − X(k+1|k)]^T + Σ_{i=2}^{2L+1} [1/(2·(L+λ))]·[Xa(:,i) − X(k+1|k)]·[Xa(:,i) − X(k+1|k)]^T + Q.

3.4.2.3 UKF Measurement Update
1. Another radar measurement, Z(k+1), is observed (see section 4.4.1).
2. Za(:, i) = h[k+1, Xa(:, i)], for i = 1, 2, …, 2L+1.
3. The measurement prediction: Z(k+1|k) = [λ/(L+λ)]·Za(:,1) + Σ_{i=2}^{2L+1} [1/(2·(L+λ))]·Za(:,i).
4. The updated expected measurement prediction covariance: Pzz(k+1) = [λ/(L+λ) + (1 − α² + β)]·[Za(:,1) − Z(k+1|k)]·[Za(:,1) − Z(k+1|k)]^T + Σ_{i=2}^{2L+1} [1/(2·(L+λ))]·[Za(:,i) − Z(k+1|k)]·[Za(:,i) − Z(k+1|k)]^T + R.
5. The expected cross-covariance between the state and measurement: Pxz(k+1) = [λ/(L+λ) + (1 − α² + β)]·[Xa(:,1) − X(k+1|k)]·[Za(:,1) − Z(k+1|k)]^T + Σ_{i=2}^{2L+1} [1/(2·(L+λ))]·[Xa(:,i) − X(k+1|k)]·[Za(:,i) − Z(k+1|k)]^T.
6. The filter gain is calculated identically to the SKF: K = Pxz·(Pzz)^-1.
7. The measurement residual (or innovation): V = Z(k+1) − Z(k+1|k).
8. The updated state estimate: X(k+1|k+1) = X(k+1|k) + K·V.
9. The updated state covariance: P(k+1|k+1) = P(k+1|k) − K·Pzz·K^T.
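To make the sampling and weighting above concrete, a compact MATLAB sketch of the scaled unscented measurement update is given below (hmeas denotes the nonlinear measurement function, e.g. the radar model of section 3.3.1; Xp and Pp are the predicted state and covariance; Z is the new measurement; alpha, beta and kappa are the scaling parameters from the initialization; all names are illustrative, not from the original implementation):

% Scaled unscented measurement update (sketch).
L      = length(Xp);                               % state dimension (9 here)
lambda = alpha^2*(L + kappa) - L;                  % composite scaling parameter
Wm = [lambda/(L+lambda), repmat(1/(2*(L+lambda)), 1, 2*L)];   % mean weights
Wc = Wm;  Wc(1) = Wc(1) + (1 - alpha^2 + beta);               % covariance weights

Psqrtm = chol((L + lambda)*Pp)';                   % requires Pp to be positive definite
Xrep   = repmat(Xp, 1, L);
Xa     = [Xp, Xrep + Psqrtm, Xrep - Psqrtm];       % 2L+1 sigma points (columns)

Za = zeros(length(Z), 2*L+1);
for i = 1:2*L+1
    Za(:,i) = hmeas(Xa(:,i));                      % push each sigma point through h
end
Zpred = Za*Wm';                                    % predicted measurement

Pzz = R;  Pxz = zeros(L, length(Z));
for i = 1:2*L+1
    dz = Za(:,i) - Zpred;
    dx = Xa(:,i) - Xp;
    Pzz = Pzz + Wc(i)*(dz*dz');                    % innovation covariance
    Pxz = Pxz + Wc(i)*(dx*dz');                    % state-measurement cross-covariance
end

K  = Pxz/Pzz;                                      % filter gain K = Pxz*inv(Pzz)
Xu = Xp + K*(Z - Zpred);                           % updated state estimate
Pu = Pp - K*Pzz*K';                                % updated state covariance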

3.4.3 Central Difference Filter (CDF)
See section 2.3 for a brief description of the CDF. The steps below present a more detailed explanation of the CDF (as implemented for the purposes of this paper).

3.4.3.1 CDF Initialization
Same as the EKF except for the following additional steps:
1. The interval step-size for approximation is set: h = 3 (optimal for a Gaussian distribution [6]).
2. The state dimension is set: L = length(X).
3. The Cholesky factor of the state covariance is calculated: Sx(k|k) = (chol(P(k|k)))^T.
4. The Cholesky factor of the process covariance is calculated: Sv = (chol(Q))^T.
5. The Cholesky factor of the measurement covariance is calculated: Sw = (chol(R))^T.

3.4.3.2 CDF Time Update
1(A). The Standard Kalman Filter state prediction (time update) is applied for linear process dynamics: X(k+1|k) = F·X(k|k).
2(A). The SKF state covariance prediction is applied due to the assumed linear process dynamics: P(k+1|k) = F(k)·P(k|k)·F(k)^T + Q(k).
*2(B). Calculate matrices containing the divided differences:
Sxx(k, j) = [f(X(k|k) + h·Sx(k, j), u(k), 0) − f(X(k|k) − h·Sx(k, j), u(k), 0)] / (2·h)
Sxv(k, j) = [f(X(k|k), u(k), 0 + h·Sv(k, j)) − f(X(k|k), u(k), 0 − h·Sv(k, j))] / (2·h)
for j = 1, 2, …, L. The state prediction covariance: P(k+1|k) = [Sxx(k) Sxv(k)]·[Sxx(k) Sxv(k)]^T + Q.
3. The Cholesky factor of the state prediction covariance is calculated: Sx(k+1) = (chol(P(k+1|k)))^T.

3.4.3.3 CDF Measurement Update
1. The measurement prediction: Z(k+1|k) = g[k+1, X(k+1|k)], where g(x) is the nonlinear measurement function of x.
2. Another radar measurement, Z(k+1), is observed (see section 4.4.1).
3. Calculate the divided difference matrices:
Szx(k+1, j) = [g(X(k+1|k) + h·Sx(k+1, j), u(k), 0) − g(X(k+1|k) − h·Sx(k+1, j), u(k), 0)] / (2·h)
Szw(k+1, j) = [g(X(k+1|k), u(k), 0 + h·Sw(k, j)) − g(X(k+1|k), u(k), 0 − h·Sw(k, j))] / (2·h)
for j = 1, 2, …, L. Then Pz(k+1) = [Szx(k) Szw(k)]·[Szx(k) Szw(k)]^T + R.

4. The cross-covariance is calculated: Pxz = Sx(k+1)·Szx(k+1)^T.
5. The filter gain is calculated identically to the SKF: K = Pxz·Pz^-1.
6. The measurement residual (innovation): V = Z(k+1) − Z(k+1|k).
7. The state estimate is updated: X(k+1|k+1) = X(k+1|k) + K·V.
8. The state covariance is updated: P(k+1|k+1) = P(k+1|k) − K·Pz·K^T.
9. The Cholesky factor of the state covariance: Sx(k+1|k+1) = (chol(P(k+1|k+1)))^T.
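For illustration, a minimal MATLAB sketch of the divided-difference measurement update above is given below (gmeas is the nonlinear measurement function, hstep corresponds to the interval step size h, and Xp and Pp are the predicted state and covariance; only the first-order difference terms listed in the steps are shown, and since the measurement noise is additive in this study the noise divided-difference matrix reduces to the Cholesky factor of R; all names are illustrative assumptions):

% Central/divided difference measurement update (sketch, first-order terms only).
L  = length(Xp);
m  = size(R, 1);
Sx = chol(Pp)';                                % Cholesky factor of P(k+1|k)
Sw = chol(R)';                                 % Cholesky factor of R

Zpred = gmeas(Xp);                             % measurement prediction g(X(k+1|k))

Szx = zeros(m, L);
for j = 1:L                                    % divided differences along columns of Sx
    Szx(:,j) = (gmeas(Xp + hstep*Sx(:,j)) - gmeas(Xp - hstep*Sx(:,j))) / (2*hstep);
end
Szw = Sw;                                      % additive-noise simplification of Szw

Pz  = [Szx, Szw]*[Szx, Szw]';                  % innovation covariance (R enters via Szw)
Pxz = Sx*Szx';                                 % state-measurement cross-covariance
K   = Pxz/Pz;                                  % filter gain
Xu  = Xp + K*(Z - Zpred);                      % updated state estimate
Pu  = Pp - K*Pz*K';                            % updated state covariance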

3.4.4 Hierarchical Information Matrix Fusion
Multi-sensor cases (if not applying the trivial measurement conversion) require estimate fusion. A trivial measurement conversion is applied for comparison using the conversion from spherical coordinates to Cartesian (in the cases with radar measurements) or intersection of planes (in cases with two infrared sensors). The super-measurement performed using the intersection of planes between two infrared bearing-only measurements may be considered a centralized fusion architecture. The second, and more versatile, fusion architecture presented below hierarchically fuses estimates from each sensor at a global fusion node with full or partial feedback to the local sensors using Information Matrix Fusion [10]. The significance of fusion with respect to the comparison between the EKF, UKF and CDF lies in the ability to handle the potentially unstable consequences of a varied fusion rate as well as the necessity for 3-dimensional bearing-only tracking. The fusion assumes a common prior but negligible process noise. The following example demonstrates this application to the case involving two infrared satellites and a ground radar for the times when all three are being fused:

Pglobal(k+1|k) = F·Pglobal(k|k)·F^T + Q;

Pglobal(k+1|k+1) = [ P_IR1^-1(k+1|k+1) + P_IR2^-1(k+1|k+1) + P_RAD^-1(k+1|k+1) − Pglobal^-1(k+1|k) ]^-1;

Xglobal(k+1|k+1) = Pglobal(k+1|k+1)·[ P_IR1^-1(k+1|k+1)·X_IR1(k+1|k+1) + P_IR2^-1(k+1|k+1)·X_IR2(k+1|k+1) + P_RAD^-1(k+1|k+1)·X_RAD(k+1|k+1) − Pglobal^-1(k+1|k)·Xglobal(k+1|k) ].
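A minimal MATLAB sketch of this fusion step for the three-sensor case is given below (Xg and Pg are the previous global estimate and covariance; X_IR1/P_IR1, X_IR2/P_IR2 and X_RAD/P_RAD are the local updated estimates; inv() is used for clarity rather than efficiency, the common-prior and negligible-process-noise assumptions are carried over, and all names are illustrative):

% Hierarchical information matrix fusion with a common prior (sketch).
PgPred = F*Pg*F' + Q;                              % global covariance prediction P_global(k+1|k)
XgPred = F*Xg;                                     % global state prediction X_global(k+1|k)

Ig    = inv(P_IR1) + inv(P_IR2) + inv(P_RAD) - inv(PgPred);   % fused information matrix
PgNew = inv(Ig);                                               % P_global(k+1|k+1)
XgNew = PgNew*( P_IR1\X_IR1 + P_IR2\X_IR2 + P_RAD\X_RAD ...
              - PgPred\XgPred );                               % X_global(k+1|k+1)

% With full feedback (Case 4), the local estimates are then reset to the new
% global estimate: X_IR1 = XgNew; P_IR1 = PgNew; and similarly for IR2 and RAD.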

In that particular case (Case 4), full feedback of the state mean and covariance is implemented, upon which the local estimates are set to the new global estimate. This fusion architecture is applied to the two-infrared case (Sensor Configuration Case 3) using partial feedback of only the state estimate.

3.5 Filter Performance Measures
Each filter shares the same random number generator seed in MATLAB for each corresponding Monte Carlo trial run. The "position error" represents the average trial absolute estimation position error over time. The "standard deviation from the position error" provides a measure of the expected average deviation from the "position error" over the 100 trials. The "end position error" is the average trial absolute end position estimation error, and the "standard deviation about the end position error" provides a measure of confidence in this metric, similar to the two performance measures above. The "speed" is represented by the average trial CPU time using MATLAB. The "memory" is represented as the maximum number of stored elements using MATLAB (using the "whos" function). Each set of results is discussed below.
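As a brief illustration of how the first few measures can be computed, the following MATLAB sketch assumes err is a matrix of absolute position errors with one row per Monte Carlo trial and one column per time step, in km (the variable names are illustrative, not from the original implementation):

% Performance measures over Monte Carlo trials (sketch).
trialMeanErr   = mean(err, 2);            % average absolute position error per trial
positionError  = mean(trialMeanErr);      % "position error": average over trials
stdFromPosErr  = std(trialMeanErr);       % "standard deviation from the position error"
endPositionErr = mean(err(:, end));       % "end position error"
stdFromEndErr  = std(err(:, end));        % spread about the end position error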

4. PERFORMANCE ANALYSIS

The comparison between the EKF, UKF, CDF and the trivial converted measurement is separated by individual case studies, after which a final summary is presented. The following table gives the simulation parameters for Case 1.1:

Table 1: Case 1.1 Simulation Parameters

Sensor Configuration: Medium Orbit Satellite Radar
Process Dynamics: Simplified Boost Phase
Communication Rate: Radar Measurement Updates at 1Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement

Figure 1: Case 1.1 Accuracy for Medium Orbit Radar Tracking Boost Phase. Demonstrates nearly identical performance of EKF, UKF, and CDF and the shared improvement over the measurement error, as well as the bounds of position error for each approach.

Table 2: Case 1.1 Estimator Performance Comparison

Columns: Position Error (average trial absolute estimation position error over time, in km); Standard Deviation from the Average Trial Absolute Estimation Position Error Over Time (km); End Position Error (average absolute end position estimation error, in km); Standard Deviation from the Average Absolute End Position Estimation Error (km); Average Trial Standard Deviation from the Average Absolute Estimation Position Error Over Time (km); Speed (average CPU time per trial, in seconds); Memory (maximum number of stored elements).

Trivial Converted Measurement:  2.08  0.11  1.9   1.2   1.1   0.08  186
Extended Kalman Filter:         0.98  0.12  0.82  0.44  0.2   0.15  330
Unscented Kalman Filter:        0.98  0.12  0.82  0.44  0.62  0.89  645
Central Difference Filter:      0.99  0.12  0.83  0.43  0.61  0.46  504

The above results show nearly identical performance of the EKF, UKF and CDF in this case. The first order approximation by the EKF is sufficient (due to enough linearity and continuity). The initialized expected state covariance is accurate/small enough for the UKF and CDF to converge quickly.

Table 3: Relative Estimator Difference for Case 1.1

Columns: Relative Accuracy (time average of the trial average of the average of the EKF, UKF, and CDF absolute position errors minus the estimator's absolute position error); Standard Deviation About the Relative Accuracy; Chance That the Absolute Estimator Position Error Is Less Than the Time Average of the Trial Average of the EKF, UKF, and CDF Absolute Position Errors.

Extended Kalman Filter:     0.0002   0.0079  51%
Unscented Kalman Filter:    0.0002   0.0079  51%
Central Difference Filter:  -0.0005  0.0159  49%


The above table demonstrates a more precise comparison between estimator errors taking advantage of the fact that each handles the same random numbers in the same time-steps. This comparison further emphasizes the nearly identical performance between all three filters showing little statistical difference. Similar comparisons are not provided for the rest of the case studies, because the differences are emphasized as being more qualitative depending on the specific case. In choosing the best filter to suit the above case, other factors must be compared such as the available measurement frequency, resource availability and the filter risk factors concerning value of accuracy, confidence, speed and memory.

Table 4: Case 1.2 Simulation Parameters

Sensor Configuration: Medium Orbit Satellite Radar
Process Dynamics: Simplified Boost Through Coast Phases
Communication Rate: Radar Measurement Updates at 1Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement

Table 5: Case 1.2 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement:                           1.922  0.051  1.67  0.94  1.1   0.42  187
* Extended Kalman Filter (for 45 out of 100 trials only): 1.8    1.5    1.52  0.8   5.1   1.8   331
Unscented Kalman Filter:                                 0.785  0.057  0.68  0.37  0.48  4.5   646
Central Difference Filter:                               0.779  0.066  0.64  0.35  0.48  2.3   505

* The Extended Kalman Filter diverges beyond the Chi-squared test bound of 27.87 (the 99.9% confidence bound) in 55 out of 100 trials, resulting in an unbounded estimation error magnitude.

A major difference between the performance of the Linear Regression Kalman Filters (UKF and CDF) and the EKF is demonstrated above. The difference is due to the instability of the EKF's analytical linearization. The EKF has equivalent performance during the switch from boost to coast phase (at 120 seconds), but becomes unstable later (after around 240 seconds) due to poor linearized approximations. Linearization errors arise from estimation errors because the Jacobian elements depend on the current state estimate. Such errors lead to sub-optimal performance [5]. The EKF is "only as good as the linearization; if the estimate gets too far off…you get EKF divergence…" [20].
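For reference, one plausible form of the chi-squared divergence test mentioned in the footnote above is a normalized estimation error squared (NEES) check against the quoted 99.9% bound of 27.87, which corresponds to the 9-dimensional state; a hedged MATLAB sketch, assuming access to the simulation truth Xtrue and the filter estimate X with covariance P (names illustrative):

% Chi-squared divergence check (sketch; a NEES-style test is assumed).
e        = Xtrue - X;              % state estimation error
nees     = e' * (P \ e);           % normalized estimation error squared
diverged = nees > 27.87;           % 99.9% chi-squared bound for 9 degrees of freedom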

Table 6: Case 2.1 Simulation Parameters

Sensor Configuration: Ground Radar
Process Dynamics: Simplified Boost Phase
Communication Rate: Radar Measurement Updates at 1Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement

Table 7: Case 2.1 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement: 0.0193  0.0015  0.043  0.026  0.017  0.069  186
Extended Kalman Filter:        0.0151  0.0012  0.036  0.02   0.013  0.13   330
Unscented Kalman Filter:       0.0342  0.0014  0.029  0.016  0.11   0.89   645
Central Difference Filter:     0.0143  0.0015  0.029  0.016  0.011  0.44   585

The results above demonstrate the effect of an overly large and poorly proportioned initial expected state covariance, which leads to overly large and poorly proportioned UKF sigma point sampling. The CDF's more analytical approximation of covariance suffers less from the poor initial expected state covariance. The EKF approximation is slightly less accurate due to functional discontinuity.


Table 8: Case 2.2 Simulation Parameters

Sensor Configuration: Ground Radar
Process Dynamics: Simplified Boost Phase
Communication Rate: Radar Measurement Updates at 0.25Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement

Table 9: Case 2.2 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement:   5.83    0.46    0.045  0.023  6.2    0.045  186
Extended Kalman Filter:          0.0355  0.0033  0.042  0.023  0.032  0.066  330
* Unscented Kalman Filter:       510     240     100    170    570    0.37   645
** Unscented Kalman Filter with State Covariance (P) Initialized to Process Covariance (Q):
                                 0.0407  0.0032  0.042  0.023  0.044  0.35   645
Central Difference Filter:       0.052   0.015   0.043  0.024  0.072  0.15   585

* All 100 trials of the UKF with the same state covariance initialization as the EKF and CDF resulted in estimation errors of up to at least 300 km due to sensitivity to the poorly proportioned and overly large state covariance.
** The UKF performs significantly better with a smaller and more accurately proportioned state covariance, but the UKF in this case may not be fairly compared to the EKF and CDF shown here.

Case 2.2 above demonstrates more significantly that the Linear Regression Kalman Filters (LRKF), such as the UKF and CDF, require problem-specific adjustment of scaling parameters. Otherwise, overly large and poorly proportioned sampling leads to slow convergence and potentially large magnitudes of error in the case of the UKF, while the CDF quickly resolves the error (after 5 to 6 measurement updates). The EKF converges more quickly than the CDF and UKF.

Table 10: Case 3.1 Simulation Parameters

Sensor Configuration: Two Geosynchronous Infrared Satellites
Process Dynamics: Simplified Boost Phase
Communication Rate: Infrared Measurement Updates at 1Hz, Information Matrix Fusion with Partial Feedback at 1Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise (50-microradian standard deviation on Azimuth and Elevation)
Estimation Algorithms: EKF, UKF, CDF, Measurement (using Centralized Super-measurement Intersection of Planes)

Table 11: Case 3.1 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement: 3.37  0.12  3.4   1.6   1.5   0.12  129
Extended Kalman Filter:        1.53  0.1   1.26  0.56  0.82  0.39  630
Unscented Kalman Filter:       1.53  0.1   1.26  0.56  0.82  1.8   1198
Central Difference Filter:     1.5   0.1   1.22  0.53  0.81  0.83  968

Although bearing-only tracking presents more nonlinearity, in this case the satellites are far enough away from the target that the change in bearing angle is very slow. In this case, the EKF and CDF are able to perform sufficient approximation and the UKF performs negligibly better due to 2nd order approximation.

Table 12: Case 3.2 Simulation Parameters

Sensor Configuration: Two Geosynchronous Infrared Satellites
Process Dynamics: Simplified Boost Phase
Communication Rate: Infrared Measurement Updates at 1Hz, Information Matrix Fusion with Partial Feedback at 1Hz
Measurement Dynamics: Simplified Radar with High Measurement Noise (500-microradian standard deviation on Azimuth and Elevation)
Estimation Algorithms: EKF, UKF, CDF, Measurement (using Centralized Super-measurement Intersection of Planes)

Table 13: Case 3.2 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement: 33.7  1.2  34   16   15   0.12  129
Extended Kalman Filter:        13.5  1.2  9.6  4.6  7.4  0.4   630
Unscented Kalman Filter:       13.5  1.2  9.6  4.6  7.4  1.7   1198
Central Difference Filter:     13.3  1.2  9.6  4.6  7.3  0.85  968

Each estimator performs under a Gaussian assumption. Comparing Case 3.1 and 3.2, the size of measurement variance does not significantly impact comparative filter performances.

Table 14: Case 4.1 Simulation Parameters

Sensor Configuration: One Geosynchronous Infrared Satellite, One Low Earth Orbit Infrared Satellite, and Ground Radar
Process Dynamics: Simplified Boost Through Coast Phases
Communication Rate: Radar Measurement Updates at 1Hz, Hierarchical Information Matrix Fusion with Full Feedback at 1Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement (using Centralized Super-measurement Triangulation of Planes between the two IR sensors for the first 60 seconds and using the estimates taken directly from the translated Radar measurements after that)

Table 15: Case 4.1 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement: 0.651  0.041  0.37    0.2    1.5   0.54  218
Extended Kalman Filter:        0.54   0.1    0.31    0.17   0.78  1.3   729
Unscented Kalman Filter:       0.51   0.1    0.167   0.099  0.79  7.4   1549
Central Difference Filter:     0.572  0.13   0.1605  0.093  0.94  3.3   1242

The above case is a combination of aspects from sensor configuration cases 2 and 3, but the unique element is the different phases of sensor measurements. The EKF, UKF, and CDF perform nearly identically to each other as sensors are added to and removed from fusion. It is only later in the coast phase that the radar bearing-angle nonlinearity leads to more error in the EKF estimates.

Table 16: Case 4.2 Simulation Parameters

Sensor Configuration: One Geosynchronous Infrared Satellite, One Low Earth Orbit Infrared Satellite, and Ground Radar
Process Dynamics: Simplified Boost Through Coast Phases
Communication Rate: Radar Measurement Updates at 1Hz, Hierarchical Information Matrix Fusion with Full Feedback at 0.25Hz
Measurement Dynamics: Simplified Radar with Low Measurement Noise
Estimation Algorithms: EKF, UKF, CDF, Measurement (using Centralized Super-measurement Triangulation of Planes between the two IR sensors for the first 60 seconds and using the estimates taken directly from the translated Radar measurements after that)


Table 17: Case 4.2 Estimator Performance Comparison

(Columns as in Table 2.)

Trivial Converted Measurement:                             8.4    1      0.37   0.2    7.3  0.51  218
Extended Kalman Filter:                                    0.515  0.094  0.34   0.18   1.1  1     729
* Unscented Kalman Filter (for 98 out of 100 trials only): 0.466  0.093  0.167  0.099  1.1  6.8   1549
** Central Difference Filter (for 99 out of 100 trials only): 0.48  0.098  0.16  0.093  1.1  2.9   1242

* The UKF failed the requirement that the state covariance be positive definite in 2 out of the 100 trials.
** The CDF failed the requirement that the state covariance be positive definite in 1 out of the 100 trials.

The inability of both the UKF and CDF to complete 100 trial runs, because at least one run failed the requirement that the state covariance be positive definite, is significant in that it demonstrates the unreliability of this implementation of these filters. The respective square-root implementations may be necessary in cases where the scaling parameters are not properly adjusted. When implemented with partial feedback, each filter is able to perform properly in this case because the state covariance estimate remains positive definite.

In terms of estimation accuracy, there is negligible difference between the EKF, UKF and CDF in several cases. However, the EKF may diverge due to poor functional linearization, and the UKF may produce large errors and slow convergence due to poorly proportioned and overly large covariance matrices, while the CDF is much less sensitive to this problem. The speed ratio is approximately 1.0 (EKF) : 2.5 (CDF) : 5.0 (UKF) for serial processing, while the memory ratio is approximately 1.0 (EKF) : 1.6 (CDF) : 2.0 (UKF) for the maximum number of elements stored. The EKF requires that the partial derivatives at a particular point be available, which causes very low reliability in some cases. The UKF and CDF do not always meet the requirement that the state covariance be positive definite; therefore the Square-Root UKF and CDF, which are reported to be more efficient and robust, should be used instead.

5. CONCLUSIONS AND POTENTIAL FUTURE WORK

The EKF is based on sometimes unstable functional approximations through linearization and is computationally less costly in cases requiring serial processing. The UKF is based on linear regression through generating deterministic "sigma points" about the expected state and solving for the weighted sum. The sigma point accuracy is heavily dependent on the accuracy of the state covariance approximation, which can lead to unreasonable magnitudes of error and at least short-term instability. The CDF (or DDF) is based on a linear regression similar to the UKF, but it is able to estimate the state covariance more precisely, resulting in a more accurate set of weighted sample points to estimate from. Ballistic missile tracking from satellite range, although using very nonlinear equations, does not result in large changes of aspect angle during the tracking period, and the linearization of the EKF appears to be sufficient for most applications. When ground radar measurements are of a relatively nearby passing missile, large changes in aspect angle would present significant impacts from the nonlinearity and would be better handled by the Linear Regression Kalman Filters (such as the UKF and CDF). However, optimization of filter parameters with respect to any given application could change the comparative results, because there is room for improvement over the implementation using default filter parameters. Ultimately, risk (or cost) factors such as accuracy, confidence, speed, memory, reliability and expected value (or threat/cost) must each be taken into consideration with respect to available resources and the given problem.

Recent work involving the Unscented Kalman Filter applied to each "particle" of a particle filter has been shown to provide more accuracy than the standard sequential Monte Carlo implementations through the availability of a more accurate proposal distribution. Therefore, it is suggested that a more accurate implementation of the Particle Filter may be expected from using a proposal distribution based on the results of parallel Central Difference Filters on each "particle." An additional suggested direction for future work is to take advantage of different approaches to filtering using a Multiple Model or Adaptive Filtering approach. However, instead of only implementing separate process or measurement models, it is suggested that different approaches, such as linear regression filters (the UKF and the CDF), linearization filters (the EKF), and sequential Monte Carlo approaches (the PF), be applied in parallel. This could take advantage of each method's strengths and avoid each method's major flaws.

REFERENCES

[1] J. R. Van Zandt, "Boost Phase Tracking with an Unscented Filter," MITRE Corp., 2002.
[2] T. Lefebvre, H. Bruyninckx, J. De Schutter, "Kalman Filters for Nonlinear Systems: a Comparison of Performance," Internal Report 01R033, Katholieke Universiteit Leuven, Oct. 2001.
[3] T. Lefebvre, H. Bruyninckx, J. De Schutter, "Comment on 'A new method for the nonlinear transformation of means and covariances in filters and estimators'," http://citeseer.nj.nec.com/502099.html, Sep. 2001.
[4] R. van der Merwe, E. A. Wan, "The Square-Root Unscented Kalman Filter for State and Parameter Estimation," International Conference on Acoustics, Speech, and Signal Processing, (Salt Lake City, Utah), May 2001.
[5] E. Wan, R. van der Merwe, "The Unscented Kalman Filter for Nonlinear Estimation," in Proc. of IEEE Symposium 2000 (AS-SPCC), Lake Louise, Alberta, Canada, Oct. 2000.
[6] M. Norgaard, N. K. Poulsen, O. Ravn, "Advances in derivative-free state estimation for non-linear systems," Technical Report IMM-REP-1998-15 (revised edition), Technical University of Denmark, Denmark, April 2000.
[7] S. J. Julier, "The scaled unscented transformation," http://citeseer.nj.nec.com/julier99scaled.html, submitted to Elsevier Preprint, January 1999.
[8] S. J. Julier, "Reduced sigma point filters for the propagation of means and covariances through nonlinear transformations," http://citeseer.nj.nec.com/julier98reduced.html, submitted to Elsevier Preprint, December 1998.
[9] S. J. Julier, J. K. Uhlmann, "A new extension of the Kalman filter to nonlinear systems," Proceedings of AeroSense: The 11th International Symposium on Aerospace/Defense Sensing, Simulation and Controls, (Orlando, Florida), 1997.
[10] K. C. Chang, "Evaluating Hierarchical Track Fusion with Information Matrix Filter," Proc. Third International Conference on Multisource-Multisensor Information Fusion, (Paris, France), July 2000.
[11] G. M. Siouris, G. Chen, and J. Wang, "Tracking an Incoming Ballistic Missile Using an Extended Interval Kalman Filter," IEEE Transactions on Aerospace and Electronic Systems, 33, 1 (Jan. 1997), 232-240.
[12] X. Lin, T. Kirubarajan, Y. Bar-Shalom, and S. Maskell, "Comparison of EKF, pseudomeasurement filter, and particle filter for a bearing-only target tracking problem," Proceedings of SPIE Vol. 4728 [4728-23], 2002.
[13] S. J. Julier and J. K. Uhlmann, "A Consistent, Debiased Method for Converting Between Polar and Cartesian Coordinate Systems," Proceedings of SPIE Vol. 3086 (Paper 3086-15), 1997.
[14] S. Maskell and N. Gordon, "A Tutorial on Particle Filters for On-line Nonlinear/Non-Gaussian Bayesian Tracking," QinetiQ Ltd, 2001.
[15] G. Chen, J. Wang, and L. S. Shieh, "Interval Kalman Filtering," IEEE Trans. on Aero. Elect. Sys., Vol. 33, pp. 250-259, 1997.
[16] R. van der Merwe, A. Doucet, N. de Freitas, and E. Wan, "The Unscented Particle Filter," in Advances in Neural Information Processing Systems (NIPS 13), MIT Press, Eds. T. K. Leen, T. G. Dietterich and V. Tresp, Dec. 2000.
[17] Y. Rui and Y. Chen, "Better Proposal Distributions: Object Tracking Using Unscented Particle Filter," in Proc. IEEE Int'l Conf. on Computer Vision and Pattern Recognition (CVPR'01), Kauai, Hawaii, 2001.
[18] H. Lin, "Rationalized Speed/altitude Thresholds for ABM Testing," in Science and Global Security, 1990, Volume 2, pp. 87-101.
[19] Y. Bar-Shalom, X. Rong Li, and T. Kirubarajan, Estimation with Applications to Tracking and Navigation, John Wiley & Sons, Inc., New York, 2001.
[20] A. Lanterman, "Lecture 17: 2/15/02: Variations of the Kalman Filter," ECE 7251: Signal Detection and Estimation, Georgia Institute of Technology, Spring 2002. (http://users.ece.gatech.edu/~lanterma/ece7251)
[21] R. van der Merwe and E. A. Wan, "Efficient Derivative-Free Kalman Filters for Online Learning," European Symposium on Artificial Neural Networks (ESANN), Bruges, Belgium, April 2001.