
Page 1: References - CERN · 200 Answers and Hints to Exercises 1.10. (1.35) can be verified by a direct computation. First, the following formula may be easily obtained: This yields, by

References

Alefeld, G. and Herzberger, J. (1983): Introduction to Interval Computations (Academic, New York)

Anderson, B.D.O., Moore, J.B. (1979): Optimal Filtering (Prentice-Hall, Englewood Cliffs, NJ)

Andrews, A. (1981): "Parallel processing of the Kalman filter," IEEE Proc. Int. Conf. on Parallel Processing, pp.216-220

Aoki, M. (1989): Optimization of Stochastic Systems: Topics in Discrete-Time Dynamics (Academic, New York)

Astrom, K.J., Eykhoff, P. (1971): "System identification - a survey," Automatica, 7, pp.123-162

Balakrishnan, A.V. (1984,87): Kalman Filtering Theory (Optimization Software, Inc., New York)

Bierman, G.J. (1973): "A comparison of discrete linear filtering algorithms," IEEE Trans. Aero. Elec. Systems, 9, pp.28-37

Bierman, G.J. (1977): Factorization Methods for Discrete Sequential Estimation (Academic, New York)

Blahut, R.E. (1985): Fast Algorithms for Digital Signal Processing (Addison-Wesley, Reading, MA)

Bozic, S.M. (1979): Digital and Kalman Filtering (Wiley, New York)

Brammer, K., Siffling, G. (1989): Kalman-Bucy Filters (Artech House, Boston)

Brown, R.G. and Hwang, P.Y.C. (1992,97): Introduction to Random Signals and Applied Kalman Filtering (Wiley, New York)

Bucy, R.S., Joseph, P.D. (1968): Filtering for Stochastic Processes with Applications to Guidance (Wiley, New York)

Burrus, C.S., Gopinath, R.A. and Guo, H. (1998): Introduction to Wavelets and Wavelet Transforms: A Primer (Prentice-Hall, Upper Saddle River, NJ)

Carlson, N.A. (1973): "Fast triangular formulation of the square root filter," AIAA J., 11, pp.1259-1263


Catlin, D.E. (1989): Estimation, Control, and the Discrete Kalman Filter (Springer, New York)

Chen, G. (1992): "Convergence analysis for inexact mechanization of Kalman filtering," IEEE Trans. Aero. Elect. Syst., 28, pp.612-621

Chen, G. (1993): Approximate Kalman Filtering (World Scientific, Singapore)

Chen, G., Chen, G. and Hsu, S.H. (1995): Linear Stochastic Control Systems (CRC, Boca Raton, FL)

Chen, G., Chui, C.K. (1986): "Design of near-optimal linear digital tracking filters with colored input," J. Comp. Appl. Math., 15, pp.353-370

Chen, G., Wang, J. and Shieh, L.S. (1997): "Interval Kalman filtering," IEEE Trans. Aero. Elect. Syst., 33, pp.250-259

Chen, H.F. (1985): Recursive Estimation and Control for Stochastic Systems (Wiley, New York)

Chui, C.K. (1984): "Design and analysis of linear prediction-correction digital filters," Linear and Multilinear Algebra, 15, pp.47-69

Chui, C.K. (1997): Wavelets: A Mathematical Tool for Signal Analysis (SIAM, Philadelphia)

Chui, C.K., Chen, G. (1989): Linear Systems and Optimal Control, Springer Ser. Inf. Sci., Vol. 18 (Springer, Berlin Heidelberg)

Chui, C.K., Chen, G. (1992,97): Signal Processing and Systems Theory: Selected Topics, Springer Ser. Inf. Sci., Vol. 26 (Springer, Berlin Heidelberg)

Chui, C.K., Chen, G. and Chui, H.C. (1990): "Modified extended Kalman filtering and a real-time parallel algorithm for system parameter identification," IEEE Trans. Auto. Control, 35, pp.100-104

Davis, M.H.A. (1977): Linear Estimation and Stochastic Control (Wiley, New York)

Davis, M.H.A., Vinter, R.B. (1985): Stochastic Modeling and Control (Chapman and Hall, New York)

Fleming, W.H., Rishel, R.W. (1975): Deterministic and Stochastic Optimal Control (Springer, New York)

Gaston, F.M.F., Irwin, G.W. (1990): "Systolic Kalman filtering: An overview," IEE Proc.-D, 137, pp.235-244

Goodwin, G.C., Sin, K.S. (1984): Adaptive Filtering Prediction and Control (Prentice-Hall, Englewood Cliffs, NJ)


Haykin, S. (1986): Adaptive Filter Theory (Prentice-Hall, Englewood Cliffs, NJ)

Hong, L., Chen, G. and Chui, C.K. (1998): "A filter-bank-based Kalman filtering technique for wavelet estimation and decomposition of random signals," IEEE Trans. Circ. Syst. (II), 45, pp.237-241

Hong, L., Chen, G. and Chui, C.K. (1998): "Real-time simultaneous estimation and decomposition of random signals," Multidim. Sys. Sign. Proc., 9, pp.273-289

Jazwinski, A.H. (1969): "Adaptive filtering," Automatica, 5, pp.475-485

Jazwinski, A.H. (1970): Stochastic Processes and Filtering Theory (Academic, New York)

Jover, J.M., Kailath, T. (1986): "A parallel architecture for Kalman filter measurement update and parameter estimation," Automatica, 22, pp.43-57

Kailath, T. (1968): "An innovations approach to least-squares estimation, part I: linear filtering in additive white noise," IEEE Trans. Auto. Contr., 13, pp.646-655

Kailath, T. (1982): Course Notes on Linear Estimation (Stanford University, CA)

Kalman, R.E. (1960): "A new approach to linear filtering and prediction problems," Trans. ASME, J. Basic Eng., 82, pp.35-45

Kalman, R.E. (1963): "New method in Wiener filtering theory," Proc. Symp. Eng. Appl. Random Function Theory and Probability (Wiley, New York)

Kalman, R.E., Bucy, R.S. (1961): "New results in linear filtering and prediction theory," Trans. ASME, J. Basic Eng., 83, pp.95-108

Kumar, P.R., Varaiya, P. (1986): Stochastic Systems: Estimation, Identification, and Adaptive Control (Prentice-Hall, Englewood Cliffs, NJ)

Kung, H.T. (1982): "Why systolic architectures?" Computer, 15, pp.37-46

Kung, S.Y. (1985): "VLSI array processors," IEEE ASSP Magazine, 2, pp.4-22

Kushner, H. (1971): Introduction to Stochastic Control (Holt, Rinehart and Winston, Inc., New York)

Lewis, F.L. (1986): Optimal Estimation (Wiley, New York)


Lu, M., Qiao, X., Chen, G. (1992): "A parallel square-root algorithm for the modified extended Kalman filter," IEEE Trans. Aero. Elect. Syst., 28, pp.153-163

Lu, M., Qiao, X., Chen, G. (1993): "Parallel computation of the modified extended Kalman filter," Int'l J. Comput. Math., 45, pp.69-87

Maybeck, P.S. (1982): Stochastic Models, Estimation, and Control, Vols. 1,2,3 (Academic, New York)

Mead, C., Conway, L. (1980): Introduction to VLSI Systems (Addison-Wesley, Reading, MA)

Mehra, R.K. (1970): "On the identification of variances and adaptive Kalman filtering," IEEE Trans. Auto. Contr., 15, pp.175-184

Mehra, R.K. (1972): "Approaches to adaptive filtering," IEEE Trans. Auto. Contr., 17, pp.693-698

Mendel, J.M. (1987): Lessons in Digital Estimation Theory (Prentice-Hall, Englewood Cliffs, New Jersey)

Potter, J.E. (1963): "New statistical formulas," Instrumentation Lab., MIT, Space Guidance Analysis Memo. #40

Probability Group (1975), Institute of Mathematics, Academia Sinica, China (ed.): Mathematical Methods of Filtering for Discrete-Time Systems (in Chinese) (Beijing)

Ruymgaart, P.A., Soong, T.T. (1985,88): Mathematics of Kalman-Bucy Filtering, Springer Ser. Inf. Sci., Vol. 14 (Springer, Berlin Heidelberg)

Shiryayev, A.N. (1984): Probability (Springer-Verlag, New York)

Siouris, G., Chen, G. and Wang, J. (1997): "Tracking an incoming ballistic missile," IEEE Trans. Aero. Elect. Syst., 33, pp.232-240

Sorenson, H.W., ed. (1985): Kalman Filtering: Theory and Application (IEEE, New York)

Stengel, R.F. (1986): Stochastic Optimal Control: Theory and Application (Wiley, New York)

Strobach, P. (1990): Linear Prediction Theory: A Mathematical Basis for Adaptive Systems, Springer Ser. Inf. Sci., Vol. 21 (Springer, Berlin Heidelberg)

Wang, E.P. (1972): "Optimal linear recursive filtering methods," J. Mathematics in Practice and Theory (in Chinese), 6, pp.40-50

Wonham, W.M. (1968): "On the separation theorem of stochastic control," SIAM J. Control, 6, pp.312-326


Xu, J.H., Bian, G.R., Ni, C.K., Tang, G.X. (1981): State Estimation and System Identification (in Chinese) (Beijing)

Young, P. (1984): Recursive Estimation and Time-Series Analysis (Springer, New York)


Answers and Hints to Exercises

Chapter 1

1.1. Since most of the properties can be verified directly by using the definition of the trace, we only consider $\mathrm{tr}\,AB = \mathrm{tr}\,BA$. Indeed,

$$\mathrm{tr}\,AB = \sum_{i=1}^{n}\sum_{j=1}^{n} a_{ij}b_{ji} = \sum_{j=1}^{n}\sum_{i=1}^{n} b_{ji}a_{ij} = \mathrm{tr}\,BA\,.$$
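The identity $\mathrm{tr}\,AB = \mathrm{tr}\,BA$ is easy to confirm numerically; a minimal sketch (the matrices are arbitrary examples, not from the text):

```python
def matmul(A, B):
    # Plain-list matrix product.
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
# tr(AB) = tr(BA) holds even though AB != BA in general.
assert trace(matmul(A, B)) == trace(matmul(B, A))  # both equal 69
```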

1.2.

1.3.

A=[~ ;], B=[~ ~].

1.4. There exist unitary matrices $P$ and $Q$ such that

$$A = P\,\mathrm{diag}[\,\lambda_1,\cdots,\lambda_n\,]\,Q$$

and

$$\sum_{k=1}^{n}\lambda_k^2 \le \sum_{k=1}^{n}\mu_k^2\,.$$

Let $P = [p_{ij}]_{n\times n}$ and $Q = [q_{ij}]_{n\times n}$. Then, since the columns of a unitary matrix have unit length,

$$p_{11}^2 + p_{21}^2 + \cdots + p_{n1}^2 = 1\,,\ \cdots\,,\ p_{1n}^2 + p_{2n}^2 + \cdots + p_{nn}^2 = 1\,,$$
$$q_{11}^2 + q_{21}^2 + \cdots + q_{n1}^2 = 1\,,\ \cdots\,,\ q_{1n}^2 + q_{2n}^2 + \cdots + q_{nn}^2 = 1\,,$$

and

$$\begin{aligned}
\mathrm{tr}\,AA^T
&= p_{11}^2\lambda_1^2 + p_{12}^2\lambda_2^2 + \cdots + p_{1n}^2\lambda_n^2 \\
&\quad + \cdots \\
&\quad + p_{n1}^2\lambda_1^2 + p_{n2}^2\lambda_2^2 + \cdots + p_{nn}^2\lambda_n^2 \\
&= (p_{11}^2 + p_{21}^2 + \cdots + p_{n1}^2)\lambda_1^2 + \cdots + (p_{1n}^2 + p_{2n}^2 + \cdots + p_{nn}^2)\lambda_n^2 \\
&= \lambda_1^2 + \lambda_2^2 + \cdots + \lambda_n^2\,.
\end{aligned}$$

Similarly, $\mathrm{tr}\,BB^T = \mu_1^2 + \mu_2^2 + \cdots + \mu_n^2$. Hence, $\mathrm{tr}\,AA^T \le \mathrm{tr}\,BB^T$.

1.5. Denote

$$I = \int_{-\infty}^{\infty} e^{-y^2}\,dy\,.$$

Then, using polar coordinates, we have

$$I^2 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(x^2+y^2)}\,dx\,dy = \int_{0}^{2\pi}\!\int_{0}^{\infty} e^{-r^2}\,r\,dr\,d\theta = \pi\,,$$

so that $I = \sqrt{\pi}$.

1.6. Denote

$$I(x) = \int_{-\infty}^{\infty} e^{-xy^2}\,dy\,.$$

Then, by Exercise 1.5,

$$I(x) = \frac{1}{\sqrt{x}}\int_{-\infty}^{\infty} e^{-(\sqrt{x}\,y)^2}\,d(\sqrt{x}\,y) = \sqrt{\pi/x}\,.$$

Hence,

$$\int_{-\infty}^{\infty} y^2 e^{-y^2}\,dy = -\frac{d}{dx}I(x)\Big|_{x=1} = -\frac{d}{dx}\bigl(\sqrt{\pi/x}\bigr)\Big|_{x=1} = \frac{1}{2}\sqrt{\pi}\,.$$
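Both Gaussian integrals can be confirmed by direct numerical quadrature; a sketch using a midpoint rule (truncating to $[-10,10]$ is an assumption justified by the integrand's rapid decay):

```python
import math

def integrate(f, a=-10.0, b=10.0, n=100000):
    # Midpoint rule; the integrands decay so fast that [-10, 10] suffices.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

I0 = integrate(lambda y: math.exp(-y * y))           # should be sqrt(pi)
I2 = integrate(lambda y: y * y * math.exp(-y * y))   # should be sqrt(pi)/2
assert abs(I0 - math.sqrt(math.pi)) < 1e-6
assert abs(I2 - math.sqrt(math.pi) / 2) < 1e-6
```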

1.7. (a) Let $P$ be a unitary matrix so that

$$R = P^T\,\mathrm{diag}[\,\lambda_1,\cdots,\lambda_n\,]\,P\,,$$

and define $y$ by the substitution

$$x = \underline{\mu} + \sqrt{2}\,P^{-1}\,\mathrm{diag}\bigl[\,1/\sqrt{\lambda_1},\cdots,1/\sqrt{\lambda_n}\,\bigr]\,y\,.$$

Then

$$\begin{aligned}
E(X) &= \int_{-\infty}^{\infty} x f(x)\,dx \\
&= \int_{-\infty}^{\infty}\Bigl(\underline{\mu} + \sqrt{2}\,P^{-1}\,\mathrm{diag}\bigl[\,1/\sqrt{\lambda_1},\cdots,1/\sqrt{\lambda_n}\,\bigr]\,y\Bigr) f(x)\,dx \\
&= \underline{\mu}\int_{-\infty}^{\infty} f(x)\,dx
+ \mathrm{const}\cdot\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty}\begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix} e^{-y_1^2}\cdots e^{-y_n^2}\,dy_1\cdots dy_n \\
&= \underline{\mu}\cdot 1 + 0 = \underline{\mu}\,.
\end{aligned}$$

(b) Using the same substitution, we have

$$\begin{aligned}
\mathrm{Var}(X)
&= \int_{-\infty}^{\infty}(x - \underline{\mu})(x - \underline{\mu})^T f(x)\,dx \\
&= \int_{-\infty}^{\infty} 2R^{1/2}\,y y^T\,R^{1/2} f(x)\,dx \\
&= \frac{2}{\pi^{n/2}}\,R^{1/2}\Bigl\{\int_{-\infty}^{\infty}\!\!\cdots\!\int_{-\infty}^{\infty}
\begin{bmatrix} y_1^2 & \cdots & y_1y_n \\ \vdots & & \vdots \\ y_ny_1 & \cdots & y_n^2 \end{bmatrix}
e^{-y_1^2}\cdots e^{-y_n^2}\,dy_1\cdots dy_n\Bigr\}\,R^{1/2} \\
&= R^{1/2}\,I\,R^{1/2} = R\,.
\end{aligned}$$

1.8. All the properties can be easily verified from the definitions.

1.9. We have already proved that if $X_1$ and $X_2$ are independent, then $\mathrm{Cov}(X_1,X_2) = 0$. Suppose now that $R_{12} = \mathrm{Cov}(X_1,X_2) = 0$. Then $R_{21} = \mathrm{Cov}(X_2,X_1) = 0$, so that

$$f(X_1,X_2) = \frac{1}{(2\pi)^{n/2}\sqrt{\det R_{11}\det R_{22}}}\;
e^{-\frac{1}{2}(X_1-\underline{\mu}_1)^T R_{11}^{-1}(X_1-\underline{\mu}_1)}\,
e^{-\frac{1}{2}(X_2-\underline{\mu}_2)^T R_{22}^{-1}(X_2-\underline{\mu}_2)}
= f_1(X_1)\cdot f_2(X_2)\,.$$

Hence, $X_1$ and $X_2$ are independent.


1.10. (1.35) can be verified by a direct computation. First, the following block factorization may be easily obtained:

$$\begin{bmatrix} R_{xx} & R_{xy} \\ R_{yx} & R_{yy} \end{bmatrix}
= \begin{bmatrix} I & R_{xy}R_{yy}^{-1} \\ 0 & I \end{bmatrix}
\begin{bmatrix} R_{xx} - R_{xy}R_{yy}^{-1}R_{yx} & 0 \\ 0 & R_{yy} \end{bmatrix}
\begin{bmatrix} I & 0 \\ R_{yy}^{-1}R_{yx} & I \end{bmatrix}.$$

This yields, by taking determinants,

$$\det\begin{bmatrix} R_{xx} & R_{xy} \\ R_{yx} & R_{yy} \end{bmatrix}
= \det\bigl[\,R_{xx} - R_{xy}R_{yy}^{-1}R_{yx}\,\bigr]\cdot\det R_{yy}$$

and

$$\Bigl(\begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} \underline{\mu}_x \\ \underline{\mu}_y \end{bmatrix}\Bigr)^T
\begin{bmatrix} R_{xx} & R_{xy} \\ R_{yx} & R_{yy} \end{bmatrix}^{-1}
\Bigl(\begin{bmatrix} x \\ y \end{bmatrix} - \begin{bmatrix} \underline{\mu}_x \\ \underline{\mu}_y \end{bmatrix}\Bigr)
= (x - \tilde{\underline{\mu}})^T\bigl[\,R_{xx} - R_{xy}R_{yy}^{-1}R_{yx}\,\bigr]^{-1}(x - \tilde{\underline{\mu}})
+ (y - \underline{\mu}_y)^T R_{yy}^{-1}(y - \underline{\mu}_y)\,,$$

where $\tilde{\underline{\mu}} = \underline{\mu}_x + R_{xy}R_{yy}^{-1}(y - \underline{\mu}_y)$.

The remaining computational steps are straightforward.

1.11. Let $p_k = C_k^T W_k v_k$ and $\sigma^2 = E\bigl[\,p_k^T (C_k^T W_k C_k)^{-1} p_k\,\bigr]$. Then it can be easily verified that $F$ is a quadratic form in $y_k$ with Hessian $2(C_k^T W_k C_k)$. From

$$\frac{d}{dy_k}F(y_k) = 2(C_k^T W_k C_k)y_k - 2p_k = 0\,,$$

and the assumption that the matrix $(C_k^T W_k C_k)$ is nonsingular, we have

$$\hat{y}_k = (C_k^T W_k C_k)^{-1} p_k = (C_k^T W_k C_k)^{-1} C_k^T W_k v_k\,.$$

1.12.

$$\begin{aligned}
E\hat{x}_k &= (C_k^T R_k^{-1} C_k)^{-1} C_k^T R_k^{-1} E(v_k - D_k u_k) \\
&= (C_k^T R_k^{-1} C_k)^{-1} C_k^T R_k^{-1} E(C_k x_k + \eta_k) \\
&= E x_k\,.
\end{aligned}$$


Chapter 2

2.1.

$$\begin{aligned}
W_{k,k-1}^{-1} &= \mathrm{Var}(\varepsilon_{k,k-1}) = E(\varepsilon_{k,k-1}\varepsilon_{k,k-1}^T) \\
&= E(\mathbf{v}^{k-1} - H_{k,k-1}x_k)(\mathbf{v}^{k-1} - H_{k,k-1}x_k)^T \\
&= \mathrm{diag}[\,R_0,\cdots,R_{k-1}\,]
+ \mathrm{Var}\begin{bmatrix} C_0\sum_{i=1}^{k}\Phi_{0i}\Gamma_{i-1}\xi_{i-1} \\ \vdots \\ C_{k-1}\Phi_{k-1,k}\Gamma_{k-1}\xi_{k-1} \end{bmatrix}.
\end{aligned}$$

2.2. For any nonzero vector $x$, we have $x^T A x > 0$ and $x^T B x \ge 0$, so that

$$x^T(A + B)x = x^T A x + x^T B x > 0\,.$$

Hence, $A + B$ is positive definite.

2.3.

$$\begin{aligned}
W_{k,k-1}^{-1}
&= E(\varepsilon_{k,k-1}\varepsilon_{k,k-1}^T) \\
&= E(\varepsilon_{k-1,k-1} - H_{k,k-1}\Gamma_{k-1}\xi_{k-1})(\varepsilon_{k-1,k-1} - H_{k,k-1}\Gamma_{k-1}\xi_{k-1})^T \\
&= E(\varepsilon_{k-1,k-1}\varepsilon_{k-1,k-1}^T) + H_{k,k-1}\Gamma_{k-1}E(\xi_{k-1}\xi_{k-1}^T)\Gamma_{k-1}^T H_{k,k-1}^T \\
&= W_{k-1,k-1}^{-1} + H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T\Phi_{k-1,k}^T H_{k-1,k-1}^T\,.
\end{aligned}$$

2.4. Apply Lemma 1.2 to $A_{11} = W_{k-1,k-1}^{-1}$, $A_{22} = Q_{k-1}^{-1}$, and $\cdots$

2.5. Using Exercise 2.4, or (2.9), we have

$$\begin{aligned}
H_{k,k-1}^T W_{k,k-1}
&= \Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} \\
&\quad - \Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1} \\
&\qquad \cdot\bigl(Q_{k-1}^{-1} + \Gamma_{k-1}^T\Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1}\bigr)^{-1} \\
&\qquad \cdot \Gamma_{k-1}^T\Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} \\
&= \Phi_{k-1,k}^T\bigl\{ I - H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1} \\
&\qquad \cdot\bigl(Q_{k-1}^{-1} + \Gamma_{k-1}^T\Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1}\bigr)^{-1} \\
&\qquad \cdot \Gamma_{k-1}^T\Phi_{k-1,k}^T \bigr\}\, H_{k-1,k-1}^T W_{k-1,k-1}\,.
\end{aligned}$$


2.6. Using Exercise 2.5, or (2.10), and the identity $H_{k,k-1} = H_{k-1,k-1}\Phi_{k-1,k}$, we have

$$\begin{aligned}
&\bigl(H_{k,k-1}^T W_{k,k-1} H_{k,k-1}\bigr)\Phi_{k,k-1}
\bigl(H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\bigr)^{-1} H_{k-1,k-1}^T W_{k-1,k-1} \\
&= \Phi_{k-1,k}^T\bigl\{ I - H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1} \\
&\qquad \cdot\bigl(Q_{k-1}^{-1} + \Gamma_{k-1}^T\Phi_{k-1,k}^T H_{k-1,k-1}^T W_{k-1,k-1} H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1}\bigr)^{-1}
\cdot \Gamma_{k-1}^T\Phi_{k-1,k}^T \bigr\}\, H_{k-1,k-1}^T W_{k-1,k-1} \\
&= H_{k,k-1}^T W_{k,k-1}\,.
\end{aligned}$$

2.7.

$$\begin{aligned}
&P_{k,k-1}C_k^T\bigl(C_kP_{k,k-1}C_k^T + R_k\bigr)^{-1} \\
&= P_{k,k-1}C_k^T\bigl(R_k^{-1} - R_k^{-1}C_k(P_{k,k-1}^{-1} + C_k^TR_k^{-1}C_k)^{-1}C_k^TR_k^{-1}\bigr) \\
&= \bigl(P_{k,k-1} - P_{k,k-1}C_k^TR_k^{-1}C_k(P_{k,k-1}^{-1} + C_k^TR_k^{-1}C_k)^{-1}\bigr)C_k^TR_k^{-1} \\
&= \bigl(P_{k,k-1} - P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1}(C_kP_{k,k-1}C_k^T + R_k)R_k^{-1}C_k(P_{k,k-1}^{-1} + C_k^TR_k^{-1}C_k)^{-1}\bigr)C_k^TR_k^{-1} \\
&= \bigl(P_{k,k-1} - P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1}(C_kP_{k,k-1}C_k^TR_k^{-1}C_k + C_k)(P_{k,k-1}^{-1} + C_k^TR_k^{-1}C_k)^{-1}\bigr)C_k^TR_k^{-1} \\
&= \bigl(P_{k,k-1} - P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1}C_kP_{k,k-1}(C_k^TR_k^{-1}C_k + P_{k,k-1}^{-1})(P_{k,k-1}^{-1} + C_k^TR_k^{-1}C_k)^{-1}\bigr)C_k^TR_k^{-1} \\
&= \bigl(P_{k,k-1} - P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1}C_kP_{k,k-1}\bigr)C_k^TR_k^{-1} \\
&= P_{k,k}C_k^TR_k^{-1} \\
&= G_k\,.
\end{aligned}$$
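In the scalar case the chain of identities above reduces to $pc/(c^2p + r) = p_{k,k}\,c/r$, which is easy to check numerically (the numbers below are illustrative, not from the text):

```python
# Scalar instance of Exercise 2.7: the two gain formulas agree.
p, c, r = 2.0, 0.5, 0.3            # prior variance, measurement map, noise variance

g = p * c / (c * c * p + r)        # gain via (C P C^T + R)^{-1}
p_upd = (1 - g * c) * p            # filtered variance P_{k,k}
assert abs(g - p_upd * c / r) < 1e-12   # gain via P_{k,k} C^T R^{-1}
```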

2.8.

$$\begin{aligned}
P_{k,k-1}
&= \bigl(H_{k,k-1}^TW_{k,k-1}H_{k,k-1}\bigr)^{-1} \\
&= \Bigl(\Phi_{k-1,k}^T\bigl(H_{k-1,k-1}^TW_{k-1,k-1}H_{k-1,k-1} \\
&\qquad - H_{k-1,k-1}^TW_{k-1,k-1}H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1} \\
&\qquad\quad \cdot\bigl(Q_{k-1}^{-1} + \Gamma_{k-1}^T\Phi_{k-1,k}^TH_{k-1,k-1}^TW_{k-1,k-1}H_{k-1,k-1}\Phi_{k-1,k}\Gamma_{k-1}\bigr)^{-1} \\
&\qquad\quad \cdot\Gamma_{k-1}^T\Phi_{k-1,k}^TH_{k-1,k-1}^TW_{k-1,k-1}H_{k-1,k-1}\bigr)\Phi_{k-1,k}\Bigr)^{-1} \\
&= \Bigl(\Phi_{k-1,k}^TP_{k-1,k-1}^{-1}\Phi_{k-1,k} - \Phi_{k-1,k}^TP_{k-1,k-1}^{-1}\Phi_{k-1,k}\Gamma_{k-1} \\
&\qquad \cdot\bigl(Q_{k-1}^{-1} + \Gamma_{k-1}^T\Phi_{k-1,k}^TP_{k-1,k-1}^{-1}\Phi_{k-1,k}\Gamma_{k-1}\bigr)^{-1}
\Gamma_{k-1}^T\Phi_{k-1,k}^TP_{k-1,k-1}^{-1}\Phi_{k-1,k}\Bigr)^{-1} \\
&= \bigl(\Phi_{k-1,k}^TP_{k-1,k-1}^{-1}\Phi_{k-1,k}\bigr)^{-1} + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T \\
&= A_{k-1}P_{k-1,k-1}A_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T\,.
\end{aligned}$$


2.9.

$$\begin{aligned}
&E(x_k - \hat{x}_{k|k-1})(x_k - \hat{x}_{k|k-1})^T \\
&= E\bigl(x_k - (H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1}H_{k,k-1}^TW_{k,k-1}\mathbf{v}^{k-1}\bigr)
\bigl(x_k - (H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1}H_{k,k-1}^TW_{k,k-1}\mathbf{v}^{k-1}\bigr)^T \\
&= E\bigl(x_k - (H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1}H_{k,k-1}^TW_{k,k-1}(H_{k,k-1}x_k + \varepsilon_{k,k-1})\bigr)\bigl(\cdots\bigr)^T \\
&= (H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1}H_{k,k-1}^TW_{k,k-1}\,E(\varepsilon_{k,k-1}\varepsilon_{k,k-1}^T)\,W_{k,k-1}H_{k,k-1}(H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1} \\
&= (H_{k,k-1}^TW_{k,k-1}H_{k,k-1})^{-1} \\
&= P_{k,k-1}\,.
\end{aligned}$$

The derivation of the second identity is similar.

2.10. Since

$$\sigma^2 = \mathrm{Var}(x_k) = E(a x_{k-1} + \xi_{k-1})^2
= a^2\mathrm{Var}(x_{k-1}) + 2aE(x_{k-1}\xi_{k-1}) + E(\xi_{k-1}^2)
= a^2\sigma^2 + \mu^2\,,$$

we have $\sigma^2 = \mu^2/(1 - a^2)$. For $j = 1$, we have

$$E(x_k x_{k+1}) = E\bigl(x_k(a x_k + \xi_k)\bigr) = a\,\mathrm{Var}(x_k) + E(x_k\xi_k) = a\sigma^2\,.$$

For $j = 2$, we have

$$E(x_k x_{k+2}) = E\bigl(x_k(a x_{k+1} + \xi_{k+1})\bigr) = aE(x_k x_{k+1}) + E(x_k\xi_{k+1}) = aE(x_k x_{k+1}) = a^2\sigma^2\,,$$

etc. If $j$ is negative, then a similar result can be obtained. By induction, we may conclude that $E(x_k x_{k+j}) = a^{|j|}\sigma^2$ for all integers $j$.
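The stationary variance found above is the fixed point of the recursion $\mathrm{Var}(x_k) = a^2\mathrm{Var}(x_{k-1}) + \mu^2$, which can be checked numerically (the values of $a$ and $\mu^2$ below are illustrative, with $|a| < 1$ assumed):

```python
a, mu2 = 0.8, 1.0
var = 0.0
for _ in range(500):
    # Var(x_k) = a^2 Var(x_{k-1}) + mu^2
    var = a * a * var + mu2
sigma2 = mu2 / (1 - a * a)        # closed-form stationary variance
assert abs(var - sigma2) < 1e-9
# The autocovariance then decays geometrically: E(x_k x_{k+j}) = a^{|j|} sigma^2.
```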

Page 13: References - CERN · 200 Answers and Hints to Exercises 1.10. (1.35) can be verified by a direct computation. First, the following formula may be easily obtained: This yields, by

204 Answers and Hints to Exercises

2.11. Using the Kalman filtering equations (2.17), we have

$$P_{0,0} = \mathrm{Var}(x_0) = \mu^2\,, \qquad P_{k,k-1} = P_{k-1,k-1}\,,$$

$$G_k = P_{k,k-1}\bigl(P_{k,k-1} + R_k\bigr)^{-1} = \frac{P_{k-1,k-1}}{P_{k-1,k-1} + \sigma^2}\,,$$

and

$$P_{k,k} = (1 - G_k)P_{k,k-1} = \frac{\sigma^2 P_{k-1,k-1}}{\sigma^2 + P_{k-1,k-1}}\,.$$

Observe that

$$\frac{1}{P_{k,k}} = \frac{1}{P_{k-1,k-1}} + \frac{1}{\sigma^2}\,,
\qquad\text{so that}\qquad
P_{k,k} = \frac{\sigma^2\mu^2}{\sigma^2 + k\mu^2}\,.$$

Hence,

$$G_k = \frac{P_{k-1,k-1}}{P_{k-1,k-1} + \sigma^2} = \frac{\mu^2}{\sigma^2 + k\mu^2}\,,$$

so that

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + G_k(v_k - \hat{x}_{k|k-1})
= \hat{x}_{k-1|k-1} + \frac{\mu^2}{\sigma^2 + k\mu^2}\,(v_k - \hat{x}_{k-1|k-1})$$

with $\hat{x}_{0|0} = E(x_0) = 0$. It follows that

$\hat{x}_{k|k} \approx \hat{x}_{k-1|k-1}$ for large values of $k$.

2.12.

$$\begin{aligned}
\hat{Q}_N &= \frac{1}{N}\sum_{k=1}^{N}(v_k v_k^T)
= \frac{1}{N}(v_N v_N^T) + \frac{1}{N}\sum_{k=1}^{N-1}(v_k v_k^T) \\
&= \frac{1}{N}(v_N v_N^T) + \frac{N-1}{N}\,\hat{Q}_{N-1}
= \hat{Q}_{N-1} + \frac{1}{N}\bigl[(v_N v_N^T) - \hat{Q}_{N-1}\bigr]
\end{aligned}$$

with the initial estimation $\hat{Q}_1 = v_1 v_1^T$.
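Both scalar recursions above (Exercises 2.11 and 2.12) are easy to verify numerically; a sketch with illustrative numbers, not from the text:

```python
# Exercise 2.11: P_{k,k} = sigma^2 P_{k-1,k-1} / (sigma^2 + P_{k-1,k-1}),
# with P_{0,0} = mu^2, has the closed form sigma^2 mu^2 / (sigma^2 + k mu^2).
sigma2, mu2 = 0.5, 2.0
p = mu2
for k in range(1, 50):
    p = sigma2 * p / (sigma2 + p)
    assert abs(p - sigma2 * mu2 / (sigma2 + k * mu2)) < 1e-12

# Exercise 2.12: the recursive update reproduces the batch average of
# v_k v_k^T (scalar case shown).
vs = [0.3, -1.2, 0.7, 2.5, -0.4]
q = vs[0] * vs[0]                      # Q_1 = v_1 v_1^T
for n in range(2, len(vs) + 1):
    q += (vs[n - 1] * vs[n - 1] - q) / n
batch = sum(v * v for v in vs) / len(vs)
assert abs(q - batch) < 1e-12
```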


2.13. Use superposition.

2.14. Set $x_k = [\,(x_k^1)^T \cdots (x_k^N)^T\,]^T$ for each $k$, $k = 0,1,\cdots$, with $x_j = 0$ (and $u_j = 0$) for $j < 0$. Substituting these equations into the system equations yields the required result. Since $x_j = 0$ and $u_j = 0$ for $j < 0$, it is also clear that $x_0 = 0$.

Chapter 3

3.1. Let $A = BB^T$, where $B = [b_{ij}] \ne 0$. Then $\mathrm{tr}\,A = \mathrm{tr}\,BB^T = \sum_{i,j} b_{ij}^2 > 0$.

3.2. By Assumption 2.1, $\eta_\ell$ is independent of $x_0, \xi_0, \cdots, \xi_{j-1}, \eta_0, \cdots, \eta_{j-1}$, since $\ell \ge j$. On the other hand,

$$\begin{aligned}
e_j &= C_j(x_j - \hat{y}_{j-1}) \\
&= C_j\Bigl(A_{j-1}x_{j-1} + \Gamma_{j-1}\xi_{j-1} - \sum_{i=0}^{j-1} P_{j-1,i}(C_i x_i + \eta_i)\Bigr) \\
&= B_0 x_0 + \sum_{i=0}^{j-1} B_{1i}\xi_i + \sum_{i=0}^{j-1} B_{2i}\eta_i
\end{aligned}$$

for some constant matrices $B_0$, $B_{1i}$ and $B_{2i}$. Hence, $\langle \eta_\ell, e_j\rangle = O_{q\times q}$ for all $\ell \ge j$.

3.3. Combining (3.8) and (3.4), we have

$$e_j = \|z_j\|_q^{-1} z_j = \|z_j\|_q^{-1} v_j - \sum_{i=0}^{j-1}\bigl(\|z_j\|_q^{-1} C_j P_{j-1,i}\bigr)v_i\,;$$

that is, $e_j$ can be expressed in terms of $v_0, v_1, \cdots, v_j$. Conversely, we have

$$\begin{aligned}
v_0 &= z_0 = \|z_0\|_q\, e_0\,, \\
v_1 &= z_1 + C_1\hat{y}_0 = z_1 + C_1 P_{0,0} v_0 = \|z_1\|_q\, e_1 + C_1 P_{0,0}\|z_0\|_q\, e_0\,,
\end{aligned}$$

etc.; that is, $v_j$ can also be expressed in terms of $e_0, e_1, \cdots, e_j$. Hence, $v_0, \cdots, v_j$ and $e_0, \cdots, e_j$ span the same linear subspace.

3.4. By Exercise 3.3, we have

$$v_i = \sum_{\ell=0}^{i} L_\ell\, e_\ell$$

for some $q\times q$ constant matrices $L_\ell$, $\ell = 0,1,\cdots,i$, so that

$$\langle v_i, z_k\rangle = \sum_{\ell=0}^{i} L_\ell\,\langle e_\ell, e_k\rangle\,\|z_k\|_q = O_{q\times q}\,,$$

$i = 0,1,\cdots,k-1$. Hence, for $j = 0,1,\cdots,k-1$,

$$\langle \hat{y}_j, z_k\rangle = \sum_{i=0}^{j} P_{j,i}\langle v_i, z_k\rangle = O_{n\times q}\,.$$

3.5. Since

$$x_k = A_{k-1}x_{k-1} + \Gamma_{k-1}\xi_{k-1}
= A_{k-1}(A_{k-2}x_{k-2} + \Gamma_{k-2}\xi_{k-2}) + \Gamma_{k-1}\xi_{k-1}
= \cdots = B_0 x_0 + \sum_{i=0}^{k-1} B_{1i}\xi_i$$

for some constant matrices $B_0$ and $B_{1i}$, and $\xi_k$ is independent of $x_0$ and $\xi_i$ ($0 \le i \le k-1$), we have $\langle x_k, \xi_k\rangle = 0$. The rest can be shown in a similar manner.


3.6. Use superposition.

3.7. Using the formula obtained in Exercise 3.6, we have

$$\hat{d}_{k|k} = \hat{d}_{k-1|k-1} + h w_{k-1} + G_k\bigl(v_k - \cdots - \hat{d}_{k-1|k-1} - h w_{k-1}\bigr)\,,
\qquad \hat{d}_{0|0} = E(d_0)\,,$$

where $G_k$ is obtained by using the standard algorithm (3.25) with $A_k \equiv C_k \equiv \Gamma_k \equiv 1$.

3.8. Let

$$C = [\,1\ \ 0\ \ 0\,]\,.$$

Then the system described in Exercise 3.8 can be decomposed into three subsystems:

$$\begin{cases} x_{k+1}^i = A x_k^i + \Gamma^i \xi_k^i \\ v_k^i = C x_k^i + \eta_k^i\,, \end{cases}$$

$i = 1,2,3$, where for each $k$, $x_k^i$ and $\xi_k^i$ are 3-vectors, $v_k^i$ and $\eta_k^i$ are scalars, $Q_k$ is a $3\times 3$ non-negative definite symmetric matrix, and $R_k > 0$ is a scalar.

Chapter 4

4.1. Using (4.6), we have

$$\begin{aligned}
L(Ax + By,\ v)
&= E(Ax + By) + \langle Ax + By, v\rangle\,[\mathrm{Var}(v)]^{-1}(v - E(v)) \\
&= A\bigl\{E(x) + \langle x, v\rangle\,[\mathrm{Var}(v)]^{-1}(v - E(v))\bigr\} \\
&\quad + B\bigl\{E(y) + \langle y, v\rangle\,[\mathrm{Var}(v)]^{-1}(v - E(v))\bigr\} \\
&= A\,L(x, v) + B\,L(y, v)\,.
\end{aligned}$$


4.2. Using (4.6) and the fact that $E(a) = a$, so that

$$\langle a, v\rangle = E(a - E(a))(v - E(v))^T = 0\,,$$

we have

$$L(a, v) = E(a) + \langle a, v\rangle\,[\mathrm{Var}(v)]^{-1}(v - E(v)) = a\,.$$

4.3. By definition, for a real-valued function $f$ and a matrix $A = [a_{ij}]$, $df/dA = [\partial f/\partial a_{ji}]$. Hence,

$$\begin{aligned}
0 &= \frac{\partial}{\partial H}\bigl(\mathrm{tr}\,\|x - \hat{y}\|_n^2\bigr) \\
&= \frac{\partial}{\partial H}\,E\bigl((x - E(x)) - H(v - E(v))\bigr)^T\bigl((x - E(x)) - H(v - E(v))\bigr) \\
&= E\,\frac{\partial}{\partial H}\bigl((x - E(x)) - H(v - E(v))\bigr)^T\bigl((x - E(x)) - H(v - E(v))\bigr) \\
&= E\bigl(-2\bigl((x - E(x)) - H(v - E(v))\bigr)(v - E(v))^T\bigr) \\
&= 2\bigl(H\,E(v - E(v))(v - E(v))^T - E(x - E(x))(v - E(v))^T\bigr) \\
&= 2\bigl(H\|v\|^2 - \langle x, v\rangle\bigr)\,.
\end{aligned}$$

This gives

$$H^* = \langle x, v\rangle\bigl[\|v\|^2\bigr]^{-1}\,,$$

so that

$$x^* = E(x) - \langle x, v\rangle\bigl[\|v\|^2\bigr]^{-1}(E(v) - v)\,.$$
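In the scalar case the minimizer is just $H^* = \mathrm{cov}(x,v)/\mathrm{var}(v)$; a finite-sample sketch with illustrative data (not from the text):

```python
# Check that H* = cov(x, v) / var(v) minimizes the sample mean of
# ((x - E x) - H (v - E v))^2.
xs = [1.0, 2.0, 4.0, 3.0]
vs = [0.5, 1.5, 3.5, 2.0]
mx, mv = sum(xs) / 4, sum(vs) / 4
cov = sum((x - mx) * (v - mv) for x, v in zip(xs, vs)) / 4
var = sum((v - mv) ** 2 for v in vs) / 4
h_star = cov / var

def loss(h):
    return sum(((x - mx) - h * (v - mv)) ** 2 for x, v in zip(xs, vs)) / 4

# The loss is a convex quadratic in h, so nearby values cannot do better.
assert loss(h_star) <= loss(h_star + 0.1)
assert loss(h_star) <= loss(h_star - 0.1)
```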

4.4. Since $\mathbf{v}^{k-2}$ is a linear combination (with constant matrix coefficients) of

$$x_0, \xi_0, \cdots, \xi_{k-3}, \eta_0, \cdots, \eta_{k-2}\,,$$

which are all uncorrelated with $\xi_{k-1}$ and $\eta_{k-1}$, we have

$$\langle \xi_{k-1}, \mathbf{v}^{k-2}\rangle = 0
\qquad\text{and}\qquad
\langle \eta_{k-1}, \mathbf{v}^{k-2}\rangle = 0\,.$$

Similarly, we can verify the other formulas [where (4.6) may be used].

4.5. The first identity follows from the Kalman gain equation (cf. Theorem 4.1(c) or (4.19)), namely:

$$G_k\bigl(C_kP_{k,k-1}C_k^T + R_k\bigr) = P_{k,k-1}C_k^T\,,$$

so that

$$G_kR_k = P_{k,k-1}C_k^T - G_kC_kP_{k,k-1}C_k^T = (I - G_kC_k)P_{k,k-1}C_k^T\,.$$

To prove the second equality, we apply (4.18) and (4.17) to obtain

$$\begin{aligned}
&\langle x_{k-1} - \hat{x}_{k-1|k-1},\ \Gamma_{k-1}\xi_{k-1} - K_{k-1}\eta_{k-1}\rangle \\
&= \bigl\langle x_{k-1} - \hat{x}_{k-1|k-2} - \langle x^{\#}_{k-1}, v^{\#}_{k-1}\rangle\bigl[\|v^{\#}_{k-1}\|^2\bigr]^{-1}v^{\#}_{k-1},\
\Gamma_{k-1}\xi_{k-1} - K_{k-1}\eta_{k-1}\bigr\rangle \\
&= \bigl\langle x^{\#}_{k-1} - \langle x^{\#}_{k-1}, v^{\#}_{k-1}\rangle\bigl[\|v^{\#}_{k-1}\|^2\bigr]^{-1}\bigl(C_{k-1}x^{\#}_{k-1} + \eta_{k-1}\bigr),\
\Gamma_{k-1}\xi_{k-1} - K_{k-1}\eta_{k-1}\bigr\rangle \\
&= -\langle x^{\#}_{k-1}, v^{\#}_{k-1}\rangle\bigl[\|v^{\#}_{k-1}\|^2\bigr]^{-1}\bigl(S_{k-1}^T\Gamma_{k-1}^T - R_{k-1}K_{k-1}^T\bigr) \\
&= O_{n\times n}\,,
\end{aligned}$$

in which, since $K_{k-1} = \Gamma_{k-1}S_{k-1}R_{k-1}^{-1}$, we have

$$S_{k-1}^T\Gamma_{k-1}^T - R_{k-1}K_{k-1}^T = O_{n\times n}\,.$$

4.6. Follow the same procedure as in the derivation of Theorem 4.1, with the term $v_k$ replaced by $v_k - D_ku_k$, and with

$$\hat{x}_{k|k-1} = L\bigl(A_{k-1}x_{k-1} + B_{k-1}u_{k-1} + \Gamma_{k-1}\xi_{k-1},\ \mathbf{v}^{k-1}\bigr)$$

instead of

$$\hat{x}_{k|k-1} = L(x_k,\ \mathbf{v}^{k-1}) = L\bigl(A_{k-1}x_{k-1} + \Gamma_{k-1}\xi_{k-1},\ \mathbf{v}^{k-1}\bigr)\,.$$

4.7. Let

$$\begin{aligned}
w_k &= -a_1v_{k-1} + b_1u_{k-1} + c_1e_{k-1} + w_{k-1}\,, \\
w_{k-1} &= -a_2v_{k-2} + b_2u_{k-2} + w_{k-2}\,,
\end{aligned}$$

and define $x_k = [\,w_k\ \ w_{k-1}\ \ w_{k-2}\,]^T$. Then

$$\begin{cases} x_{k+1} = Ax_k + Bu_k + \Gamma e_k \\ v_k = Cx_k + Du_k + \Delta e_k\,, \end{cases}$$

where

$$A = \begin{bmatrix} -a_1 & 1 & 0 \\ -a_2 & 0 & 1 \\ -a_3 & 0 & 0 \end{bmatrix},
\qquad C = [\,1\ \ 0\ \ 0\,]\,, \qquad D = [\,b_0\,]\,, \qquad \Delta = [\,c_0\,]\,.$$


4.8. Let

$$\begin{aligned}
w_k &= -a_1v_{k-1} + b_1u_{k-1} + c_1e_{k-1} + w_{k-1}\,, \\
w_{k-1} &= -a_2v_{k-2} + b_2u_{k-2} + c_2e_{k-2} + w_{k-2}\,, \\
&\ \,\vdots
\end{aligned}$$

where $b_j = 0$ for $j > m$ and $c_j = 0$ for $j > \ell$, and define

$$x_k = [\,w_k\ \ w_{k-1}\ \cdots\ w_{k-n+1}\,]^T\,.$$

Then

$$\begin{cases} x_{k+1} = Ax_k + Bu_k + \Gamma e_k \\ v_k = Cx_k + Du_k + \Delta e_k\,, \end{cases}$$

where

$$A = \begin{bmatrix} -a_1 & 1 & 0 & \cdots & 0 \\ -a_2 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ -a_{n-1} & 0 & 0 & \cdots & 1 \\ -a_n & 0 & 0 & \cdots & 0 \end{bmatrix},
\qquad
B = \begin{bmatrix} b_1 - a_1b_0 \\ \vdots \\ b_m - a_mb_0 \\ -a_{m+1}b_0 \\ \vdots \\ -a_nb_0 \end{bmatrix},
\qquad
\Gamma = \begin{bmatrix} c_1 - a_1c_0 \\ \vdots \\ c_\ell - a_\ell c_0 \\ -a_{\ell+1}c_0 \\ \vdots \\ -a_nc_0 \end{bmatrix},$$

$$C = [\,1\ \ 0\ \cdots\ 0\,]\,, \qquad D = [\,b_0\,]\,, \qquad \Delta = [\,c_0\,]\,.$$

Chapter 5

5.1. Since $\mathbf{v}^k$ is a linear combination (with constant matrices as coefficients) of

$$x_0, \eta_0, \gamma_0, \cdots, \gamma_{k-1}, \xi_0, \beta_0, \cdots, \beta_{k-1}\,,$$

which are all independent of $\gamma_k$, we have

$$\langle \gamma_k, \mathbf{v}^k\rangle = 0\,.$$

On the other hand, $\gamma_k$ has zero mean, so that by (4.6) we have

$$L(\gamma_k, \mathbf{v}^k) = E(\gamma_k) - \langle \gamma_k, \mathbf{v}^k\rangle\bigl[\|\mathbf{v}^k\|^2\bigr]^{-1}\bigl(E(\mathbf{v}^k) - \mathbf{v}^k\bigr) = 0\,.$$


5.2. Using Lemma 4.2 with $\mathbf{v} = \mathbf{v}^{k-1}$, $\mathbf{v}^1 = \mathbf{v}^{k-2}$, $\mathbf{v}^2 = v_{k-1}$, and

$$v^{\#}_{k-1} = v_{k-1} - L(v_{k-1}, \mathbf{v}^{k-2})\,,$$

we have, for $x = v_{k-1}$,

$$\begin{aligned}
L(v_{k-1}, \mathbf{v}^{k-1})
&= L(v_{k-1}, \mathbf{v}^{k-2}) + \langle v^{\#}_{k-1}, v^{\#}_{k-1}\rangle\bigl[\|v^{\#}_{k-1}\|^2\bigr]^{-1}v^{\#}_{k-1} \\
&= L(v_{k-1}, \mathbf{v}^{k-2}) + v_{k-1} - L(v_{k-1}, \mathbf{v}^{k-2}) \\
&= v_{k-1}\,.
\end{aligned}$$

The equality $L(\gamma_k, \mathbf{v}^{k-1}) = 0$ can be shown by imitating the proof in Exercise 5.1.

5.3. It follows from Lemma 4.2 that

$$\begin{aligned}
z_{k-1} - \hat{z}_{k-1}
&= z_{k-1} - L(z_{k-1}, \mathbf{v}^{k-1}) \\
&= z_{k-1} - E(z_{k-1}) + \langle z_{k-1}, \mathbf{v}^{k-1}\rangle\bigl[\|\mathbf{v}^{k-1}\|^2\bigr]^{-1}\bigl(E(\mathbf{v}^{k-1}) - \mathbf{v}^{k-1}\bigr) \\
&= \begin{bmatrix} x_{k-1} \\ \xi_{k-1} \end{bmatrix} - \begin{bmatrix} E(x_{k-1}) \\ E(\xi_{k-1}) \end{bmatrix}
+ \begin{bmatrix} \langle x_{k-1}, \mathbf{v}^{k-1}\rangle \\ \langle \xi_{k-1}, \mathbf{v}^{k-1}\rangle \end{bmatrix}
\bigl[\|\mathbf{v}^{k-1}\|^2\bigr]^{-1}\bigl(E(\mathbf{v}^{k-1}) - \mathbf{v}^{k-1}\bigr)\,,
\end{aligned}$$

whose first $n$-subvector and last $p$-subvector are, respectively, linear combinations (with constant matrices as coefficients) of

$$x_0, \xi_0, \beta_0, \cdots, \beta_{k-2}, \eta_0, \gamma_0, \cdots, \gamma_{k-1}\,,$$

which are all independent of $\gamma_k$. Hence, we have $\langle z_{k-1} - \hat{z}_{k-1}, \gamma_k\rangle = 0$.

5.4. The proof is similar to that of Exercise 5.3.

5.5. For simplicity, denote

$$B = \bigl[\,C_0\mathrm{Var}(x_0)C_0^T + R_0\,\bigr]^{-1}\,.$$


It follows from (5.16) that

$$\begin{aligned}
&\mathrm{Var}(x_0 - \hat{x}_0) \\
&= \mathrm{Var}\bigl(x_0 - E(x_0) - [\mathrm{Var}(x_0)]C_0^T\bigl[C_0\mathrm{Var}(x_0)C_0^T + R_0\bigr]^{-1}(v_0 - C_0E(x_0))\bigr) \\
&= \mathrm{Var}\bigl(x_0 - E(x_0) - [\mathrm{Var}(x_0)]C_0^TB\bigl(C_0(x_0 - E(x_0)) + \eta_0\bigr)\bigr) \\
&= \mathrm{Var}\bigl((I - [\mathrm{Var}(x_0)]C_0^TBC_0)(x_0 - E(x_0)) - [\mathrm{Var}(x_0)]C_0^TB\eta_0\bigr) \\
&= \bigl(I - [\mathrm{Var}(x_0)]C_0^TBC_0\bigr)\mathrm{Var}(x_0)\bigl(I - C_0^TBC_0[\mathrm{Var}(x_0)]\bigr)
+ [\mathrm{Var}(x_0)]C_0^TBR_0BC_0[\mathrm{Var}(x_0)] \\
&= \mathrm{Var}(x_0) - [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)] - [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)] \\
&\quad + [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)]
+ [\mathrm{Var}(x_0)]C_0^TBR_0BC_0[\mathrm{Var}(x_0)] \\
&= \mathrm{Var}(x_0) - [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)] - [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)]
+ [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)] \\
&= \mathrm{Var}(x_0) - [\mathrm{Var}(x_0)]C_0^TBC_0[\mathrm{Var}(x_0)]\,.
\end{aligned}$$

5.6. From $\hat{\xi}_0 = 0$, we have

$$\hat{x}_1 = A_0\hat{x}_0 + G_1(v_1 - C_1A_0\hat{x}_0)$$

and $\hat{\xi}_1 = 0$, so that

$$\hat{x}_2 = A_1\hat{x}_1 + G_2(v_2 - C_2A_1\hat{x}_1)\,,$$

etc. In general, we have

$$\hat{x}_k = A_{k-1}\hat{x}_{k-1} + G_k(v_k - C_kA_{k-1}\hat{x}_{k-1})
= \hat{x}_{k|k-1} + G_k(v_k - C_k\hat{x}_{k|k-1})\,.$$

A direct computation gives

$$G_1 = \begin{bmatrix} P_{1,0}C_1^T(C_1P_{1,0}C_1^T + R_1)^{-1} \\ 0 \end{bmatrix},
\qquad
P_1 = \begin{bmatrix} \bigl[\,I_n - P_{1,0}C_1^T(C_1P_{1,0}C_1^T + R_1)^{-1}C_1\,\bigr]P_{1,0} & 0 \\ 0 & Q_1 \end{bmatrix},$$

and, in general,

$$G_k = \begin{bmatrix} P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1} \\ 0 \end{bmatrix},
\qquad
P_k = \begin{bmatrix} \bigl[\,I_n - P_{k,k-1}C_k^T(C_kP_{k,k-1}C_k^T + R_k)^{-1}C_k\,\bigr]P_{k,k-1} & 0 \\ 0 & Q_k \end{bmatrix}.$$

Finally, if we use the unbiased estimate $\hat{x}_0 = E(x_0)$ of $x_0$ instead of the somewhat more superior initial state estimate

$$\hat{x}_0 = E(x_0) - [\mathrm{Var}(x_0)]C_0^T\bigl[C_0\mathrm{Var}(x_0)C_0^T + R_0\bigr]^{-1}\bigl[C_0E(x_0) - v_0\bigr]\,,$$

and consequently set

$$P_0 = E\Bigl(\begin{bmatrix} x_0 \\ \xi_0 \end{bmatrix} - \begin{bmatrix} E(x_0) \\ E(\xi_0) \end{bmatrix}\Bigr)\Bigl(\begin{bmatrix} x_0 \\ \xi_0 \end{bmatrix} - \begin{bmatrix} E(x_0) \\ E(\xi_0) \end{bmatrix}\Bigr)^T
= \begin{bmatrix} \mathrm{Var}(x_0) & 0 \\ 0 & Q_0 \end{bmatrix},$$

then we obtain the Kalman filtering algorithm derived in Chapters 2 and 3.

5.7. Let $\bar{P}_k$ and $\bar{G}_k$ denote the upper-left $n\times n$ block of $P_k$ and the first $n$-block of $G_k$, respectively, and let

$$H_{k-1} = C_kA_{k-1} - N_{k-1}C_{k-1}\,.$$

Starting with (5.17b), namely

$$P_0 = \begin{bmatrix} \bigl([\mathrm{Var}(x_0)]^{-1} + C_0^TR_0^{-1}C_0\bigr)^{-1} & 0 \\ 0 & Q_0 \end{bmatrix}
:= \begin{bmatrix} \bar{P}_0 & 0 \\ 0 & Q_0 \end{bmatrix},$$

a direct computation gives

$$\bar{G}_1 = \bigl(A_0\bar{P}_0H_0^T + \Gamma_0Q_0\Gamma_0^TC_1^T\bigr)\bigl(H_0\bar{P}_0H_0^T + C_1\Gamma_0Q_0\Gamma_0^TC_1^T + R_1\bigr)^{-1}$$

and

$$\bar{P}_1 = (A_0 - \bar{G}_1H_0)\bar{P}_0A_0^T + (I - \bar{G}_1C_1)\Gamma_0Q_0\Gamma_0^T\,.$$

In general, we obtain

$$\begin{cases}
\hat{x}_k = A_{k-1}\hat{x}_{k-1} + \bar{G}_k(v_k - N_{k-1}v_{k-1} - H_{k-1}\hat{x}_{k-1}) \\
\hat{x}_0 = E(x_0) - [\mathrm{Var}(x_0)]C_0^T\bigl[C_0\mathrm{Var}(x_0)C_0^T + R_0\bigr]^{-1}\bigl[C_0E(x_0) - v_0\bigr] \\
H_{k-1} = C_kA_{k-1} - N_{k-1}C_{k-1} \\
\bar{P}_k = (A_{k-1} - \bar{G}_kH_{k-1})\bar{P}_{k-1}A_{k-1}^T + (I - \bar{G}_kC_k)\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^T \\
\bar{G}_k = \bigl(A_{k-1}\bar{P}_{k-1}H_{k-1}^T + \Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^TC_k^T\bigr)\cdot
\bigl(H_{k-1}\bar{P}_{k-1}H_{k-1}^T + C_k\Gamma_{k-1}Q_{k-1}\Gamma_{k-1}^TC_k^T + R_{k-1}\bigr)^{-1} \\
\bar{P}_0 = \bigl[\,[\mathrm{Var}(x_0)]^{-1} + C_0^TR_0^{-1}C_0\,\bigr]^{-1}
\end{cases}$$

$k = 1,2,\cdots$. By omitting the "bar" on $H_k$, $\bar{G}_k$, and $\bar{P}_k$, we have (5.21).

5.8. (a)

$$\begin{cases} \tilde{x}_{k+1} = A_c\tilde{x}_k + \tilde{\xi}_k \\ v_k = C_c\tilde{x}_k\,. \end{cases}$$

(b)

$$P_{0,0} = \begin{bmatrix} \mathrm{Var}(x_0) & 0 & 0 \\ 0 & \mathrm{Var}(\xi_0) & 0 \\ 0 & 0 & \mathrm{Var}(\eta_0) \end{bmatrix}.$$

(c) The matrix $C_cP_{k,k-1}C_c^T$ may not be invertible, and the extra estimates $\hat{\xi}_k$ and $\hat{\eta}_k$ in $\hat{\tilde{x}}_k$ are needed.

Chapter 6

6.1. Since

$$\begin{aligned}
\bar{x}_{k-1}
&= A^n\bigl[N_{CA}^TN_{CA}\bigr]^{-1}\bigl(C^Tv_{k-n-1} + A^TC^Tv_{k-n} + \cdots + (A^T)^{n-1}C^Tv_{k-2}\bigr) \\
&= A^n\bigl[N_{CA}^TN_{CA}\bigr]^{-1}\bigl(C^TCx_{k-n-1} + A^TC^TCAx_{k-n-1} + \cdots + (A^T)^{n-1}C^TCA^{n-1}x_{k-n-1} + \text{noise}\bigr) \\
&= A^n\bigl[N_{CA}^TN_{CA}\bigr]^{-1}\bigl[N_{CA}^TN_{CA}\bigr]x_{k-n-1} + \text{noise} \\
&= A^nx_{k-n-1} + \text{noise}\,,
\end{aligned}$$

we have $E(\bar{x}_{k-1}) = E(A^nx_{k-n-1}) = E(x_{k-1})$.

6.2. Since $A(s)A^{-1}(s) = I$, we have

$$\Bigl[\frac{d}{ds}A(s)\Bigr]A^{-1}(s) + A(s)\Bigl[\frac{d}{ds}A^{-1}(s)\Bigr] = 0\,.$$

Hence,

$$\frac{d}{ds}A^{-1}(s) = -A^{-1}(s)\Bigl[\frac{d}{ds}A(s)\Bigr]A^{-1}(s)\,.$$

6.3. Let $P = U\,\mathrm{diag}[\,\lambda_1,\cdots,\lambda_n\,]\,U^{-1}$. Then

$$P - \lambda_{\min}I = U\,\mathrm{diag}[\,\lambda_1 - \lambda_{\min},\cdots,\lambda_n - \lambda_{\min}\,]\,U^{-1} \ge 0\,.$$

6.4. Let $\lambda_1,\cdots,\lambda_n$ be the eigenvalues of $F$ and $J$ be its Jordan canonical form. Then there exists a nonsingular matrix $U$ such that

$$U^{-1}FU = J = \begin{bmatrix} \lambda_1 & * & & \\ & \lambda_2 & \ddots & \\ & & \ddots & * \\ & & & \lambda_n \end{bmatrix}$$

with each $*$ being 1 or 0. Hence,

$$F^k = UJ^kU^{-1} = U\begin{bmatrix} \lambda_1^k & * & \cdots & * \\ & \lambda_2^k & \ddots & \vdots \\ & & \ddots & * \\ & & & \lambda_n^k \end{bmatrix}U^{-1}\,,$$

where each $*$ denotes a term whose magnitude is bounded by $p(k)|\lambda_{\max}|^k$, with $p(k)$ a polynomial of $k$ and $|\lambda_{\max}| = \max(|\lambda_1|,\cdots,|\lambda_n|)$. Since $|\lambda_{\max}| < 1$, $F^k \to 0$ as $k \to \infty$.

6.5. Since $(A - B)(A - B)^T \ge 0$, we have

$$AB^T + BA^T \le AA^T + BB^T\,.$$

Hence,

$$(A + B)(A + B)^T = AA^T + AB^T + BA^T + BB^T \le 2(AA^T + BB^T)\,.$$

6.6. Since $x_{k-1} = Ax_{k-2} + \Gamma\xi_{k-2}$ is a linear combination (with constant matrices as coefficients) of $x_0, \xi_0, \cdots, \xi_{k-2}$, and

$$\hat{x}_{k-1} = A\hat{x}_{k-2} + G(v_{k-1} - CA\hat{x}_{k-2})
= A\hat{x}_{k-2} + G(CAx_{k-2} + C\Gamma\xi_{k-2} + \eta_{k-1}) - GCA\hat{x}_{k-2}$$

is an analogous linear combination of $x_0, \xi_0, \cdots, \xi_{k-2}$ and $\eta_{k-1}$, which are uncorrelated with $\xi_{k-1}$ and $\eta_k$, the two identities follow immediately.

6.7. Since

$$\begin{aligned}
P_{k,k-1}C_k^TG_k^T - G_kC_kP_{k,k-1}C_k^TG_k^T
&= G_kC_kP_{k,k-1}C_k^TG_k^T + G_kR_kG_k^T - G_kC_kP_{k,k-1}C_k^TG_k^T \\
&= G_kR_kG_k^T\,,
\end{aligned}$$

we have $(I - G_kC_k)P_{k,k-1}C_k^TG_k^T = G_kR_kG_k^T$. Hence,

$$\begin{aligned}
P_{k,k} &= (I - G_kC)P_{k,k-1} \\
&= (I - G_kC)P_{k,k-1}(I - G_kC)^T + G_kRG_k^T \\
&= (I - G_kC)\bigl(AP_{k-1,k-1}A^T + \Gamma Q\Gamma^T\bigr)(I - G_kC)^T + G_kRG_k^T \\
&= (I - G_kC)AP_{k-1,k-1}A^T(I - G_kC)^T + (I - G_kC)\Gamma Q\Gamma^T(I - G_kC)^T + G_kRG_k^T\,.
\end{aligned}$$

6.8. Imitating the proof of Lemma 6.8 and assuming that $|\lambda| \ge 1$, where $\lambda$ is an eigenvalue of $(I - GC)A$, we arrive at a contradiction to the controllability condition.

6.9. The proof is similar to that of Exercise 6.6.

6.10. From

$$0 \le \langle \epsilon_j - \delta_j,\ \epsilon_j - \delta_j\rangle
= \langle \epsilon_j, \epsilon_j\rangle - \langle \epsilon_j, \delta_j\rangle - \langle \delta_j, \epsilon_j\rangle + \langle \delta_j, \delta_j\rangle$$

and Theorem 6.2, we have

$$\begin{aligned}
\langle \epsilon_j, \delta_j\rangle + \langle \delta_j, \epsilon_j\rangle
&\le \langle \epsilon_j, \epsilon_j\rangle + \langle \delta_j, \delta_j\rangle \\
&\le 2\|x_j - \hat{x}_j\|_n^2 + 3\|x_j - \bar{x}_j\|_n^2 \\
&\to 5\,(P^{-1} + C^TR^{-1}C)^{-1}
\end{aligned}$$

as $j \to \infty$. Hence, $B_j = \langle \epsilon_j, \delta_j\rangle A^TC^T$ are componentwise uniformly bounded.

6.11. Using Lemmas 1.4, 1.6, 1.7 and 1.10 and Theorem 6.1, and applying Exercise 6.10, we have

$$\begin{aligned}
&\mathrm{tr}\bigl[\,FB_{k-1-i}(G_{k-i} - G)^T + (G_{k-i} - G)B_{k-1-i}^TF^T\,\bigr] \\
&\le \bigl(n\,\mathrm{tr}\,FB_{k-1-i}(G_{k-i} - G)^T(G_{k-i} - G)B_{k-1-i}^TF^T\bigr)^{1/2} \\
&\quad + \bigl(n\,\mathrm{tr}\,(G_{k-i} - G)B_{k-1-i}^TF^TFB_{k-1-i}(G_{k-i} - G)^T\bigr)^{1/2} \\
&\le \bigl(n\,\mathrm{tr}\,FF^T\cdot\mathrm{tr}\,B_{k-1-i}B_{k-1-i}^T\cdot\mathrm{tr}\,(G_{k-i} - G)^T(G_{k-i} - G)\bigr)^{1/2} \\
&\quad + \bigl(n\,\mathrm{tr}\,(G_{k-i} - G)(G_{k-i} - G)^T\cdot\mathrm{tr}\,B_{k-1-i}^TB_{k-1-i}\cdot\mathrm{tr}\,F^TF\bigr)^{1/2} \\
&= 2\bigl(n\,\mathrm{tr}\,(G_{k-i} - G)(G_{k-i} - G)^T\cdot\mathrm{tr}\,B_{k-1-i}^TB_{k-1-i}\cdot\mathrm{tr}\,F^TF\bigr)^{1/2} \\
&\le C\,r_1^{\,k-i}
\end{aligned}$$

for some real number $r_1$, $0 < r_1 < 1$, and some positive constant $C$ independent of $i$ and $k$.



6.12. First, solving the Riccati equation (6.6); that is,

c²p² + [(1 − a²)r − c²γ²q]p − γ²qr = 0,

we obtain

p = (1/(2c²)){c²γ²q + (a² − 1)r + √([(1 − a²)r − c²γ²q]² + 4c²γ²qr)}.

Then, the Kalman gain is given by

g = pc/(c²p + r).
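The closed-form root of this scalar Riccati equation can be checked numerically; the constants a, c, γ, q, r below are sample values chosen for illustration, not taken from the exercise.

```python
import math

# Sketch (sample values assumed): verify that the closed-form p solves the
# scalar Riccati equation  c^2 p^2 + [(1 - a^2) r - c^2 g2 q] p - g2 q r = 0
# of Exercise 6.12 (writing g2 for gamma^2), then form the limiting gain.

a, c, gamma, q, r = 0.9, 1.0, 1.0, 0.5, 2.0
g2 = gamma * gamma

# Closed-form positive root given in the answer.
disc = ((1 - a * a) * r - c * c * g2 * q) ** 2 + 4 * c * c * g2 * q * r
p = (c * c * g2 * q + (a * a - 1) * r + math.sqrt(disc)) / (2 * c * c)

# Plugging p back into the quadratic should give (numerically) zero.
residual = c * c * p * p + ((1 - a * a) * r - c * c * g2 * q) * p - g2 * q * r

# Limiting Kalman gain g = p c / (c^2 p + r).
gain = p * c / (c * c * p + r)
print(residual, gain)
```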

Chapter 7

7.1. The proof of Lemma 7.1 is constructive. Let A = [a_ij]_{n×n} and A^c = [ℓ_ij]_{n×n}. It follows from A = A^c(A^c)^T that

a_ii = Σ_{k=1}^{i} ℓ_ik², i = 1, 2, …, n,

and

a_ij = Σ_{k=1}^{j} ℓ_ik ℓ_jk, j ≠ i; i, j = 1, 2, …, n.

Hence, it can be easily verified that

ℓ_ii = (a_ii − Σ_{k=1}^{i−1} ℓ_ik²)^{1/2}, i = 1, 2, …, n,

ℓ_ij = (a_ij − Σ_{k=1}^{j−1} ℓ_ik ℓ_jk)/ℓ_jj, j = 1, 2, …, i − 1; i = 2, 3, …, n,

and

ℓ_ij = 0, j = i + 1, i + 2, …, n; i = 1, 2, …, n.

This gives the lower triangular matrix A^c. This algorithm is called the Cholesky decomposition. For the general case, we can use a (standard) singular value decomposition (SVD) algorithm to find an orthogonal matrix U such that

U diag[s_1, …, s_r, 0, …, 0] U^T = AA^T,



where 1 ≤ r ≤ n and s_1, …, s_r are the singular values (which are positive numbers) of the non-negative definite and symmetric matrix AA^T, and then set

A^c = U diag[√s_1, …, √s_r, 0, …, 0].
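The Cholesky formulas of Exercise 7.1 can be transcribed almost verbatim; the sketch below (with an illustrative 3×3 matrix, not one from the text) recovers L row by row and confirms A = LL^T.

```python
import math

# Direct transcription (as a sketch) of the Cholesky formulas of Exercise 7.1:
# A = L L^T with L lower triangular.

def cholesky_lower(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            # l_ij = (a_ij - sum_{k<j} l_ik l_jk) / l_jj
            L[i][j] = (A[i][j] - sum(L[i][k] * L[j][k] for k in range(j))) / L[j][j]
        # l_ii = (a_ii - sum_{k<i} l_ik^2)^{1/2}
        L[i][i] = math.sqrt(A[i][i] - sum(L[i][k] ** 2 for k in range(i)))
    return L

A = [[4.0, 2.0, 2.0],
     [2.0, 5.0, 3.0],
     [2.0, 3.0, 6.0]]
L = cholesky_lower(A)   # L == [[2,0,0],[1,2,0],[1,1,2]]
# Reconstruct A from L L^T to confirm the factorization.
recon = [[sum(L[i][k] * L[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
print(L, recon)
```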

7.2.

L = [~0

n· [J20

o ](a) 2 (b) L = J2/2 m o .-2 J2/2 1.5/m J2]

7.3. (a)

L⁻¹ = [ 1/ℓ_11                                          0               0      ;
        −ℓ_21/(ℓ_11 ℓ_22)                               1/ℓ_22          0      ;
        −ℓ_31/(ℓ_11 ℓ_33) + ℓ_32 ℓ_21/(ℓ_11 ℓ_22 ℓ_33)  −ℓ_32/(ℓ_22 ℓ_33)  1/ℓ_33 ].

(b)

L⁻¹ = [ b_11  0     …  0    ;
        b_21  b_22  …  0    ;
        ⋮                    ;
        b_n1  b_n2  …  b_nn ],

where

b_ii = 1/ℓ_ii, i = 1, 2, …, n;

b_ij = −ℓ_jj⁻¹ Σ_{k=j+1}^{i} b_ik ℓ_kj, j = i − 1, i − 2, …, 1; i = 2, 3, …, n.
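The recursion of part (b) can be sketched directly (the test matrix below is illustrative, not from the text); multiplying the result by L should give the identity.

```python
# Sketch of the lower-triangular inversion recursion of Exercise 7.3(b):
#   b_ii = 1 / l_ii,
#   b_ij = -(1 / l_jj) * sum_{k=j+1}^{i} b_ik l_kj   (j < i),
# filling each row from the diagonal leftwards.

def invert_lower(L):
    n = len(L)
    B = [[0.0] * n for _ in range(n)]
    for i in range(n):
        B[i][i] = 1.0 / L[i][i]
        for j in range(i - 1, -1, -1):
            B[i][j] = -sum(B[i][k] * L[k][j] for k in range(j + 1, i + 1)) / L[j][j]
    return B

L = [[2.0, 0.0, 0.0],
     [1.0, 2.0, 0.0],
     [1.0, 1.0, 2.0]]
B = invert_lower(L)
# B L should be the identity matrix.
prod = [[sum(B[i][k] * L[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
print(prod)
```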

7.4. In the standard Kalman filtering process, P_{k,k} is computed as a singular matrix. However, its "square-root"

P_{k,k}^{1/2} = [ ε/√2  0 ;  0  1 ] ≈ [ ε  0 ;  0  1 ]

is a nonsingular matrix.



7.5. Analogous to Exercise 7.1, let A = [a_ij]_{n×n} and A^u = [ℓ_ij]_{n×n}. It follows from A = A^u(A^u)^T that

a_ii = Σ_{k=i}^{n} ℓ_ik², i = 1, 2, …, n,

and

a_ij = Σ_{k=j}^{n} ℓ_ik ℓ_jk, j ≠ i; i, j = 1, 2, …, n.

Hence, it can be easily verified that

ℓ_ii = (a_ii − Σ_{k=i+1}^{n} ℓ_ik²)^{1/2}, i = 1, 2, …, n,

ℓ_ij = (a_ij − Σ_{k=j+1}^{n} ℓ_ik ℓ_jk)/ℓ_jj, j = i + 1, …, n; i = 1, 2, …, n,

and

ℓ_ij = 0, j = 1, 2, …, i − 1; i = 2, 3, …, n.

This gives the upper triangular matrix A^u.

7.6. The new formulation is the same as that studied in this chapter, except that every lower triangular matrix with superscript c must be replaced by the corresponding upper triangular matrix with superscript u.

7.7. The new formulation is the same as that given in Section 7.3, except that all lower triangular matrices with superscript c must be replaced by the corresponding upper triangular matrices with superscript u.

Chapter 8

8.1. (a) Since r² = x² + y², we have

ṙ = (x/r)ẋ + (y/r)ẏ,

so that ṙ = v sinθ and

r̈ = v̇ sinθ + vθ̇ cosθ.



On the other hand, since tanθ = y/x, we have θ̇ sec²θ = (xẏ − ẋy)/x², or

θ̇ = (xẏ − ẋy)/(x² sec²θ) = (xẏ − ẋy)/r² = (v/r)cosθ,

so that

r̈ = a sinθ + (v²/r)cos²θ

and

θ̈ = ((v̇r − vṙ)/r²)cosθ − (v/r)θ̇ sinθ
   = ((ar − v² sinθ)/r²)cosθ − (v²/r²)sinθ cosθ.

(b)

ẋ = f(x) := [ v sinθ ;
              a sinθ + (v²/r)cos²θ ;
              (v/r)cosθ ;
              (ar − v² sinθ)cosθ/r² − v² sinθ cosθ/r² ].

(c)

x_{k+1} = [ x_k[1] + hv sin(x_k[3]) ;
            x_k[2] + h(a sin(x_k[3]) + v² cos²(x_k[3])/x_k[1]) ;
            x_k[3] + hv cos(x_k[3])/x_k[1] ;
            x_k[4] + h((a x_k[1] − v² sin(x_k[3]))cos(x_k[3])/x_k[1]² − v² sin(x_k[3])cos(x_k[3])/x_k[1]²) ]

and

v_k = [1 0 0 0]x_k + η_k,

where x_k := [x_k[1] x_k[2] x_k[3] x_k[4]]^T.

(d) Use the formulas in (8.8).
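One Euler step of the discretized model in (c) can be sketched as follows; the step size h and the constants v and a below are illustrative assumptions, and the state is taken as x = [r, ṙ, θ, θ̇].

```python
import math

# Sketch (with assumed constants) of one Euler step of the discretized
# nonlinear tracking model of Exercise 8.1(c), state x = [r, r', theta, theta'],
# with constant speed v and acceleration a.

def step(x, h, v, a):
    r, rdot, th, thdot = x
    return [
        r + h * v * math.sin(th),
        rdot + h * (a * math.sin(th) + v * v * math.cos(th) ** 2 / r),
        th + h * v * math.cos(th) / r,
        thdot + h * ((a * r - v * v * math.sin(th)) * math.cos(th) / r ** 2
                     - v * v * math.sin(th) * math.cos(th) / r ** 2),
    ]

x = [10.0, 0.0, math.pi / 6, 0.0]
x_next = step(x, h=0.01, v=5.0, a=1.0)
# With theta = pi/6, the range grows by h * v * sin(theta) ~ 0.01 * 5 * 0.5.
print(x_next)
```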

8.2. The proof is straightforward.

8.3. The proof is straightforward; the identity can be verified directly.

8.4. Taking the variances of both sides of the modified "observation equation"

v_0 − C_0(θ)E(x_0) = C_0(θ)x_0 − C_0(θ)E(x_0) + η_0,



and using the estimate (v_0 − C_0(θ)E(x_0))(v_0 − C_0(θ)E(x_0))^T for Var(v_0 − C_0(θ)E(x_0)) on the left-hand side, we have

(v_0 − C_0(θ)E(x_0))(v_0 − C_0(θ)E(x_0))^T = C_0(θ)Var(x_0)C_0(θ)^T + R_0.

Hence, (8.13) follows immediately.

8.5. Since

E(v_1) = C_1(θ)A_0(θ)E(x_0),

taking the variances of both sides of the modified "observation equation"

v_1 − C_1(θ)A_0(θ)E(x_0) = C_1(θ)(A_0(θ)x_0 − A_0(θ)E(x_0) + Γ_0(θ)ξ_0) + η_1,

and using the estimate (v_1 − C_1(θ)A_0(θ)E(x_0))(v_1 − C_1(θ)A_0(θ)E(x_0))^T for the variance Var(v_1 − C_1(θ)A_0(θ)E(x_0)) on the left-hand side, we have

(v_1 − C_1(θ)A_0(θ)E(x_0))(v_1 − C_1(θ)A_0(θ)E(x_0))^T
  = C_1(θ)A_0(θ)Var(x_0)A_0^T(θ)C_1^T(θ) + C_1(θ)Γ_0(θ)Q_0Γ_0^T(θ)C_1^T(θ) + R_1.

Then (8.14) follows immediately.

8.6. Use the formulas in (8.8) directly.

8.7. Since θ is a constant vector, we have S_k := Var(θ) = 0, so

that

P_{0,0} = Var([x_0 ; θ]) = [ Var(x_0)  0 ;  0  0 ].

It follows from simple algebra that

P_{k,k−1} = [ *  0 ;  0  0 ]  and  G_k = [ * ;  0 ],

where * indicates a constant block in the matrix. Hence, the last equation of (8.15) yields θ̂_{k|k} = θ̂_{k−1|k−1}.

8.8.

P_{0,0} = [ P_0  0 ;  0  S_0 ],

where ĉ_0 is an estimate of c_0 given by (8.13).

Chapter 9

9.1. (a)

x̂_k = −(α + β − 2)x̂_{k−1} − (1 − α)x̂_{k−2} + αv_k + (β − α)v_{k−1},
ẋ̂_k = −(α + β − 2)ẋ̂_{k−1} − (1 − α)ẋ̂_{k−2} + (β/h)v_k − (β/h)v_{k−1}.

(b) 0 < α < 1 and 0 < β < 2(2 − α).
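A quick sketch (not from the original text) iterates the α-β position recursion x̂_k = −(α + β − 2)x̂_{k−1} − (1 − α)x̂_{k−2} + αv_k + (β − α)v_{k−1} with a constant measurement v_k ≡ v; setting x̂_k = x̂_{k−1} = x̂_{k−2} shows β·x̂ = β·v, so the estimate settles at v. The values of α, β, and v below are illustrative assumptions within the stability range.

```python
# Sketch of the alpha-beta tracker position recursion of Exercise 9.1(a),
# driven by a constant measurement v; the estimate should converge to v,
# since the recursion has unit DC gain.

alpha, beta = 0.5, 0.3   # sample values inside the stability region
v = 7.0                  # constant measurement

x_prev2, x_prev1 = 0.0, 0.0  # x_{k-2}, x_{k-1}
for _ in range(200):
    x = (-(alpha + beta - 2) * x_prev1 - (1 - alpha) * x_prev2
         + alpha * v + (beta - alpha) * v)
    x_prev2, x_prev1 = x_prev1, x
print(x_prev1)
```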

9.2. System (9.11) follows from direct algebraic manipulation.

9.3. (a)

Φ = [ 1 − α    (1 − α)h   (1 − α)h²/2   −sα    ;
      −β/h     1 − β      h − βh/2      −sβ/h  ;
      −γ/h²    −γ/h       1 − γ/2       −sγ/h² ;
      −θ       −θh        −θh²/2        s(1 − θ) ].

(b)

det[zI − Φ] = z⁴ + [(α − 3) + β + γ/2 + (θ − 1)s]z³
  + [(3 − 2α) − β + γ/2 + (3 − α − β − γ/2 − 3θ)s]z²
  + [(α − 1) − (3 − 2α − β + γ/2 − 3θ)s]z + (1 − α − θ)s.

(c)

X₁(z) = zV(z)(z − s){αz² + (β + γ/2 − 2α)z + (α − β + γ/2)} / det[zI − Φ],
X₂(z) = zV(z)(z − 1)(z − s){βz + (γ − β)} / (h det[zI − Φ]),
X₃(z) = zV(z)(z − 1)²(z − s)γ / (h² det[zI − Φ]),

and

W(z) = zV(z)(z − 1)³θ / det[zI − Φ].



x̂_k = a₁x̂_{k−1} + a₂x̂_{k−2} + a₃x̂_{k−3} + a₄x̂_{k−4} + αv_k
      + (−2α − sα + β + γ/2)v_{k−1} + [α − β + γ/2 + (2α − β − γ/2)s]v_{k−2}
      − (α − β + γ/2)s v_{k−3},

ẋ̂_k = a₁ẋ̂_{k−1} + a₂ẋ̂_{k−2} + a₃ẋ̂_{k−3} + a₄ẋ̂_{k−4} + (β/h)v_k
      − [(2 + s)β/h − γ/h]v_{k−1} + [β/h − γ/h + (2β − γ)s/h]v_{k−2}
      − [(β − γ)s/h]v_{k−3},

ẍ̂_k = a₁ẍ̂_{k−1} + a₂ẍ̂_{k−2} + a₃ẍ̂_{k−3} + a₄ẍ̂_{k−4} + (γ/h²)v_k
      − [(2 + s)γ/h²]v_{k−1} + [(1 + 2s)γ/h²]v_{k−2} − (sγ/h²)v_{k−3},

ŵ_k = a₁ŵ_{k−1} + a₂ŵ_{k−2} + a₃ŵ_{k−3} + a₄ŵ_{k−4}
      + θ(v_k − 3v_{k−1} + 3v_{k−2} − v_{k−3}),

with the initial conditions x̂_{−1} = ẋ̂_{−1} = ẍ̂_{−1} = ŵ₀ = 0, where

a₁ = −α − β − γ/2 − (θ − 1)s + 3,
a₂ = 2α + β − γ/2 + (α + β + γ/2 + 3θ − 3)s − 3,
a₃ = −α + (−2α − β + γ/2 − 3θ + 3)s + 1,

and

a₄ = (α + θ − 1)s.
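As a numerical cross-check (a sketch, not in the original text), the coefficients a₁, …, a₄ should be the negatives of the characteristic-polynomial coefficients of the matrix Φ of Exercise 9.3(a); the sketch evaluates det[zI − Φ] by Gaussian elimination at several sample points, with the parameter values chosen arbitrarily.

```python
# Cross-check that a1..a4 are minus the characteristic-polynomial
# coefficients of Phi from Exercise 9.3(a), i.e.
# det[zI - Phi] = z^4 - a1 z^3 - a2 z^2 - a3 z - a4.

def det4(M):
    # Determinant of a 4x4 matrix by Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    d = 1.0
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(M[r][c]))
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, 4):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    return d

al, be, ga, th, s, h = 0.4, 0.3, 0.2, 0.1, 0.5, 1.0  # assumed sample values
Phi = [[1 - al, (1 - al) * h, (1 - al) * h * h / 2, -s * al],
       [-be / h, 1 - be, h - be * h / 2, -s * be / h],
       [-ga / h ** 2, -ga / h, 1 - ga / 2, -s * ga / h ** 2],
       [-th, -th * h, -th * h * h / 2, s * (1 - th)]]

a1 = -al - be - ga / 2 - (th - 1) * s + 3
a2 = 2 * al + be - ga / 2 + (al + be + ga / 2 + 3 * th - 3) * s - 3
a3 = -al + (-2 * al - be + ga / 2 - 3 * th + 3) * s + 1
a4 = (al + th - 1) * s

ok = all(
    abs(det4([[(z if i == j else 0.0) - Phi[i][j] for j in range(4)] for i in range(4)])
        - (z ** 4 - a1 * z ** 3 - a2 * z ** 2 - a3 * z - a4)) < 1e-9
    for z in (0.0, 1.0, -1.0, 2.0, 0.5)
)
print(ok)
```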

(d) The verification is straightforward.

9.4. The verifications are tedious but elementary.

9.5. Study (9.19) and (9.20). We must have σ_p, σ_v, σ_a ≥ 0, σ_m > 0, and p > 0.

9.6. The equations can be obtained by elementary algebraic manipulation.

9.7. Only algebraic manipulation is required.

Chapter 10

10.1. For (1) and (4), let * ∈ {+, ·}. Then

X * Y = {x * y | x ∈ X, y ∈ Y} = {y * x | y ∈ Y, x ∈ X} = Y * X.

The others can be verified in a similar manner. As to part (c) of (7), without loss of generality, we may only consider the situation where both x̲ ≥ 0 and y̲ ≥ 0 in X = [x̲, x̄] and Y = [y̲, ȳ], and then discuss the different cases z̲ ≥ 0, z̄ ≤ 0, and z̲ < 0 < z̄.
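A minimal interval-arithmetic sketch (not from the original text) illustrates the commutativity of interval addition and multiplication; the class and the sample endpoints are assumptions for illustration only.

```python
# Minimal sketch for Exercise 10.1: with interval operations defined on
# endpoints, addition and multiplication are commutative, since the
# underlying real operations x + y and x * y are.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product interval spans all endpoint products.
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def __eq__(self, other):
        return self.lo == other.lo and self.hi == other.hi

X = Interval(-1.0, 2.0)
Y = Interval(0.5, 3.0)
print(X + Y == Y + X, X * Y == Y * X)
```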

10.2. It is straightforward to verify all the formulas by definition. For instance, for part (j.1), we have

A_I(BC) = [ Σ_{j=1}^{n} A_I(i, j)( Σ_{l=1}^{n} B_{jl}C_{lk} ) ]
        ⊆ [ Σ_{j=1}^{n} Σ_{l=1}^{n} A_I(i, j)B_{jl}C_{lk} ]
        = [ Σ_{l=1}^{n} ( Σ_{j=1}^{n} A_I(i, j)B_{jl} )C_{lk} ]
        = (A_I B)C.

10.3. See Alefeld, G. and Herzberger, J. (1983).

10.4. Similar to Exercise 1.10.

10.5. Observe that the filtering results for a boundary system and any of its neighboring systems will be inter-crossing from time to time.

10.6. See Siouris, G., Chen, G. and Wang, J. (1997).

Chapter 11

11.1.

N₃(t) = { t²/2,               0 ≤ t < 1
        { −t² + 3t − 3/2,     1 ≤ t < 2
        { t²/2 − 3t + 9/2,    2 ≤ t < 3
        { 0,                  otherwise,

and

N₄(t) = { t³/6,                         0 ≤ t < 1
        { −t³/2 + 2t² − 2t + 2/3,       1 ≤ t < 2
        { t³/2 − 4t² + 10t − 22/3,      2 ≤ t < 3
        { −t³/6 + 2t² − 8t + 32/3,      3 ≤ t < 4
        { 0,                            otherwise.
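The two piecewise polynomials (the quadratic and cubic B-splines) can be evaluated directly and checked for continuity at the interior knots; a sketch, with function names chosen here for illustration:

```python
# Sketch evaluating the piecewise polynomials of Exercise 11.1 (the quadratic
# and cubic B-splines) and checking continuity at the knots.

def n3(t):
    if 0 <= t < 1:
        return t * t / 2
    if 1 <= t < 2:
        return -t * t + 3 * t - 1.5
    if 2 <= t < 3:
        return t * t / 2 - 3 * t + 4.5
    return 0.0

def n4(t):
    if 0 <= t < 1:
        return t ** 3 / 6
    if 1 <= t < 2:
        return -t ** 3 / 2 + 2 * t * t - 2 * t + 2 / 3
    if 2 <= t < 3:
        return t ** 3 / 2 - 4 * t * t + 10 * t - 22 / 3
    if 3 <= t < 4:
        return -t ** 3 / 6 + 2 * t * t - 8 * t + 32 / 3
    return 0.0

# Continuity at the interior knots: one-sided limits agree.
eps = 1e-7
print(all(abs(f(k - eps) - f(k)) < 1e-5
          for f in (n3, n4) for k in (1, 2, 3)))
```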



11.2. N̂_n(ω) = ((1 − e^{−iω})/(iω))^n = e^{−inω/2}(sin(ω/2)/(ω/2))^n.
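The identity behind this transform can be checked numerically factor by factor, since 1 − e^{−iω} = e^{−iω/2}·2i sin(ω/2); a sketch:

```python
import cmath
import math

# Numerical check (a sketch) of the per-factor identity
#   (1 - e^{-i w}) / (i w) = e^{-i w/2} * sin(w/2) / (w/2),
# whose n-th power gives the transform of the n-th order B-spline.

def lhs(w):
    return (1 - cmath.exp(-1j * w)) / (1j * w)

def rhs(w):
    return cmath.exp(-1j * w / 2) * math.sin(w / 2) / (w / 2)

ok = all(abs(lhs(w) - rhs(w)) < 1e-12 for w in (0.1, 1.0, 2.5, -3.0))
print(ok)
```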

11.3. Simple graphs.

11.4. Straightforward algebraic operations.

11.5. Straightforward algebraic operations.


Subject Index

adaptive Kalman filtering 182

noise-adaptive filter 183

adaptive system

identification 113,115

affine model 49

α-β tracker 140

α-β-γ tracker 136

α-β-γ-θ tracker 141,180

algorithm for real-time

application 105

angular displacement 111,129

ARMA (autoregressive

moving-average) process 31

ARMAX (autoregressive

moving-average model

with exogenous inputs) 66

attracting point 111

augmented matrix 189

augmented system 76

azimuthal angular error 47

Bayes formula 12

Cholesky factorization 103

colored noise (sequence

or process) 67,76,141

conditional probability 12

controllability matrix 85

controllable linear system 85

correlated system and

measurement noise processes 49

covariance 13

Cramer's rule 132

decoupling formulas 131

decoupling of filtering

equation 131

Descartes rule of signs 140

determinant preliminaries 1

deterministic input sequence 20

digital filtering process 23

digital prediction process 23

digital smoothing estimate 178

digital smoothing process 23

elevational angular error 47

estimate 16

least-squares optimal

estimate 17

linear estimate 17

minimum trace variance

estimate 52

minimum variance

estimate 17,37,50

optimal estimate 17

optimal estimate

operator 53

unbiased estimate 17,50

event 8

simple event 8

expectation 9

conditional expectation 14



extended Kalman filter 108,110,115

FIR system 184

Gaussian white noise sequence 15,117

geometric convergence 88

IIR system 185

independent random variables 14

innovations sequence 35

inverse z-transform 133

Jordan canonical form 5,7

joint probability

distribution (function) 10

Kalman filter 20,23,33

extended Kalman filter 108,110,115

interval Kalman filter 154

limiting Kalman filter 77,78

modified extended Kalman filter 118

steady-state Kalman filter 77,136

wavelet Kalman filter 164

Kalman-Bucy filter 185

Kalman filtering equation

(algorithm, or process)

23,27,28,38,42,57,64,72-74,76,108

Kalman gain matrix 23

Kalman smoother 178

least-squares preliminaries 15

limiting (or steady-state)

Kalman filter 78

limiting Kalman gain matrix 78

linear deterministic/stochastic

system 20,42,63,143,185

linear regulator problem 186

linear state-space (stochastic) system

21,33,67,78,182,187

LU decomposition 188

marginal probability

density function 10

matrix inversion lemma 3

matrix Riccati equation 79,94,132,134

matrix Schwarz inequality 2,17

minimum variance estimate 17,37,50

modified extended Kalman filter 118

moment 10

nonlinear model (system) 108

non-negative definite matrix 1

normal distribution 9

normal white noise sequence 15

observability matrix 79

observable linear system 79

optimal estimate 17

asymptotically optimal estimate 90

optimal estimate operator 53

least-squares optimal estimate 17

optimal prediction 23,184

optimal weight matrix 16

optimality criterion 21

outcome 8

parallel processing 189

parameter identification 115

adaptive parameter

identification algorithm 116

positive definite matrix 1

positive square-root matrix 16

prediction-correction 23,25,31,39,78

probability preliminaries 8

probability density function 8

conditional probability

density function 12


joint probability

density function 11

Gaussian (or normal) probability

density function 9,11

probability distribution 8,10

function 8

joint probability

distribution (function) 10

radar tracking model

(or system) 46,47,61,181

random sequence 15

random signal 170

random variable 8

independent random variables 13

uncorrelated random variables 13

random vector 10

range 47,111

real-time application 61,73,93,105

real-time estimation/decomposition 170

real-time tracking 42,73,93,134,139

sample space 8

satellite orbit estimation 111

Schur complement technique 189

Schwarz inequality 2

matrix Schwarz inequality 2,17

vector Schwarz inequality 2

separation principle 187

sequential algorithm 97


square-root algorithm 97,103

square-root matrix 16,103

steady-state (or limiting)

Kalman filter 78

stochastic optimal control 186

suboptimal filter 136

systolic array 188

implementation 188

Taylor approximation 47,122

trace 5

uncorrelated random variables 13

variance 10

conditional variance 14

wavelets 164

weight matrix 15

optimal weight matrix 16

white noise sequence (process)

15,21,130

Gaussian (or normal) white

noise sequence 15,130

zero-mean Gaussian white

noise sequence 21

Wiener filter 184

z-transform 132

inverse z-transform 133