
Systems & Control Letters 59 (2010) 571–577


Directional sensitivity of continuous least-squares state estimators ✩

Alexander Medvedev *, Hannu T. Toivonen
Information Technology, Uppsala University, P. O. Box 337, SE-751 05, Sweden
Department of Information Technologies, Faculty of Technology, Åbo Akademi, FIN-20520 Åbo, Finland

Article info

Article history:
Received 5 October 2009
Received in revised form 2 July 2010
Accepted 2 July 2010
Available online 7 August 2010

Keywords:
Linear systems
Observers
Time delays
Sensitivity

Abstract

Least-squares state estimators present an alternative to Luenberger observers and yield an exact (deadbeat) estimate of the state vector of a dynamic system as the optimal solution to a least-squares problem in some vector or functional space. For instance, the conventional L2-optimal finite-memory observer is shown to be a special case of the continuous least-squares state estimator. Sensitivity of these estimators to structured uncertainty in the system matrix of the plant is studied using the Fréchet derivative. It is shown that the state estimation error caused by the plant model's mismatch is proportional to the Fréchet derivative of the symbol of the parameterization operator used for the estimator implementation, evaluated for the nominal value of the system matrix. For the special case of state estimation in a single-tone continuous oscillator, the crucial impact of the parameterization operator choice on the state estimator sensitivity to the plant model's uncertainty is investigated in detail.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

Least-squares state estimators present an alternative to Luenberger observers and yield an estimate of the state of a dynamic system as a solution to a least-squares problem in some suitable space. In this way, the very approach guarantees optimality of the produced estimate by minimizing a quadratic loss function. Finite-dimensional vector spaces [1] as well as infinite-dimensional spaces (Hilbert, Banach) [2] have been considered in deterministic and stochastic frameworks.

To make practical use of a least-squares state estimator, information about the system's inputs and outputs should be collected over a finite time interval. Therefore, such observers are often termed finite memory (or finite impulse response) observers or filters, depending on whether a deterministic or stochastic framework is considered. Besides, in the noise-free case of an observable, linear, perfectly known system, the least-squares estimate is exact, i.e. the estimation error converges to zero after a finite time has elapsed and thus exhibits deadbeat performance. In a stochastic framework, this property is equivalent to finite-time unbiasedness of the estimate, [3,4].

Similar to their infinite memory counterparts, least-squares state estimators appear e.g. in decoupling and disturbance rejection controllers [5], fault detection and isolation [6], and control of time-delay systems [7].

✩ Parts of this paper have been presented at the 47th IEEE Conference on Decision and Control, Cancun, Mexico, Dec. 9–11, 2008.
* Corresponding author at: Information Technology, Uppsala University, P. O. Box 337, SE-751 05, Sweden.
E-mail addresses: [email protected], [email protected] (A. Medvedev), [email protected] (H.T. Toivonen).

0167-6911/$ – see front matter © 2010 Elsevier B.V. All rights reserved. doi:10.1016/j.sysconle.2010.07.001

In contrast to the Luenberger observer and the Kalman filter, the least-squares state estimators are used to prevent estimate divergence due to process uncertainty by means of finite observer memory instead of robust feedback. This is also similar to the moving horizon approach to state estimation, where limited memory is used to control the computational complexity of the underlying optimization problem, [8].

Since the pioneering work of Gilchrist [1], state estimators for continuous systems converging in finite time have been rediscovered many times, with the most recent contributions to the field found in [9,10]. A further investigation of the approach suggested in [10] revealed that it is equivalent to a well-known gramian-based formula, see [11,12].

As mentioned above, all least-squares state estimators are stable due to their finite memory and optimal, by design, in the sense of a quadratic criterion. However, the sensitivity of least-squares state estimators is seldom addressed. A special case of least-squares state estimation in a harmonic oscillator is analyzed in [13], demonstrating that observer robustness against oscillation frequency variation can be unacceptably low, especially when time delays are chosen for the state estimator's implementation.

In continuous time, the theory of least-squares state estimation based on the notion of a pseudodifferential operator [14] provides the necessary and sufficient existence conditions for the observers. It also stipulates that the observer's properties are mainly defined by the pseudodifferential operator chosen for the observer's parameterization.

Three particular operators have been treated so far. The differential operator is the classical one, see [15]. The time delay operator is, however, a more reasonable choice since it can be implemented in practice, see [16,17].


The third operator, the sliding-window convolution operator, has been found to possess some beneficial properties when it comes to disturbance attenuation, [6,7].

The fact that the least-squares state estimator is structurally stable does not necessarily guarantee good robustness properties of the state estimate. The lack of feedback in least-squares state estimators makes them vulnerable to parameter uncertainty. At the same time, the dynamic complexity of the least-squares observers can easily be increased, thanks to their filter-bank structure, a property that is presumably helpful in shaping sensitivity functions.

In this paper, it is shown that the parameter sensitivity properties of the least-squares observers can suitably be analyzed via matrix function sensitivity theory, see [18–20], based on the Fréchet derivative. The paper is composed as follows. First, the notion of continuous least-squares state estimation is revisited. It is proved that the finite-memory L2-optimal observer is a special case of least-squares observers. The sensitivity of the least-squares observer estimation error to structured perturbations in the plant's system matrix is then studied in a common formalism. Next, Fréchet derivatives for the relevant parameterization operators are evaluated to get an insight into how free design parameters can be utilized for achieving higher observer robustness. For instance, it is shown that the finite-memory L2-optimal observer exhibits high parameter sensitivity for stable systems. Furthermore, the general theory is applied to the case of state estimation of a single-tone continuous oscillator, yielding recommendations on how parameters in two suitable operators have to be selected to achieve low state estimate sensitivity to oscillator frequency variations.

2. Continuous least-squares state estimator

In this section, results on continuous least-squares state estimators, originally presented in [14], that are necessary for the further exposition are summarized. Only systems without exogenous signals are treated in the paper for simplicity of notation, although the generalization to regular linear time-invariant systems is straightforward. Consider the autonomous system

ẋ(t) = Ax(t),    y(t) = Cx(t)    (1)

where x(t) ∈ R^n is the state vector with the initial value x(0) = x0 and y(t) ∈ R^ℓ is the output vector. Let the operator (P·)(λ; t), depending on the parameter λ ∈ Λ, be defined via the Laplace inversion integral

(Pv)(λ; t) = (1/(2πj)) ∫_{c−j∞}^{c+j∞} p(λ, s) V(s) e^{st} ds    (2)

where V(s) = (Lv)(s) (i.e. the Laplace transform of v(t)), c is a suitable real constant and Λ is a nonempty finite real set of cardinality k. In the context of pseudodifferential operators, p(λ, s) is called the symbol of the operator (P·)(λ; t). Suppose that p(λ, s) is analytic on the spectrum of A, denoted σ(A) = {µ1, . . . , µn}. Along the lines of [14], consider now the following observer

x̂(t) = V^{−1} ∑_{λi∈Λ} p(λi, A)^T C^T (Py)(λi; t)    (3)

where

V = ∑_{λi∈Λ} p(λi, A)^T C^T C p(λi, A).

When certain assumptions imposing an exponential decay of the operator symbol at infinity are fulfilled and all invariant subspaces of A are preserved under the transformation p(λ, A), observer (3) possesses deadbeat performance, in the sense that e(t) = x(t) − x̂(t) ≡ 0, t ≥ τ, for any x0.

Table 1
Pseudodifferential operators used in state estimation and control.

Operator                                                  Symbol                                      References
(P_d u)(λ; t) = d^λ u / dt^λ                              p_d(λ, s) = s^λ                             e.g. [15]
(P_τ u)(λ; t) = u(t − λ)                                  p_τ(λ, s) = e^{−λs}                         [16,17]
(P_c u)(λ; t) = ∫_{t−τ}^{t} e^{λ(t−θ)} u(θ) dθ            p_c(λ, s) = (1 − e^{(λ−τ)s})/(s − λ)        [6,7]

Pseudodifferential operators that have so far been used in least-squares observation and control are summarized in Table 1. Furthermore, the minimal value of k = card(Λ) for which the observer always exists is equal to n, see [14,6]. The state estimate x̂(t) is optimal in the sense of minimizing the loss function

J(x̂; t) = ∑_{λi∈Λ} ‖(Py)(λi; t) − C p(λi, A) x̂(t)‖²

i.e.

x̂(t) = arg min_{x̂(t)} J(x̂; t).

Naturally, deadbeat performance (or a finite impulse response) in continuous time can only be achieved by means of an infinite-dimensional p(λ, ·), which is of course a complication.

As demonstrated in [6], there is always a state-space realization of (1) such that the gramian matrix V is equal to the identity matrix. Thus, the least-squares observer can, without loss of generality, be written as

x̂(t) = ∑_{λi∈Λ} p(λi, A)^T C^T (Py)(λi; t).    (4)

In the sequel, the system’s equations are assumed to be in a V-balanced realization, if not stated otherwise.
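To make the construction concrete, here is a minimal Python sketch of observer (3) for the time delay operator of Table 1 (p_τ(λ, s) = e^{−λs}, so p(λ, A) = e^{−λA} and (Py)(λ; t) = y(t − λ)). The plant matrices, the delays and the evaluation time are arbitrary illustrative choices, not taken from the paper. In the noise-free case the estimate reproduces the state exactly, illustrating the deadbeat property.

```python
# Sketch: least-squares (deadbeat) state estimation with the time delay
# operator, cf. observer (3) and Table 1. All numerical values are
# illustrative and not taken from the paper.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])          # example system matrix (hypothetical)
C = np.array([[1.0, 0.0]])            # example output matrix
x0 = np.array([1.0, -0.7])            # arbitrary initial state
lambdas = [0.4, 0.9]                  # two distinct delays, card(Lambda) = n = 2

# p(lambda_i, A) = exp(-lambda_i * A) for the delay operator
P = [expm(-lam * A) for lam in lambdas]

# Gramian-like matrix V of observer (3)
V = sum(Pi.T @ C.T @ C @ Pi for Pi in P)

def x_true(t):
    """State of the autonomous plant x' = A x."""
    return expm(A * t) @ x0

def y(t):
    """Noise-free plant output y = C x."""
    return C @ x_true(t)

def x_hat(t):
    """Observer (3): x_hat = V^{-1} sum_i p(lambda_i, A)^T C^T y(t - lambda_i)."""
    s = sum(Pi.T @ C.T @ y(t - lam) for Pi, lam in zip(P, lambdas))
    return np.linalg.solve(V, s)

t = 2.0                                # any t larger than max(lambdas)
print("true x(t)    :", x_true(t))
print("estimate (3) :", x_hat(t))      # coincides with x(t) up to rounding
```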

2.1. L2[t, t − τ ]-optimal observer

Assume now that system (1) is balanced with respect to the gramian matrix

W_τ = ∫_0^τ exp(−A^T θ) C^T C exp(−Aθ) dθ,

i.e. the state vector x is chosen so that W_τ = I. This is obviously possible whenever (1) is observable. Consider now the loss function

J_τ(x̂; t) = ∫_0^τ ‖y(t − θ) − C exp(−Aθ) x̂(t)‖² dθ    (5)

where x̂(t) is an estimate of the state vector of (1) and ‖·‖ is the Euclidean vector norm. Thus J_τ is the norm on L2[t, t−τ] squared, evaluated at each t over a sliding window of width τ.

Proposition 1. The observer

x̂_H(t) = ∫_{t−τ}^{t} exp(A^T(θ − t)) C^T y(θ) dθ    (6)

satisfies

x̂_H(t) = arg min_{x̂(t)} J_τ(x̂; t)

with the minimum value of the loss function

J_τ(x̂_H; t) = ‖x(t)‖² − ‖x̂_H(t)‖²    (7)

and yields a deadbeat state estimation performance, i.e.

x̂_H(t) = x(t),  ∀ t ≥ τ.

Proof. The proof follows by a standard least-squares argument, i.e. taking the first derivative of the loss function with respect to x̂


and equating it to zero, i.e.

∇_{x̂} J_τ(x̂; t) = 0.

A direct substitution of x̂_H into (5) yields

J_τ(x̂_H; t) = ∫_0^τ ‖y(t − θ)‖² dθ − ‖x̂_H(t)‖² = ‖x(t)‖² − ‖x̂_H(t)‖²,

where the last equality uses y(t − θ) = C exp(−Aθ) x(t) and W_τ = I.

Deadbeat performance of the observer follows by the equality

∫_{t−τ}^{t} exp(A^T(θ − t)) C^T y(θ) dθ = x(t),

valid for t ≥ τ since W_τ = I.

This also implies that the minimum value of the loss function (7) is zero for t ≥ τ. □
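The deadbeat property in Proposition 1 is straightforward to reproduce numerically. The sketch below (the system data and quadrature grids are illustrative assumptions) first rescales the state so that W_τ = I and then evaluates the integral in (6) by quadrature:

```python
# Sketch: numerical check of the L2[t, t - tau]-optimal observer (6).
# The system data and the quadrature grids are illustrative assumptions.
import numpy as np
from scipy.linalg import expm, sqrtm
from scipy.integrate import trapezoid

A = np.array([[0.0, 1.0],
              [-1.0, -0.3]])
C = np.array([[1.0, 0.0]])
tau = 1.5

# Gramian W_tau = int_0^tau exp(-A' th) C' C exp(-A th) dth (trapezoidal rule)
thetas = np.linspace(0.0, tau, 2001)
W = trapezoid(np.array([expm(-A.T * th) @ C.T @ C @ expm(-A * th) for th in thetas]),
              thetas, axis=0)

# Balance the realization: with x = T z and T = W^{-1/2}, the new gramian
# becomes T' W T = I, as required by the proposition.
T = np.linalg.inv(sqrtm(W).real)
Ab, Cb = np.linalg.inv(T) @ A @ T, C @ T

z0 = np.array([0.8, -0.5])                       # arbitrary initial state

def z_true(t):
    return expm(Ab * t) @ z0                     # state of z' = Ab z

def y(t):
    return Cb @ z_true(t)                        # noise-free output

def z_hat(t):
    """Observer (6), evaluated by quadrature over the window [t - tau, t]."""
    grid = np.linspace(t - tau, t, 2001)
    vals = np.array([expm(Ab.T * (th - t)) @ Cb.T @ y(th) for th in grid])
    return trapezoid(vals, grid, axis=0)

t = 3.0                                          # any t >= tau
print("true state     :", z_true(t))
print("estimate by (6):", z_hat(t))              # equal up to quadrature error
```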

Observer (6) is a well-known construct, conventionally used in textbooks for discussing observability and estimation of initial conditions in (1). However, it is usually derived in an ad hoc manner and without any relation to least-squares problems, e.g. [15], p. 616. Compared to (3), the design of observer (6) does not involve any free parameters except for the time horizon τ, while the former offers the choice of p(·, A) and Λ as degrees of freedom. Thus the L2 optimality of (6) can be used to obtain an insight into how the performance of observer (3) can be improved by design.

For future reference, it is instructive to emphasize at this stage the relationship between (6) and (3) by rewriting (6) as a sum. Using Buchheim's formula [21], for distinct µi of multiplicity si ≥ 1, ∑_i si = n, there exist n constant matrices M_ij such that

exp(−A(t − θ)) = ∑_{µi∈σ(A)} ∑_{j=0}^{si−1} (t − θ)^j e^{−µi(t−θ)} M_{ij}.

Thus, L2[t, t − τ ]-optimal observer (6) can be generally written as

x̂_H(t) = ∑_{µi∈σ(A)} ∑_{j=0}^{si−1} M_{ij}^T C^T ∫_{t−τ}^{t} (t − θ)^j e^{−µi(t−θ)} y(θ) dθ.

It can be seen that the optimal solution involves finite-memory convolution operators with the kernel function whose parameters are defined by the spectrum of A. The operator

(P_c^k u)(λ; t) = ∫_{t−τ}^{t} (t − θ)^k e^{−λ(t−θ)} u(θ) dθ

is a pseudodifferential operator with the symbol

p_c^k(λ, s) = (1/(λ − s)^{k+1}) ( e^{(λ−s)τ} g_k(s) + (−1)^{k+1} k! )

g_k(s) = (λ − s)^k τ^k − k (λ − s)^{k−1} τ^{k−1} + k(k − 1)(λ − s)^{k−2} τ^{k−2} − ⋯ + (−1)^k k!.

The following proposition establishes the equivalence between the L2-optimal observer (6) and the least-squares state estimator (3).

Proposition 2. For the case of distinct eigenvalues of A (i.e. si = 1 for all i), x̂_H is equivalent to a least-squares state estimator, so that

x̂_H(t) = V_c^{−1} ∑_{λi∈Λ} p_c(λi, A)^T C^T (P_c y)(λi; t)    (8)

where

V_c = ∑_{λi∈Λ} p_c(λi, A)^T C^T C p_c(λi, A)

with Λ ≡ σ(−A).

Proof. Invoking the Sylvester formula [21], valid for functions of diagonalizable matrices, x̂_H can be written in the form of (3)

x̂_H(t) = ∑_{µi∈σ(A)} M_i^T C^T ∫_{t−τ}^{t} e^{−µi(t−θ)} y(θ) dθ = ∑_{µi∈σ(A)} M_i^T C^T (P_c y)(−µi; t)

where M_i, i = 1, . . . , n, are the Frobenius covariants of A

M_i = ∏_{j=1, j≠i}^{n} (A − µ_j I) / (µ_i − µ_j).

The least-squares state estimator (3), evaluated for p(·, A) ≡ p_c(·, A) and Λ ≡ σ(−A), is given by

x̂_LS(t) = V_c^{−1} ∑_{µi∈σ(A)} p_c(−µi, A)^T C^T (P_c y)(−µi; t).

For Frobenius covariants, it holds

M_i M_j = 0 for i ≠ j,    M_i M_j = M_i for i = j.    (9)

Using the Sylvester formula once again, one obtains

p_c(−µi, A) = ∑_{µj∈σ(A)} p_c(−µi, µj) M_j.

Now, to prove the equivalence stated in the proposition, it suffices to demonstrate that

∑_{µj∈σ(A)} p_c(−µi, µj) C M_j V_c^{−1} = C M_i.

Recall that system (1) is balanced with respect to the gramian matrix

W_τ = ∫_0^τ exp(−A^T θ) C^T C exp(−Aθ) dθ = I.

Substituting the matrix exponentials in the expression for the gramian with their representation via the Sylvester formula

exp(−A^T θ) = ∑_{µj∈σ(A)} exp(−µ_j θ) M_j^T,

performing the integration, multiplying by M_i from the left and taking into account (9) yields the sought result. □

Notably, for nilpotent system matrices, the finite-memory convolution operators are reduced to pure integration over a sliding window, i.e. (P_c ·)(λ; t)|_{λ=0}, exactly as it turns out in the evaluation of derivatives by the deadbeat estimation suggested in [11,12].

3. Sensitivity to system matrix uncertainty

A natural way of investigating the sensitivity properties of (4) is to take advantage of the Fréchet derivative that is conventionally applied in matrix function sensitivity theory [18–21]. Indeed, a model's uncertainty in the system matrix A gives rise to a corresponding perturbation of p(·, A) in (4). Achieving low sensitivity of p(·, A) with respect to a certain class of perturbations of A by choosing an appropriate Λ yields local robustness of the least-squares state estimator against this particular class of perturbations.

Let a matrix function p(A) be continuously differentiable at A in the sense that there exists a linear matrix operator L_p(·) such that for any matrix E

lim_{δ→0} ( p(A + δE) − p(A) ) / δ = L_p(A, E).    (10)

Then L_p(A, E) is the Fréchet derivative of p(·) at A in the matrix direction E. Denote the Fréchet derivative of p(λ, A) in the direction E by L_p(λ; A, E).
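Definition (10) also gives a direct numerical recipe: L_p(A, E) can be approximated by a matrix finite difference, which is convenient for cross-checking the closed-form derivatives evaluated in the next section. A minimal sketch, with p(A) = A² and randomly generated matrices as purely illustrative choices:

```python
# Sketch: finite-difference approximation of the Frechet derivative (10),
# checked against the exact result L_p(A, E) = A E + E A for p(A) = A^2.
import numpy as np

def frechet_fd(p, A, E, delta=1e-6):
    """Central-difference estimate of L_p(A, E) per definition (10)."""
    return (p(A + delta * E) - p(A - delta * E)) / (2.0 * delta)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
E = rng.standard_normal((3, 3))
E /= np.linalg.norm(E, 2)              # normalize the direction, ||E|| = 1

p = lambda X: X @ X                    # p(A) = A^2
L_fd    = frechet_fd(p, A, E)
L_exact = A @ E + E @ A                # product rule for the matrix square

print("max abs difference:", np.abs(L_fd - L_exact).max())  # close to zero
```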


Proposition 3. Assume that the system matrix in (1) is perturbed according to A = A0 + δE, ‖E‖ = 1, and that the least-squares state estimator (4) is evaluated for the nominal value A0 of the system matrix. Then, provided that p(λ, ·) is still analytic at σ(A) for all λ ∈ Λ, the state estimation error e(t) = x(t) − x̂(t) is given by

e(t) = ( ∑_{λi∈Λ} p(λi, A0)^T C^T C ( L_p(λi; A0, E) + O(δ²) I ) ) x(t).    (11)

Proof. Since p(λ, ·) is still analytic at σ(A), then, due to Lemma 1 in [14], the following relationship holds for the perturbed system

(Py)(λi; t) = Cp(λi, A)x(t).

Substituting the equation above into (4) yields

x̂(t) = ( ∑_{λi∈Λ} p(λi, A0)^T C^T C p(λi, A0 + δE) ) x(t).

From [22], it is known that

p(λi, A0 + δE) = p(λi, A0) + L_p(λi; A0, E) + O(δ²).

Now the result of the proposition follows directly by evaluation of the estimation error e. □

Proposition 3 justifies the first-order approximation

e(t) ≈ ( ∑_{λi∈Λ} p(λi, A0)^T C^T C L_p(λi; A0, E) ) x(t).    (12)

Apparently, in order to desensitize the observer against system matrix uncertainty, the matrix function p(λi, A0) should be small, measured in some suitable norm, and satisfy certain smoothness conditions in the perturbation matrix direction E so that its Fréchet derivative is small, too.

4. Evaluation of Fréchet derivatives

In this section, Fréchet derivatives of the matrix functions arising in least-squares state estimation are evaluated.

In the literature, three operators used for continuous least-squares observer design can be found, namely the differential operator, the continuous time-delay operator and a finite-memory convolution operator, see [14,7]. Though the differential operator cannot be used in practice for the observer's implementation without some sort of additional filtering, it is taken into consideration for reference.

An apparent difficulty in constructing operators for least-squares state estimation in continuous time is the necessity of finite memory, i.e. the operator's impulse response has to be equal to zero starting from some finite time. In any other case, exact (deadbeat) state estimation is impossible. For instance, in [23], the integral operator p_i(λ, s) = 1/s^λ is utilized for initial condition estimation. Since the integral operator possesses an infinite impulse response, any attempt to use it for least-squares estimation of the current value of the plant state vector would not be successful. However, choosing a finite integration horizon leads to a viable solution, [10].

The differential operator p_d(λ, s) = s^λ, where λ = 1, 2, . . ., evaluated at a matrix A gives rise to p_d(λ, A) = A^λ. According to [18], the Fréchet derivative of p_d(λ, A) is

λ = 1:   L_{p_d}(A, E) = E    (13)
λ ≥ 2:   L_{p_d}(A, E) = ∑_{j+k=λ−1, j,k≥0} A^j E A^k.

Another classical choice of implementation operator is the continuous time delay operator p_τ(λ, s) = e^{λs}, where λ < 0. The corresponding matrix function is the matrix exponential p_τ(λ, A) = e^{λA}, whose Fréchet derivative is also provided in [18]:

L_{p_τ}(A, E) = ∫_0^1 e^{(1−θ)λA} λE e^{θλA} dθ.    (14)
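For the matrix exponential, the Fréchet derivative is also available in standard numerical software; scipy.linalg.expm_frechet evaluates it without explicit quadrature. The sketch below, with purely illustrative data and λ < 0 as assumed above, compares that routine with a finite-difference estimate of the derivative of p_τ(λ, A) = e^{λA}:

```python
# Sketch: the Frechet derivative of p_tau(lambda, A) = exp(lambda*A), cf. (14),
# obtained from scipy and from a finite difference. Data are illustrative.
import numpy as np
from scipy.linalg import expm, expm_frechet

A = np.array([[0.0, 1.0],
              [-1.5, -0.2]])
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam = -0.8                              # lambda < 0, as in the text

# d/dA exp(lam*A) in the direction E equals the Frechet derivative of the
# matrix exponential at lam*A in the direction lam*E, i.e. the integral (14).
_, L_scipy = expm_frechet(lam * A, lam * E)

delta = 1e-6
L_fd = (expm(lam * (A + delta * E)) - expm(lam * (A - delta * E))) / (2 * delta)

print("max abs difference:", np.abs(L_scipy - L_fd).max())   # close to zero
```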

An operator that combines integrating behavior with finite memory is the finite-memory convolution operator

p_c(λ, s) = (1 − e^{(λ−τ)s}) / (s − λ).    (15)

Letting λ = 0 reduces p_c(λ, ·) to finite-memory integration, utilized for instance in [10]. Being evaluated for a matrix argument, the finite-memory convolution operator yields

p_c(λ, A) = (I − exp((λ − τ)A))(A − λI)^{−1}.    (16)

The Fréchet derivative of p_c is not readily available and has to be derived separately.

Proposition 4. The Fréchet derivative of p_c(λ, A) in the matrix direction E is given by

L_{p_c}(A, E) = ( (λ − τ) ∫_0^1 exp((1 − θ)(λ − τ)A) E exp(θ(λ − τ)A) dθ + p_c(λ, A) E ) (λI − A)^{−1}.    (17)

Proof. The matrix function pc(λ, A) can be written as

p_c(λ, A) = (A − λI)^{−1} − exp((λ − τ)A)(A − λI)^{−1}

and the Fréchet derivative can be evaluated separately for each of the two terms. Notice now that for any nonsingular matrix X

(X + δE)^{−1} = X^{−1} − δ X^{−1} E X^{−1} + O(δ²)

and from the definition of the Fréchet derivative

L_{1/(s−λ)}(A, E) = −(A − λI)^{−1} E (A − λI)^{−1}.

Then using Dyson’s expansion [18]

exp((λ − τ)(A + δE)) − exp((λ − τ)A) = δ(λ − τ) ∫_0^1 exp((1 − θ)(λ − τ)A) E exp(θ(λ − τ)(A + δE)) dθ

one arrives at

L_{e^{(λ−τ)s}/(s−λ)}(A, E) = (λ − τ) ∫_0^1 exp((1 − θ)(λ − τ)A) E exp(θ(λ − τ)A) dθ (A − λI)^{−1} − exp((λ − τ)A)(A − λI)^{−1} E (A − λI)^{−1}.

Putting together these expressions and making use of (16) leads to the sought result. □
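Proposition 4 can be cross-checked numerically by comparing (17) with a finite difference of the matrix function (16). In the following sketch the matrices and the values of λ and τ are arbitrary illustrative choices; the integral in (17) is evaluated through the Fréchet derivative of the matrix exponential provided by scipy:

```python
# Sketch: numerical check of Proposition 4, i.e. of (17) as the Frechet
# derivative of p_c(lambda, A) in (16). All numerical data are illustrative.
import numpy as np
from scipy.linalg import expm, expm_frechet

def p_c(A, lam, tau):
    """Matrix function (16): (I - exp((lam - tau) A)) (A - lam I)^{-1}."""
    n = A.shape[0]
    return (np.eye(n) - expm((lam - tau) * A)) @ np.linalg.inv(A - lam * np.eye(n))

def L_pc(A, E, lam, tau):
    """Closed-form Frechet derivative (17)."""
    n = A.shape[0]
    # integral term: int_0^1 exp((1-th)(lam-tau)A) E exp(th(lam-tau)A) dth
    _, integral = expm_frechet((lam - tau) * A, E)
    term = (lam - tau) * integral + p_c(A, lam, tau) @ E
    return term @ np.linalg.inv(lam * np.eye(n) - A)

A = np.array([[0.0, 1.0],
              [-2.0, -0.4]])
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])
lam, tau, delta = -0.6, 2.0, 1e-6

L_closed = L_pc(A, E, lam, tau)
L_fd = (p_c(A + delta * E, lam, tau) - p_c(A - delta * E, lam, tau)) / (2 * delta)
print("max abs difference:", np.abs(L_closed - L_fd).max())   # close to zero
```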

5. Least-squares state estimator of a harmonic oscillator

Consider now a single-tone harmonic oscillator, i.e. the autonomous dynamic system

ẋ(t) = Fx(t),    y(t) = Cx(t)    (18)

with

F = [ 0   ω² ; −1   0 ],    C = [ 1   1 ].    (19)

The choice of this particular example is motivated by the ease of its generalization to the case of multiple frequencies as well as its simplicity and practical importance in engineering applications.

Proposition 5 ([13]). Assume that for any λ ∈ Λ, the operator (P·)(λ; t) satisfies Assumption 1. Then, for plant (18) and (19), the observer

x̂(t) = U V_F^{−1} ∑_{λ∈Λ} [ p(λ, jω) ; p(λ, −jω) ] (Py)(λ; t)    (20)

where

U = [ −jω/(1 − jω)   jω/(1 + jω) ; 1/(1 − jω)   1/(1 + jω) ],

V_F = ∑_{λ∈Λ} [ p²(λ, jω)   p(λ, jω)p(λ, −jω) ; p(λ, jω)p(λ, −jω)   p²(λ, −jω) ]

yields the estimate x̂(t) = x(t), t > τ, if and only if there is no c ∈ ℂ such that

p(λ, jω) = c p(λ, −jω),  ∀ λ ∈ Λ.    (21)

Since only ω can vary in F, it is reasonable to choose the following perturbation direction

E = [ 0   1 ; 0   0 ].

Denote p′(s, λ) = dp(s, λ)/ds.

Proposition 6. The Fréchet derivative of p(λ, F) in the direction E is

L_p(F, E) = (1/2) ∑_{s=±jω} p′(s, λ) [ 1 ; −1/s ] [ −1/s   1 ]

(an outer product of the column vector [1; −1/s] and the row vector [−1/s, 1])

and its Euclidean norm ‖Lp(F , E)‖2 is

‖L_p(F, E)‖₂² = (1/2) [ (1 + 1/ω⁴) α²(ω, λ) + (2/ω²) β²(ω, λ)
    + |1 − 1/ω²| |β(ω, λ)| √( (1/ω² + 1)² α²(ω, λ) + (4/ω²) β²(ω, λ) ) ]

where

p′(jω, λ) = α(ω, λ)+ jβ(ω, λ).

Proof. The expression for L_p(F, E) follows by a straightforward application of definition (10) and the Residue Theorem for the evaluation of p(λ, F) and p(λ, F + δE). The norm of L_p(F, E) is obtained from the largest root of

det(γI − L_p^T(F, E) L_p(F, E)) = 0.  □

5.1. Time delay operator

The time delay operator is the simplest one and widely used for the implementation of least-squares state estimators. Given

p′_τ(λ, jω) = −λ (cos(ωλ) − j sin(ωλ)),

the norm of the Fréchet derivative is

‖L_{p_τ}(F, E)‖₂² = (λ²/2) [ (1 − 1/ω²)² cos²(ωλ) + 2/ω²
    + |1/ω² − 1| cos²(ωλ) √( (1 − 1/ω²)² cos²(ωλ) + 4/ω² ) ].    (22)

Clearly, the problem of global minimization of ‖L_{p_τ}(F, E)‖₂ with respect to the time delay duration λ has only a trivial, infeasible solution, i.e.

arg min_λ ‖L_{p_τ}(F, E)‖₂ = 0.

Despite this, (22) still provides an important insight into how the time delay value influences the directional sensitivity of the matrix operator. Indeed, one can notice that

‖L_{p_τ}(F, E)‖₂² = (λ²/2) b_τ(λ)

where b_τ(λ) is periodic and bounded from below and above. Besides, b_τ(λ) achieves minima when cos(λω) = 0 and maxima when cos²(λω) = 1. This observation results in the following bounds, see Fig. 1:

λ/ω ≤ ‖L_{p_τ}(F, E)‖₂ ≤ λ/ω²    for ω < 1,
λ/ω ≤ ‖L_{p_τ}(F, E)‖₂ ≤ λ        for ω > 1.

Therefore, to achieve low sensitivity to frequency variations, the time delays in the estimator and the quantities cos²(λω) should be kept as small as possible. As is well known, small time delays in least-squares observers lead to drastic estimation error transients and should be avoided. Thus, the only option left here is to minimize cos²(λω). In Fig. 2, the performance of two designs for least-squares observer (3) based on the time delay operator (P_τ·)(λ; t) is depicted. In one case, the time delays are selected to be Λ = {0, 1.848} so that cos(1.848 ω0) = −1 and the norm of the Fréchet derivative (22) achieves one of its maxima. The other design is characterized by a longer time delay, Λ = {0, 2.5}, but far from the sensitivity maxima. Notice that at the sensitivity minima, the observer becomes singular and det V = 0. The signal L2 norm of the state estimation error over the simulation interval of t_max = 50 s is computed for each value of the frequency perturbation Δω in (18) and plotted as a function of the perturbation. Clearly, the first design (upper curve) has high sensitivity, especially when the oscillation frequency is overestimated. Apparently, the observer's performance continues to improve over the nominal case for positive values of Δω, but this is a superficial phenomenon due to the cancellation of initial transients by the estimation error caused by the perturbation. The second design (lower curve) exhibits better robustness and a moderate performance reduction both for underestimated and overestimated oscillation frequency.
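The qualitative behaviour, sensitivity growing essentially linearly in the delay and modulated periodically through cos(ωλ), can also be reproduced numerically by differentiating p_τ(λ, F) = e^{−λF} directly, without the closed-form expression (22). A rough sketch (the values are illustrative and the numerically computed norm may differ from (22) by a normalization constant, so only the shape of the profile is of interest):

```python
# Sketch: directional sensitivity of the delay operator for the oscillator
# (18)-(19), via a finite difference of p_tau(lambda, F) = exp(-lambda*F).
# Values are illustrative; the normalization may differ from (22).
import numpy as np
from scipy.linalg import expm

omega = 1.7
F = np.array([[0.0, omega**2],
              [-1.0, 0.0]])
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])              # perturbation direction: the omega^2 entry of F

def sensitivity(lam, delta=1e-6):
    """Spectral norm of the Frechet derivative of exp(-lam*F) in direction E."""
    L = (expm(-lam * (F + delta * E)) - expm(-lam * (F - delta * E))) / (2 * delta)
    return np.linalg.norm(L, 2)

for lam in np.arange(0.25, 4.01, 0.25):
    s = sensitivity(lam)
    bar = "#" * int(40 * s / 4.0)        # crude text plot
    print(f"lambda = {lam:4.2f}   ||L|| = {s:6.3f}   {bar}")
# The printed profile grows roughly linearly with lambda and oscillates with
# cos(omega*lambda), dipping where cos(omega*lambda) is close to zero.
```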

5.2. Finite memory convolution operator

For τ > 0, in this case

p(λ, s) = p_c(λ, s) = (1 − e^{(λ−s)τ}) / (s − λ).

Notice here that the sign of λ does not influence the stability of p(λ, s) due to its finite memory. Actually, it is easy to see that the pole of p(λ, s) is canceled, yielding an infinite series representation of the operator

p(λ, s) = ∑_{i=1}^{∞} (λ − s)^{i−1} τ^i / i!.

Furthermore,

Re p(λ, jω) = (1/(λ² + ω²)) ( ω e^{λτ} sin(ωτ) + λ (e^{λτ} cos(ωτ) − 1) )

Im p(λ, jω) = (1/(λ² + ω²)) ( ω (e^{λτ} cos(ωτ) − 1) − λ e^{λτ} sin(ωτ) ).

Assume first that λ < 0. In order to obtain a satisfactory transient performance in the observer, τ should not be small. Therefore, e^{λτ} ≪ 1 and the following approximations apply


[Figure 1: ‖L_{p_τ}(F, E)‖₂ plotted against λ, for ω = 1.7 (upper plot) and ω = 0.7 (lower plot).]

Fig. 1. ‖L_{p_τ}(F, E)‖₂ for the time delay operator and its bounds. Upper plot for ω > 1, lower plot for ω < 1. Solid line: ‖L_{p_τ}(F, E)‖₂. Dashed line: λ/ω. Dash-dotted line: λ/ω². Dotted line: λ.

[Figure 2: estimation error signal norm |e|₂ plotted against Δω ∈ [−15%, 15%] for ω0 = 1.7 and t_max = 50 s.]

Fig. 2. Signal norm L2[0, t_max] of the state estimation error evaluated for two observer designs based on the time delay operator p_τ(λ, ·), as a function of the frequency perturbation Δω in the single-tone oscillator. First design: Λ = {0, 1.848} (upper curve); second design: Λ = {0, 2.5} (lower curve).

Re p_c(λ, jω) ≈ −λ/(λ² + ω²),    Im p_c(λ, jω) ≈ −ω/(λ² + ω²).

Clearly, the gain of p(λ, ·) is small for high frequencies ω and for parameters λ that are large in absolute value. The derivative of the operator symbol is

p′_c(λ, s) = ( (1 − (λ − s)τ) e^{(λ−s)τ} − 1 ) / (s − λ)².

Under the same approximation assumptions as before, it follows that

‖L_{p_c}(F, E)‖₂² ≈ (1/(2(ω² + λ²)⁴)) [ (1 + 1/ω⁴)(ω² − λ²)² + 2λ²
    − ωλ |1 − 1/ω²| √( (1 + 1/ω²)²(ω² − λ²)² + 4λ² ) ]

and the norm of the Fréchet derivative, as well as the sensitivity of the corresponding matrix function, can be brought down by choosing λ ≪ 0, see Fig. 3.

[Figure 3: ‖L_{p_c}(F, E)‖₂ and its approximation plotted against λ ∈ [−3, 0] for τ = 2 and ω = 1.7.]

Fig. 3. ‖L_{p_c}(F, E)‖₂ for the finite-memory convolution operator (solid line) and its approximation (dashed line). The approximation is better for λ ≪ 0. Note that the approximation is not a bound.

Notice also that ‖L_{p_c}(F, E)‖₂ continues to rise for positive values of λ, which is why only λ < 0 has to be considered. As mentioned before, the stability of the observer is preserved even for λ > 0 thanks to the finite memory of the operator. Recall now that for the L2-optimal observer (6) written in the form of the least-squares estimator as in (8), it is required that Λ ≡ −σ(A). This implies that for stable matrices A, the L2-optimal observer will exhibit high sensitivity to perturbations in the state matrix A.
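The recommendation to pick λ well below zero can likewise be examined numerically by sweeping λ in the matrix function (16) for the oscillator (18) and (19). A small sketch, with τ and ω chosen as in Fig. 3 and a finite difference used in place of (17); the values are indicative only:

```python
# Sketch: sensitivity of the finite-memory convolution operator (16) for the
# oscillator as a function of lambda, cf. Fig. 3. Values are indicative only.
import numpy as np
from scipy.linalg import expm

omega, tau = 1.7, 2.0
F = np.array([[0.0, omega**2],
              [-1.0, 0.0]])
E = np.array([[0.0, 1.0],
              [0.0, 0.0]])

def p_c(M, lam):
    """Matrix function (16) evaluated at M."""
    I = np.eye(M.shape[0])
    return (I - expm((lam - tau) * M)) @ np.linalg.inv(M - lam * I)

def sensitivity(lam, delta=1e-6):
    """Spectral norm of the Frechet derivative of (16) at F in direction E."""
    L = (p_c(F + delta * E, lam) - p_c(F - delta * E, lam)) / (2 * delta)
    return np.linalg.norm(L, 2)

for lam in np.arange(-3.0, 0.01, 0.25):
    print(f"lambda = {lam:5.2f}   ||L_pc(F, E)|| = {sensitivity(lam):6.3f}")
# Compare the resulting profile with the discussion around Fig. 3.
```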

6. Conclusions

The sensitivity of least-squares state estimators to structured uncertainty in the system matrix of the plant is studied using the Fréchet derivative. The well-known finite-memory L2-optimal state estimator is proved to be a special case of least-squares state estimators. The design degree of freedom offered by the parameterization operator in the least-squares state estimator is investigated. It is shown that the state estimation error caused by the plant model mismatch is proportional to the Fréchet derivative of the symbol of the parameterization operator evaluated for the nominal value of the system matrix. The particular case of state estimation in a single-tone continuous oscillator is treated in detail as an illustration. When time delays are used for an observer's implementation, the sensitivity of the observer in the direction of perturbation rises almost linearly with the time delay duration. In contrast, the observer's sensitivity decreases when a continuous finite-memory convolution operator with a significant decay rate of the kernel function is chosen for the design.

Acknowledgement

The first author is partly supported by the project ''Computationally Demanding Real-Time Applications on Multicore Platforms'' funded by the Swedish Foundation for Strategic Research, 2009–2013.

References

[1] James D. Gilchrist, n-observability for linear systems, IEEE Transactions on Automatic Control 11 (3) (1966) 388–395.
[2] Stanislav Fuksa, Witold Byrski, General approach to linear optimal estimator of finite number of parameters, IEEE Transactions on Automatic Control 29 (1984) 470.
[3] Wook Hyun Kwon, Oh Kyu Kwon, FIR filters and recursive forms for continuous time-invariant state space models, IEEE Transactions on Automatic Control 32 (4) (1987) 352–356.
[4] Wook Hyun Kwon, Pyung Soo Kim, Soo Hee Han, A receding horizon unbiased FIR filter for discrete-time state space models, Automatica 38 (2002) 545–551.
[5] A. Medvedev, G. Hillerström, An external model control system, Control-Theory and Advanced Technology 10 (4) (1995) 1643–1665. Part 4.
[6] A. Medvedev, State estimation and fault detection by a bank of continuous finite-memory filters, International Journal of Control (1998) 499–517.
[7] A. Medvedev, Disturbance attenuation in finite spectrum assignment controllers, Automatica 33 (6) (1997) 1163–1168.
[8] C.V. Rao, J.B. Rawlings, J. Lee, Constrained linear state estimation – a moving horizon approach, Automatica 37 (10) (2001) 1619–1628.
[9] Robert Engel, Gerhard Kreisselmeier, A continuous-time observer that converges in finite time, IEEE Transactions on Automatic Control 47 (7) (2002) 1202–1204.
[10] Michel Fliess, Hebertt Sira-Ramirez, State reconstructors: a possible alternative to asymptotic observers and Kalman filters, in: Proceedings of CESA, Lille, France, 2003.
[11] Johann Reger, Jerome Jouffroy, Algebraische Ableitungsschätzung im Kontext der Rekonstruierbarkeit [Algebraic derivative estimation in the context of reconstructibility], at-Automatisierungstechnik 56 (6) (2008) 324–331.
[12] Johann Reger, Jerome Jouffroy, On algebraic time-derivative estimation and deadbeat state reconstruction, in: 48th IEEE Conference on Decision and Control, Shanghai, China, 2009.
[13] M. Bask, A. Medvedev, Analysis of least-squares state estimators for a harmonic oscillator, in: IEEE Conference on Decision and Control, CDC2000, Sydney, Australia, 2000, pp. 1800–1804.
[14] A. Medvedev, Continuous least-squares observers with applications, IEEE Transactions on Automatic Control 41 (Oct.) (1996) 1530–1536.
[15] Thomas Kailath, Linear Systems, Prentice-Hall, 1980.
[16] D.H. Chyung, State variable reconstruction, International Journal of Control 40 (1984) 955–963.
[17] A. Medvedev, H. Toivonen, Feedforward time-delay structures in state estimation. Finite memory smoothing and continuous deadbeat observers, IEE Proceedings-D 141 (2) (1994) 121–129.
[18] Rajendra Bhatia, Matrix Analysis, Springer, New York, 1997.
[19] Charles Kenney, Alan J. Laub, Condition estimates for matrix functions, SIAM Journal on Matrix Analysis and Applications 10 (2) (1989) 191–209.
[20] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, 1985.
[21] R.A. Horn, C.R. Johnson, Topics in Matrix Analysis, Cambridge University Press, 1991.
[22] Roy Mathias, Condition estimation for matrix functions via the Schur decomposition, SIAM Journal on Matrix Analysis and Applications 16 (2) (1995) 565–578.
[23] Z. Ding, Change detection based on initial state estimation, Systems & Control Letters 21 (1993) 337–345.