4th International Workshop on Reliable Engineering Computing (REC 2010)
Edited by Michael Beer, Rafi L. Muhanna and Robert L. Mullen
Copyright © 2010 Professional Activities Centre, National University of Singapore.
ISBN: 978-981-08-5118-7. Published by Research Publishing Services.
doi:10.3850/978-981-08-5118-7 012
Recursive Least-Squares Estimation in Case of Interval Observation Data

H. Kutterer 1) and I. Neumann 2)

1) Geodetic Institute, Leibniz University Hannover, D-30167 Hannover, Germany, [email protected]
2) Institute of Geodesy - Geodetic Laboratory, University FAF Munich, D-85579 Neubiberg, Germany, [email protected]
Abstract: In the engineering sciences, observation uncertainty often consists of two main types: random
variability due to uncontrollable external effects, and imprecision due to remaining systematic errors in the
data. Interval mathematics is well-suited to treat this second type of uncertainty in, e. g., interval-
mathematical extensions of the least-squares estimation procedure if the set-theoretical overestimation is
avoided (Schön and Kutterer, 2005). Overestimation means that the true range of parameter values
representing both a mean value and imprecision is only quantified by rough, meaningless upper bounds. If
recursively formulated estimation algorithms are used for better efficiency, overestimation becomes a key
problem. This is the case in state-space estimation which is relevant in real-time applications and which is
essentially based on recursions. Hence, overestimation has to be analyzed thoroughly to minimize its impact
on the range of the estimated parameters. This paper is based on previous work (Kutterer and Neumann,
2009) which is extended regarding the particular modeling of the interval uncertainty of the observations.
Besides a naïve approach, observation imprecision models using physically meaningful influence
parameters are considered; see, e. g., Schön and Kutterer (2006). The impact of possible overestimation due
to the respective models is rigorously avoided. In addition, the recursion algorithm is reformulated yielding
an increased efficiency. In order to illustrate and discuss the theoretical results, a damped harmonic
oscillation is presented as a typical recursive estimation example in Geodesy.
Keywords: Interval mathematics, imprecision, recursive parameter estimation, overestimation, least-
squares, damped harmonic oscillation.
1. Introduction
State-space estimation is an important task in many engineering disciplines. It is typically based on a
compact recursive reformulation of the classical least-squares estimation of the parameters which describe
the system state. This reformulation reflects the optimal combination of the most recent parameter estimate
and of newly available observation data; it is equivalent to a least-squares parameter estimation which uses
all available data. However, through the recursive formulation it allows a more efficient update of the
estimated values which makes it well-suited for real-time applications. Conventionally, the real-time
capability of a process or algorithm means that the results are available without delay whenever they are
required within the process.
In a system-theoretical framework, physical knowledge about the dynamic system state can also be
available in terms of a system of differential equations. In this case a state-space filter such as the well-
known Kalman filter is used, which extends the concept of state-space estimation as it combines predicted
system information from the solution of the set of differential equations with additional, newly available
observation data (Gelb, 1974). As a special case of state-space filtering, state-space estimation considers the
same parameter vector through all recursion steps; nevertheless the estimated values will vary. Moreover,
time is not the relevant quantity but the observation index. This allows some convenient features such as the
efficient elimination of observation data which are considered as outliers. In any case, the state-space can
comprise parameters which are system-immanent and not directly observable.
It is common practice to assess the uncertainty of the observation data in a stochastic framework only.
This means that the observation errors are modeled as random variables and vectors, respectively. This type
of uncertainty is called random variability. Classical models in parameter estimation refer to expectation
vectors and variance-covariance matrices as first and second moments of the random distribution of the
observation. Other approaches based on Maximum-Likelihood estimation take the complete random
distribution into account. In case of non-normal distributions, numerical approximation techniques such as
Monte-Carlo sampling procedures are applied for the derivation of the densities of the estimated parameters
as well as of derived quantities and measures (Koch, 2007).
However, there are more sources of uncertainty in the data than just random errors. Actually, depending
on the particular application unknown deterministic effects can introduce a significant level of uncertainty.
Such effects are also known as systematic errors which are typically reduced or even eliminated by a
mixture of different techniques if an adequate observation configuration was implemented: (i) modification
of the observation values using physical or geometrical correction models, (ii) linear combinations of the
original observations such as observation differences which can reduce synchronization errors or
atmospherically induced run-time differences in distance observations, (iii) dedicated parameterization of
the effect in the observation equations. Since none of these techniques is capable of rigorously eliminating
an unknown deterministic effect or of determining its value, this effect has to be modeled
accordingly. Here, interval mathematics is used as theoretical background introducing intervals and interval
vectors as additional uncertain quantities. This second type of uncertainty is called imprecision.
The joint assessment of random variability and imprecision of observation data in least-squares
estimation has been treated in a number of publications. However, the consideration of recursive state-space
estimation has to treat the overestimation problem of interval-mathematical evaluations in a more
elaborated way than in classical estimation. Overestimation is caused by, e. g., (hidden) dependencies
between interval quantities and it is visible in interval-mathematical properties like, e. g., sub-distributivity.
A further problem is caused by the interval inclusion of the range of values of a linear mapping of a vector
consisting of interval data which usually generates additional values; see, e. g., Schön and Kutterer (2005)
for a discussion of the two- and three-dimensional case. Since recursive formulations particularly exploit
such dependencies for the sake of a compact and efficient notation a significant overestimation is expected.
This study is based on previous work on the interval and fuzzy extension of the Kalman filter (Kutterer
and Neumann, 2009). Here, two main differences have to be mentioned. First, the approach is simplified as
the system-state parameters are considered as static quantities which do not change with time (or forces).
Second, the efficiency of the derivation of the measures of the imprecision of the estimated parameters is
increased due to a new formulation. The uncertainty of the observation data is formulated in a
comprehensive way referring to physically meaningful deterministic influence parameters.
The paper is organized as follows. In Section 2 least-squares parameter estimation is reviewed whereas
in Section 3 the recursive formulation is introduced and discussed. In Section 4 the applied model of
imprecision is motivated and described. Section 5 provides the interval-mathematical extension of
recursive least-squares state-space estimation. In Section 6 the recursive
estimation of state-space parameters based on the observation of a damped harmonic oscillation is discussed
as an illustrative example. Section 7 concludes the paper.
2. Least-Squares Parameter Estimation in Linear Models
Recursive least-squares state-space estimation is based on the reformulation of the least-squares estimation
using all available observation data; see, e. g., Koch (1999). The model with observation equations is
considered in the following. It is a typical linear model which is also known as Gauss-Markov model. It
consists of a functional part
$$E(\mathbf{l}) = \mathbf{A}\,\mathbf{x} \quad (1)$$

which relates the expectation vector $E(\mathbf{l})$ of the $(n \times 1)$-dimensional vector $\mathbf{l}$ of the observations with a linear
combination of the unknown $(u \times 1)$-dimensional vector of the parameters $\mathbf{x}$, with $n > u$. The $(n \times u)$-
dimensional matrix $\mathbf{A}$ is called configuration matrix or design matrix, respectively. Note that the matrix $\mathbf{A}$
can be either column-regular or column-singular. The difference $r = n - u$ (or $r = n - u + d$ in case of
column-singular models, with $d$ the rank deficiency) is called redundancy; it quantifies the degree of over-
determination of the linear estimation model.
In case of an originally non-linear model a linearization based on a multidimensional Taylor series
expansion of the $(n \times 1)$ vector-valued function $\mathbf{f}$ is derived as

$$E(\mathbf{l}) = \mathbf{f}(\mathbf{x}) \approx \mathbf{f}(\mathbf{x}_0) + \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}_0} (\mathbf{x} - \mathbf{x}_0)$$

which yields a fully analogous representation to Eq. (1) if the "$\approx$" sign is neglected:

$$E(\Delta\mathbf{l}) = \mathbf{A}\,\Delta\mathbf{x}, \quad \text{with } \Delta\mathbf{l} := \mathbf{l} - \mathbf{f}(\mathbf{x}_0),\ \Delta\mathbf{x} := \mathbf{x} - \mathbf{x}_0,\ \mathbf{A} := \left.\frac{\partial \mathbf{f}}{\partial \mathbf{x}}\right|_{\mathbf{x}_0}. \quad (2)$$
For the sake of a simpler representation only the linear case according to Eq. (1) is discussed in the
following. Typically, the functional model part is given through the residual equations

$$\mathbf{v} = \mathbf{A}\,\mathbf{x} - \mathbf{l}, \quad \text{with } \bar{\mathbf{l}} := E(\mathbf{l}). \quad (3)$$
The Gauss-Markov model also comprises a second model part which refers to uncertainty in terms of
the regular variance-covariance matrix (vcm) of the observations $\Sigma_{ll}$ and residuals $\Sigma_{vv}$, respectively, as

$$\Sigma_{ll} = \Sigma_{vv} = \sigma_0^2\, \mathbf{Q}_{ll} = \sigma_0^2\, \mathbf{P}^{-1} \quad (4)$$

with the (theoretical) variance of the unit weight $\sigma_0^2$, the cofactor matrix of the observations $\mathbf{Q}_{ll}$ and the
weight matrix of the observations $\mathbf{P} = \mathbf{Q}_{ll}^{-1}$.
The unknown vector of parameters is estimated based on the principle of weighted least-squares via the
normal equations system

$$\mathbf{A}^T \mathbf{P} \mathbf{A}\, \hat{\mathbf{x}} = \mathbf{A}^T \mathbf{P}\, \mathbf{l} \quad (5)$$

as

$$\hat{\mathbf{x}} = \left( \mathbf{A}^T \mathbf{P} \mathbf{A} \right)^{-1} \mathbf{A}^T \mathbf{P}\, \mathbf{l} \quad (6)$$

for a column-regular design matrix A. In case of a column-singular design matrix a generalized matrix
inverse is used leading to

$$\hat{\mathbf{x}} = \left( \mathbf{A}^T \mathbf{P} \mathbf{A} \right)^{-} \mathbf{A}^T \mathbf{P}\, \mathbf{l}. \quad (7)$$
The cofactor matrix and the vcm of the estimated parameters are derived by the law of variance propagation
as

$$\mathbf{Q}_{\hat{x}\hat{x}} = \left( \mathbf{A}^T \mathbf{P} \mathbf{A} \right)^{-1} \quad \text{and} \quad \Sigma_{\hat{x}\hat{x}} = \sigma_0^2\, \mathbf{Q}_{\hat{x}\hat{x}}, \quad (8)$$

and

$$\mathbf{Q}_{\hat{x}\hat{x}} = \left( \mathbf{A}^T \mathbf{P} \mathbf{A} \right)^{-} \quad \text{and} \quad \Sigma_{\hat{x}\hat{x}} = \sigma_0^2\, \mathbf{Q}_{\hat{x}\hat{x}}, \quad (9)$$
respectively. Note that there are several other quantities of interest such as the estimated vectors of
observations $\hat{\mathbf{l}}$ and residuals $\hat{\mathbf{v}}$, the corresponding cofactor matrices and vcms, and the estimated value of
the variance of the unit weight

$$\hat{\sigma}_0^2 = \frac{\hat{\mathbf{v}}^T \mathbf{P}\, \hat{\mathbf{v}}}{r}. \quad (10)$$
Due to the restricted space these quantities are not treated in this paper. The discussion is limited to the
recursive estimation of the parameter vector and to the determination of its vcm.
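The batch estimation of Eqs. (5), (6), (8) and (10) can be sketched numerically as follows. This is an illustrative sketch only; the design matrix, observation vector and weights are made-up example values, not from the paper:

```python
import numpy as np

# Weighted least-squares in the Gauss-Markov model, cf. Eqs. (5)-(10);
# A, l and P are hypothetical example values.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # n = 4 observations, u = 2 parameters
l = np.array([0.1, 1.1, 1.9, 3.1])  # observation vector
P = np.eye(4)                        # weight matrix P = Q_ll^{-1} (equal weights)

N = A.T @ P @ A                      # normal equations matrix A^T P A, cf. Eq. (5)
x_hat = np.linalg.solve(N, A.T @ P @ l)   # estimated parameters, cf. Eq. (6)
Q_xx = np.linalg.inv(N)              # cofactor matrix of the estimates, cf. Eq. (8)

v_hat = A @ x_hat - l                # residuals, cf. Eq. (3)
r = A.shape[0] - A.shape[1]          # redundancy r = n - u
sigma0_sq = (v_hat @ P @ v_hat) / r  # estimated variance of unit weight, cf. Eq. (10)
```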
3. Recursive Parameter Estimation in Linear Models
The idea behind recursive parameter estimation is the optimal combination of the most recent estimated
parameter vector and of observation data which were not included in the previous estimation due to, e. g.,
their later availability. This is a typical situation in continuously operating monitoring systems where the
state of the considered object is observed repeatedly in defined intervals. The set of parameter vector
components can be understood as state-space representation. With each newly incoming set of observations
the estimated state of the object is updated as a basis for further analysis and possibly required decisions
such as, e. g., in alarm systems. Note that the algorithms presented here just rely on the indices of the
observation data which are not necessarily related with time. Hence, by introducing negative weights it is
also possible to eliminate observation data from the estimation which is required in case of erroneous data.
This combination is considered as optimal in the meaning of the least-squares principle. Thus, the
required equations are derived from the equations given in Section 2. The observation vector is separated
into two parts, the first one containing the set of all old observations $\mathbf{l}^{(i-1)}$ and the second one containing the
new observations $\mathbf{l}^{(i)}$. The residual vector $\mathbf{v}$, the design matrix $\mathbf{A}$ and the weight matrix $\mathbf{P}$ are divided into
corresponding parts according to

$$\begin{bmatrix} \mathbf{v}^{(i-1)} \\ \mathbf{v}^{(i)} \end{bmatrix} = \begin{bmatrix} \mathbf{A}^{(i-1)} \\ \mathbf{A}^{(i)} \end{bmatrix} \mathbf{x} - \begin{bmatrix} \mathbf{l}^{(i-1)} \\ \mathbf{l}^{(i)} \end{bmatrix}, \qquad \mathbf{P} = \begin{bmatrix} \mathbf{P}^{(i-1)} & \mathbf{0} \\ \mathbf{0} & \mathbf{P}^{(i)} \end{bmatrix} \quad (11)$$
where the old and the new observation vectors are considered as uncorrelated which leads to the 0 matrices
at the off-diagonal blocks of P. The least-squares solution $\hat{\mathbf{x}}^{(i)}$ of the parameter vector can be obtained
using Eq. (6) or (7), respectively. Note that the upper indices in brackets indicate the recursion step.
The recursion algorithm requires the solution $\hat{\mathbf{x}}^{(i-1)}$ and its cofactor matrix $\mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}}$ which are assumed to
be derived in the previous recursion step. The existence of this solution is guaranteed in general since an
initial solution $\hat{\mathbf{x}}^{(0)}$ can always be derived – at least from a first consistent set of observations $\mathbf{l}^{(0)}$ with
$\dim \mathbf{l}^{(0)} = n^{(0)} \times 1$, $n^{(0)} \geq u$. In the following, only column-regular design matrices are assumed which
yield regular normal equations matrices. Note that comparable equations can be derived for column-
singular matrices.
Application of the least-squares principle on Eq. (11) leads to the extended normal equations system

$$\left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{A}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{A}^{(i)} \right) \hat{\mathbf{x}}^{(i)} = \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{l}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{l}^{(i)} \quad (12)$$

and hence to the new, updated vector of estimated parameters

$$\hat{\mathbf{x}}^{(i)} = \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{A}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{A}^{(i)} \right)^{-1} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{l}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{l}^{(i)} \right) \quad (13)$$

which is based on all available observation information. The corresponding cofactor matrix consequently
reads as

$$\mathbf{Q}^{(i)}_{\hat{x}\hat{x}} = \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{A}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{A}^{(i)} \right)^{-1}. \quad (14)$$
The recursion is introduced through the matrix identity according to, e. g., Koch (1999, p. 37),

$$\left( \mathbf{A} + \mathbf{B} \mathbf{D} \mathbf{C} \right)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{B} \left( \mathbf{D}^{-1} + \mathbf{C} \mathbf{A}^{-1} \mathbf{B} \right)^{-1} \mathbf{C} \mathbf{A}^{-1} \quad (15)$$

which allows to reformulate Eq. (14) and thus Eq. (13). This yields the updated vector of estimated
parameters

$$\hat{\mathbf{x}}^{(i)} = \hat{\mathbf{x}}^{(i-1)} + \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T} \left( \mathbf{Q}^{(i)}_{ww} \right)^{-1} \mathbf{w}^{(i)} \quad (16)$$

with

$$\mathbf{Q}^{(i)}_{\hat{x}\hat{x}} = \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} - \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T} \left( \mathbf{Q}^{(i)}_{ww} \right)^{-1} \mathbf{A}^{(i)} \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}}, \quad (17)$$

$$\mathbf{Q}^{(i)}_{ww} = \mathbf{Q}^{(i)}_{ll} + \mathbf{A}^{(i)} \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T}, \quad (18)$$

$$\mathbf{w}^{(i)} = \mathbf{l}^{(i)} - \mathbf{A}^{(i)} \hat{\mathbf{x}}^{(i-1)}. \quad (19)$$
The vector $\mathbf{w}^{(i)}$ quantifies the discrepancy between the new observations and the observation values
which can be predicted from the available parameter values $\hat{\mathbf{x}}^{(i-1)}$. In total, Eq. (16) to Eq. (19) are very
compact as they avoid calculating the inverse of the full normal equations matrix. Instead, the inverse of
the cofactor matrix $\mathbf{Q}^{(i)}_{ww}$ is needed which has the same dimension as the number of new observations $\mathbf{l}^{(i)}$. If
the number of new observations in each step is rather small, the recursion sequence is quite efficient and
hence well-suited for real-time applications. Computation time can be saved additionally if the matrix
product $\mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T}$ is stored in an auxiliary matrix. In order to summarize the derivations in this section it
can be stated that the recursive formulation of least-squares estimation in a linear model is equivalent to the
“all-at-once” estimation using the complete set of available observation data. The obtained algorithm rests
essentially on the efficient update of a matrix inverse.
Besides the recursive update of least-squares estimates in sequential observation procedures, recursive
elimination of incorrect observations from the estimation process is possible as well. In combination, the
two techniques can be applied for polynomial filtering as a generalization of the moving-average technique.
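The recursive update of Eqs. (16) to (19) and its equivalence with the batch solution of Eq. (6) can be sketched as follows. All numerical values are made up, unit weights are assumed throughout, and the variable names are ours:

```python
import numpy as np

# Recursive least-squares update, cf. Eqs. (16)-(19), with P = I,
# checked against the "all-at-once" solution of Eq. (6).
rng = np.random.default_rng(1)

A_old = rng.normal(size=(6, 2))          # design matrix of the old observations
l_old = rng.normal(size=6)               # old observation vector
A_new = rng.normal(size=(2, 2))          # design matrix of the new observations
l_new = rng.normal(size=2)               # newly available observations

# previous solution and its cofactor matrix from the old observations
Q_xx = np.linalg.inv(A_old.T @ A_old)
x_hat = Q_xx @ A_old.T @ l_old

# recursive update: only the small matrix Q_ww (dimension = number of new
# observations) has to be inverted, not the full normal equations matrix
w = l_new - A_new @ x_hat                        # discrepancy vector, cf. Eq. (19)
Q_ww = np.eye(2) + A_new @ Q_xx @ A_new.T        # cf. Eq. (18) with Q_ll = I
K = Q_xx @ A_new.T @ np.linalg.inv(Q_ww)         # stored auxiliary matrix product
x_hat = x_hat + K @ w                            # updated parameters, cf. Eq. (16)
Q_xx = Q_xx - K @ A_new @ Q_xx                   # updated cofactor matrix, cf. Eq. (17)

# equivalence with the batch least-squares solution using all data, cf. Eq. (6)
A_all = np.vstack([A_old, A_new])
l_all = np.concatenate([l_old, l_new])
x_batch = np.linalg.solve(A_all.T @ A_all, A_all.T @ l_all)
```

The agreement of `x_hat` and `x_batch` is exact up to rounding, which is the equivalence stated at the end of Section 3.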
4. Observation Imprecision
The algorithm for recursive parameter estimation derived in Section 3 relies on observation uncertainty of
random variability type only. If, however, imprecision has to be taken into account, the estimation
equations have to be extended in a proper way; see, e. g., Kutterer and Neumann (2009) for the Kalman
filter. The starting point is the reinterpretation of the observation vector l as
$$\mathbf{l} = \mathbf{y} + \mathbf{g}(\mathbf{s}) \approx \mathbf{y} + \mathbf{g}(\mathbf{s}_0) + \left.\frac{\partial \mathbf{g}}{\partial \mathbf{s}}\right|_{\mathbf{s}_0} (\mathbf{s} - \mathbf{s}_0) = \mathbf{y}_0 + \mathbf{G}\,\Delta\mathbf{s}, \quad \text{with } \mathbf{y}_0 := \mathbf{y} + \mathbf{g}(\mathbf{s}_0),\ \Delta\mathbf{s} := \mathbf{s} - \mathbf{s}_0,\ \mathbf{G} := \left.\frac{\partial \mathbf{g}}{\partial \mathbf{s}}\right|_{\mathbf{s}_0} \quad (20)$$
and with y the random vector of originally obtained observations which have to be reduced regarding
physical or geometrical effects. These reductions are considered as additive; they are described as a
function g of basic influence parameters s such as temperature or air pressure. The numerical values 0s of
these influence parameters are based on, e. g., actual observation, long-term experience, convention,
experts’ opinion or just rough estimates. As the values of the parameters s are fixed through all calculations
their influence on the estimation is deterministic. Remaining deviations are to be expected; this effect is
comprised in the linear approximation $\mathbf{G}\,\Delta\mathbf{s}$ of the relation between the basic influence parameters and the
observation values which are used in the model.
Eq. (20) allows the separate introduction of random variability and imprecision. Random variability is
associated with the random vector $\mathbf{y}$, mainly through the vcm $\Sigma_{yy} = \Sigma_{ll}$. Imprecision refers to $\Delta\mathbf{s}$ and is
modeled by means of a real interval vector $[\mathbf{s}] = [-\mathbf{s}_r,\, \mathbf{s}_r] = \langle \mathbf{0},\, \mathbf{s}_r \rangle$ with $\mathbf{s}_r$ the interval radius as
a measure of imprecision. Note that the term in brackets denotes the interval representation with lower and
upper bounds whereas the term in angle brackets denotes the midpoint-radius representation. From the
viewpoint of applications it is reasonable to assume $\mathbf{s}_m = \mathbf{0}$ for the interval midpoint since justified
knowledge about any deviation would imply more refined corrections leading to the consequent validity of
the assumption. This separation allows the identification of the corrected observation values $\mathbf{y}$ with the
midpoint of the interval vector, $\mathbf{l}_m = \mathbf{y}$, and of the remaining deterministic errors with $\Delta\mathbf{s}$ which are
bounded by $[\mathbf{s}]$.
The total range of the observation vector $\mathbf{l}$ with respect to $\Delta\mathbf{s}$ is given as

$$\mathcal{L} = \left\{\, \mathbf{l} = \mathbf{y} + \mathbf{G}\,\Delta\mathbf{s} \;\middle|\; \Delta\mathbf{s} \in [\mathbf{s}] \,\right\}. \quad (21)$$

This convex polyhedron is generally a true subset of the interval vector $[\mathbf{l}] = \langle \mathbf{l}_m,\, \mathbf{l}_r \rangle = \langle \mathbf{y},\, |\mathbf{G}|\,\mathbf{s}_r \rangle$. The
operator $|\cdot|$ applied to a matrix converts the matrix coefficients to their absolute values. Due to the
construction procedure the interval vector $[\mathbf{l}]$ represents the closest interval inclusion of $\mathcal{L}$ which is exact
component by component. More information on intervals, interval vectors, arithmetic rules, etc., can be
found in standard textbooks such as Alefeld and Herzberger (1983) or Jaulin et al. (2000).
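The component-wise exact inclusion with midpoint y and radius |G| s_r discussed around Eq. (21) can be sketched in a few lines. The values of y, G and s_r below are made-up examples:

```python
import numpy as np

# Midpoint-radius inclusion of the observation range, cf. Eq. (21):
# [l] = <y, |G| s_r> encloses the polyhedron { y + G*ds : |ds| <= s_r }.
y   = np.array([10.0, 20.0, 30.0])      # corrected observations (interval midpoints)
G   = np.array([[ 1.0,  0.5],
                [ 1.0, -0.5],
                [-1.0,  2.0]])           # linearized influence matrix (hypothetical)
s_r = np.array([1e-3, 2e-3])             # interval radii of the influence parameters

l_m = y                                  # interval midpoints of the observations
l_r = np.abs(G) @ s_r                    # component-wise exact interval radii

# lower and upper bounds of the enclosing interval vector [l]
lower, upper = l_m - l_r, l_m + l_r
```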
For a better understanding of possible models for the basic influence parameters three examples for Eq.
(20) are given here which are also relevant for the application example in Section 6. One possibility is the
modeling of an individual additive parameter for each observation in terms of
$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix} \begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_n \end{bmatrix}. \quad (22)$$
An alternative is the modeling of one common additive parameter as an unknown observation offset as
$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} s. \quad (23)$$
As a second alternative a common multiplicative parameter can be modeled describing the effect of an
unknown drift with time t or step index i as
$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} t_1 - t_0 \\ t_2 - t_0 \\ \vdots \\ t_n - t_0 \end{bmatrix} s. \quad (24)$$
It is also possible to refer the multiplicative parameter to the magnitude of the observed value $y_i$ which
can be required in case of an unknown scale factor in distance observations with respect to a reference length
such as

$$\begin{bmatrix} l_1 \\ l_2 \\ \vdots \\ l_n \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} + \begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{bmatrix} s. \quad (25)$$
In addition, all models can be composed for joint use. Many other models can be meaningful; see, e. g.,
Schön and Kutterer (2006) for a study on a refined interval modeling of observation and parameter
uncertainty in GPS (Global Positioning System) data analysis.
Note that the modeling of observation imprecision in terms of real intervals can be extended to fuzzy
numbers and intervals, respectively, in a straightforward manner. It is well-known that due to the convexity
of fuzzy intervals the respective α-cuts can be identified as real intervals; see, e. g., Möller and Beer (2004).
The technique of α-cut discretization exploits this property. For this reason the present discussion can easily
be seen as a special case of a fuzzy approach which is discussed here as an interval approach for the sake of
simplicity but without loss of generality.
5. Interval Extension of Recursive Estimation
If recursive estimation as introduced in Section 3 is applied to interval observation data as defined in
Section 4, overestimation is the key problem which has to be solved. Overestimation arises from several
causes. A first one was indicated in the discussion of Eq. (21) since the range of values of a linear mapping
$$\mathcal{Z} = \left\{\, \mathbf{z} = \mathbf{F}\,\mathbf{x} \;\middle|\; \mathbf{x} \in [\mathbf{x}] \,\right\} \quad (26)$$

is a convex polyhedron in general but usually not an interval vector. Hence, interval mathematics is not
closed with respect to a linear mapping. Moreover, the sub-distributivity property

$$\mathbf{M} \mathbf{F}\, [\mathbf{x}] \subseteq \mathbf{M} \left( \mathbf{F}\, [\mathbf{x}] \right) \quad (27)$$

holds which reflects lacking associativity in case of matrix multiplications and interval vectors. Finally,
already for single intervals the range of values can be overestimated such as, e. g.,

$$[x] - [x] = \left[ -2 x_r,\, 2 x_r \right] \supset \left[ 0,\, 0 \right] = \left\{\, x - x \;\middle|\; x \in [x] \,\right\} \quad (28)$$

in case of dependencies between the intervals. This shows that the naïve application of the fundamental
rules of interval arithmetic is not a proper way for evaluating the range of parameter values in recursive
estimation since it is crucial to avoid any possible cause of overestimation. Actually, the tightest interval
inclusion of the actual range of values is always given as

$$[\mathbf{z}] = \left\langle \mathbf{z}_m,\, \mathbf{z}_r \right\rangle \quad \text{with } \mathbf{z}_m = \mathbf{M} \mathbf{F}\, \mathbf{x}_m,\ \mathbf{z}_r = \left| \mathbf{M} \mathbf{F} \right| \mathbf{x}_r. \quad (29)$$

In case of Eq. (28) this yields the correct range of values

$$z_m = \begin{bmatrix} 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} x_m = 0, \qquad z_r = \left| \begin{bmatrix} 1 & -1 \end{bmatrix} \begin{bmatrix} 1 \\ 1 \end{bmatrix} \right| x_r = 0, \qquad [z] = \left[ 0,\, 0 \right]. \quad (30)$$
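The overestimation effect of Eq. (28) and its removal by the midpoint-radius evaluation of the combined mapping, cf. Eq. (29), can be reproduced numerically. The interval values are made up:

```python
import numpy as np

# Naive interval subtraction of a dependent interval vs. the exact
# midpoint-radius evaluation of the combined mapping M*F, cf. Eqs. (28)-(30).
x_m, x_r = 5.0, 0.5                       # interval [4.5, 5.5] = <5.0, 0.5>

# naive evaluation of [x] - [x]: the dependency is ignored, the radii add up
naive_r = x_r + x_r                       # radius 2*x_r, i.e. [-1, 1] -- too wide

# combined mapping z = (M F) x with M = [1 -1] and F = [1 1]^T, which is zero
mf = np.array([1.0, -1.0]) @ np.array([1.0, 1.0])   # = 0.0

z_m = mf * x_m                            # midpoint of the exact inclusion: 0
z_r = abs(mf) * x_r                       # radius of the exact inclusion:   0
```

The exact range [0, 0] reflects that x − x vanishes identically, while the naive evaluation inflates the radius to 2·x_r.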
If recursive least-squares estimation is considered as described by Eq. (16) to Eq. (19) the extension is
straightforward for the interval midpoints $\hat{\mathbf{x}}_m$ and $\mathbf{l}_m$ which yields

$$\hat{\mathbf{x}}^{(i)}_m = \hat{\mathbf{x}}^{(i-1)}_m + \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T} \left( \mathbf{Q}^{(i)}_{ww} \right)^{-1} \mathbf{w}^{(i)}_m \quad (31)$$

in a compact and efficient representation with

$$\mathbf{w}^{(i)}_m = \mathbf{l}^{(i)}_m - \mathbf{A}^{(i)} \hat{\mathbf{x}}^{(i-1)}_m \quad (32)$$

which is possible because of the symmetry of the intervals with respect to the midpoints.
However, for the calculation of the interval radius $\hat{\mathbf{x}}_r$ an alternative method is required because in Eq.
(31) overestimation occurs since the true range of values

$$\hat{\mathcal{X}}^{(i)} = \left\{\, \hat{\mathbf{x}}^{(i)} = \hat{\mathbf{x}}^{(i-1)} + \mathbf{Q}^{(i-1)}_{\hat{x}\hat{x}} \mathbf{A}^{(i)T} \left( \mathbf{Q}^{(i)}_{ww} \right)^{-1} \left( \mathbf{l}^{(i)} - \mathbf{A}^{(i)} \hat{\mathbf{x}}^{(i-1)} \right) \;\middle|\; \mathbf{l}^{(i)} \in [\mathbf{l}^{(i)}],\ \hat{\mathbf{x}}^{(i-1)} \in \hat{\mathcal{X}}^{(i-1)} \,\right\} \quad (33)$$
is a convex polyhedron which is included by an interval vector. Through this inclusion the set $\hat{\mathcal{X}}^{(i)}$ is
enlarged and the additional values are taken into account in the next recursion step. Thus, the effect of
overestimation accumulates very quickly. This problem is overcome effectively if the recursion is resolved
by referring the recursion equations to the complete set of observations which are available at a respective
recursion step. In order to explain and reduce the effect of dependencies the observations on their part are
referred to the original, independent values of the basic influence parameters s.
An equivalent result is available if Eq. (13) and Eq. (14) are directly used. This is possible because of
the formal identity of the least-squares solution presented in Section 2 which uses all observations at once
and the recursive solution given in Section 3. Starting with
$$\hat{\mathbf{x}}^{(i)} = \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{l}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{l}^{(i)} \right) \quad (34)$$

the recursion is only needed for the update of the cofactor matrix $\mathbf{Q}^{(i)}_{\hat{x}\hat{x}}$. If for all recursion steps an identical
vector $\Delta\mathbf{s}$ is assumed, Eq. (34) can be rewritten as

$$\hat{\mathbf{x}}^{(i)} = \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \left( \mathbf{y}^{(i-1)} + \mathbf{G}^{(i-1)} \Delta\mathbf{s} \right) + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \left( \mathbf{y}^{(i)} + \mathbf{G}^{(i)} \Delta\mathbf{s} \right) \right) \quad (35)$$

using Eq. (20). Note that matrix $\mathbf{G}^{(i)}$ relates the new observations in the i-th recursion step with the constant
vector of basic influence parameters whereas matrix $\mathbf{G}^{(i-1)}$ recursively compiles the respective matrices $\mathbf{G}$
of all previous steps. Reordering of Eq. (35) yields

$$\hat{\mathbf{x}}^{(i)} = \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{y}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{y}^{(i)} + \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{G}^{(i-1)} \Delta\mathbf{s} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{G}^{(i)} \Delta\mathbf{s} \right) \quad (36)$$

and

$$\hat{\mathbf{x}}^{(i)} = \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{y}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{y}^{(i)} \right) + \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{G}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{G}^{(i)} \right) \Delta\mathbf{s} \quad (37)$$

and finally

$$\hat{\mathbf{x}}^{(i)} = \hat{\mathbf{x}}^{(i)}_m + \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{G}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{G}^{(i)} \right) \Delta\mathbf{s}. \quad (38)$$

Thus, the interval vector radius of the estimated parameters in the i-th recursion step is efficiently derived as

$$\hat{\mathbf{x}}^{(i)}_r = \left| \mathbf{Q}^{(i)}_{\hat{x}\hat{x}} \left( \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{G}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{G}^{(i)} \right) \right| \mathbf{s}_r \quad (39)$$

or

$$\hat{\mathbf{x}}^{(i)}_r = \left| \mathbf{Q}^{(i)}_{\hat{x}\hat{x}}\, \mathbf{M}^{(i)} \right| \mathbf{s}_r, \quad (40)$$

respectively, with the recursively calculated matrix

$$\mathbf{M}^{(i)} := \mathbf{A}^{(i-1)T} \mathbf{P}^{(i-1)} \mathbf{G}^{(i-1)} + \mathbf{A}^{(i)T} \mathbf{P}^{(i)} \mathbf{G}^{(i)}. \quad (41)$$
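The radius computation of Eqs. (39) to (41) can be sketched as follows, assuming unit weight matrices and made-up design and influence matrices:

```python
import numpy as np

# Parameter interval radii via the resolved recursion, cf. Eqs. (39)-(41);
# P = I throughout, all matrices are hypothetical example values.
rng = np.random.default_rng(2)

A_old = rng.normal(size=(8, 2))           # accumulated design matrix, previous steps
G_old = np.ones((8, 1))                   # accumulated influence matrix (common offset)
A_new = rng.normal(size=(2, 2))           # design matrix of the current step
G_new = np.ones((2, 1))                   # influence matrix of the current step
s_r = np.array([1e-3])                    # radii of the basic influence parameters

# cofactor matrix of the estimates from all observations, cf. Eq. (14)
A_all = np.vstack([A_old, A_new])
Q_xx = np.linalg.inv(A_all.T @ A_all)

# recursively accumulated matrix M^(i), cf. Eq. (41)
M = A_old.T @ G_old + A_new.T @ G_new

# interval radii of the estimated parameters, cf. Eq. (40)
x_r = np.abs(Q_xx @ M) @ s_r
```

The absolute value is taken only after the full matrix product has been formed, which is what avoids the overestimation discussed above.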
6. Application Example
Recursive estimation is always relevant if the values of the estimated parameters are needed in real-time or
if the available data storage is limited. In order to demonstrate efficient recursive estimation in case of both
random variability and imprecision of the data using interval mathematics, the observation of a damped
harmonic oscillation is presented and discussed exemplarily. The principal observation configuration is shown in
Figure 1. The mathematical model is defined as
$$y(t) = y_0 + A \exp(-\delta t)\, \sin\!\left( \frac{2\pi}{T}\, t + \phi \right) \quad (42)$$

with
y(t)   spring length at time t
A      oscillation amplitude
φ      oscillation phase
δ      damping parameter
T      oscillation period
y0     offset parameter
with the approximately known parameters A, φ, δ, T, and y0 which have to be estimated from the
observations $y_i$ at discrete times $t_i$. The parameter T is functionally related with the spring constant.
Figure 1. Spring-damping model.
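The observation model of Eq. (42) with the a priori values of Table I can be sketched as follows. The function name is ours, and the phase value is read from the (partly garbled) table as −π/10:

```python
import numpy as np

# Damped harmonic oscillation, cf. Eq. (42); default parameter values follow
# Table I (A = 1, phi = -pi/10, delta = 0.01, T = 10, y0 = 0).
def spring_length(t, A=1.0, phi=-np.pi / 10, delta=0.01, T=10.0, y0=0.0):
    """y(t) = y0 + A * exp(-delta*t) * sin(2*pi*t/T + phi)."""
    return y0 + A * np.exp(-delta * t) * np.sin(2.0 * np.pi * t / T + phi)

# simulate 100 noisy observations over 10 periods, sigma = 0.001 as in Table I
t = np.linspace(0.0, 100.0, 100)
y = spring_length(t) + np.random.default_rng(0).normal(0.0, 0.001, size=t.size)
```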
The values of the parameters chosen for the simulations in this section are presented in Table I; σ
denotes the constant individual standard deviation of all single observations $y_i$. The resulting oscillation is
shown in Figure 2. It is identical for all following three simulations.
Table I. A priori values for the simulation of a damped harmonic oscillation
A      φ        δ       T      y0     σ
1      −π/10    0.01    10     0      0.001
Figure 2. Damped spring oscillation observed with 100 points over 10 periods.
Table II gives the numerical parameters of the imprecision models for the three simulations of the
damped harmonic oscillation. The simulations were calculated based on recursive estimation with interval
data. The differences between the simulations lie in the modeling of imprecision. Model I assumes
individual interval radii of identical size for all observations; cf. Eq. (22). Model II assumes two interval
components which are common for all observations: an (additive) offset reflecting the uncertainty about the
zero reference (cf. Eq. (23)) and a (multiplicative) factor proportional to the observed spring length y which
refers to the epistemic uncertainty with respect to an etalon or a different length reference (cf. Eq. (25)).
There are no individual terms as in Model I. Model III is based on Model II but comprises an additional
(multiplicative) factor proportional to time t which represents a drift; cf. Eq. (24).
Table II. Imprecision models for the simulations
Model I: individual imprecision terms for all observations, $s_r = 1 \cdot 10^{-4}$
Model II: two common imprecision terms for all observations, no individual terms:
additive term with $s_r = 1 \cdot 10^{-3}$ and term proportional to spring length y with $s_r = 1 \cdot 10^{-4}$
Model III: as imprecision model II with an additional factor proportional to time t, $s_r = 1 \cdot 10^{-4}$
Figure 3 shows the results for the recursive estimation using Model I, Figure 4 for Model II and
Figure 5 for Model III. In each case the first ten epochs were combined for the estimation of the initial
solution of the recursion. 100 observations were used in total; they are indicated in Figure 2. Based on the
initial solution the next observation was introduced to the estimation, and the estimated parameters and their
cofactor matrix were updated using Eq. (31), Eq. (32), Eq. (17) and Eq. (18). The interval radii of the
estimated parameters were calculated using Eq. (39). Random noise was added to each observation value
as indicated in Table I; for all three simulations the same noisy observation data were used. In all three
figures the recursively estimated parameters are indicated by light gray diamonds. The standard deviations
of the estimated parameters are shown with dark gray diamonds for all epochs symmetric to 0. The interval
radii of the estimated parameters are shown with black diamonds symmetric to 0.
All figures show the decrease of the standard deviations of all estimated parameters tending towards
zero with increasing number of observations and epochs, respectively. Like the estimated parameters these
values are identical for all three simulations since the model for the standard deviations of the observations
was identical as well. Thus, they confirm the general expectation of successively improved information
about the non-observable system state.
In contrast to the decrease of the standard deviations there are several effects which reflect the
systematic, deterministic character of the modeled imprecision terms. All given values are exact component
by component as explained in Section 5 meaning that they represent the correct range of values. In Figure 3
the imprecision of the damping parameter and of the oscillation period are reduced when more observations
are available. However, this does not hold for the amplitude, the phase and the offset parameter. Due to the
individually modeled observation interval radii there is a remaining epistemic uncertainty: $\hat{A}_r = 2.5 \cdot 10^{-4}$,
$\hat{\phi}_r = 2.5 \cdot 10^{-4}$, $\hat{y}_{0,r} = 1 \cdot 10^{-4}$.
Looking at Figure 4, the situation changes completely. Phase, damping and period do not suffer
from the modeled interval data uncertainty which reflects two systematic effects: unknown offset and scale
of the observation of the spring length. Obviously, this type of uncertainty is eliminated in total already in
the initial solution of these three parameters. The modeled imprecision is absorbed by the amplitude
($\hat{A}_r = 1 \cdot 10^{-4}$) and by the offset ($\hat{y}_{0,r} = 1 \cdot 10^{-3}$). A possible explanation is that these two parameters
represent absolute information whereas for the other three parameters only relative information is required.
In such a case effect differences are relevant which are eliminated in case of identical effect magnitudes. Of
course a similar effect also occurs in case of Model I where observation differences lead to identical
interval radii different from 0.
This reasoning is also supported by the results shown in Figure 5. Here, an additional scale
imprecision component is modeled with respect to time. Hence, the observation interval radii
additionally increase linearly with time. This is directly propagated to the offset imprecision. All other
parameter imprecision measures show periodic effects, which indicates that the modeled systematic
components are eliminated to a greater or lesser extent depending on the particular time of estimation
within a period. Obviously, for the determination of the parameters there are better and worse
conditions, which have to be known when imprecision is considered according to Model III. In any case, the
presented methods provide a mathematical and algorithmic framework which allows adequate decisions.
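The effect of time-growing observation radii can be sketched as follows. The design matrix here is a toy example (offset plus one periodic term), not the paper's full oscillation model, and the radius values are illustrative assumptions; the sketch only shows that a time-linear radius component enlarges the parameter radii relative to constant radii.

```python
import numpy as np

# Hypothetical Model III-type radius profile: the observation interval
# radii grow linearly with time (scale imprecision over time).
t = np.arange(100, dtype=float)
r_const = np.full(100, 1e-4)               # constant radii (baseline)
r_drift = r_const + 1e-6 * t               # additional time-linear component

# Toy design: offset plus one periodic term (not the paper's full model)
A = np.column_stack([np.ones_like(t), np.sin(0.3 * t)])
F_abs = np.abs(np.linalg.pinv(A))          # absolute estimator matrix

r_x_const = F_abs @ r_const                # parameter radii, constant case
r_x_drift = F_abs @ r_drift                # parameter radii, drifting case
print(r_x_drift - r_x_const)               # the drift strictly enlarges the radii
```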
7. Conclusions
Recursive parameter estimation based on the least-squares principle is an important task in the engineering
disciplines if, e. g., the parameters have to be estimated and updated in real time. Although
well known and well established as a classical estimation technique, problems arise when data uncertainty
comprising both effects of random variability and imprecision due to remaining systematic errors has to be
propagated recursively. Imprecise data can be effectively modeled using real intervals. Hence, if
intervals are given for the original observations, the determination of the corresponding intervals of the
estimated parameters can be considered as the task of calculating the range of values. If standard interval-
mathematical rules are applied, the problem of overestimation is relevant. It can be overcome if the
computations are referred to independent basis influence parameters which are assumed to cause the
imprecision.
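The overestimation effect can be made concrete with a minimal sketch. The two-observation example below is hypothetical, not taken from the paper: both observations share one systematic influence p with |p| ≤ r, so naive interval arithmetic doubles the radius of their difference, while referring the computation to the common influence parameter gives the exact range.

```python
# Two observations l1 = m1 + p and l2 = m2 + p share one basis influence
# parameter p with |p| <= r (hypothetical example).
r = 0.5

# Naive interval arithmetic: radii add under subtraction, so
# radius(l1 - l2) = r + r, although the true range of l1 - l2 is a point.
naive_radius = r + r

# Influence-based computation: the sensitivity of l1 - l2 to p is 1 - 1 = 0,
# so the exact radius is |1 - 1| * r = 0 (no overestimation).
exact_radius = abs(1 - 1) * r

print(naive_radius, exact_radius)   # 1.0 0.0
```

The naive result treats the two intervals as independent, which is exactly the set-theoretical overestimation that the influence-parameter formulation avoids.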
In this paper a method was introduced which allows recursive estimation using interval data in a very
efficient way. The actual observation values are used as midpoints of symmetric intervals; hence, the
method yields the same midpoint results as classical least-squares estimation. For the computation of the
interval radii of the estimated parameters, the recursion based on the observation data is resolved: instead
of propagating intervals step by step, all observation intervals are introduced into the algorithm
simultaneously. The recursion then refers to the update of the cofactor matrix of the estimated parameters
and of a matrix product. Both can be computed efficiently, so that the final derivation of the interval radii
is straightforward. The method was demonstrated for the application of a damped harmonic oscillation.
Based on three simulation runs with different models of data imprecision, the ways in which random
variability and imprecision propagate were shown and discussed.
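The scheme described above can be sketched as follows, under assumptions of my own (a random toy design matrix, unit weights, and illustrative variable names, none of which are the paper's notation): the midpoints follow the classical recursive least-squares update, while the cofactor matrix Q and the matrix product F mapping observations to parameters are updated recursively, so that all observation radii can be applied simultaneously at the end.

```python
import numpy as np

rng = np.random.default_rng(1)
n_par, n_obs = 3, 50
a_rows = rng.standard_normal((n_obs, n_par))   # one design row per epoch (hypothetical)
l_mid = rng.standard_normal(n_obs)             # observation midpoints
r_l = np.full(n_obs, 1e-3)                     # observation interval radii

# Batch initialization with the first n_par observations
A0 = a_rows[:n_par]
Q = np.linalg.inv(A0.T @ A0)      # cofactor matrix of the parameters (unit weights)
F = Q @ A0.T                      # matrix product mapping observations to parameters
x = F @ l_mid[:n_par]             # initial midpoint solution

for k in range(n_par, n_obs):
    a = a_rows[k]
    q = Q @ a
    gain = q / (1.0 + a @ q)                  # update gain for a unit-weight observation
    x = x + gain * (l_mid[k] - a @ x)         # classical recursive LS midpoint update
    Q = Q - np.outer(gain, q)                 # recursive update of the cofactor matrix
    F = F - np.outer(gain, a @ F)             # update of the stored matrix product
    F = np.hstack([F, gain[:, None]])         # append the column for the new observation

# All observation radii enter simultaneously, avoiding interval recursion
r_x = np.abs(F) @ r_l
```

Because F ends up being the full estimator matrix, applying |F| to the observation radii gives the parameter interval radii in one step, without recursive interval propagation and hence without the associated overestimation, provided the observation intervals are independent.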
Future work has to extend the presented algorithm to state-space filtering such as the Kalman filter.
Although such an extension is already available through the resolution of the recursion, it is far from being
efficient. Besides, the use of asymmetric observation intervals has to be considered; they are more
appropriate than symmetric intervals for a realistic modeling of systematic errors due to, e. g., atmospheric
refraction.
Figure 3. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model I (black: imprecision
measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
Figure 4. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model II (black: imprecision
measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
Figure 5. Uncertainty propagation for the five model parameters over 100 epochs – imprecision model III (black: imprecision
measures, dark gray: standard deviations, light gray: parameter variations due to simulated observation values)
References
Alefeld, G. and J. Herzberger. Introduction to Interval Computations. Academic Press, Boston, San Diego & New York, 1983.
Gelb, A. Applied Optimal Estimation. MIT Press, Cambridge, MA, 1974.
Jaulin, L., E. Walter, O. Didrit and M. Kieffer. Applied Interval Analysis. Springer, Berlin, 2000.
Koch, K. R. Parameter Estimation and Hypothesis Testing in Linear Models. Springer, Berlin & New York, 1999.
Koch, K. R. Introduction to Bayesian Statistics. Springer, Berlin, 2007.
Kutterer, H. and I. Neumann. Fuzzy extensions in state-space filtering. Proc. ICOSSAR 2009, Taylor and Francis Group, London,
ISBN 978-0-415-47557-0, 1268-1275, 2009.
Möller, B. and M. Beer. Fuzzy Randomness. Springer, Berlin & New York, 2004.
Schön, S. and H. Kutterer. Using zonotopes for overestimation-free interval least-squares - some geodetic applications. Reliable
Computing, 11(2):137-155, 2005.
Schön, S. and H. Kutterer. Uncertainty in GPS networks due to remaining systematic errors: the interval approach. Journal of
Geodesy, 80(3):150-162, 2006.