MODEL BASED APPROACH FOR FAULT DETECTION AND PREDICTION USING PARTICLE FILTERS
A THESIS SUBMITTED TO THE GRADUATE DIVISION OF THE UNIVERSITY OF HAWAI'I IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
MASTER OF SCIENCE
IN
ELECTRICAL ENGINEERING
MAY 2008
By Manisha Mishra
Thesis Committee:
Vassilis L. Syrmos, Chairperson
N. T. Gaarder
Tep Dobry
We certify that we have read this thesis and that, in our opinion, it is satisfactory in scope and quality as a thesis for the degree of Master of Science
in Electrical Engineering.
This work is dedicated to my parents
Prof. G. C. Mishra and Mrs. Geetanjali Mishra
Acknowledgements
I am deeply thankful to Prof. Vassilis Syrmos, Professor, University of Hawaii at Manoa, for the help and support rendered by him throughout the Masters program. Without his valuable guidance and support this project could not have been completed. I thank the committee members Prof. Tep Dobry and Prof. N. T. Gaarder for their valuable suggestions and co-operation.
Special thanks to Estefan Ortiz, Xudong Wang and Ashish Babbar for their help
and suggestions.
I would also like to thank my lab mates Jin Wei, Siddharth Gujral, Hui Ou and
Anand Sharma. I also thank my friends Shruti Tiwari and Ramya Rajagopalan for their
constant moral support.
This thesis is dedicated to my parents, who have been a constant source of inspiration and encouragement.
Sincerely,
Manisha Mishra
Contents

Acknowledgements
List of Tables
List of Figures
Abstract

1 Introduction
  1.1 Motivation
  1.2 Research Objective
  1.3 Method Overview
  1.4 Contributions

2 Recursive Bayesian Solution
  2.1 Problem Statement
  2.2 Recursive Bayesian Formulations
    2.2.1 Prediction Stage
    2.2.2 Update Stage

3 Sequential Monte Carlo Methods: Particle Filters
  3.1 Particle Filters
  3.2 Sequential Importance Sampling
  3.3 Degeneracy Problem
  3.4 Sequential Importance Resampling

4 Fault Detection and Prediction using Particle Filters
  4.1 Fault Detection Algorithm
  4.2 Fault Prediction using Particle Filters

5 Simulation Results
  5.1 VTOL aircraft model
    5.1.1 Simulation results of VTOL model
  5.2 DC motor
    5.2.1 Simulation Results of DC motor model

6 Conclusion
  6.1 Conclusion
  6.2 Future work

Bibliography
List of Tables

2.1 State Space model parameters
2.2 Measurement model parameters
3.1 Sequential Importance Sampling
3.2 Sequential Importance Resampling
4.1 Detection Algorithm
4.2 Parameter Specification
5.1 VTOL model parameters
5.2 DC motor model parameters
List of Figures

2.1 Hidden Markov model of first order
3.1 Particle Filter - Samples/Particles from the prior PDF
3.2 Particle Filter - prior PDF expressed as a sum of particles and the corresponding weights
3.3 Particle Filter - Recursive Weight Expression
3.4 Particle Filter - Estimation of the posterior PDF using the prior information
4.1 Fault Detection and Prediction Block Diagram
4.2 Model States under nominal condition
4.3 State Estimation using SIR algorithm
4.4 Fault State Estimation
4.5 Negative Likelihood Plot
4.6 Missed Alarm Rate Plot
5.1 VTOL State and Observation
5.2 VTOL State Estimation (Nominal Condition)
5.3 VTOL Fault State Estimation
5.4 VTOL Negative Likelihood Plot
5.5 VTOL Threshold Estimation
5.6 DC Motor Model
5.7 DC Motor States
5.8 DC Motor State Estimates
5.9 DC Motor Negative Likelihood Plot
5.10 Lower and Upper Threshold Bounds for fault detection in DC motor model
5.11 DC Motor model Fault State plot
5.12 DC Motor model State Estimates under fault condition
5.13 DC Motor Negative Log Likelihood Plot
5.14 P-step ahead prediction of angular speed of DC Motor model
ABSTRACT
Fault detection and failure prediction for nonlinear non-Gaussian systems is an important issue from both an economic and a safety point of view. Most fault detection techniques assume the system model to be linear and the noise to be Gaussian. These linearization assumptions tend to suffer from poor detection and imprecise prediction. They may also lead to false alarms, which incur unnecessary economic expenditure.

This thesis applies the particle filter approach to fault detection and failure prediction in nonlinear non-Gaussian systems. A major advantage of this approach is that the complete probability distribution of the state estimates from the particle filter is utilized for fault detection and failure prediction.

Particle filtering methods represent and recursively generate an approximation of the posterior state probability density function. They are Sequential Monte Carlo methods based on a point-mass representation of probability densities, and in this thesis they are applied to a Vertical Take Off and Landing (VTOL) aircraft model and a DC motor model. Two variants of the particle filter, the Sequential Importance Sampling algorithm and the Sequential Importance Resampling algorithm, have been studied. The Sequential Importance Sampling algorithm suffers from the degeneracy problem, because of which the Sequential Importance Resampling technique is preferred. The system is represented in state-space form and the estimates are made according to the Sequential Importance Resampling algorithm. The decision rule for fault detection is evaluated using the likelihood of the estimation parameter over a sliding window. The threshold values for fault detection are set using a heuristic approach. A fault is said to be detected if the likelihood exceeds the expected threshold value. A p-step ahead prediction is done for the DC motor model after the fault has been detected, which is utilized to determine the remaining useful life of the model.
Chapter 1
Introduction
Fault detection and failure prediction is a crucial task from both an economic and a safety point of view. For nonlinear non-Gaussian systems, fault detection as well as failure prediction is a difficult problem, as most existing algorithms are meant for linear Gaussian systems only. Thus, when these algorithms are applied to nonlinear systems, certain assumptions are made which do not give accurate results. Various algorithms have been developed for nonlinear state estimation; one such approach is the particle filter method, which has been adopted for fault detection and failure prediction in this thesis.
1.1 Motivation
In the quest for better and smarter devices, a crucial area of focus is the continuous improvement and maintenance of system performance. The increasing complexity of sophisticated computer-controlled systems has made fault detection and failure prediction an inevitable component of system operation. A fault is any unwanted deviation in a system's parameters which degrades or interrupts the system's performance. The corrective actions, such as part replacements, incur unnecessary economic loss. Fault detection for dynamic systems has become a key issue, as the economic impact of reliability-related matters is steadily increasing. Moreover, it is an area of major concern wherever the demand for cost-effective operation of critical assets is escalating. Fault detection involves finding the fault in a system, which is indeed a critical task in any health management system [2],[16].
Failure occurs when a fault reaches a critical level and causes breakdown of the system components. These faults and failures are inevitable in any system because of the complex dynamics and life-limited components which make up a system. Thus, to attain success from both a financial and a safety point of view, there is a growing realization that the maintenance strategy of equipment must improve in order to avoid system failures. For long-term successful growth, significant amounts of expertise and time are needed to come up with system-specific proactive services, especially in predicting faults within the system [2]. Fault prediction is a natural extension of the fault detection problem. It aims to characterize the evolution of the detected incipient failure condition, thereby allowing the estimation of the remaining useful life for the affected subsystems or components [16].
1.2 Research Objective
Fault detection and failure prediction algorithms find a wide range of application in many areas, for example electro-mechanical systems, continuous-time manufacturing processes, structural damage analysis and hydraulic systems. Usually these systems are highly complex, nonlinear and affected by large-grain uncertainty. The detection and prediction tasks can be particularly difficult when the system under study is operating under real-time conditions. Most of the approaches currently available in the reliability arena involve intensive computations and the processing of large amounts of historical data [1]. Also, the results obtained do not necessarily include any information about the physics of the system. Thus, there is very little scope left for on-line updates in predicting the remaining useful life when the system is affected by a time-varying fault.

Learning paradigms, which are useful in the control field, are scarcely applied for prognostic purposes, thereby limiting the implementation of automated corrective schemes. Also, most of the algorithms are not capable of switching rapidly from a fault detection to a fault prediction module. The research work in this thesis intends to establish a general outline to deal with the problems of real-time fault detection and failure prediction through the utilization of particle filtering techniques. Particle filtering is an emerging methodology for sequential signal processing and state estimation, and it is appropriate when the system is nonlinear or in the presence of non-Gaussian process/observation noise [2]. To
accomplish state estimation for fault detection and failure prediction in a nonlinear non-Gaussian environment, this research has been divided into two phases.
The first phase is to implement an on-line particle-filtering-based framework for fault detection in the VTOL (Vertical Take Off and Landing) model [18],[19] and the DC motor model [20]. This approach intends to pinpoint the presence of a fault condition in a real-time scenario. It is assumed that a set of measurements and a characterization of the system behavior under nominal operational conditions are available. It is also expected that this scheme will provide the means for a swift transition between the fault detection and failure prediction applications.
The second research phase focuses on the use of a particle-filtering-based framework for on-line failure prediction in the DC motor model [20]. This implementation statistically estimates the remaining useful life of the motor affected by a fault condition after the fault has been detected; that is, it estimates the probability density function of the subsystem's remaining useful life. The outcome of the prediction module indicates the duration for which the system can operate before it fails completely.
1.3 Method Overview
Particle filtering is a set of powerful and versatile methods for sequential signal processing, also known as Sequential Monte Carlo methods. These simulation-based algorithms find a wide range of application in the fields of statistics and signal processing. Founded on the concept of Sequential Importance Sampling and the use of Bayesian theory, particle filtering is very suitable in the case where the system is nonlinear or in the presence of non-Gaussian process or observation noise, where the nonlinear nature is significant under fault conditions [2].

The system model is assumed to be known in probabilistic form. A discrete-time formulation of the problem is used, as it is suitable and widespread [1]. This thesis focuses on the state-space approach to modeling a dynamic system, as it is convenient for handling multivariate data. The state vector and the measurement vector enclose all information which describes the system under investigation. The system model equations are used to represent the evolution of the system state with time, and the measurements are assumed
to be available at discrete times. The measurement vector represents noisy observations, which are related to the state vector through the measurement model equation. The crux of the particle filter method is to approximate the conditional state probability distribution by a set of random samples, known as particles, with associated weights. For estimating the state values and making inferences about a dynamic system, at least two models are required: the state evolution model, which represents the evolution of the state with time, and the measurement model, which relates the noisy measurements to the state [1],[9].
The probabilistic state-space formulation and the requirement of updating the information on receipt are ideally suited to a Bayesian approach. The particle filter generates and updates the random samples using a recursive strategy. Such a recursive filter essentially consists of two stages: prediction and update. The prediction stage uses the system model to predict the state PDF forward for consecutive time intervals. As the state is usually subjected to unknown disturbances (such as noise), the predicted values are deformed, thereby spreading the state PDF. The update operation modifies the predicted PDF using the latest measurement. Updating the knowledge about the target state in the presence of extra information from the new data is accomplished using Bayes's theorem [1],[16].
Fault detection is done by comparing the actual system behavior, known from the measurements, with the behavior predicted by the model. The difference between the actual system behavior and the predicted behavior is termed the residual. Under fault conditions the residual is large and the likelihood of the particles takes a small value. The negative log likelihood plot indicates the fault as soon as it crosses the threshold value. Fault prediction is done using the weight distribution after the detection of the fault. The remaining useful life, that is, the time for which the system will function before complete failure, is determined when the prediction results exceed the threshold values for complete failure.
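As a concrete sketch of the detection rule, the snippet below evaluates a negative log likelihood of the residual over a sliding window and flags a fault when the windowed mean exceeds a threshold. The Gaussian residual model, the window length, the threshold and the injected fault are all invented for illustration; the thesis evaluates the likelihood from the particle weights rather than from a closed-form density.

```python
import numpy as np

def negative_log_likelihood(residual, noise_var):
    # Gaussian measurement-noise assumption, purely illustrative.
    return 0.5 * residual**2 / noise_var + 0.5 * np.log(2 * np.pi * noise_var)

def detect_fault(residuals, noise_var=0.04, window=10, threshold=3.0):
    """Return the first time index at which the windowed mean negative
    log likelihood exceeds the threshold, or None if no fault is flagged."""
    nll = negative_log_likelihood(np.asarray(residuals), noise_var)
    for k in range(window, len(nll) + 1):
        if nll[k - window:k].mean() > threshold:
            return k - 1
    return None

rng = np.random.default_rng(1)
res = rng.normal(0.0, 0.2, size=100)  # nominal residuals (measurement - prediction)
res[60:] += 1.0                       # injected fault: a bias appears at step 60
d = detect_fault(res)
print(d)
```

The sliding window trades detection delay against false alarms: a longer window smooths out nominal noise spikes but flags the fault a few steps after it first appears, as happens here.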
1.4 Contributions
Most state estimation problems for nonlinear, non-Gaussian systems require the evaluation of states which change over time, using a sequence of noisy measurements made on the system. Bayesian methods provide a good technique for solving the dynamic
state estimation problems [6]. For nonlinear systems, various linearization techniques have been used. Several state estimation methods, such as the Kalman filter approach, optimal nonlinear filters and grid-based methods, have been used for fault detection and failure prediction. The Kalman filter approach assumes that the posterior density at every time step is Gaussian and hence parameterized by a mean and covariance. If this assumption does not hold, then the Kalman filter does not provide an optimal solution. The grid-based method provides the optimal recursion of the filter density if the state space is discrete and consists of a finite number of states. The optimal solution exists only if the assumptions hold true [1],[4]. The suboptimal methods, such as the Extended Kalman Filter and approximate grid-based methods, are rarely used because of their computational complexity. Usually these techniques suffer from poor detection and imprecise prediction. Also, most of the efficient fault detection algorithms do not have a unified approach that can perform the transition from fault detection results to fault prediction modules [2].
Particle filters provide a novel approach for state estimation in nonlinear, non-Gaussian systems. They provide a precise framework for dynamic state estimation problems, since they construct the posterior Probability Density Function (PDF) of the state based on all available information, including the set of received measurements. The particle filter utilizes the complete probability distribution of the state estimates for fault detection and prediction, and it efficiently determines the fault in real time given the state-space model of the system and the real-time measurements. The optimal course of action can be determined from this PDF, as it includes the current information about the state vector. In the case of the linear Gaussian estimation problem, the required PDF remains Gaussian at every iteration of the filter; moreover, the problem becomes easier because the mean and the covariance of the distribution propagate in time according to the filter relations. However, in the nonlinear non-Gaussian case, there is no general analytic expression for the required PDF [6]. This probability density function embodies all available statistical information and forms the solution to the estimation problem. Some problems require an estimate every time a measurement is received; in such cases a recursive filter is a convenient solution. Particle filters are computationally feasible because they use a recursive technique for state estimation. With the recursive filtering approach the received data can be processed sequentially rather than as a batch. Thus, it is not necessary to store the complete data set
nor is it required to reprocess existing data when a new measurement becomes available. Also, particle filters allow information from different sources to be fused in a principled manner.

In Chapter 2 the state estimation problem is solved using the recursive Bayesian formulation. Assuming the prior information about the measurements to be given, the posterior PDF is calculated using the prediction and update stages [9]. The recursive Bayesian integrals are intractable, and thus the particle filtering approach is used to solve them.

In Chapter 3 the particle filter approach is described and its two variants, Sequential Importance Sampling and Sequential Importance Resampling, are discussed. Chapter 4 covers fault detection and failure prediction using particle filters. The overshoot of the negative likelihood of the model parameter beyond the threshold value indicates a fault in the system. The PDF at the instant when the fault is detected is used for p-step ahead prediction of the faulty system. The remaining useful life is determined when the prediction results exceed the threshold values for complete failure. In Chapter 5 the particle filter approach for state estimation is implemented on the Vertical Take Off and Landing model and the DC motor model, and the fault detection and prediction results are described. Chapter 6 concludes the work of the thesis and discusses future work.
Chapter 2
Recursive Bayesian Solution
The state estimation problem for nonlinear non-Gaussian models is solved using the recursive Bayesian formulation. The posterior probability density function is calculated via the prediction and update stages. The prior measurement and PDF are assumed to be given. As the Bayesian integrals are intractable, various optimal and suboptimal methods are used to find a solution to the integrals. One of the suboptimal techniques is the particle filtering method [1].
2.1 Problem Statement
The estimation problem requires the state evolution model and the measurement model equations to be known in probabilistic form [1],[16].
Let the state space model be given as:

x_k = f_k(x_{k-1}, v_{k-1})    (2.1.1)

where the symbols are listed in Table 2.1.

Table 2.1: State Space model parameters

Symbol    Description
x_k       State at time instant k
f_k       Nonlinear function of the state and the noise sequence
v_{k-1}   Process noise sequence
The measurement model is given as:

z_k = h_k(x_k, n_k)    (2.1.2)

where the symbols are listed in Table 2.2.

Table 2.2: Measurement model parameters

Symbol    Description
z_k       Measurement at time instant k
h_k       Nonlinear function of the measurement noise and the state
n_k       Measurement noise sequence

The measurements up to time k are known, which is expressed as:

z_{1:k} = {z_i, i = 1, ..., k}    (2.1.3)

The initial PDF p(x_0|z_0) ≡ p(x_0) of the state vector is also available, which is called the prior. The PDF at time instant k-1, p(x_{k-1}|z_{1:k-1}), is also available. The aim is to find the posterior PDF p(x_k|z_{1:k}).
Figure 2.1: Hidden Markov model of first order.
In Figure 2.1, x(t) represents the state of the system at time instant t. The measurements, or observation values, are denoted z(t). At any time instant t the state depends only on the value of the previous state at t-1, which is the Markov property.
The above state evolution model describes a Hidden Markov Model of first order. The states are hidden and the measurements are available to us. The estimates are further used to calculate the likelihood of the measurement values. The likelihood is a measure of the error variance, which in turn indicates a fault in the system. The remaining useful life can be calculated once the fault has been detected. The weight distribution at the time when the fault occurs is observed and is used to calculate the p-step ahead prediction of the faulty system. The remaining useful life of the system can be calculated by estimating the time within which the system parameters will exceed the predefined threshold values for failure. The measurement values are not available for the prediction purpose. Thus, the remaining useful life defines the span of time for which the system will function after the fault has been detected, until complete failure. To obtain the above objective, a recursive Bayesian approach is used for the state estimation in this thesis.
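To make the state-space and measurement equations concrete, the following sketch simulates a nonlinear non-Gaussian model of the form x_k = f_k(x_{k-1}, v_{k-1}), z_k = h_k(x_k, n_k). The particular f_k and h_k below are a standard benchmark from the particle-filtering literature, not the thesis's VTOL or DC-motor models, and the noise variances are likewise illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def f_k(x, v):
    # Hypothetical nonlinear state transition f_k(x_{k-1}, v_{k-1}).
    return 0.5 * x + 25.0 * x / (1.0 + x**2) + v

def h_k(x, n):
    # Hypothetical nonlinear measurement function h_k(x_k, n_k).
    return x**2 / 20.0 + n

T = 50
x = np.zeros(T)                          # hidden states
z = np.zeros(T)                          # noisy observations z_{1:k}
x[0] = rng.normal(0.0, 1.0)              # draw x_0 from the prior p(x_0)
z[0] = h_k(x[0], rng.normal(0.0, 1.0))
for k in range(1, T):
    x[k] = f_k(x[k - 1], rng.normal(0.0, np.sqrt(10.0)))  # process noise v_{k-1}
    z[k] = h_k(x[k], rng.normal(0.0, 1.0))                # measurement noise n_k
print(len(z), z[:3].round(2))
```

Only the sequence z is assumed observable; the filtering problem of the following sections is to recover p(x_k|z_{1:k}) from it.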
2.2 Recursive Bayesian Formulations
The dynamic state estimation problem within a general Bayesian formulation aims at a nonlinear filtering procedure that generates an estimate of the probability density function of the state based on the set of received measurements. A recursive strategy is used to update the estimation result, since such an estimate of the state vector is required almost every time measurement data are received. This strategy avoids the problem of enormous data storage, and the time needed to recalculate the whole state trajectory can also be avoided [1],[6],[9].

To solve the estimation problem we use the state-space model and the measurement model. The prior probability density function at time k-1 is assumed to be known. The posterior density at time instant k can then be calculated in two stages: prediction and update [6],[9].
2.2.1 Prediction Stage

The prediction stage involves using the state-space model to obtain the prior probability density function at time instant k via the Chapman-Kolmogorov equation [1]:

p(x_k|z_{1:k-1}) = ∫ p(x_k|x_{k-1}) p(x_{k-1}|z_{1:k-1}) dx_{k-1}    (2.2.1)

The above integral uses the fact that the state evolution process is a first-order Markov process, and thus the state probability given the prior states and the measurements can be written as [1]:

p(x_k|x_{k-1}, z_{1:k-1}) = p(x_k|x_{k-1})    (2.2.2)

The prediction integral in (2.2.1) can be calculated easily, as the first term is defined by the state evolution model and the second term is available to us as prior information.
2.2.2 Update Stage

At time instant k the measurement z_k becomes available, and the prior is updated according to Bayes's rule:

p(x_k|z_{1:k}) = p(z_k|x_k) p(x_k|z_{1:k-1}) / p(z_k|z_{1:k-1})    (2.2.3)

where the denominator is the normalizing constant given as:

p(z_k|z_{1:k-1}) = ∫ p(z_k|x_k) p(x_k|z_{1:k-1}) dx_k    (2.2.4)

Here p(z_k|x_k) is the likelihood function. The measurement at time k is used to modify the prior so as to estimate the posterior density of the current state. The solution to the above recursive integrals is intractable and cannot be determined analytically. Hence various optimal and suboptimal Bayesian methods are used to approximate it. Some of the approximate Bayesian solutions are:
1) Optimal Algorithms:
   a) Kalman Filters
   b) Grid Based Methods
2) Suboptimal Algorithms:
   a) Extended Kalman Filters
   b) Approximate Grid Based Methods
   c) Particle Filters
The Kalman filter assumes that the posterior density is Gaussian at every time step and hence parameterized by a mean and covariance [1]. However, if this assumption does not hold true, then the Kalman filter cannot give the best results. Also, grid-based methods perform well only under certain assumptions. Thus, the suboptimal algorithms are used when the assumptions do not hold true. The Extended Kalman filter uses a Taylor series expansion of the nonlinear function and is difficult to implement because of its complexity. The approximate grid-based methods are expensive to implement because of their high computation cost. Sequential Monte Carlo methods are also used to approximate the solution of the intractable recursive Bayesian integrals. The particle filter, a Sequential Monte Carlo method, is one such technique; it approximates the posterior PDF and represents it as a weighted sum of sample points called particles. This thesis focuses on the use of the particle filter technique for finding the solution to the intractable Bayesian integrals.
Chapter 3
Sequential Monte Carlo Methods:
Particle Filters
The particle filter approach is a Sequential Monte Carlo method for solving the recursive Bayesian integrals. These simulation-based methods sample from a sequence of probability distributions and approximate each distribution using random samples called particles. These particles are propagated over time using importance sampling and resampling mechanisms. As the number of particles goes to infinity, the convergence of this particle approximation towards the sequence of probability distributions can be ensured under very weak assumptions [1]. However, a restricted number of particles is preferred so as to avoid unnecessary computation. Two algorithms, Sequential Importance Sampling (SIS) and Sequential Importance Resampling (SIR), are described in this chapter. Particle filtering is an emerging and prevailing methodology for sequential signal processing which finds a wide range of application in various fields.
3.1 Particle Filters
Particle filtering is founded on the concept of Sequential Importance Sampling (SIS) and uses the notion of Bayesian theory. It is very suitable in the case where the system is nonlinear or in the presence of non-Gaussian process/observation noise, as in engines, gas turbines and gearboxes, where the nonlinear nature and ambiguity of the rotating machine is significant while operating under fault conditions [1],[5].
The particle filter approach allows information from multiple measurement sources to be fused in a principled manner, which is an attribute of decisive significance for fault detection and diagnostic purposes. The underlying principle of the methodology is to represent the required posterior density function by a set of random samples with associated weights and to compute the estimates based on these samples and weights. As the number of samples becomes large, this Monte Carlo characterization becomes an equivalent representation to the usual functional description of the posterior PDF. Particles can be easily generated and recursively updated given a nonlinear process model, a measurement model, a set of available measurements and an initial estimate of the state probability density function. The types of Sequential Monte Carlo methods discussed in this thesis are Sequential Importance Sampling (SIS) and Sequential Importance Resampling (SIR) [1],[6],[7].
3.2 Sequential Importance Sampling
Sequential Importance Sampling is also known as bootstrap filtering or the condensation algorithm. It is a technique for implementing a recursive Bayesian filter by Monte Carlo simulations [1],[10].
Let {x_{0:k}^i, w_k^i}_{i=1}^{N_s} denote a random measure which characterizes the posterior PDF, where {x_{0:k}^i} is the set of support points and {w_k^i} are the associated weights. The states up to time instant k are available. The weights are normalized such that Σ_{i=1}^{N_s} w_k^i = 1. The posterior density can then be approximated as:

p(x_{0:k}|z_{1:k}) ≈ Σ_{i=1}^{N_s} w_k^i δ(x_{0:k} - x_{0:k}^i)    (3.2.1)

The weights are chosen according to the importance sampling technique. Importance sampling aims to sample the distribution in the region of importance in order to achieve computational efficiency. The essence of importance sampling is visible in high-dimensional spaces, where the data are relatively sparse in the whole data set. It chooses a proposal distribution instead of the true distribution, which is hard to sample [4]. Thus, suppose p(x) is proportional to π(x), a probability density from which it is difficult to draw samples but for which π(x^i) can be calculated up to a proportionality constant [1]. In addition, let x^i ~ q(·), i = 1, 2, ..., N_s, be samples that are easily generated from a proposal q(·) known as the importance density. The importance density is a kernel function which is used to express the importance weights in a recursive form. The weights can thus be represented as [1],[4]:

w^i ∝ π(x^i) / q(x^i)    (3.2.2)

where q(x^i) is the importance density. The posterior density becomes:

p(x) ≈ Σ_{i=1}^{N_s} w^i δ(x - x^i)    (3.2.3)

If the samples x_{0:k}^i were drawn from an importance density q(x_{0:k}|z_{1:k}), then the weights can be defined as:

w_k^i ∝ p(x_{0:k}^i|z_{1:k}) / q(x_{0:k}^i|z_{1:k})    (3.2.4)

If the importance density can be chosen such that it factorizes as:

q(x_{0:k}|z_{1:k}) = q(x_k|x_{0:k-1}, z_{1:k}) q(x_{0:k-1}|z_{1:k-1})    (3.2.5)

then a recursive form for calculating the posterior density can be obtained. Applying Bayes's rule and the Markov structure of the model:

p(x_{0:k}|z_{1:k}) = p(z_k|x_{0:k}, z_{1:k-1}) p(x_{0:k}|z_{1:k-1}) / p(z_k|z_{1:k-1})    (3.2.6)

= p(z_k|x_{0:k}, z_{1:k-1}) p(x_k|x_{0:k-1}, z_{1:k-1}) p(x_{0:k-1}|z_{1:k-1}) / p(z_k|z_{1:k-1})    (3.2.7)

= p(z_k|x_k) p(x_k|x_{k-1}) p(x_{0:k-1}|z_{1:k-1}) / p(z_k|z_{1:k-1})    (3.2.8)

∝ p(z_k|x_k) p(x_k|x_{k-1}) p(x_{0:k-1}|z_{1:k-1})    (3.2.9)

The weight update equation can then be calculated as:

w_k^i ∝ [p(z_k|x_k^i) p(x_k^i|x_{k-1}^i) p(x_{0:k-1}^i|z_{1:k-1})] / [q(x_k^i|x_{0:k-1}^i, z_{1:k}) q(x_{0:k-1}^i|z_{1:k-1})]
     = w_{k-1}^i p(z_k|x_k^i) p(x_k^i|x_{k-1}^i) / q(x_k^i|x_{0:k-1}^i, z_{1:k})    (3.2.10)

It is assumed that:

q(x_k|x_{0:k-1}, z_{1:k}) = q(x_k|x_{k-1}, z_k)    (3.2.11)

as the state evolution is a first-order Markov process. Thus the weights become:

w_k^i ∝ w_{k-1}^i p(z_k|x_k^i) p(x_k^i|x_{k-1}^i) / q(x_k^i|x_{k-1}^i, z_k)    (3.2.12)

and the posterior density becomes:

p(x_k|z_{1:k}) ≈ Σ_{i=1}^{N_s} w_k^i δ(x_k - x_k^i)    (3.2.13)

The choice of the importance density function is critical to the performance of the particle filter scheme and hence should be considered as a parameter in the filter design. The importance density must be chosen in such a way that it minimizes the variance of the weights as time evolves.
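The weight-update recursion above can be sketched in code as follows. This is an illustrative SIS step, not the thesis's implementation: the linear-drift transition and the Gaussian likelihood standing in for p(z_k|x_k) are invented, and the transition prior is used as the importance density q(x_k|x_{k-1}, z_k), so by (3.2.12) the update reduces to w_k ∝ w_{k-1} p(z_k|x_k).

```python
import numpy as np

rng = np.random.default_rng(7)

def f_k(x, v):
    # Hypothetical linear-drift state transition.
    return 0.8 * x + v

def likelihood(z, x, meas_var=0.25):
    # Gaussian likelihood p(z_k | x_k^i), an illustrative choice.
    return np.exp(-0.5 * (z - x)**2 / meas_var) / np.sqrt(2 * np.pi * meas_var)

def sis_step(particles, weights, z):
    """One Sequential Importance Sampling step using the transition prior
    p(x_k|x_{k-1}) as the importance density, so w_k ∝ w_{k-1} p(z_k|x_k)."""
    v = rng.normal(0.0, 0.3, size=particles.shape)
    particles = f_k(particles, v)             # sample x_k^i from the importance density
    weights = weights * likelihood(z, particles)
    weights /= weights.sum()                  # normalize so that sum_i w_k^i = 1
    return particles, weights

N = 500
particles = rng.normal(0.0, 1.0, size=N)      # samples from the prior p(x_0)
weights = np.ones(N) / N
for z in [0.9, 0.7, 0.6]:                     # a short synthetic measurement record
    particles, weights = sis_step(particles, weights, z)
estimate = np.sum(weights * particles)        # posterior mean via the weighted sum
print(round(float(estimate), 2))
```

Using the transition prior as the importance density is the simplest choice; it ignores the current measurement when proposing particles, which is exactly what makes the degeneracy problem of the next section appear so quickly.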
Figure 3.1: Particle Filter - Samples/Particles from the prior PDF.
In Figure 3.1 [13] the prior information is assumed to be given up to time instant T. The model state function and the measurement function are plotted with respect to time, and the prior PDF is also given. Particles, which are possible realizations of the state of the process, are chosen randomly. The number of particles should be chosen cautiously so as to avoid unnecessary computation; on the other hand, reducing the number of samples may lead to incorrect estimates. Thus, in choosing the sample size there is a trade-off between the computational cost and the accuracy of the estimates.
Figure 3.2: Particle Filter - prior PDF expressed as a sum of particles and the corresponding weights.
In Figure 3.2 [13] each randomly chosen particle is associated with a weight. The particles together with their weights represent a sampled version of the prior PDF. The weights are calculated using an importance density function chosen so that the variance of the weights over time is minimized.
Figure 3.3: Particle Filter - Recursive Weight Expression.
In Figure 3.3 [13] the weights for the next time instant are calculated recursively. The weights are evaluated according to the importance density, and their propagation with time is analyzed. The recursive strategy helps to reduce both the computational cost and the data storage cost.
Figure 3.4: Particle Filter - Estimation of the posterior PDF using the prior information.
In Figure 3.4 [13] the estimate of the PDF parameters is made using the model equations. The actual measurement values are distorted by noise, and thus the estimates need to be updated; the update step corrects the prediction according to the measurements received. The pseudo-code for Sequential Importance Sampling is described in Table 3.1.
Table 3.1: Sequential Importance Sampling

1. Initialization - Assume a set of N random samples (particles) {x_{k-1}^i : i = 1, ..., N} drawn from the conditional PDF p(x_{k-1}|z_{1:k-1}).

2. Prediction - Sample N values {v_{k-1}^i : i = 1, ..., N} from the PDF of the process noise v_{k-1}. Generate new swarm points {x_{k|k-1}^i : i = 1, ..., N} using x_{k|k-1}^i = f(x_{k-1}^i, v_{k-1}^i).

3. Update - On receipt of the measurement z_k, assign each x_{k|k-1}^i a weight, calculated as:
   w_k^i = p(z_k|x_{k|k-1}^i) / \sum_{j=1}^{N} p(z_k|x_{k|k-1}^j)
such that \sum_{i=1}^{N} w_k^i = 1. This defines a discrete distribution that assigns probability mass w_k^i to element x_{k|k-1}^i:
   p(x_k|z_k) \approx \sum_{i=1}^{N} w_k^i \, \delta(x_k - x_{k|k-1}^i)
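The steps of Table 3.1 can be sketched in Python (NumPy) as follows. The function names, the scalar state, the Gaussian noise assumptions, and the use of the prior as importance density are illustrative choices for this sketch, not details taken from the thesis:

```python
import numpy as np

def sis_step(particles, weights, z, f, h, q_var, r_var, rng):
    """One Sequential Importance Sampling step using the prior as
    importance density, so the weight update reduces to the likelihood."""
    # Prediction: propagate each particle through the state model plus noise.
    particles = f(particles) + rng.normal(0.0, np.sqrt(q_var), particles.shape)
    # Update: weight each particle by the Gaussian measurement likelihood.
    residual = z - h(particles)
    likelihood = np.exp(-0.5 * residual**2 / r_var) / np.sqrt(2 * np.pi * r_var)
    weights = weights * likelihood
    weights /= weights.sum()          # normalize so the weights sum to one
    return particles, weights

# Tiny usage example with an illustrative linear model.
rng = np.random.default_rng(0)
N = 500
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)
particles, weights = sis_step(particles, weights, z=0.8,
                              f=lambda x: 0.9 * x, h=lambda x: x,
                              q_var=1.0, r_var=0.1, rng=rng)
estimate = np.sum(weights * particles)   # posterior mean estimate
```

Without a resampling step this loop degenerates over time, which is exactly the problem discussed in Section 3.3.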
3.3 Degeneracy Problem
One of the major difficulties encountered in the implementation of SIS particle filters is the degeneracy problem in the particle population. The degeneracy phenomenon consists of the fact that, as the algorithm evolves in time, the weight variances increase and the importance weight distribution becomes progressively more skewed, to the point where (after a few iterations) all but one particle have negligible weight [1],[8],[10],[17]. Thus, with time, the estimation of the target distribution becomes very poor, and the computational resources used to update the negligible-weight particles are wasted. The degeneracy of the particles is related to the variance of the importance weights, and is therefore measured using the effective sample size N_eff:

N_{eff} = \frac{N}{1 + Var(w_k^{*i})}    (3.3.1)
The above expression cannot be evaluated exactly, so an estimate is used, taken as the inverse of the sum of the squared weights:

\hat{N}_{eff} = \frac{1}{\sum_{i=1}^{N} (w_k^i)^2}    (3.3.2)
Whenever the estimated N_eff is significantly less than N, it indicates severe degeneracy. The degeneracy problem can be addressed in two ways: a good choice of importance density, and a resampling technique. These techniques are explained below.
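In code, the degeneracy check of (3.3.2) amounts to a one-liner; the weight vectors here are illustrative:

```python
import numpy as np

def effective_sample_size(weights):
    """Estimate N_eff = 1 / sum_i (w_i)^2 for normalized weights."""
    return 1.0 / np.sum(np.asarray(weights) ** 2)

w_uniform = np.full(100, 0.01)            # evenly spread weights
w_skewed = np.zeros(100)
w_skewed[0] = 1.0                          # fully degenerate population
print(effective_sample_size(w_uniform))    # ~100: no degeneracy
print(effective_sample_size(w_skewed))     # 1.0: severe degeneracy
# A common rule is to trigger resampling whenever N_eff drops below
# some threshold, e.g. N/2 (the specific threshold is a design choice).
```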
Good Choice of Importance Density: To limit the degeneracy problem, the importance density which minimizes the variance of the importance weights is chosen [8],[9],[14]. The variance of the weights, conditional on x_{k-1}^i and z_{1:k}, can be given as:

Var[w_k^{*i}] = (w_{k-1}^i)^2 \left[ \int \frac{\left(p(z_k|x_k)\,p(x_k|x_{k-1}^i)\right)^2}{q(x_k|x_{k-1}^i, z_k)}\,dx_k - p(z_k|x_{k-1}^i)^2 \right]    (3.3.3)

This variance becomes zero when:

q(x_k|x_{k-1}^i, z_k) = p(x_k|x_{k-1}^i, z_k)    (3.3.4)
Since sampling from this optimal density is rarely tractable, in practice the importance density is chosen as the prior:

q(x_k|x_{k-1}^i, z_k) = p(x_k|x_{k-1}^i)    (3.3.5)

The weight equations then reduce to:

w_k^i \propto w_{k-1}^i \, p(z_k|x_k^i)    (3.3.6)

This is the most common choice of importance density, as it is simple to implement.
Resampling Technique: The second method by which the effect of degeneracy can be reduced is to apply a resampling technique whenever significant degeneracy is observed. The basic idea of resampling is to eliminate particles which have small weights and to concentrate on particles with large weights [1],[8]. The resampling step involves generating a new set of samples {x_k^{i*}} by resampling N_s times from the approximate discrete representation of p(x_k|z_{1:k}) given by:

p(x_k|z_{1:k}) \approx \sum_{i=1}^{N_s} w_k^i \, \delta(x_k - x_k^i)    (3.3.7)

so that

Pr(x_k^{i*} = x_k^j) = w_k^j    (3.3.8)
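A minimal sketch of this step using multinomial resampling, one of several standard schemes (the thesis does not specify which variant is used):

```python
import numpy as np

def multinomial_resample(particles, weights, rng):
    """Draw N new particles with replacement, Pr(pick i) = w_i,
    then reset all weights to 1/N."""
    N = len(particles)
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)

rng = np.random.default_rng(1)
particles = np.array([0.0, 1.0, 2.0, 3.0])
weights = np.array([0.7, 0.1, 0.1, 0.1])
new_particles, new_weights = multinomial_resample(particles, weights, rng)
# Typically most new particles are copies of the heavily weighted 0.0;
# the exact counts depend on the random draws.
```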
3.4 Sequential Importance Resampling

The Sequential Importance Resampling (SIR) algorithm incorporates a resampling step which helps to reduce the degeneracy problem. The SIR algorithm is used for state estimation in the VTOL model and the DC motor model. The pseudo-code for the Sequential Importance Resampling algorithm is described in Table 3.2.
Table 3.2: Sequential Importance Resampling

1. Initialization - Assume a set of N random samples (particles) {x_{k-1}^i : i = 1, ..., N} drawn from the conditional PDF p(x_{k-1}|z_{1:k-1}).

2. Prediction - Sample N values {v_{k-1}^i : i = 1, ..., N} from the PDF of the process noise v_{k-1}. Generate new swarm points {x_{k|k-1}^i : i = 1, ..., N} using x_{k|k-1}^i = f(x_{k-1}^i, v_{k-1}^i).

3. Update - On receipt of the measurement z_k, assign each x_{k|k-1}^i a weight, calculated as:
   w_k^i = p(z_k|x_{k|k-1}^i) / \sum_{j=1}^{N} p(z_k|x_{k|k-1}^j)
such that \sum_{i=1}^{N} w_k^i = 1. This defines a discrete distribution that assigns probability mass w_k^i to element x_{k|k-1}^i:
   p(x_k|z_k) \approx \sum_{i=1}^{N} w_k^i \, \delta(x_k - x_{k|k-1}^i)
With the prior as importance density, q(x_k|x_{k-1}^i, z_k) = p(x_k|x_{k-1}^i), the weights are proportional to the likelihood function, and as N tends to infinity the discrete distribution converges weakly to the true posterior distribution.

4. Resampling - Resample independently N times from the discrete distribution
   p(x_k|z_k) = p(x_k|x_{k-1}, z_k) \propto p(z_k|x_k) \, p(x_k|x_{k-1})
The resulting particles {x_k^i : i = 1, ..., N}, which satisfy Pr{x_k^i = x_{k|k-1}^j} = w_k^j, form an approximate sample from the PDF p(x_k|z_k).

5. Repeat steps 2 to 4 at each time k.

6. p-step ahead prediction: From the approximation of the filtering distribution,
   p(x_{k+p}|z_k) = \int p(x_k|z_k) \left[ \prod_{j=k+1}^{k+p} p(x_j|x_{j-1}) \right] dx_{k:k+p-1}
   \hat{p}(x_{k+p}|z_k) = \sum_{i=1}^{N} w_k^i \int p(x_{k+1}|x_{k|k-1}^i) \prod_{j=k+2}^{k+p} p(x_j|x_{j-1}) \, dx_{k+1:k+p-1}
Chapter 4
Fault Detection and Prediction using
Particle Filters
In this chapter the particle filter is used for state estimation, fault detection and failure prediction. The state estimation is done using the Sequential Importance Resampling technique. The fault detection is done using the log-likelihood estimate of the model parameters. Under fault conditions the error is large and the likelihood takes a small value; the negative log-likelihood plot indicates a fault in the system as soon as it shoots above the threshold value. The threshold is set by a heuristic method. Fault prediction is done using the weight distribution after the detection of a fault. The remaining useful life, that is, the duration for which the system will function before complete failure, is determined when the prediction result exceeds the threshold value for complete failure.
The block diagram representing the fault detection and prediction module is given below:

Figure 4.1: Fault Detection and Prediction Block Diagram.
4.1 Fault Detection Algorithm
In the case of fault detection and identification in linear systems, the faults are usually modeled as additive terms in the state space model. This approach is usually applicable for sensor and actuator faults. However, for faults in nonlinear systems an alternative approach is followed: the fault is modeled as changes in the system parameters, which are reflected as changes in the state transition function or the measurement model function. Also, using the complete PDF of the system state for fault detection is more advantageous than using approximations. The detection procedure essentially uses the particle filter for state estimation. The crux of the method is the decision rule for fault detection, which is constructed using the likelihood of the system parameter, computed by utilizing the complete state PDF information represented by a swarm of particles from the particle filter. The state estimates are made using the state evolution model [3]. Prediction stage:

x_k(i) = f(x_{k-1}(i), v_{k-1}(i)), \quad i = 1, \ldots, N    (4.1.1)

These predicted states are used to compute the estimated measurement values:

\hat{z}_k(i) = h(x_k(i))    (4.1.2)

The measurements up to time instant k are assumed to be given:

Z_k = \{z_1, z_2, \ldots, z_k\}    (4.1.3)

The residual, or innovation, is given as:

r_k(i) = z_k - \hat{z}_k(i)    (4.1.4)

The likelihood of the parameter \theta of the model can be expressed by using the conditional distribution:

L_k(\theta) = p(Z_k|\theta)    (4.1.5)
Table 4.1: Detection Algorithm

1. State prediction: A set of N samples {x_k(i) : i = 1, 2, ..., N} is generated as in the particle filter prediction step.

2. Likelihood evaluation: The residuals of the prior particles are evaluated with the help of the output measurement equation and the measurement z_k received. These residuals are used to calculate the likelihood of the particles:
   l_k(i) = p(z_k|x_k(i), \theta), where i = 1, 2, ..., N

3. Fault detection: The predicted samples can be considered as samples from the PDF p(x_j|Z_{j-1}, \theta); thus the individual terms in the decision function can be evaluated through Monte Carlo integration:
   p(z_j|Z_{j-1}, \theta) = \int p(z_j|x_j, \theta)\,p(x_j|Z_{j-1}, \theta)\,dx_j \approx \frac{1}{N}\sum_{i=1}^{N} p(z_j|x_j(i), \theta) = \frac{1}{N}\sum_{i=1}^{N} l_j(i)

4. State update: The weights q_k(i) for the samples {x_k(i) : i = 1, 2, ..., N} are calculated as in the particle filter update step:
   q_k(i) = l_k(i) / \sum_{j=1}^{N} l_k(j), \quad i = 1, 2, ..., N

5. Resample: Samples {x_k(i) : i = 1, 2, ..., N} are obtained by resampling as in the particle filter resample step.

6. The steps are repeated recursively for each time k.
The likelihood factorizes over time as:

L_k(\theta) = \prod_{j=1}^{k} p(z_j|z_1, \ldots, z_{j-1}, \theta)    (4.1.6)

= \prod_{j=1}^{k} p(z_j|Z_{j-1}, \theta)    (4.1.7)

The decision function for fault detection is chosen as the sum of the log-likelihood of the system parameter \theta of the system model over a 'sliding window' of width M, which is computed as:

d_k = \sum_{j=k-M+1}^{k} \ln\left(p(z_j|Z_{j-1}, \theta)\right)    (4.1.8)

The computation of the decision function in the above expression is carried out by reusing the likelihood of each prior particle computed during particle filtering. The detection algorithm using particle filters is summarized in Table 4.1.
The likelihood of the prior particles provides a measure of how well the particle filter, which is based on the nominal system model, can predict the output z_k. If the system model is fault-free, the prediction error should be small, resulting only from the intrinsic uncertainty in the system model, and the likelihood l_k(i) = p(z_k|x_k(i), \theta) will take a relatively large value [3]. However, if a fault is present in the system, the prediction error will increase. This will include errors due to the fact that the particle filter is based on an incorrect system model, and the likelihood l_k(i) will take a relatively small value. When the measurement noise is additive and zero-mean Gaussian with covariance matrix V_k, the output equation of the system is expressed as:

z_k = h_k(x_k) + v_k    (4.1.9)

and the likelihood l_k(i) is computed as:

p(z_k|x_k(i)) = \frac{1}{\sqrt{(2\pi)^m \det(V_k)}} \exp\left\{-\frac{1}{2}\,(r_k^{(i)})^T V_k^{-1} r_k^{(i)}\right\}    (4.1.10)

where r_k^{(i)} = z_k - h_k(x_k(i)) is the prediction error based on the i-th particle.
The critical term in the above equation is (r_k^{(i)})^T V_k^{-1} r_k^{(i)}, the square of the prediction error normalized by its covariance, based on the i-th particle. If the critical term is large, which implies that the likelihood l_k(i) takes a small value, it can be concluded that the error is due to a system fault. However, if the critical term is small, then the system is said to be fault-free. The decision function d_k, the sum of the log-likelihood over a 'sliding window', makes the algorithm robust in fault detection, as more observations are included in the decision. The pseudo-code for the fault detection algorithm is given in Table 4.1. The sample size N and threshold h should be chosen carefully so as to avoid excess computation and missed alarms. The sample size N depends on the dimension of the state, or more precisely, the dimension of the system noise [3]. Of the various methods used to determine the sample size and threshold value, simulation-based methods are used most often. The threshold parameter is chosen meticulously, as there is a trade-off between minimizing the false alarm rate and minimizing the missed alarm rate. The balance between the missed alarm rate and the false alarm rate depends on the criticality of the system [3]. The sample size is chosen for the nominal system. The log-likelihood of the system model becomes stable after an initial period of uncertainty, and the threshold values can then be determined using a heuristic approach.
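The sliding-window decision function of (4.1.8), with the Monte Carlo estimate of p(z_k|Z_{k-1}) from Table 4.1, can be sketched as follows; the class and function names are illustrative, and a scalar Gaussian measurement model is assumed:

```python
import numpy as np
from collections import deque

def gaussian_likelihoods(z, predicted_particles, h, r_var):
    """l_k(i) = p(z_k | x_k(i)): scalar Gaussian measurement likelihood
    of each prior particle's predicted output."""
    residual = z - h(predicted_particles)
    return np.exp(-0.5 * residual**2 / r_var) / np.sqrt(2 * np.pi * r_var)

class SlidingWindowDetector:
    """Negative sum of log p(z_j | Z_{j-1}) over a window of width M;
    an alarm is raised when it exceeds a heuristically chosen threshold."""
    def __init__(self, M, threshold):
        self.window = deque(maxlen=M)   # keeps only the last M terms
        self.threshold = threshold

    def step(self, likelihoods):
        # Monte Carlo estimate: p(z_k|Z_{k-1}) ~ (1/N) * sum_i l_k(i).
        # The tiny constant guards against log(0) when all particles miss.
        self.window.append(np.log(np.mean(likelihoods) + 1e-300))
        return -sum(self.window) > self.threshold   # True = fault alarm
```

Feeding it large likelihoods keeps the decision function low; feeding it near-zero likelihoods (a poorly predicted output) pushes it above the threshold.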
4.2 Fault Prediction using Particle Filters
Particle filters are used as primary tools for state estimation in nonlinear, non-Gaussian processes. They allow managing the uncertainty which is inherent in long-term prediction problems. Hence, it is possible to obtain the expectation of the growth in the fault dimension and also to establish a consistent methodology for the generation of a PDF associated with the prediction. Prognosis is a problem that goes beyond the scope of filtering applications, as it involves future time horizons. It can be defined as the procedure whereby long-term predictions, which describe the evolution in time of a fault indicator, are generated with the purpose of estimating the remaining useful life of a failing component [2]. Prognosis aims to project the current system condition forward in time using a state dynamic model in the absence of future measurements. As it entails large-grain uncertainty, an accurate and precise prognosis scheme must consider critical state variables (such as crack length or corrosion pitting) as random variables with associated probability distribution vectors. After the probability distribution of the failure is estimated, other important prognosis attributes such as confidence intervals can be calculated. These facts suggest a possible solution to the prognosis problem based on recursive Bayesian estimation techniques that combine both the information from fault growth models and on-line data obtained from sensors monitoring key fault parameters [5].

Failure prognosis plays an important role in achieving reliable and cost-effective operation. Undoubtedly, this is of great interest in many industrial processes, including mechanical systems (e.g. automotive and aircraft), power systems, continuous-time processes, discrete-time processes, etc. There are several approaches to the prognosis of faults. However, only a few of them offer appropriate tools for real-time estimation of the RUL as a continuous function of time.
Failure prognosis is based on both an accurate estimation of the current state and a model describing the fault progression. It is beneficial if the failure is detected and isolated at an early stage of the fault, as it can then be assumed that measurement data will be available for a certain time window, allowing corrective measures to be taken. At the end of the observation window, the prediction outcome is passed on to the user, as additional adjustments are no longer feasible since corrective action must be taken to avoid a major failure.
A particle filtering based algorithm for prognosis requires a procedure that can project the current particle population forward in time in the absence of new observations, adjusting the weights if required. Also, since the main source of uncertainty is the fact that both the process and measurement models (including noise definitions and statistics) are subject to errors, it is not reasonable to expect an improvement in prognosis accuracy only by perfecting the state and model parameter estimation technique. A two-level procedure has been adopted to solve the above issues [2]. This procedure intends to reduce the uncertainty associated with long-term predictions by using the current state PDF estimate, the process noise model, and a record of corrections made to previously computed predictions.
p-step ahead predictions are generated using the a priori estimate, adjusting their associated probabilities according to the noise model structure. A second prognosis level uses these predictions and the definition of critical thresholds to estimate the Remaining Useful Life (RUL) PDF, also referred to as the Time-To-Failure (TTF) PDF [2].
A p-step ahead prediction is evaluated using the state evolution model as:

p(x_{k+p}|z_{1:k}) = \int p(x_k|z_{1:k}) \prod_{j=k+1}^{k+p} p(x_j|x_{j-1}) \, dx_{k:k+p-1}    (4.2.1)

\approx \sum_{i=1}^{N} w_k^{(i)} \int \cdots \int p(x_{k+1}|x_k^{(i)}) \prod_{j=k+2}^{k+p} p(x_j|x_{j-1}) \, dx_{k+1:k+p-1}    (4.2.2)
Each of these predicted trajectories takes a particle filter-based realization of the state as an initial condition, together with some reasonable assumptions about the future operational regime, such as an expected loading profile for the machinery under analysis. The statistical information contained in all the trajectories is summarized through the definition of hazardous thresholds for the system under analysis [2]. The evaluation of these integrals may be difficult or may require significant computational effort, even when a particle filter algorithm is used to approximate the state PDF at subsequent time instants. In order to simplify and solve this problem, different approaches are adopted; one of them is explained in detail below.
Taking successive expectations of the model update equation indicates the predicted evolution in time of the particles:

\hat{x}_{k+l}^{(i)} = E\left[f_{k+l}\left(\hat{x}_{k+l-1}^{(i)}, v_{k+l}\right)\right]    (4.2.3)

Since noise and process nonlinearities can change the shape of the PDF, the weight of every particle should be modified at each prediction step. However, as the update step is essential to the prediction problem, it cannot depend on the acquisition of new measurements. This is solved by adopting a procedure based on the use of the process noise model.
The procedure is as follows: consider the predicted conditional PDF which describes the state distribution at the future time instant k + l (l = 1, ..., p) when the particle \hat{x}_{k+l-1}^{(i)} is used as the initial condition. Assuming that the initial weights are a good representation of the current state PDF, it is then possible to approximate the predicted state PDF at time k + l by using the law of total probabilities and the particle weights at time k + l - 1, as shown in [2]:

\hat{p}(x_{k+l}|z_{1:k}) \approx \sum_{i=1}^{N} w_{k+l-1}^{(i)} \, p\left(x_{k+l} \,\middle|\, \hat{x}_{k+l-1}^{(i)}\right)    (4.2.4)
The second prognosis level involves estimating the remaining useful life PDF of the system. The long-term predictions and the knowledge of the critical threshold values are used to determine the time to failure. The critical thresholds, also known as hazard zones, are statistically determined on the basis of historical failure data. By applying the law of total probabilities, the Time-To-Failure probability can be evaluated as:

\hat{p}_{TTF}(ttf) = \sum_{i=1}^{N} \Pr\{H_{lb} \le \hat{x}_{ttf}^{(i)} \le H_{ub}\} \, w_{ttf}^{(i)}    (4.2.5)

where H_{lb} and H_{ub} are the lower and upper hazard bounds, respectively. The intercept of the long-term prediction with the hazard zone determines the confidence interval within which the failure is said to occur.
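Equation (4.2.5) reduces to a weighted count of predicted particles inside the hazard zone; a sketch with illustrative numbers:

```python
import numpy as np

def ttf_probability(predicted_particles, weights, h_lb, h_ub):
    """P_TTF at one future time: the weighted fraction of predicted
    particles lying inside the hazard zone [h_lb, h_ub]."""
    in_hazard = (predicted_particles >= h_lb) & (predicted_particles <= h_ub)
    return np.sum(weights[in_hazard])

# Four predicted particles with uniform weights; two fall in the zone.
particles = np.array([4.0, 5.5, 6.2, 7.1])
weights = np.array([0.25, 0.25, 0.25, 0.25])
print(ttf_probability(particles, weights, h_lb=5.0, h_ub=7.0))  # 0.5
```

Evaluating this at each future step of the p-step ahead prediction yields the TTF PDF over time.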
In order to understand the implementation of the particle filter approach for fault detection and prediction, the above-mentioned algorithms have been implemented on a nonlinear non-Gaussian model [1],[6],[21],[22]. The fault detection results have been replicated here. The model in state space format is given as:

x_k = f_k(x_{k-1}, k) + v_{k-1}    (4.2.6)

z_k = h_k(x_k) + w_k    (4.2.7)

where

f_k(x_{k-1}, k) = \frac{x_{k-1}}{2} + \frac{25\,x_{k-1}}{1 + x_{k-1}^2} + 8\cos(1.2k)    (4.2.8)

for the nominal case and

f_k(x_{k-1}, k) = \frac{x_{k-1}}{2} + \frac{12.5\,x_{k-1}}{1 + x_{k-1}^2} + 8\cos(1.2k)    (4.2.9)

for the fault case. The parameter specification is given in Table 4.2:

Table 4.2: Parameter Specification

Value   Description
0.1     Measurement Noise Variance
1       Process Noise Variance
50      Number of Time Steps
500     Number of Samples
The state estimates, mean error and log-likelihood are plotted with respect to time. The system states are estimated under both fault-free and fault conditions. One of the coefficient values in the state evolution equation is changed so as to introduce a fault in the system.
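The benchmark model and the injected fault can be simulated as below. The quadratic measurement function h(x) = x^2/20 commonly used with this benchmark in [1] is assumed here, since the text does not reproduce it:

```python
import numpy as np

def simulate(T=50, q_var=1.0, r_var=0.1, fault_at=None, seed=0):
    """Simulate the benchmark model; from step `fault_at` onward the
    growth coefficient drops from 25 to 12.5, emulating the injected fault."""
    rng = np.random.default_rng(seed)
    x = 0.1                      # illustrative initial state
    xs, zs = [], []
    for k in range(1, T + 1):
        c = 12.5 if (fault_at is not None and k >= fault_at) else 25.0
        x = x / 2 + c * x / (1 + x**2) + 8 * np.cos(1.2 * k) \
            + rng.normal(0, np.sqrt(q_var))
        z = x**2 / 20 + rng.normal(0, np.sqrt(r_var))  # assumed h(x) = x^2/20
        xs.append(x)
        zs.append(z)
    return np.array(xs), np.array(zs)

xs, zs = simulate(fault_at=30)   # fault introduced at k = 30
```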
Figure 4.2: Model States under nominal condition.
In Figure 4.2, the system states and the measurements are plotted with respect to time under nominal conditions. The state and observation vectors are one-dimensional. The process noise and measurement noise are zero-mean Gaussian random variables.
Figure 4.3: State Estimation using SIR algorithm.
In Figure 4.3, the estimates are obtained using the Sequential Importance Resampling method. The estimates follow the actual state values, as there is initially no fault in the system. The number of particles is chosen to be fifty here. Increasing the number of particles delayed the simulation results because of the additional computation, while reducing the number of particles gave poor state and observation estimates.
Figure 4.4: Fault State Estimation.
In Figure 4.4, one of the coefficients in the state evolution model is reduced to half its original value to introduce a fault in the system. The change in the coefficient is made at time instant T = 30 seconds. The estimated state values follow the actual state values until T = 30 seconds; however, as the fault is introduced at that time, the estimates no longer follow the actual values thereafter.
Figure 4.5: Negative Likelihood Plot.
In Figure 4.5, the negative log-likelihood of the prior particles is plotted with respect to time. The residuals are calculated, and the likelihood of the prior particles is then evaluated using the measurements received at that instant. The decision function is calculated using the equation in the previous section. The likelihood takes a large value for a fault-free system, so the negative log-likelihood takes a small value. When the fault is introduced, the residuals take large values and the likelihood becomes small; the negative log-likelihood shoots up, and there is a marked difference in the likelihood values before and after the fault introduction. Thus, the fault is said to be detected when there is a sudden rise in the value of the negative log-likelihood of the system.
Figure 4.6: Missed Alarm Rate Plot.
In Figure 4.6, the missed alarm rate is plotted for different threshold values. For every threshold value, fifty simulations were run, and the missed alarm rate was calculated as the ratio of the number of missed alarms to the total number of simulations. A threshold value of 90-100 is thus a good choice for fault detection with this model.
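The missed-alarm-rate estimate behind Figure 4.6 can be sketched as below; the decision-function peak values are synthetic placeholders, since the text does not tabulate the simulation outputs:

```python
import numpy as np

def missed_alarm_rate(peak_values, threshold):
    """Fraction of faulty runs whose negative log-likelihood peak never
    exceeds the threshold, i.e. runs in which the fault is missed."""
    peak_values = np.asarray(peak_values)
    return np.mean(peak_values <= threshold)

# Hypothetical peaks of the decision function over fifty faulty simulations.
rng = np.random.default_rng(2)
peaks = rng.normal(120.0, 15.0, 50)
for h in (50, 90, 110):
    print(h, missed_alarm_rate(peaks, h))
```

Raising the threshold can only increase the missed alarm rate (and lower the false alarm rate), which is the trade-off discussed in Section 4.1.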
Chapter 5
Simulation Results
The simulation results have been divided into three categories: state estimation plots using the SIR algorithm, fault detection plots and failure prediction plots.

The fault detection and failure prediction algorithms described in the previous section have been implemented on a Vertical Take-Off and Landing (VTOL) model [18],[19] and a DC motor model [20]. Both models are expressed in state space format, and the state evolution and measurement equations are evaluated according to the model specifications.
5.1 VTOL aircraft model
The VTOL model for the aircraft can be described by

\dot{x}(t) = A_c x(t) + B_c u(t) + w(t)    (5.1.1)

z(t) = C_c x(t) + v(t)    (5.1.2)

where x = (v_h, v_v, q, \theta)^T and u = (\delta_c, \delta_l)^T. The states and inputs are described in Table 5.1; the subscript "c" stands for continuous.

Table 5.1: VTOL model parameters

Symbol     Description
v_h        horizontal velocity
v_v        vertical velocity
q          pitch rate
\theta     pitch angle
\delta_c   collective pitch control
\delta_l   longitudinal cyclic pitch control

The model parameters are given as:

A_c = [ -0.0366   0.0271   0.0188  -0.4555 ]
      [  0.0482  -1.01     0.0024  -4.0208 ]
      [  0.1002   0.3681  -0.707    1.420  ]
      [  0.0      0.0      1.0      0.0    ]

B_c = [  0.4422   0.1761 ]      C_c = [ 1  0  0  0 ]
      [  3.5446  -7.5922 ]            [ 0  1  0  0 ]
      [ -5.52     4.49   ]            [ 0  0  1  0 ]
      [  0.0      0.0    ]            [ 0  1  1  1 ]

The discretization of (5.1.1) can be represented by:

x(k+1) = A x(k) + B u(k) + w(k)    (5.1.3)

z(k+1) = H x(k+1) + v(k+1)    (5.1.4)

where A = e^{A_c T}, B = (\int_0^T e^{A_c \tau}\,d\tau) B_c, H = C_c, and the sampling period is T = 0.005 seconds. The process noise and measurement noise variances are given as:

Q = diag{0.001^2, 0.001^2, 0.001^2, 0.001^2}

R = diag{0.01^2, 0.01^2, 0.01^2, 0.01^2}
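The zero-order-hold discretization A = e^{A_c T}, B = (\int_0^T e^{A_c \tau} d\tau) B_c can be computed with the standard augmented-matrix trick; here a truncated Taylor series stands in for a library matrix exponential, which is adequate because T is small:

```python
import numpy as np

def expm_taylor(M, terms=25):
    """Matrix exponential via truncated Taylor series; accurate here
    because the norm of M * T is very small for T = 0.005 s."""
    E = np.eye(M.shape[0])
    P = np.eye(M.shape[0])
    for k in range(1, terms):
        P = P @ M / k   # next Taylor term M^k / k!
        E = E + P
    return E

Ac = np.array([[-0.0366,  0.0271,  0.0188, -0.4555],
               [ 0.0482, -1.01,    0.0024, -4.0208],
               [ 0.1002,  0.3681, -0.707,   1.420 ],
               [ 0.0,     0.0,     1.0,     0.0   ]])
Bc = np.array([[ 0.4422,  0.1761],
               [ 3.5446, -7.5922],
               [-5.52,    4.49  ],
               [ 0.0,     0.0   ]])
T = 0.005  # sampling period in seconds

# Zero-order-hold discretization via the augmented matrix
# [[Ac, Bc], [0, 0]]: its exponential is [[A, B], [0, I]].
n, m = Ac.shape[0], Bc.shape[1]
M = np.zeros((n + m, n + m))
M[:n, :n], M[:n, n:] = Ac, Bc
Md = expm_taylor(M * T)
A, B = Md[:n, :n], Md[:n, n:]
```

In practice a library routine such as `scipy.linalg.expm` would replace `expm_taylor`.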
5.1.1 Simulation results of VTOL model
The state space format of the Vertical Take-Off and Landing model has four states, representing the horizontal velocity, vertical velocity, pitch rate and pitch angle, which are plotted with respect to time:
Figure 5.1: VTOL State and Observation.
In Figure 5.1, at any time instant we have a four-element vector representing the state of the system. The system has zero-mean Gaussian random vectors as process noise and measurement noise.
Figure 5.2: VTOL State Estimation (Nominal Condition).
In Figure 5.2, each of the states is estimated using the Sequential Importance Resampling algorithm. The system is initially under fault-free conditions, and the four state estimates follow the actual state values. The number of particles is chosen cautiously to obtain a good estimate with less computation.
Figure 5.3: VTOL Fault State Estimation.
In Figure 5.3, a sensor variation is introduced at time instant T = 30 seconds. The estimated state values follow the actual state values until T = 30 seconds; however, as the fault is introduced at that time, the estimates no longer follow the actual values thereafter. The sensor variation fault affects the first state the most, which is visible from the plot.
Figure 5.4: VTOL Negative Likelihood Plot.
In Figure 5.4, the negative log-likelihood of the prior particles is plotted with respect to time. The log-likelihood takes a large value for a fault-free system, so the negative log-likelihood takes a small value. As the fault is introduced, the residuals become large and the log-likelihood becomes small. The negative log-likelihood shoots up, and there is a marked difference in the log-likelihood values before and after the fault introduction. Thus, the fault is said to be detected when there is a sudden rise in the value of the negative log-likelihood of the system.
Figure 5.5: VTOL Threshold Estimation.
In Figure 5.5, the overshoot of the negative log likelihood can be considered as a detection of a fault in the system. However, there can be false alarms too. In order to avoid false alarms, a threshold limit has to be chosen for the overshoot of the negative log likelihood of the prior particles. Thus, if the negative log likelihood exceeds the threshold limit, a fault is said to have occurred in the system; otherwise it is a false alarm. For the VTOL model the threshold is calculated using a heuristic method. The algorithm has been simulated fifty times and the mean difference between the minimum and maximum value of the negative log likelihood is calculated as 87.5. Ninety percent of the mean overshoot value has been chosen as the threshold, which is computed as 137.5. The fault is detected at time instant T = 33.5 seconds.
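One plausible reading of this heuristic can be sketched as follows: average the overshoot (maximum minus minimum of the negative log likelihood) over many simulated runs and place the threshold at ninety percent of that mean overshoot above the mean nominal level. The run data below is synthetic and the exact placement rule is an assumption of this sketch, since the text only gives the resulting numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

def heuristic_threshold(nll_runs, fraction=0.9):
    """One reading of the heuristic: over many simulated runs, average
    the overshoot (max - min) of the negative log likelihood and set
    the threshold at `fraction` of that mean overshoot above the mean
    nominal (minimum) level. The rule is an assumption of this sketch."""
    runs = np.asarray(nll_runs, dtype=float)
    overshoot = runs.max(axis=1) - runs.min(axis=1)
    baseline = runs.min(axis=1).mean()
    return baseline + fraction * overshoot.mean()

# Fifty synthetic runs: nominal NLL near 60, jumping after the fault.
runs = []
for _ in range(50):
    nominal = 60 + rng.normal(0, 1, 30)
    faulty = 160 + rng.normal(0, 5, 20)
    runs.append(np.concatenate([nominal, faulty]))
runs = np.array(runs)
thr = heuristic_threshold(runs)
# The threshold lands between the nominal level and the faulty peak.
assert 100 < thr < np.max(runs)
```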
5.2 DC Motor
A DC motor is used to control the motion of the aircraft flap. The electrical circuit of the armature and the free-body diagram of the shaft are shown in the diagram below:
[Diagram: armature electrical circuit and free-body diagram of the motor shaft.]

Figure 5.6: DC Motor Model.
The velocity of the shaft is monitored. The resistance and the inductance values are carefully chosen to impart the required speed. The motor is said to be in fault-free condition if the negative of the decision function is below the predefined threshold limit. However, if the threshold limit is exceeded, then a fault is said to be detected. The remaining useful life of the system has been calculated using the p-step ahead prediction along with the definition of the hazard zone, as discussed in the previous chapter.
DC Motor Model: The dynamics of a DC servo motor are described by the electrical signals and the mechanical motion of the armature as follows:

$$L_a \frac{di_a}{dt} + R_a i_a = v_a - K_e \dot{\theta} \qquad (5.2.1)$$

$$J\ddot{\theta} + b\dot{\theta} = K_t i_a \qquad (5.2.2)$$
where the symbols are listed in Table 5.2. Let $\dot{\theta} = \omega$. The state space format of the DC motor model, obtained directly from (5.2.1) and (5.2.2), is:

$$\frac{d}{dt}\begin{bmatrix}\omega\\ i_a\end{bmatrix} = \begin{bmatrix}-b/J & K_t/J\\ -K_e/L_a & -R_a/L_a\end{bmatrix}\begin{bmatrix}\omega\\ i_a\end{bmatrix} + \begin{bmatrix}0\\ 1/L_a\end{bmatrix} v_a$$
Table 5.2: DC motor model parameters

Symbol   Description
J        Moment of inertia of motor and load
b        Viscous damping of motor and load
L_a      Inductance of the armature
R_a      Resistance of the armature
v_a      Voltage across the terminal
K_e      Back EMF constant
K_t      Torque sensitivity
The resistor and inductor values fluctuate within their tolerance limits under nominal conditions. When the fault is introduced, both the inductor and resistor values cross their respective tolerance limits. The state estimates are made under nominal and fault conditions. The negative log likelihood is plotted in both cases. Failure prediction is done after the fault is detected.
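The faulty plant described above can be sketched by Euler-discretising equations (5.2.1) and (5.2.2) and ramping R and L past their tolerance limits after the fault time. R_a = 2 ohms and L_a = 0.01 H come from the text; J, b, K, the ramp rates, and the fault time are illustrative values, not the thesis parameters:

```python
import numpy as np

# Euler-discretised DC motor states x = [omega, i_a], from
# L_a di/dt + R_a i = v - K_e*omega and J domega/dt + b*omega = K_t*i.
J, b, K = 0.01, 0.1, 0.01      # assumed illustrative values
dt, T_fault = 0.001, 5.0       # step size and (assumed) fault time

def simulate(t_end=10.0):
    n = int(t_end / dt)
    x = np.zeros((n, 2))
    for k in range(1, n):
        t = k * dt
        # Fault: R and L drift beyond their tolerance limits after T_fault.
        R = 2.0 + (0.5 * (t - T_fault) if t > T_fault else 0.0)
        L = 0.01 + (0.005 * (t - T_fault) if t > T_fault else 0.0)
        v = np.sin(0.5 * t)                     # sinusoidal input voltage
        omega, i = x[k - 1]
        domega = (K * i - b * omega) / J        # mechanical equation
        di = (v - R * i - K * omega) / L        # electrical equation
        x[k] = omega + dt * domega, i + dt * di
    return x

states = simulate()
```

Running the same loop with the ramp terms removed gives the nominal trajectories against which the fault case is compared.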
5.2.1 Simulation Results of DC motor model
The system is represented in state space format with angular speed and current as
the state variables. The speed of the shaft of the DC motor model is tracked.
[Plot: sinusoidal input voltage, armature current, and angular speed of the DC motor vs. time (seconds) under nominal conditions.]

Figure 5.7: DC Motor States.
In Figure 5.7, the input to the motor is a sinusoidal voltage. The current and the speed of the DC motor are plotted with respect to time. The speed increases gradually and finally stabilizes. The system is under nominal condition.
[Plot: actual and estimated current and angular speed of the DC motor vs. time (seconds) under nominal conditions.]

Figure 5.8: DC Motor State Estimates.
In Figure 5.8, the two states of the motor, that is, the current and the angular speed, are estimated using the SIR algorithm. As the system has no faults, the estimates follow the actual values very closely.
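A single SIR step (propagate, weight, resample) can be sketched on a generic scalar model. This is not the thesis' VTOL or DC-motor implementation; the one-dimensional dynamics, noise levels, and Gaussian likelihood below are assumptions made to keep the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(1)

def sir_step(particles, y, f, h, q_std, r_std):
    """One Sequential Importance Resampling step: propagate through the
    state equation, weight by the measurement likelihood, resample."""
    # 1. Propagate prior particles through the dynamics with process noise.
    particles = f(particles) + rng.normal(0, q_std, particles.shape)
    # 2. Weight by the Gaussian likelihood of the measurement residual.
    w = np.exp(-0.5 * ((y - h(particles)) / r_std) ** 2)
    w /= w.sum()
    # 3. Resample (multinomial) to fight weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Track x_k = 0.9 x_{k-1} + noise from noisy measurements y_k = x_k + noise.
true_x, particles = 5.0, rng.normal(0, 1, 500)
for _ in range(30):
    true_x = 0.9 * true_x + rng.normal(0, 0.1)
    y = true_x + rng.normal(0, 0.2)
    particles = sir_step(particles, y, lambda x: 0.9 * x, lambda x: x, 0.1, 0.2)
estimate = particles.mean()
```

The posterior mean of the resampled cloud plays the role of the state estimate plotted in Figure 5.8.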
[Plot: negative log likelihood vs. time (seconds) for the DC motor under fault-free conditions.]

Figure 5.9: DC Motor Negative Log Likelihood Plot.
In Figure 5.9, the estimates of the speed and current are made under fault-free conditions. As the system has no faults, the residuals take a small value and the likelihood becomes large. Thus, the negative log likelihood becomes small. The likelihood plot gives the lower and upper bounds for the fault-free condition.
[Plot: minimum (upper panel) and maximum (lower panel) negative log likelihood values vs. number of iterations, giving the lower and upper bounds for fault detection.]

Figure 5.10: Lower and Upper Threshold Bounds for fault detection in DC motor model.
In Figure 5.10, the likelihood is evaluated under fault-free conditions for fifty iterations. The maximum and minimum values of the negative log likelihood are calculated in all the iterations and the mean values of the maximum and minimum are evaluated. The mean value of the negative log likelihood under nominal condition is calculated using Monte Carlo simulation. The negative log likelihood value fluctuates between 7.69 and 8.56, which form the lower and upper bound respectively. The mean negative log likelihood is evaluated as 8.12, which forms the threshold value for fault detection.
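The Monte Carlo bound computation can be sketched directly: average the per-run minima and maxima of the fault-free negative log likelihood, and take their midpoint as the threshold (consistent with 8.12 sitting halfway between 7.69 and 8.56). The run data below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)

def nominal_bounds(nll_runs):
    """Lower bound, upper bound, and threshold of the negative log
    likelihood over repeated fault-free runs. Taking the midpoint as
    the threshold is this sketch's reading of the 'mean value'."""
    runs = np.asarray(nll_runs, dtype=float)
    lower = runs.min(axis=1).mean()   # mean of per-run minima
    upper = runs.max(axis=1).mean()   # mean of per-run maxima
    return lower, upper, 0.5 * (lower + upper)

# Fifty synthetic fault-free runs with NLL values near 8.1.
runs = 8.1 + rng.normal(0, 0.2, size=(50, 100))
lower, upper, threshold = nominal_bounds(runs)
assert lower < threshold < upper
```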
[Plot: angular speed and current of the DC motor vs. time (seconds) under fault conditions.]

Figure 5.11: DC Motor model Fault State plot.
In Figure 5.11, the speed and current are plotted under fault conditions. The fault introduced in the model is a gradual increase in the inductor and resistor values beyond their respective tolerance limits. The fault is introduced at T = 500 seconds.
[Plot: actual and estimated angular speed and current of the DC motor vs. time (seconds); the fault is introduced at T = 500 seconds.]

Figure 5.12: DC Motor model State Estimates under fault condition.
In Figure 5.12, the speed and current are estimated under fault conditions using the SIR algorithm. As the fault is introduced at T = 500 seconds, the estimates follow the actual values till that time. The estimates tend to deviate from the actual values thereafter.
[Plot: negative log likelihood vs. time (seconds); nominal functioning of the DC motor, then operation under the fault condition, with the fault detected at T = 574 seconds.]

Figure 5.13: DC Motor Negative Log Likelihood Plot.
In Figure 5.13, the negative log likelihood of the particles is plotted, and there is a gradual overshoot after the fault has been introduced. The fault is said to be detected when the negative log likelihood exceeds the threshold value. At time T = 574 seconds, the negative log likelihood shoots above the threshold value (8.12) and the fault is said to be detected.
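The detection instant itself is simply the first threshold crossing of the negative log likelihood. A minimal sketch, on stylised data rather than the actual DC-motor output:

```python
import numpy as np

def detection_time(nll, threshold, t):
    """Return the first time at which the negative log likelihood
    exceeds the threshold, or None if it never does."""
    above = np.flatnonzero(np.asarray(nll) > threshold)
    return t[above[0]] if above.size else None

t = np.arange(0, 1000)
nll = np.where(t < 574, 8.0, 300.0)   # stylised: NLL jumps after the fault
assert detection_time(nll, 8.12, t) == 574
```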
[Plot: p-step ahead prediction of the angular speed with long-term bounds, hazard zones, confidence interval / prediction window, and the probability density function of the remaining useful life vs. time.]

Figure 5.14: P-step ahead prediction of angular speed of DC Motor model.
In Figure 5.14, the p-step ahead prediction is done, which predicts the speed of the motor for future time using the procedure described in the previous chapter. The long-term bounds are defined by taking ± fifty percent of the marked values of the resistance (2 ohms) and the inductance (0.01 H). The lower and upper hazard zones (blue dashed lines) are defined on the basis of historical failure data as 12.9 rad/sec and 14.9 rad/sec respectively, with the critical zone (red dashed line) at 13.9 rad/sec. For more accuracy, the intercept of the p-step ahead prediction (green curve) with the lower hazard zone and the critical zone has been considered as the confidence interval. The failure is expected to occur at eighty percent of the confidence interval. The probability of time to failure is plotted according to equation (4.2.5). The RUL is defined as the time from the instant the fault is detected till the complete failure occurs. For the DC motor model it comes out to be 291 seconds after the fault has been detected.
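The p-step ahead prediction can be sketched by propagating each particle forward and recording when it crosses the lower hazard-zone bound; the histogram of crossing times approximates the time-to-failure PDF, and its mean gives an RUL estimate. The simple upward-drift degradation model and its parameters below are assumptions of this sketch, not the thesis' faulty DC-motor dynamics:

```python
import numpy as np

rng = np.random.default_rng(3)

def rul_distribution(particles, drift, hazard, dt, p_max=2000):
    """Propagate each particle p steps through an assumed drift model
    and record the step at which it crosses the lower hazard bound."""
    crossings = np.full(len(particles), np.nan)
    x = particles.copy()
    for p in range(1, p_max + 1):
        x = x + drift * dt + rng.normal(0, 0.01, x.shape)
        hit = np.isnan(crossings) & (x >= hazard)
        crossings[hit] = p * dt
    return crossings[~np.isnan(crossings)]

# Speed estimates near 10 rad/sec drifting up toward the 12.9 rad/sec
# lower hazard zone used for the DC motor.
particles = rng.normal(10.0, 0.05, 1000)
ttf = rul_distribution(particles, drift=0.01, dt=1.0, hazard=12.9)
rul = ttf.mean()   # expected remaining useful life after detection
```

With these assumed numbers the crossing times cluster around 290 steps, which is the same order as the 291 seconds reported above.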
Chapter 6
Conclusion
6.1 Conclusion
The use of the particle filter approach for fault detection and failure prediction in nonlinear non-Gaussian systems is suitable, as no restrictive assumptions are made in solving the detection and prediction problem. Also, this approach uses the complete probability distribution information of the state estimates from the particle filter for fault detection and failure prediction, which makes it very accurate.
The particle filter approach is based on the Sequential Importance Sampling method. As Sequential Importance Sampling suffers from the degeneracy problem, Sequential Importance Resampling has been implemented for state estimation. The Sequential Importance Resampling algorithm was applied to the Vertical Take off and Landing (VTOL) aircraft model for state estimation and fault detection. A sensor fault was introduced in one of the states and the estimates were made. The estimates did not follow the actual values after the introduction of the fault. The decision rule for fault detection was evaluated using the likelihood of the estimation parameter over a sliding window. The negative log likelihood was plotted and its overshoot beyond the threshold value indicated the detection of the fault. The threshold value was chosen using a heuristic approach: the algorithm was executed several times and the mean overshoot of the likelihood was chosen as the threshold value.
The Sequential Importance Resampling was also applied to the DC motor model. The speed of the motor was monitored under nominal and fault conditions. A time-varying resistor and inductor were introduced as a fault in the system and the state estimation was done. The estimates did not follow the actual values after the introduction of the fault. The negative log likelihood was plotted and its overshoot indicated the detection of the fault. Thus, if the resistance value exceeded the tolerance limits, the fault was indicated by the likelihood plot. A p-step ahead prediction was done to determine the remaining useful life PDF given the hazard zone bounds. Thus, corrective measures can be taken before complete failure occurs.
Thus, the particle filter approach is a good method for sequential state estimation, fault detection and failure prediction in the case of nonlinear non-Gaussian models. Unlike Kalman filters, the particle filter makes no linearity or Gaussianity assumptions about the system.
6.2 Future work
The SIS particle filter suffers from the degeneracy problem, and to overcome it the resampling method was preferred. However, the resampling step introduces other problems, such as loss of diversity of the particles, as the samples are chosen from a discrete distribution. The discreteness of the resampling stage implies that any particular sample with a high importance ratio will be duplicated many times [9]. This actually leads to particle collapse. Regularized particle filters overcome the particle collapse problem by resampling from a continuous distribution. Also, there are other types of particle filters, such as Auxiliary Particle Filters and Rao-Blackwellised Particle Filters. The Auxiliary particle filters are better than the SIR particle filters, as the initial particles are chosen in a better way so as to reduce the errors. Thus, the state estimation can be improved further by using the Auxiliary particle filter or the Rao-Blackwellised particle filter.
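The regularized idea mentioned above can be sketched in one dimension: resample as usual, then jitter each copy with a Gaussian kernel so the draw effectively comes from a continuous kernel-density approximation of the posterior. The Silverman-style bandwidth and the toy data are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(4)

def regularized_resample(particles, weights):
    """Resample, then jitter each copy with a Gaussian kernel, i.e.
    draw from a continuous approximation of the posterior instead of
    the discrete one (a sketch of the regularized particle filter)."""
    n = len(particles)
    idx = rng.choice(n, size=n, p=weights)
    resampled = particles[idx]
    bandwidth = 1.06 * particles.std() * n ** (-1 / 5)   # Silverman's rule, 1-D
    return resampled + rng.normal(0, bandwidth, n)

# A degenerate weight vector collapses plain resampling onto one value;
# the kernel jitter restores diversity in the particle set.
particles = np.linspace(-1, 1, 200)
weights = np.zeros(200); weights[100] = 1.0
out = regularized_resample(particles, weights)
assert np.unique(out).size > 1
```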
The particle filter approach has been applied to terrain navigation and tracking applications [15]. It also finds use in acoustic source localization and tracking in reverberant environments. The particle filter approach is applicable to model-based systems. Thus, in order to use this approach for data mining, it is crucial to have a neural network model which will be trained according to the data given to the network. Once a model is built, the state space formulation can be done and the recursive estimation can be made.
Bibliography
[1] M. Sanjeev Arulampalam, Simon Maskell, Neil Gordon, and Tim Clapp, "A Tutorial on Particle Filters for Online Nonlinear/Non-Gaussian Bayesian Tracking", IEEE Transactions on Signal Processing, Vol. 50, No. 2, February 2002.

[2] Marcos E. Orchard, "A Particle Filtering-based Framework for On-line Fault Diagnosis and Failure Prognosis", School of Electrical and Computer Engineering, Georgia Institute of Technology, August 2006.

[3] V. Kadirkamanathan, P. Li, M. H. Jaward and S. G. Fabri, "Particle filtering-based fault detection in non-linear stochastic systems", International Journal of Systems Science, Vol. 33, No. 4, pp. 259-265, 2002.

[4] Zhe Chen, "Bayesian Filtering: From Kalman Filters to Particle Filters, and Beyond", Manuscript.

[5] Marcos Orchard, Biqing Wu, and George Vachtsevanos, "A Particle Filtering Framework For Failure Prognosis", Proceedings of WTC2005, World Tribology Congress III, Washington D.C., USA, 2005.

[6] N.J. Gordon, D.J. Salmond and A.F.M. Smith, "Novel approach to nonlinear/non-Gaussian Bayesian state estimation", IEE Proceedings-F, Vol. 140, No. 2, April 1993.

[7] Michael K. Pitt and Neil Shephard, "Filtering via Simulation: Auxiliary Particle Filters", Journal of the American Statistical Association, Vol. 94, No. 446, 1999.

[8] Arnaud Doucet, Simon Godsill and Christophe Andrieu, "On sequential Monte Carlo sampling methods for Bayesian filtering", Statistics and Computing, Vol. 10, pp. 197-208, 2000.

[9] J.F.G. de Freitas, M. Niranjan, A.H. Gee and A. Doucet, "Sequential Monte Carlo Methods For Optimization Of Neural Network Models", Cambridge University Engineering Department, November 1998.

[10] Simon Maskell and Neil Gordon, "A Tutorial on Particle Filters for On-line Nonlinear/Non-Gaussian Bayesian Tracking", QinetiQ Ltd., September 2001.

[11] A. Doucet (Ed.), "Sequential Monte Carlo Methods in Practice (Statistics for Engineering and Information Science)", Springer Verlag, New York, 2001.

[12] George Vachtsevanos, Frank L. Lewis, Michael Roemer, Andrew Hess and Biqing Wu, "Intelligent Fault Diagnosis and Prognosis for Engineering Systems", John Wiley & Sons, Inc., 2006.

[13] George Vachtsevanos, Biqing Wu, Abhinav Saxena, Romano Patrick, Taimoor Khawaja, and Marcos E. Orchard, "Condition Based Maintenance: Enabling Technologies and Application Domains", School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia; SimTech Workshop, Singapore, January 2005.

[14] Simon Maskell, "Sequentially Structured Bayesian Solutions", February 21, 2004.

[15] Niclas Bergman, "Recursive Bayesian Estimation: Navigation and Tracking Applications", Department of Electrical Engineering, Linkoping University, SE-581 83 Linkoping, Sweden, 1999.

[16] Chen, M.Z. and Zhou, D.H., "Particle Filtering Based Fault Prediction Of Non Linear Systems", IFAC, 2003.

[17] Andrieu, C., Doucet, A., and Punskaya, E., "Sequential Monte Carlo Methods for Optimal Filtering", in Sequential Monte Carlo Methods in Practice, New York: Springer Verlag, 2001.

[18] Y. M. Zhang and X.R. Li, "Detection and diagnosis of sensor and actuator failures using IMM estimator", IEEE Transactions on Aerospace and Electronic Systems, Vol. 34, No. 4, pp. 1293-1313, 1998.

[19] Y. M. Zhang and J. Jiang, "Integrated active fault-tolerant control using IMM approach", IEEE Transactions on Aerospace and Electronic Systems, Vol. 37, No. 4, pp. 1221-1235, 2001.

[20] "DC Motor Speed Modeling in Simulink", CTMS Example: DC Motor Speed Modeling, Control Tutorials for Matlab and Simulink.

[21] G. Kitagawa, "Monte Carlo filter and smoother for non-Gaussian nonlinear state space models", J. Comput. Graph. Statist., Vol. 5, No. 1, pp. 1-25, 1996.

[22] B. P. Carlin, N. G. Polson, and D. S. Stoffer, "A Monte Carlo approach to nonnormal and nonlinear state-space modeling", J. Amer. Statist. Assoc., Vol. 87, No. 418, pp. 493-500, 1992.