[Wiley Series in Probability and Statistics] Random Data (Analysis and Measurement Procedures) || Nonstationary Data Analysis



CHAPTER 12

Nonstationary Data Analysis

    The material presented in previous chapters has been restricted largely to the measurement and analysis of stationary random data, that is, data with statistical properties that are invariant with translations in time (or any other independent variable of the data). The theoretical ideas, error formulas, and processing techniques do not generally apply when the data are nonstationary. Special considerations are required in these cases. Such considerations are the subject of this chapter.


    Much of the random data of interest in practice is nonstationary when viewed as a whole. Nevertheless, it is often possible to force the data to be at least piecewise stationary for measurement and analysis purposes. To repeat the example from Section 10.4.1, the pressure fluctuations in the turbulent boundary layer generated by a high-speed aircraft during a typical mission will generally be nonstationary because they depend on the airspeed and altitude, which vary during the mission. However, one can easily fly the aircraft under a specific set of fixed flight conditions so as to produce stationary boundary layer pressures for measurement purposes. The flight conditions can then be changed sequentially to other specific sets of fixed conditions, producing stationary data for measurement purposes until the entire mission environment has been represented in adequate detail by piecewise stationary segments. Such procedures for generating stationary data to represent a generally nonstationary phenomenon are commonly used and are strongly recommended to avoid the need for nonstationary data analysis procedures.

There are a number of situations where the approach to data collection and analysis described above is not feasible, and individual sample records of data must be analyzed as nonstationary data. From a purely computational viewpoint, the most desirable situation is that in which an experiment producing the nonstationary data of interest can be repeated under statistically similar conditions. This allows an ensemble of sample records to be measured on a common time base, as illustrated in Figure 12.1. A more common situation, however, is that in which the nonstationary phenomenon of interest is unique and cannot be reproduced under statistically similar conditions. Examples include nonstationary ocean waves, atmospheric turbulence, and economic time-series data. The basic factors producing such data are too complex to allow the performance of repeated experiments under similar conditions. The analysis of data in these cases must be accomplished by computations on single sample records.

Figure 12.1 Sample records of nonstationary random process.

Random Data: Analysis and Measurement Procedures, Fourth Edition. By Julius S. Bendat and Allan G. Piersol. Copyright 2010 John Wiley & Sons, Inc.

An appropriate general methodology does not exist for analyzing the properties of all types of nonstationary random data from individual sample records. This is due partly to the fact that a nonstationary conclusion is a negative statement specifying only a lack of stationary properties, rather than a positive statement defining the precise nature of the nonstationarity. It follows that special techniques must be developed for nonstationary data that apply only to limited classes of these data. The usual approach is to hypothesize a specific model for each class of nonstationary data of interest that consists of deterministic factors operating on an otherwise stationary random process. Three examples are shown in Figure 12.2. These nonstationary time history records are constructed from

(a) x(t) = a(t) + u(t)

(b) x(t) = a(t)u(t)     (12.1)

(c) x(t) = u(a(t))

where u(t) is a sample record of a stationary random process {u(t)} and a(t) is a deterministic function that is repeated exactly on each record. Such elementary nonstationary models can be combined or extended to generate more complex models as required to fit various physical situations.

Figure 12.2 Examples of nonstationary data. (a) Time-varying mean value. (b) Time-varying mean square value. (c) Time-varying frequency structure.
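As an illustration, the three elementary models of Equation (12.1) can be simulated numerically. In the sketch below, the white Gaussian choice for u(t), the sinusoidal a(t), and the quadratic time-base transformation used for model (c) are all illustrative assumptions, not specifications from the text:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n = 10.0, 1000
t = np.linspace(0.0, T, n)

u = rng.standard_normal(n)          # u(t): stationary (white Gaussian) record
a = np.sin(2.0 * np.pi * t / T)     # a(t): deterministic, same on every record

x_a = a + u                         # (a) time-varying mean value
x_b = (1.0 + 0.9 * a) * u           # (b) time-varying mean square value
warp = T * (t / T) ** 2             # monotone time transformation onto [0, T]
x_c = np.interp(warp, t, u)         # (c) time-varying frequency structure
```

For model (b), the factor (1 + 0.9 a(t)) is largest over the first half of the record, so the sample mean square value there is markedly larger than over the second half.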


12.2 Probability Structure of Nonstationary Data

For a nonstationary random process {x(t)} as illustrated in Figure 12.1, statistical properties over the ensemble at any time t are not invariant with respect to translations in t. Hence, at any value of t = t_1, the probability structure of the random variable x(t_1) would be a function of t_1. To be precise, the nonstationary probability density function p(x, t_1) of {x(t_1)} is defined by

p(x, t_1) = \lim_{\Delta x \to 0} \frac{\mathrm{Prob}[x < x(t_1) \le x + \Delta x]}{\Delta x}    (12.2)

    and has the following basic properties for any t:


1 = \int_{-\infty}^{\infty} p(x, t)\,dx

\mu_x(t) = E[x(t)] = \int_{-\infty}^{\infty} x\,p(x, t)\,dx

E[x^2(t)] = \int_{-\infty}^{\infty} x^2\,p(x, t)\,dx    (12.3)


These formulas also apply to stationary cases where p(x, t) = p(x), independent of t. The nonstationary probability distribution function P(x, t_1) is defined by

P(x, t_1) = \mathrm{Prob}[-\infty < x(t_1) \le x]    (12.4)

Similar relationships exist for P(x, t_1) as for the probability distribution function P(x) discussed in Chapter 3.

If the nonstationary random process {x(t)} is Gaussian at t = t_1, then p(x, t_1) takes the special form

p(x, t_1) = \left[\sigma_x(t_1)\sqrt{2\pi}\right]^{-1} \exp\left\{-\frac{[x - \mu_x(t_1)]^2}{2\sigma_x^2(t_1)}\right\}    (12.5)

which is completely determined by the nonstationary mean and mean square values of x(t) at t = t_1. This result indicates that the measurement of these two quantities may be quite significant in many nonstationary applications, just as in previous stationary applications.
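As a numerical check, Equation (12.5) can be evaluated directly and tested against the first two properties in Equation (12.3); the values mu_t = 1.5 and sigma_t = 2.0 below are illustrative assumptions:

```python
import numpy as np

def nonstationary_gaussian_pdf(x, mu_t, sigma_t):
    # Equation (12.5): first-order density at a fixed time t1, fully
    # determined by mu_x(t1) and sigma_x(t1).
    return np.exp(-0.5 * ((x - mu_t) / sigma_t) ** 2) / (sigma_t * np.sqrt(2.0 * np.pi))

x = np.linspace(-20.0, 20.0, 4001)
p = nonstationary_gaussian_pdf(x, mu_t=1.5, sigma_t=2.0)
dx = x[1] - x[0]
area = float(np.sum(p) * dx)       # should be ~1, first property in (12.3)
mean = float(np.sum(x * p) * dx)   # should recover mu_x(t1) = 1.5
```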

    12.2.1 Higher Order Probability Functions

For a pair of times t_1 and t_2, the second-order nonstationary probability density function of x(t_1) and x(t_2) is defined by

p(x_1, t_1; x_2, t_2) = \lim_{\substack{\Delta x_1 \to 0 \\ \Delta x_2 \to 0}} \frac{\mathrm{Prob}[x_1 < x(t_1) \le x_1 + \Delta x_1 \text{ and } x_2 < x(t_2) \le x_2 + \Delta x_2]}{\Delta x_1\,\Delta x_2}    (12.6)

and has the basic properties

1 = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} p(x_1, t_1; x_2, t_2)\,dx_1\,dx_2

p(x_2, t_2) = \int_{-\infty}^{\infty} p(x_1, t_1; x_2, t_2)\,dx_1    (12.7)

R_{xx}(t_1, t_2) = E[x(t_1)x(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\,p(x_1, t_1; x_2, t_2)\,dx_1\,dx_2

For stationary cases, p(x_1, t_1; x_2, t_2) = p(x_1, 0; x_2, t_2 - t_1). Second-order nonstationary probability distribution functions may be defined analogous to Equation (12.4) by the quantity

P(x_1, t_1; x_2, t_2) = \mathrm{Prob}[-\infty < x(t_1) \le x_1 \text{ and } -\infty < x(t_2) \le x_2]    (12.8)

Continuing in this way, higher order nonstationary probability distribution and density functions may be defined that describe the nonstationary random process {x(t)} in more detail. This procedure supplies a rigorous characterization of the nonstationary random process {x(t)}.

Consider next two different nonstationary random processes {x(t)} and {y(t)}. For x(t_1) and y(t_2), the joint (second-order) nonstationary probability density function is defined by

p(x, t_1; y, t_2) = \lim_{\substack{\Delta x \to 0 \\ \Delta y \to 0}} \frac{\mathrm{Prob}[x < x(t_1) \le x + \Delta x \text{ and } y < y(t_2) \le y + \Delta y]}{\Delta x\,\Delta y}    (12.9)


    and has basic properties similar to Equation (12.7). In particular, the nonstationary cross-correlation function, which is discussed in Section 12.5, satisfies the relation

R_{xy}(t_1, t_2) = E[x(t_1)y(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x\,y\,p(x, t_1; y, t_2)\,dx\,dy    (12.10)

For stationary cases, p(x, t_1; y, t_2) = p(x, 0; y, t_2 - t_1).

The measurement of nonstationary probability density functions can be a formidable task. Even for the first-order density function defined in Equation (12.2), all possible combinations of x and t must be considered. This will require the analysis of a large collection of sample records. If a Gaussian assumption can be made, Equation (12.5) reduces the problem of measuring p(x, t_1) to measuring \mu_x(t_1) and \sigma_x(t_1), which is a much simpler undertaking. Nevertheless, ensemble averaging of a collection of sample records is still generally required, as discussed in Sections 12.3 and 12.4.

    12.2.2 Time-Averaged Probability Functions

It often occurs in practice that only one or a very few sample records of data are available for a nonstationary random process of interest. There may be a strong temptation in such cases to analyze the data by time-averaging procedures as would be appropriate if the data were a sample record from a stationary (ergodic) random process. For some nonstationary data parameters, time-averaging analysis procedures can produce meaningful results in certain special cases, as will be discussed later. For the case of probability density functions, however, time-averaging procedures will generally produce severely distorted results. In particular, the probability density function computed by time-averaging data with a nonstationary mean square value will tend to exaggerate the probability density of low- and high-amplitude values at the expense of intermediate values, as demonstrated in the illustration to follow.

Example 12.1. Illustration of Time-Averaged Probability Density Function. Assume a sample record of data consists of a normally distributed stationary time history with a zero mean and a variance \sigma_1^2 over the first half of the record, and a second normally distributed stationary time history with a zero mean and a variance \sigma_2^2 over the second half of the record. Then the time history x(t) is given by

x(t) = \begin{cases} x_1(t) & 0 \le t \le T/2 \\ x_2(t) & T/2 < t \le T \end{cases}

Figure 12.3 (vertical axis: probability density)
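The distortion can be seen numerically. Because each half occupies half the record, the time-averaged density is the equal-weight mixture (1/2)[p_1(x) + p_2(x)], which has both a higher peak and heavier tails than a single Gaussian of the same overall variance. A minimal sketch, assuming illustrative values sigma_1 = 1 and sigma_2 = 3:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
s1, s2 = 1.0, 3.0
x = np.concatenate([s1 * rng.standard_normal(n // 2),   # x1(t), first half
                    s2 * rng.standard_normal(n // 2)])  # x2(t), second half

# Time-averaged histogram of the whole (nonstationary) record
hist, edges = np.histogram(x, bins=200, range=(-12.0, 12.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def phi(z, s):
    return np.exp(-0.5 * (z / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

mixture = 0.5 * (phi(centers, s1) + phi(centers, s2))   # time-averaged density
pooled = phi(centers, np.sqrt(0.5 * (s1**2 + s2**2)))   # Gaussian, same variance
```

The histogram follows the mixture, not the single Gaussian: low-amplitude and high-amplitude values are over-represented relative to intermediate ones.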



12.3 Nonstationary Mean Values

Given N sample records x_i(t), i = 1, 2, \ldots, N, of a nonstationary process {x(t)}, the mean value at any time t is estimated by the ensemble average

\hat{\mu}_x(t) = \frac{1}{N}\sum_{i=1}^{N} x_i(t)    (12.11)

The estimate \hat{\mu}_x(t) will differ over different choices of the samples \{x_i(t)\}. Consequently, one must investigate for every t how closely an arbitrary estimate will approximate the true mean value. The expected value of \hat{\mu}_x(t) is given by

E[\hat{\mu}_x(t)] = \frac{1}{N}\sum_{i=1}^{N} E[x_i(t)] = \mu_x(t)    (12.12)

where

\mu_x(t) = E[x_i(t)]    (12.13)

is the true mean value of the nonstationary process at time t. Hence, \hat{\mu}_x(t) is an unbiased estimate of \mu_x(t) for all t, independent of N. The variance of the estimate \hat{\mu}_x(t) is given by

\mathrm{Var}[\hat{\mu}_x(t)] = E\left[\{\hat{\mu}_x(t) - \mu_x(t)\}^2\right]    (12.14)

Mean values of nonstationary random processes can be estimated using a special-purpose instrument or a digital computer, as illustrated in Figure 12.4. Two main steps are involved in the measurement. The first step is to obtain and store each record x_i(t) as a function of t. This may be done continuously for all t in the range 0 \le t \le T or discretely by some digitizing procedure. After this has been done for N records, the next step is to perform an ensemble average by adding the records together and dividing by N. If each x_i(t) is digitized in, say, M steps, then the total number of stored values would be MN.
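The two steps above (store N digitized records, then average across the ensemble at each time step) can be sketched as follows; the sinusoidal true mean, N = 400, and M = 256 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 400, 256                       # N records, each digitized in M steps
t = np.linspace(0.0, 1.0, M)
mu_true = 2.0 * np.sin(2.0 * np.pi * t)   # assumed time-varying true mean

# Step 1: store the ensemble as an (N, M) array; row i is record x_i(t)
x = mu_true + rng.standard_normal((N, M))

# Step 2: add the records and divide by N at each t, Equation (12.11)
mu_hat = x.mean(axis=0)
```

With these values the stored ensemble holds MN = 102,400 values, and the estimate's standard deviation at each t is sigma_x(t)/sqrt(N) = 0.05, so mu_hat tracks the true mean closely.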

    12.3.1 Independent Samples

In most practical applications, the sample functions used to compute \hat{\mu}_x(t) will be statistically independent. Hence independence will be assumed here. Upon expanding Equation (12.14), as in the derivation of Equation (4.9), it is seen that the sample variance at time t is given by

\mathrm{Var}[\hat{\mu}_x(t)] = \frac{\sigma_x^2(t)}{N}    (12.15)

where \sigma_x^2(t) is the variance associated with the underlying nonstationary process \{x(t)\}. Thus the sample variance approaches zero as N approaches infinity, so that \hat{\mu}_x(t) is a consistent estimate of \mu_x(t) for all t.

Figure 12.4 Procedure for nonstationary mean value measurement (multiple store memory followed by an ensemble-averaging circuit that adds the records and divides by N).


Confidence intervals for the nonstationary mean value \mu_x(t) can be constructed based on the estimate \hat{\mu}_x(t) using the procedures detailed in Section 4.4. Specifically, the (1 - \alpha) confidence interval at any time t is

\hat{\mu}_x(t) - \frac{\hat{\sigma}_x(t)\,t_{n;\alpha/2}}{\sqrt{N}} \le \mu_x(t) \le \hat{\mu}_x(t) + \frac{\hat{\sigma}_x(t)\,t_{n;\alpha/2}}{\sqrt{N}}    (12.16)

where \hat{\sigma}_x(t) is an unbiased estimate of the standard deviation of \{x(t)\} at time t given by Equation (4.12) as

\hat{\sigma}_x(t) = \left\{\frac{1}{N-1}\sum_{i=1}^{N}\left[x_i(t) - \hat{\mu}_x(t)\right]^2\right\}^{1/2}    (12.17)

and t_{n;\alpha/2} is the \alpha/2 percentage point of Student's t variable with n = N - 1 degrees of freedom defined in Section 4.2.3. Note that Equation (12.16) applies even when \{x(t)\} is not normally distributed, assuming the sample size is greater than, say, N = 10. This follows from the central limit theorem in Section 3.3.1, which applies to the nonstationary mean value computation in Equation (12.11).
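A minimal sketch of Equations (12.16) and (12.17) at a single time t, using SciPy for the Student t percentage point; the ensemble values are simulated under assumed values mu = 5, sigma = 2, and N = 20:

```python
import numpy as np
from scipy.stats import t as student_t

rng = np.random.default_rng(3)
N = 20
x_t = 5.0 + 2.0 * rng.standard_normal(N)    # ensemble values x_i(t) at one fixed t

mu_hat = x_t.mean()                          # Equation (12.11) at this t
sigma_hat = x_t.std(ddof=1)                  # Equation (12.17)
alpha = 0.05
t_crit = student_t.ppf(1.0 - alpha / 2.0, df=N - 1)   # t_{n;alpha/2}, n = N - 1
half = t_crit * sigma_hat / np.sqrt(N)
lo, hi = mu_hat - half, mu_hat + half        # Equation (12.16), 95% interval
```

Repeating this at every stored time step yields a confidence band around the entire nonstationary mean value estimate.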

    12.3.2 Correlated Samples

Consider a general situation where sample functions x_i(t), 0 \le t \le T, i = 1, 2, \ldots, N, from a nonstationary random process are correlated such that, for every t,

E[x_i(t)x_j(t)] = R_{xx}(k, t), \quad \text{where } k = j - i    (12.18)

The quantity R_{xx}(k, t) is called a nonstationary spatial cross-correlation function at time t between all pairs of records x_i(t) and x_j(t) satisfying k = j - i. It follows from the definition of Equation (12.18) that, by interchanging i and j,

R_{xx}(-k, t) = R_{xx}(k, t)    (12.19)

When the sample functions x_i(t) and x_j(t) are independent, for i \ne j corresponding to k \ne 0,

R_{xx}(k, t) = E[x_i(t)]E[x_j(t)] = \mu_x^2(t) \quad \text{for } k \ne 0    (12.20)

At k = 0, Equation (12.18) becomes

R_{xx}(0, t) = E[x_i^2(t)] = \sigma_x^2(t) + \mu_x^2(t)    (12.21)

    These relations yield the independent sample case in the preceding section.


    For correlated samples, Equation (12.15) now takes the general form

\mathrm{Var}[\hat{\mu}_x(t)] = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N} E\left[\{x_i(t) - \mu_x(t)\}\{x_j(t) - \mu_x(t)\}\right] = \frac{\sigma_x^2(t)}{N} + \frac{1}{N^2}\sum_{i \ne j}\left[R_{xx}(j - i, t) - \mu_x^2(t)\right]    (12.22)

The next problem is to simplify the double sum appearing in Equation (12.22). The index k = j - i takes on values k = \pm 1, \pm 2, \ldots, \pm(N - 1). Altogether, there are N^2 - N terms. Because R_{xx}(-k, t) = R_{xx}(k, t), the N^2 - N terms in this double sum can be arranged so that there are two terms where k = N - 1 of the form R_{xx}(N - 1, t), four terms where k = N - 2 of the form R_{xx}(N - 2, t), \ldots, and 2(N - 1) terms where k = 1 of the form R_{xx}(1, t). Thus, one derives the simplified expression

\sum_{i \ne j} R_{xx}(j - i, t) = 2\sum_{k=1}^{N-1}(N - k)R_{xx}(k, t)    (12.23)


As a check, note that the sum

2\sum_{k=1}^{N-1}(N - k) = N^2 - N    (12.24)

Substitution of Equation (12.23) into Equation (12.22) now yields

\mathrm{Var}[\hat{\mu}_x(t)] = \frac{\sigma_x^2(t)}{N} + \frac{2}{N^2}\sum_{k=1}^{N-1}(N - k)\left[R_{xx}(k, t) - \mu_x^2(t)\right]    (12.25)


    Equation (12.20) shows that Equation (12.25) reduces to Equation (12.15) when the records are independent, providing another check on the validity of Equation (12.25). The result of Equation (12.25) is an important extension of Equation (12.15) and should be used in place of Equation (12.15) for correlated samples.
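As a numerical sanity check on Equation (12.25), one can simulate an ensemble whose records are correlated across the record index i and compare the Monte Carlo variance of the mean value estimate with the closed form. The AR(1) construction below, with assumed values N = 10, rho = 0.6, mu_x = 0, and sigma_x = 1, realizes the correlation structure R_xx(k, t) = mu_x^2(t) + sigma_x^2(t) rho^k:

```python
import numpy as np

rng = np.random.default_rng(4)
N, rho = 10, 0.6
trials = 40_000

# Each row is one ensemble of N record values z_1..z_N at a fixed time t,
# with corr(z_i, z_j) = rho**|j - i| (unit variance, zero mean)
z = np.empty((trials, N))
z[:, 0] = rng.standard_normal(trials)
for i in range(1, N):
    z[:, i] = rho * z[:, i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal(trials)

mu_hat = z.mean(axis=1)          # ensemble-average estimate, Equation (12.11)

k = np.arange(1, N)
var_theory = 1.0 / N + (2.0 / N**2) * np.sum((N - k) * rho**k)   # Equation (12.25)
var_mc = float(mu_hat.var())
```

The Monte Carlo variance agrees with Equation (12.25) and exceeds the independent-sample value sigma_x^2/N, showing the variance penalty paid for correlated records.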

A special situation of complete dependence between all samples is worthy of mention. For this case,

x_i(t) = x_j(t) \quad \text{for all } i \text{ and } j    (12.26)

so that R_{xx}(k, t) = \sigma_x^2(t) + \mu_x^2(t) for all k. Equation (12.25) now becomes

\mathrm{Var}[\hat{\mu}_x(t)] = \frac{\sigma_x^2(t)}{N} + \frac{N^2 - N}{N^2}\,\sigma_x^2(t) = \sigma_x^2(t)    (12.27)
Thus, no reduction in variance occurs when the samples are completely dependent. For physical situations where a partial correlation may exist between the different samples, the following example may be helpful in giving quantitative results.


Example 12.2. Variance of Mean Value Estimate for Exponential Correlation Between Samples. An exponential form for the nonstationary cross-correlation function R_{xx}(k, t) will now be assumed so as to obtain some quantitative results to characterize different degrees of correlation. To be specific, assume that

R_{xx}(k, t) = \mu_x^2(t) + \sigma_x^2(t)\,e^{-kc}, \quad k \ge 0,\ c > 0

Substituting into Equation (12.25) gives

\mathrm{Var}[\hat{\mu}_x(t)] = \frac{\sigma_x^2(t)}{N} + \frac{2\sigma_x^2(t)}{N^2}\sum_{k=1}^{N-1}(N - k)e^{-kc}

To evaluate the sum above, let


f(c) = \sum_{k=1}^{N-1} e^{-kc} = \frac{e^{-c}\left[1 - e^{-(N-1)c}\right]}{1 - e^{-c}}

with derivative

f'(c) = -\sum_{k=1}^{N-1} k\,e^{-kc}

Then the required sum is

F(c) = \sum_{k=1}^{N-1}(N - k)e^{-kc} = N f(c) + f'(c)
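The geometric-series closed form for f(c) and the identity F(c) = N f(c) + f'(c) can be verified numerically; the values N = 25 and c = 0.3 below are illustrative assumptions:

```python
import numpy as np

N, c = 25, 0.3
k = np.arange(1, N)                       # k = 1, ..., N - 1

f_direct = float(np.sum(np.exp(-k * c)))              # f(c) by direct summation
f_closed = float(np.exp(-c) * (1.0 - np.exp(-(N - 1) * c)) / (1.0 - np.exp(-c)))

fp = float(-np.sum(k * np.exp(-k * c)))               # f'(c), term-by-term derivative
F_direct = float(np.sum((N - k) * np.exp(-k * c)))    # F(c) by direct summation
```

Both identities hold to machine precision, so the variance in Example 12.2 can be evaluated from f(c) and f'(c) alone.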

