STATISTICAL ANALYSIS OF MODAL PARAMETERS USING THE BOOTSTRAP
Thomas L. Paez*, Norman F. Hunter†
* Experimental Structural Dynamics Department, Sandia National Laboratories, Albuquerque, New Mexico
† Los Alamos National Laboratory, Los Alamos, New Mexico
ABSTRACT. Structural dynamic testing is concerned with the estimation of system properties, including frequency response functions and modal characteristics. These properties are derived from tests on the structure of interest, during which excitations and responses are measured and Fourier techniques are used to reduce the data. The inputs used in a test are frequently random, and they excite random responses in the structure of interest. When these random inputs and responses are analyzed, they yield estimates of system properties that are random variables and random process realizations. Of course, such estimates of system properties vary randomly from one test to another, but even when deterministic inputs are used to excite a structure, the estimated properties vary from test to test. When test excitations and responses are normally distributed, classical techniques permit us to statistically analyze inputs, responses, and some system parameters. However, when the input excitations are non-normal, the system is nonlinear, and/or the property of interest is anything but the simplest, the classical analyses break down. The bootstrap is a technique for the statistical analysis of data that are not necessarily normally distributed. It can be used to statistically analyze any measure of input excitation or response, or any system property, when data are available to make an estimate. It is designed to estimate the standard error, bias, and confidence intervals of parameter estimates. This paper shows how the bootstrap can be applied to the statistical analysis of modal parameters.
NOMENCLATURE

F, F̂     a cdf (cumulative distribution function) and its estimator
X        a data sample
X*       a bootstrap data sample
f        a pdf (probability density function)
se       estimate of standard error
x(t)     structural response
y        system response
θ̂        statistic, parameter estimate
θ̂*       bootstrap replicate of statistic

Additional quantities: system parameters; matrix, vector of future values in CVA; matrix, vector of past values in CVA; a transformation matrix; components of the SVD; dynamic system parameters; optimum memory of past; system input noise.

1. INTRODUCTION AND MOTIVATION
Analysis of structural test data follows the sequence shown in Figure 1. Spectral densities and transfer functions are derived using averages of input and response Fourier spectra. Modal frequencies, dampings, and mode shapes are identified by fitting linear oscillatory models to the observed spectral properties. Because of measurement noise, temporal and sample variation of system parameters, and nonlinearity of real systems, parameter estimates vary from one analysis to the next. It is desirable to establish the relative degree of variation of parameter estimates so that we can gauge the accuracy of system characterizations.
Statistical measures of accuracy like bias, standard error, and confidence intervals can be established for some spectral estimators like autospectral density, cross-spectral density, and frequency response function. Other system characteristics like system mode frequencies, dampings, and mode shapes are more difficult to analyze because the link between the underlying data and the modal measure of interest is quite complex. Some of the spectral estimators that can be analyzed using classical approaches are described in Bendat and Piersol (1986) and Wirsching, Paez, and Ortiz (1995). These statistics are based on the assumptions of stationary, Gaussian data, and on rather complex statistical derivations.
In this paper we explore the accuracy of modal parameter estimates from another viewpoint. We utilize a recently derived statistical technique called the bootstrap (Efron, 1979) to directly estimate the error bounds associated with the computation of modal frequencies. A recent paper (Hunter and Paez, 1995) shows how the bootstrap can be used to perform statistical analysis on spectral densities and transfer functions from test data. Assumptions of stationarity, linearity, or Gaussianity are not required. We demonstrate the technique on the modal parameters of data from a multidegree-of-freedom structure.
[Figure 1 flowchart: Measurement of Excitation and Response Time Series → Computation of Spectral Densities, Frequency Response Functions, Canonical Variate Models → System Identification: Resonant Frequency Estimates, Damping Values, Mode Shapes]

Figure 1. Analysis of structural test data.
2. THE BOOTSTRAP PROCEDURE
The objective of bootstrap analysis is to assess the accuracy of parameter estimates that are statistics of measured data by estimating standard error, confidence intervals, and bias. To perform a bootstrap analysis, we measure data from a random source and assume that the observed data represent the source. The source is assumed to generate realizations with an unknown probability distribution. Each observed data point is assigned a probability of occurrence of 1/n, where n is the total number of data points measured. A bootstrap sample of the data is created by selecting at random, with replacement, n elements from the measured data set. This process is illustrated in Figure 2. The procedure is readily implemented using a uniform random number generator which selects, with equal probability, integer values in the range 1 to n. Sampling is done with replacement, so each bootstrap sample may have several occurrences of some data values and other data values may be absent.
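The resampling step just described can be sketched in a few lines. The data values below are hypothetical, and the helper name bootstrap_sample is ours, not the paper's; the key point is that the n indices are drawn uniformly with replacement.

```python
import numpy as np

def bootstrap_sample(x, rng):
    """Draw n indices uniformly in [0, n) with replacement and
    return the corresponding bootstrap sample of x."""
    n = len(x)
    idx = rng.integers(0, n, size=n)  # sampling with replacement
    return x[idx]

rng = np.random.default_rng(0)
x = np.array([2.1, 3.4, 1.9, 2.8, 3.0])  # hypothetical measured data
x_star = bootstrap_sample(x, rng)        # one bootstrap sample, same size as x
```

Because sampling is with replacement, some elements of x may appear in x_star several times while others are absent, exactly as described above.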
F → X = (x₁, x₂, …, xₙ)
F̂ → X* = (x₁*, x₂*, …, xₙ*)

Figure 2. Obtaining a bootstrap sample. Creation of the bootstrap sample is accomplished through random selection, with replacement, among the elements of X.
In a bootstrap analysis, numerous bootstrap samples are created. The statistic of interest is computed from each bootstrap sample; the resulting quantities are known as bootstrap replicates of the statistic of interest. Standard error, confidence intervals, and bias of the statistic of interest are computed using standard techniques and formulas on the bootstrap replicates of the statistic of interest. For example, let B denote the number of bootstrap samples used in an analysis, and let θ̂*(b), b = 1, …, B, denote the bootstrap replicates of the statistic of interest. Then the standard error of the statistic of interest is estimated with the sample standard deviation of the replicates,

se_B = { (1/(B−1)) Σ_{b=1}^{B} [θ̂*(b) − θ̂*(·)]² }^{1/2},  where  θ̂*(·) = (1/B) Σ_{b=1}^{B} θ̂*(b).
In one type of bootstrap analysis, the two-sided (1 − α) × 100% confidence interval is obtained by sorting the bootstrap replicates of the statistic of interest, identifying (or interpolating) the (α/2) × 100% percentile value and the (1 − α/2) × 100% percentile value in the sorted list, and using the identified values as the limits of the confidence interval. Another, more advanced method for confidence interval estimation is discussed in Efron and Tibshirani (1993).
The number of bootstrap samples, B, used in an analysis, ranges from 25 to several thousand. The standard error of a parameter estimate may be computed using 25 to 50 bootstrap samples. Accurate computation of the confidence intervals of an estimated parameter requires analysis of a thousand or more bootstrap samples.
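The standard error and percentile confidence interval computations described above can be sketched as follows. The data source (a normal sample), the sample size, and the choice B = 2000 are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bootstrap_replicates(x, stat, B, rng):
    """Compute B bootstrap replicates of the statistic stat(x)."""
    n = len(x)
    return np.array([stat(x[rng.integers(0, n, size=n)]) for _ in range(B)])

rng = np.random.default_rng(1)
x = rng.normal(loc=5.0, scale=2.0, size=100)  # hypothetical measured data
reps = bootstrap_replicates(x, np.mean, B=2000, rng=rng)

# Standard error: sample standard deviation of the replicates.
se = reps.std(ddof=1)

# Two-sided (1 - alpha) x 100% percentile confidence interval.
alpha = 0.05
lo, hi = np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

For this source the theoretical standard error of the mean is 2/√100 = 0.2, so the bootstrap estimate se should land near that value, consistent with the text's remark that 25 to 50 samples suffice for standard error while confidence intervals need a thousand or more.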
Bootstrap sampling provides an optimal estimate of the probability density function which characterizes the data source, given that our knowledge of the source is limited to the measured data. Computation of a statistic from the bootstrap samples simulates computation of the same statistic on samples drawn from the real world distribution. Properties of the "real world" distribution are estimated in the "bootstrap world" as illustrated in Figure 3.
3. A BOOTSTRAP EXAMPLE
Consider a set of data drawn from a random source with the probability density function illustrated in Figure 4. One hundred data points are generated using a random source with this density.
We assume that the 100 points are characteristic of the source. The mean of the sampled points is 1.3440. Using the bootstrap procedure outlined above, we create 400 bootstrap samples of these 100 points. (Normally, each bootstrap sample contains as many points as are available in the original measured data set.) From each sample we compute the sample mean. The standard deviation of these sample means is 0.0660; this is the standard error of the mean estimate. The theoretical mean of this distribution is 1.3333. The 400 bootstrap replications of the original data also allow computation of confidence intervals on our estimated mean. The 99% confidence interval on the mean is 1.1471 to 1.5073. The true mean lies well within this interval. To further illustrate typical bootstrap results, Table 1 shows the results of seven different realizations of the distribution. In each case, the true mean lies well within the confidence intervals indicated. Note that, as expected, a smaller number of points leads to a broader confidence interval.
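The effect of sample size on confidence interval width noted above can be reproduced with a small simulation. The density of Figure 4 is not available here, so as a stand-in we use an exponential source whose mean, 4/3 ≈ 1.3333, matches the theoretical mean quoted in the text; this is a hypothetical substitute, not the paper's actual source.

```python
import numpy as np

def percentile_ci(x, stat, B, alpha, rng):
    """Two-sided (1 - alpha) x 100% percentile bootstrap confidence interval."""
    n = len(x)
    reps = np.array([stat(x[rng.integers(0, n, size=n)]) for _ in range(B)])
    return np.percentile(reps, [100 * alpha / 2, 100 * (1 - alpha / 2)])

rng = np.random.default_rng(2)

# Width of the 99% CI on the mean for increasing sample sizes;
# it should shrink roughly like 1/sqrt(n).
widths = {}
for n in (25, 100, 400):
    x = rng.exponential(scale=4 / 3, size=n)
    lo, hi = percentile_ci(x, np.mean, B=1000, alpha=0.01, rng=rng)
    widths[n] = hi - lo
```

Running this shows the interval narrowing as n grows, mirroring the trend reported in Table 1.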
Figure 3. The bootstrap approximation to the real world. The observed distribution is our best estimate of the true distribution. The observed sample is X, and the statistic of interest θ̂ = s(X) can be computed based on this. In the bootstrap world the observed data are used to generate as many bootstrap samples X* as we wish. Each bootstrap sample is used in the formula θ̂* = s(X*) to compute a bootstrap replicate of the statistic of interest. The bootstrap replicates are used to analyze the standard error, confidence intervals, and bias of the statistical estimator.
Figure 4. Probability density function of a random source.
Table 1. Mean and c