
Thus the expected number of coincidences for this sequence in the array position is approximately given by $r_i/m + \tfrac{1}{9}(r_i - r_i/m)$. Obviously, if $m$ is large compared to $r_i$, this expression reduces to the expected number of coincidences for an arbitrary sequence when placed in an arbitrary position. Thus, to detect this sequence would be improbable. As is to be expected, this shows that the method is more successful for cases in the indicated ranges of values of $S$ and $r_d$. The properties of symmetry and spread follow from similar calculations applied to this sequence in positions near the previously studied position.

The array $b_{ij}$, whose rows give the coincidences of the sequences (a), (b), (c), (d), (e), respectively, with the sequence of z's, is given by:

(a)  2  3  0  1  2  1  1  1  0  0
(b)  1  3  5  6  2  1  2  2
(c)  2  3  1  1  0  0  0  0
(d)  3  4  3  3  3  6
(e)  7  5  4  4  2  4

The normalized array $c_{ij}$ is obtained by dividing each row by the number of elements in the corresponding sequence. This gives, to two decimals,

(a)  0.25  0.38  0     0.13  0.25  0.13  0.13  0.13  0     0
(b)  0.10  0.30  0.50  0.60  0.20  0.10  0.20  0.20
(c)  0.20  0.30  0.10  0.10  0     0     0     0
(d)  0.25  0.33  0.25  0.25  0.25  0.50
(e)  0.58  0.42  0.33  0.33  0.17  0.33

The largest element of the matrix is 0.60. This corresponds to sequence (b) starting with the fourth position in the selection array. After striking out its row, the next largest element is 0.58, corresponding to sequence (e) starting in the first position. Again after striking out its row, the next largest value is 0.50, corresponding to sequence (d), which starts in the sixth position. When the sequences are arranged in this order it is observed that they would be adequate for generating the random sequence, and hence there is no need for adopting other sequences.
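Read as an algorithm, the selection rule just described is a greedy strike-out: repeatedly take the largest remaining entry of $c_{ij}$ and delete its row. A minimal Python sketch of that rule, applied to the normalized array above (function and variable names are ours, not the author's):

```python
def select_sequences(c):
    """Repeatedly pick the largest remaining element of the ragged
    array c, record its (row, position), then strike out that row."""
    remaining = set(range(len(c)))
    order = []
    while remaining:
        # Largest element among rows not yet struck out.
        i, j = max(((i, j) for i in remaining for j in range(len(c[i]))),
                   key=lambda ij: c[ij[0]][ij[1]])
        order.append((i, j + 1, c[i][j]))   # 1-based starting position
        remaining.remove(i)                  # strike out the row
    return order

# The normalized array c_ij from the example, to two decimals:
c = [
    [0.25, 0.38, 0.00, 0.13, 0.25, 0.13, 0.13, 0.13, 0.00, 0.00],  # (a)
    [0.10, 0.30, 0.50, 0.60, 0.20, 0.10, 0.20, 0.20],              # (b)
    [0.20, 0.30, 0.10, 0.10, 0.00, 0.00, 0.00, 0.00],              # (c)
    [0.25, 0.33, 0.25, 0.25, 0.25, 0.50],                          # (d)
    [0.58, 0.42, 0.33, 0.33, 0.17, 0.33],                          # (e)
]
for row, pos, val in select_sequences(c):
    print(f"sequence ({'abcde'[row]}) starts at position {pos} (c = {val:.2f})")
```

The first three selections reproduce the order given in the text: (b) at position four, (e) at position one, and (d) at position six.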

The foregoing example should be regarded as a simple illustration. Difficulties do in fact occur, and the procedure can stand improvement.

THOMAS L. SAATY
U. S. Arms Control and Disarmament Agency

Washington, D. C.

Estimates of Correlation Functions of Nonstationary Random Processes

The need for estimating the auto- or crosscorrelation function of nonstationary random processes frequently arises in modern communication and self-adaptive systems. In most situations, however, only one sample function of the process can be observed over a finite time interval. As is well known, such data provide a meaningful estimate of the desired correlation function in the ergodic situation. Nevertheless, the necessity of knowing this quantity demands an investigation as to what extent and in which cases it might be possible or feasible to estimate the time-varying correlation function from a single, finite-time sample record. The present correspondence is concerned with such a possibility.

Manuscript received April 16, 1964; revised May 26, 1964, and July 21, 1965. This work was supported in part by the National Science Foundation, Washington, D. C., under Contract G-18997.

In recent years, the estimation of a time-averaged correlation function for nonstationary random processes has been considered by several authors [1], [2]. However, we shall be concerned with an estimation of truly time-varying auto- or crosscorrelation functions. Obviously, we will only be able to construct some kind of best approximant, i.e., one which is optimum with respect to a certain error criterion, but without having control over the actual size of the error. For our estimate, the minimum mean square error between the true ensemble-average correlation function and a weighted time average of the single sample function will serve that purpose.

DEFINITIONS

The random processes to be considered must be of at least second order, i.e., they must possess finite second moments. In addition, it is required that the fourth product moment is also finite. For simplicity, the first moment, or mean value, of such a process will be assumed to be identically zero. It follows, then, that the mean $m_x(t)$ and variance $\sigma_x^2(t)$ of a real-valued random process $x(t)$ are

$$m_x(t) = E[x(t)] = 0, \qquad (1)$$

$$\sigma_x^2(t) = E[x^2(t)] < \infty, \qquad (2)$$

where $E$ indicates the mathematical expectation. The autocorrelation function will be defined as

$$R_{xx}(t, \tau) = E[x(t)\,x(t - \tau)], \qquad (3)$$

with $\tau$ as the delay variable. An additional condition remains:

$$\mu_{xx}(t_a, t_b, \tau) = E[x(t_a)\,x(t_a - \tau)\,x(t_b)\,x(t_b - \tau)] < \infty. \qquad (4)$$

Equation (4) defines the special form of the fourth product moment of $x(t)$ which will occur in the analysis.

For the purpose of estimation we restrict ourselves to $\tau \ge 0$, such that the autocorrelation function $R_{xx}(t, \tau)$ depends only on past values of the random process. (This is no restriction on generality.) Thus, it is reasonable to consider the single sample function $x(t)$ (using the same symbol for the stochastic process as for a particular sample function) as given over a finite time interval of length $T$ which contains only past values with respect to a particular observation point $t_0$. This time instant $t_0$ is the estimation point for the desired autocorrelation function. Only this case will be considered. However, the approach can easily be extended to include crosscorrelation functions between $x(t)$ and another process, say $y(t)$, if the crosscorrelation function is defined accordingly. Thus, the general term correlation function will be used freely.

Clearly, the only reasonable operations which can be performed on $x(t)$, $t \in [t_0 - T, t_0]$, result in some sort of time average within this given interval of observation.

The ergodic situation, although invalid for the general nonstationary case, suggests approximants of the desired correlation function at $t_0$ which would be of the form

$$\frac{1}{T - \tau} \int_{-(T - \tau)}^{0} x(t + t_0)\, x(t + t_0 - \tau)\, dt. \qquad (5)$$

The different, and more general, approach taken here [3] consists of multiplying the integrand of (5) by an appropriate weighting function $h(t, \tau, T)$, where $T$ is now a parameter, and forming the following estimate:

$$\hat{R}_{xx}(t_0, \tau, h) = \int_{-\infty}^{0} h(t, \tau, T)\, x(t + t_0)\, x(t + t_0 - \tau)\, dt. \qquad (6)$$

1. A. A. Kharkevich, Spectra and Analysis (translated from the Russian). New York: Consultants Bureau, 1960, pp. 165-185.

2. J. Kampé de Fériet and F. N. Frenkiel, "Correlations and spectra for nonstationary random functions," Mathematics of Computation, vol. 16, no. 77, pp. 1-21, January 1962.

3. H. Berndt, "Estimation of time-varying correlation functions," Ph.D. dissertation, Purdue University, Lafayette, Ind., June 1963, pp. 7-10.


One of the advantages of such an approximation is that none of the variables appears in the limits of integration. The main objective, however, is to weight the values of the single sample function in such a way over an interval $T - \tau$ that the desired ensemble average at $t_0$, over all sample functions, is best approximated by this weighted time average.
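To make the roles of the record and of $h$ concrete, here is a minimal discrete-time sketch of estimate (6), assuming a uniformly sampled record and a Riemann-sum approximation of the integral; the uniform weight $h = 1/(T - \tau)$ then recovers the ergodic-style average (5). The quadrature and all names are our assumptions:

```python
import numpy as np

def weighted_corr_estimate(x, dt, lag, h):
    """Discrete-time sketch of estimate (6).

    x   -- samples of one record x(t) on [t0 - T, t0], with x[-1] = x(t0)
    dt  -- sampling step
    lag -- delay tau in samples (tau = lag * dt >= 0)
    h   -- weights h(t, tau, T) at the sample times; h[k] applies k steps
           into the past from t0, and h is taken to vanish off the record
           (matching the -infinity lower limit in (6))
    """
    n = len(x)
    acc = 0.0
    for k in range(n - lag):          # t = -k * dt ranges over [-(T - tau), 0]
        acc += h[k] * x[n - 1 - k] * x[n - 1 - k - lag] * dt
    return acc

def uniform_estimate(x, dt, lag):
    """Uniform weighting h = 1/(T - tau): the ergodic-style average (5)."""
    T_minus_tau = (len(x) - lag) * dt
    h = np.full(len(x) - lag, 1.0 / T_minus_tau)
    return weighted_corr_estimate(x, dt, lag, h)
```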

OPTIMUM WEIGHTING FUNCTION

An estimate of the form (6) will be determined such that the expectation of the square of the difference between the approximant and the true autocorrelation function is minimized.

The mean of the weighted average (6) is given by

$$E[\hat{R}_{xx}(t_0, \tau, h)] = \int_{-\infty}^{0} h(t, \tau, T)\, R_{xx}(t + t_0, \tau)\, dt, \qquad (7)$$

which is, in general, a biased estimate. We do not require an unbiased estimate here, but simply a minimum mean square error.

The estimate (6) has a mean square error of

$$\epsilon^2(R, \hat{R}) = E[\{R_{xx}(t_0, \tau) - \hat{R}_{xx}(t_0, \tau, h)\}^2]$$

$$= \int_{-\infty}^{0}\!\!\int_{-\infty}^{0} h(t_a, \tau, T)\, h(t_b, \tau, T)\, \mu_{xx}(t_a + t_0, t_b + t_0, \tau)\, dt_b\, dt_a$$

$$-\; 2 R_{xx}(t_0, \tau) \int_{-\infty}^{0} h(t, \tau, T)\, R_{xx}(t + t_0, \tau)\, dt \;+\; R_{xx}^2(t_0, \tau). \qquad (8)$$

Taking the first variation leads to

$$\delta\epsilon^2(R, \hat{R}) = 2 \int_{-\infty}^{0}\!\!\int_{-\infty}^{0} \delta h(t_a, \tau, T)\, h(t_b, \tau, T)\, \mu_{xx}(t_a + t_0, t_b + t_0, \tau)\, dt_b\, dt_a$$

$$-\; 2 R_{xx}(t_0, \tau) \int_{-\infty}^{0} \delta h(t_a, \tau, T)\, R_{xx}(t_a + t_0, \tau)\, dt_a. \qquad (9)$$

In deriving (9), use was made of the symmetry of the fourth mixed moment with respect to $t_a$ and $t_b$. By setting this first variation to zero for arbitrary variations $\delta h$, the condition for the minimum mean square error is obtained.

The weighting function $h(t_a, \tau, T)$ satisfying

$$\delta\epsilon^2(R, \hat{R}) = 0 \qquad (10)$$

will, thus, be the optimum weighting function, and the following linear integral equation must be solved:

$$\int_{-\infty}^{0} h(t_b, \tau, T)\, \mu_{xx}(t_a + t_0, t_b + t_0, \tau)\, dt_b - R_{xx}(t_0, \tau)\, R_{xx}(t_a + t_0, \tau) = 0 \quad \text{for all } t_a \in (-\infty, 0). \qquad (11)$$

This relationship for an optimum weighting function, which we might designate by $h_0(t_a, \tau, T)$, is indeed very similar to the Wiener-Hopf equation [4] in prediction theory.

4. W. B. Davenport and W. L. Root, An Introduction to the Theory of Random Signals and Noise. New York: McGraw-Hill, 1958, pp. 219-249.

Unfortunately, the various elegant methods for a direct solution of that linear integral equation cannot be applied here in the nonstationary situation. A solution in closed form is, in general, impossible. Of course, a series solution, in terms of the eigenvalues and eigenfunctions of the kernel, can be obtained for each case, but this procedure is not very well suited for a numerical analysis [5]. It was found advantageous to select immediately a convenient approximate method of solution which minimizes the numerical effort and gives the optimum weighting function in a physically realizable form.

In order to proceed to such a numerical solution of (11), it is not only necessary to choose autocorrelation functions of interest, but the corresponding fourth product moment must be known, too. Only for a Gaussian stochastic process is knowledge of the autocorrelation function sufficient. There, the fourth mixed moment is given by

$$\mu_{xx}(t_a, t_b, \tau) = R_{xx}(t_a, \tau)\, R_{xx}(t_b, \tau) + R_{xx}(t_a, t_a - t_b)\, R_{xx}(t_a - \tau, t_a - t_b) + R_{xx}(t_a, t_a - t_b + \tau)\, R_{xx}(t_a - \tau, t_a - t_b - \tau). \qquad (12)$$
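Relation (12) is what makes a numerical treatment of (11) feasible, since it expresses the fourth moment through the autocorrelation alone. A direct transcription, where R is any callable implementing $R_{xx}(t, s) = E[x(t)\,x(t - s)]$ (the function name is ours):

```python
def mu_xx(R, ta, tb, tau):
    """Fourth product moment (4) of a zero-mean Gaussian process,
    written via its autocorrelation as in (12)."""
    return (R(ta, tau) * R(tb, tau)
            + R(ta, ta - tb) * R(ta - tau, ta - tb)
            + R(ta, ta - tb + tau) * R(ta - tau, ta - tb - tau))
```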

No such simple relationship holds in more general cases. However, it is an important property of autocorrelation functions that there always exists a Gaussian process with an identical autocorrelation function [6]. This property makes it possible to obtain an $h_0(t_a, \tau, T)$ without assuming more than $R_{xx}(t, \tau)$ by substituting the equivalent Gaussian process. From a comparison with information-theoretical concepts, it is felt that this procedure may lead, in general, to an upper bound on the mean square error. But neither a sufficient proof nor a counterexample to this conjecture has been found. The "Gaussian assumption" has been used on the basis of these arguments.

The details of the numerical investigation will not be discussed here [7]. It should only be mentioned that it was found most useful to employ the method of undetermined coefficients, or collocation [8], in solving (11) numerically on an IBM 7090 computer. Using this approach, a large number of optimum weighting functions were evaluated to estimate time-varying correlation functions which show linear, exponential, and periodic time dependence with predominantly exponential decay. Special attention was given to autocorrelation functions of locally stationary random processes in the terminology of Silverman [9].
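The correspondence does not reproduce the collocation setup, so the following is only a plausible sketch of one such scheme: truncate the integral in (11) to the observation interval $[-T, 0]$ (off which $h$ may be taken to vanish), approximate it by a rectangle rule on a uniform grid, enforce the equation at the same grid points, and solve the resulting linear system. It relies on mu_xx from the sketch above; the grid size, quadrature rule, and all names are our assumptions:

```python
import numpy as np

def collocation_weights(R, t0, tau, T, n=50):
    """Collocation sketch for (11): the integral over t_b is truncated
    to [-T, 0] and replaced by a rectangle-rule sum on a uniform grid;
    enforcing (11) at the same n grid points t_a gives n linear
    equations for the n unknowns h0(t_a, tau, T)."""
    grid = np.linspace(-T, 0.0, n)          # collocation points t_a
    dt = T / (n - 1)                        # quadrature step
    # Kernel matrix built from the Gaussian fourth moment (12).
    A = np.array([[mu_xx(R, ta + t0, tb + t0, tau) * dt for tb in grid]
                  for ta in grid])
    # Right-hand side R_xx(t0, tau) * R_xx(t_a + t0, tau) from (11).
    b = np.array([R(t0, tau) * R(ta + t0, tau) for ta in grid])
    return grid, np.linalg.solve(A, b)      # h0 sampled on the grid
```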

In general, similarities in the appearance of optimum weighting functions for different correlation functions have not yet been observed. But in the locally stationary case, $h_0(t_a)$ has nearly always been found to be approximately constant over almost 80 percent of the observation interval, showing strong increases towards the endpoints and a small oscillatory behavior for $t_a$ close to $t_0$, $t_a < t_0$. The dependence on $\tau$ does not seem to change this picture significantly. A representative example [10] is given in Fig. 1. Optimum weighting functions obtained for time-independent correlation functions, i.e., stationary random processes, show essentially the same behavior, too. These results indicate that a finite time integrator, corresponding to uniform weight, might also lead to satisfactory estimates in certain cases, if the observation interval can be chosen in an optimum way.
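As a usage illustration only, the locally stationary autocorrelation of Fig. 1 can be fed to the collocation sketch above; the scale constant A_scale and all parameter values here are arbitrary choices, not values from the paper:

```python
import numpy as np

# Autocorrelation of Fig. 1: R_xx(t, s) = [1 - (t - s/2)/A] e^{-|s|},
# with A an assumed scale constant.
A_scale = 10.0
R = lambda t, s: (1.0 - (t - s / 2.0) / A_scale) * np.exp(-abs(s))

grid, h0 = collocation_weights(R, t0=0.0, tau=0.0, T=1.0, n=40)
# h0 should come out roughly flat over most of [-T, 0],
# rising near the endpoints, as described in the text.
```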

5. F. B. Hildebrand, Methods of Applied Mathematics. New York: McGraw-Hill, 1952, pp. 442-444.

6. M. Loève, Probability Theory, 3rd ed. Princeton, N. J.: Van Nostrand, 1963, pp. 466-467.

7. For a more detailed discussion and examples see H. Berndt, op. cit., pp. 13-18 and pp. 49-62; or G. R. Cooper and H. Berndt, "Estimation of time-varying correlation functions," School of Electrical Engineering, Purdue University, Lafayette, Ind., Tech. Rept. TR-EE63-1, March 1963.

8. F. B. Hildebrand, op. cit., pp. 448-459.
9. R. A. Silverman, "Locally stationary random processes," IRE Trans. on Information Theory, vol. IT-3, pp. 182-187, September 1957.
10. For additional examples see Cooper and Berndt [7].


Fig. 1. Optimum weighting function for the autocorrelation function $R_{xx}(t_a, \tau) = [1 - (t_a - \tau/2)/A]\, e^{-|\tau|}$, evaluated at $t_0 = 0$ for $\tau = 0$. (Horizontal axis: time $t_a$.)

But further investigations of optimum weighting functions for other classes of processes will certainly be necessary.

CONCLUSIONS

We have shown that an estimate of time-varying autocorrelation functions can be obtained by forming a weighted time average. If the weighting function is chosen appropriately, such an estimate possesses a mean square error that is smaller than that of the customary mean autocorrelation function. Although the optimum weighting function which minimizes the mean square error depends on the autocorrelation function to be estimated, the results indicate that there is at least one fairly wide class of time-varying autocorrelation functions which possess essentially similar optimum weighting functions. Consequently, further systematic evaluations should be encouraged. Such a study for certain classes of correlation functions may well lead to definite patterns in the appearance of weighting functions which would ease the measurement problem.

H. BERNDT [11]
G. R. COOPER
School of Elec. Engrg.
Purdue University
Lafayette, Ind.

11. Presently at Siemens and Halske AG, Munich, Germany.

The Distribution of Intervals Between Zero Crossings of Sine Wave Plus Random Noise

In a recent paper [1], formulas were derived for the probability density of the distribution of intervals between zero crossings of a waveform consisting of a sine wave with added random noise. Opportunity has now been found to simulate this function on a digital computer, the probability density being computed by taking about 20 000 sampled intervals in each of three cases. The noise in each case was arranged to have a flat spectrum from 0 to 1, the sine wave having an amplitude of three times the rms value of the noise, its frequency (p) being taken as 0.375, 0.625, and 0.875, respectively. The results are compared with the theoretical curves in Figs. 1, 2, and 3, showing excellent agreement.
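The correspondence gives no computational details, so the following Python sketch merely illustrates one way such a simulation could be run: flat-spectrum, unit-rms noise plus a sine of amplitude a = 3 in rms units, with zero crossings located by sign changes and refined by linear interpolation. The noise generator, sample rates, and all names are our assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def flat_noise(n, dt):
    """White Gaussian noise low-passed to a flat spectrum on (0, 1),
    normalized to unit rms (one of several acceptable generators)."""
    spec = np.fft.rfft(rng.standard_normal(n))
    spec[np.fft.rfftfreq(n, d=dt) > 1.0] = 0.0   # keep the flat band 0..1
    noise = np.fft.irfft(spec, n)
    return noise / noise.std()

def crossing_intervals(a=3.0, p=0.375, duration=20000.0, dt=0.05):
    """Intervals between successive zero crossings of sine plus noise."""
    t = np.arange(0.0, duration, dt)
    w = a * np.sin(2.0 * np.pi * p * t) + flat_noise(len(t), dt)
    idx = np.nonzero(np.sign(w[:-1]) != np.sign(w[1:]))[0]
    # Linear interpolation refines each crossing time.
    tc = t[idx] - w[idx] * dt / (w[idx + 1] - w[idx])
    return np.diff(tc)

intervals = crossing_intervals()                   # on the order of 20 000
density, edges = np.histogram(intervals, bins=100, density=True)
```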

Fig. 1. Distribution of zero crossing intervals (a = 3, p = 0.375).


Manuscript received June 30, 1965.
1. S. M. Cobb, "The distribution of intervals between zero crossings of sine wave plus random noise and allied topics," IEEE Trans. on Information Theory, vol. IT-11, pp. 220-233, April 1965.

Fig. 2. Distribution of zero crossing intervals (a = 3, p = 0.625).