
SIAM REVIEW

Vol. 7, No. 1, January, 1965

NUMERICAL SIMULATION OF STATIONARY AND NON-STATIONARY GAUSSIAN RANDOM PROCESSES*

JOEL N. FRANKLIN†

1. Introduction. If $r$ is a random variable, ranging over some measure space, the collection of functions of time $x(t) = x_r(t)$ is called a random process. The process is called Gaussian if, for every finite collection of times $t_1 < \cdots < t_n$, the random variables $x(t_1), \ldots, x(t_n)$ have a multivariate Gaussian distribution. The process is called stationary if, for any fixed increment $\Delta t$, the random variables $x(t_i + \Delta t)$ have the same joint distribution as the random variables $x(t_i)$.

A linear transformation of a Gaussian random process is another Gaussian random process. The theory of linear transformations of Gaussian random processes is very well developed; see, for example, Doob [1]. But there is not an equally good theory for nonlinear transformations. For example, if $x(t)$ is a Gaussian random process with mean zero and given autocorrelation, as defined in §2, there is no known formula for the distribution or for the autocorrelation of the random process $y(t)$ defined by an equation $N[y(t)] = x(t)$, where $N$ is a given nonlinear differential operator. The best available way to study the process $y(t)$ is to compute samples of it. First we need a method for computing numerical samples of the Gaussian process $x(t)$ with given autocorrelation. If each computed sample of $x(t)$ is bounded and smooth, as it should be if its given autocorrelation is bounded, then samples of $y(t)$ may be computed from the differential equation $N[y(t)] = x(t)$ by any standard numerical technique, such as Adams' method or the Runge-Kutta method; see Hildebrand [2].

The purpose of this paper is to present numerical methods for the computation of samples of a Gaussian random process $x(t)$ with prescribed autocorrelation or power spectral density.

2. Gaussian processes and linear filters. In this section we shall summarize the results which we shall need from the theory of linear transformations of Gaussian processes. Throughout the paper we shall assume that Ex(t) = 0, where E denotes the expected value operator.

The autocorrelation function is defined as

(2.1) $R(t_1, t_2) = E(x(t_1)\,x(t_2))$.

If x(t) is a stationary process, we write

(2.2) $R(\tau) = E(x(t)\,x(t - \tau))$.

For stationary processes, we also define the power spectral density:

(2.3) $S(\omega) = \int_{-\infty}^{\infty} R(\tau)\, e^{-i\omega\tau}\, d\tau$.

* Received by the editors October 7, 1963, and in revised form July 13, 1964.

† Computing Center, California Institute of Technology, Pasadena, California.


In texts such as Davenport and Root [3] it is shown that $S(\omega)\,d\omega$ may be thought of as the average contribution of the frequency interval $d\omega$ to the power of a sample function $x(t)$. Since, in general, the Fourier transform (2.3) can be inverted, a knowledge of the power spectral density $S(\omega)$ is equivalent to a knowledge of the autocorrelation $R(\tau)$. As is stressed by Blackman and Tukey [4], it is usually more natural to work with $S(\omega)$.

A linear processor with time-invariant elements may be described mathematically as a linear transformation

(2.4) $x(t) = \int_{-\infty}^{t} g(t - s)\, w(s)\, ds$.

This transformation produces an output $x(t)$ depending on the input process $w(s)$ for times $s \le t$. Setting $w$ equal to the Dirac delta function, we see that $g(t)$ is the response of the filter to a delta function input. Thus, $g(t) = 0$ for $t < 0$. As is shown in [3], the power spectral densities $S_w(\omega)$ and $S_x(\omega)$ of the input and output are related by the equation

(2.5) $S_x(\omega) = |G(\omega)|^2\, S_w(\omega)$,

where $e^{i\omega t} G(\omega)$ is the response of the filter to the input $e^{i\omega t}$; equivalently, $G(\omega)$ is the Fourier transform of $g(t)$.

Formula (2.5) suggests a method for creating a signal $x$ with prescribed power spectral density $S(\omega) = S_x(\omega)$. We pick a transfer function $G(\omega)$ satisfying $|G(\omega)|^2 = S(\omega)$, and we choose the random signal $w$ to have spectral density $S_w(\omega) \equiv 1$.

The condition $S_w(\omega) \equiv 1$ defines an idealized random process $w(t)$ known as white noise. From (2.3) we see that $R_w(\tau) = \delta(\tau)$. In particular, the variance $Ew^2(t)$ is $\delta(0) = +\infty$ for all time. Although white noise is convenient for theoretical discussion, it is not suitable for direct input in digital computation.

If we assume that $S(\omega)$ may be represented with sufficient accuracy by a rational function, i.e., a quotient of two polynomials in $\omega$, then it is possible to find a transfer function which satisfies the condition $|G(\omega)|^2 = S(\omega)$. As is shown in [3], the conditions

(2.6) $0 \le S(\omega) < \infty, \quad S(\omega) = S(-\omega), \quad S(\omega) \to 0 \ \text{as}\ \omega \to \infty$

insure that the rational function $S(\omega)$ has a representation

(2.7) $S(\omega) = \left| \dfrac{P(i\omega)}{Q(i\omega)} \right|^2, \quad \omega \ \text{real},$

where $P(z)$ and $Q(z)$ are polynomials in $z$ with real coefficients, where the degree of $P$ is less than that of $Q$, and where the zeros of $Q(z)$ lie in the half-plane $\operatorname{Re} z < 0$. Thus we may choose

$G(\omega) = \dfrac{P(i\omega)}{Q(i\omega)},$


or, equivalently, if $D = d/dt$ is the differential operator,

(2.8) $x(t) = \dfrac{P(D)}{Q(D)}\, w(t)$.

The last equation is to be understood in the sense that we first solve the differential equation

(2.9) $Q(D)\,\phi(t) = w(t), \quad -\infty < t < \infty,$

for the steady-state solution $\phi(t)$, and then obtain the required signal

(2.10) $x(t) = P(D)\,\phi(t)$

as a linear combination of derivatives of $\phi(t)$ of order lower than the degree of $Q$. Further analysis is required for numerical computation because of the numerically unrealizable white noise $w(t)$ in the differential equation (2.9). We shall present in §6 a numerical method for solving the stochastic differential equation (2.9). The intervening sections summarize the necessary techniques and discuss the more general numerical simulation of nonstationary processes.

3. The Crout factorization. Let $P = (p_{ij})$, $i, j = 1, \ldots, n$, be a positive definite, real, symmetric matrix. It is shown in [2, Appendix A], or in Gantmacher [5], that $P$ has a factorization

(3.1) P = TT*,

where $T$ is lower-triangular, with positive elements on the main diagonal. If the existence of the factorization (3.1) is given, it is easy to see how to compute the components $t_{ij}$ of $T$ in the order

$ij = 11, 21, \ldots, n1;\ 22, 32, \ldots, n2;\ \ldots;\ nn.$

Since $t_{ij} = 0$ for $j > i$, (3.1) states that

(3.2) $p_{ij} = \sum_{k=1}^{j} t_{ik}\, t_{jk} \quad (j \le i).$

First we compute

(3.3) $t_{11} = p_{11}^{1/2}$.

The other elements in the first column are

(3.4) $t_{i1} = t_{11}^{-1}\, p_{i1}, \quad i = 2, \ldots, n$.

If the preceding columns k < j have been computed, we compute the diagonal element

(3.5) $t_{jj} = \left( p_{jj} - \sum_{k=1}^{j-1} t_{jk}^2 \right)^{1/2}.$

If j < n, the elements below the diagonal are computed from the formula

(3.6) $t_{ij} = t_{jj}^{-1} \left( p_{ij} - \sum_{k=1}^{j-1} t_{ik}\, t_{jk} \right), \quad i = j+1, \ldots, n.$
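The recursion (3.3)-(3.6) is straightforward to program. The following is a minimal Python sketch (our illustration, not part of the original paper); the function name crout_factor and the use of numpy for storage are our own choices.

```python
import numpy as np

def crout_factor(P):
    """Lower-triangular T with positive diagonal such that P = T T*.

    Implements the column-by-column recursion (3.3)-(3.6); P must be
    a real, symmetric, positive definite matrix.
    """
    P = np.asarray(P, dtype=float)
    n = P.shape[0]
    T = np.zeros((n, n))
    for j in range(n):
        # Diagonal element (3.5); for j = 0 this reduces to (3.3).
        T[j, j] = np.sqrt(P[j, j] - np.dot(T[j, :j], T[j, :j]))
        # Elements below the diagonal, (3.6); for j = 0 this is (3.4).
        for i in range(j + 1, n):
            T[i, j] = (P[i, j] - np.dot(T[i, :j], T[j, :j])) / T[j, j]
    return T
```

For the matrix $M$ of §7, for example, crout_factor([[1/20, 0], [0, 1/4]]) returns the diagonal factor of (7.9).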

4. Completely equidistributed sequences. Suppose that we wish to simulate a random sequence $x_1, x_2, \ldots$ of independent samples from the uniform distribution on the interval $0 \le x < 1$. Various methods of simulating such a sequence are discussed or referenced in [6]. For the purposes of this paper it is recommended that some transcendental number $\theta > 1$ be chosen, and that the fractional parts

(4.1) $x_n = \{\theta^n\}, \quad n = 1, 2, \ldots$

be computed as accurately as possible. By fractional part we mean the part beyond the decimal, e.g., $\{3.14\} = .14$.

We have chosen (in binary notation)

(4.2) $\theta = 11.001001000011111101101010100010001.$

This number represents $\theta = \pi$ to 35 bits. The powers $\theta^n$ ($n = 1, 2, \ldots, 20000$) were computed with no round-off error, i.e., $\theta^n$ was computed to $35n$ bits. Since this was a long computation, which would be expensive to repeat for individual applications, the numbers $x_n = \{\theta^n\}$, rounded to 35 bits, $n = 1, \ldots, 20000$, have been stored permanently on magnetic tape. By an abbreviated method of programming suggested by Mr. Robert Schneider, who wrote our present program, we plan later to compute and store 350,000 of these numbers. When several different sequences $x_n^{(1)}, \ldots, x_n^{(s)}$ ($n = 0, 1, 2, \ldots$) are required for a single application, we use the interlaced sequences

$x_n^{(1)} = x_{ns+1}, \quad x_n^{(2)} = x_{ns+2}, \quad \ldots, \quad x_n^{(s)} = x_{ns+s}, \qquad n = 0, 1, \ldots$
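The exact-arithmetic computation of the $x_n$ is easy to reproduce today with arbitrary-precision integers. The following Python sketch (our reconstruction of the idea, not the author's tape program) holds $\theta^n$ exactly as the big integer $\Theta^n$ over the denominator $2^{33n}$:

```python
from math import pi

FRAC_BITS = 33                     # theta = pi to 35 bits: 2 integer bits + 33 fractional
THETA = round(pi * 2**FRAC_BITS)   # integer numerator, so theta = THETA / 2**FRAC_BITS

def equidistributed(count, out_bits=35):
    """Yield x_n = {theta**n}, n = 1, ..., count, rounded to out_bits bits."""
    power, shift = 1, 0
    for _ in range(count):
        power *= THETA             # exact big-integer power THETA**n
        shift += FRAC_BITS         # theta**n = power / 2**shift, with no round-off
        frac = power & ((1 << shift) - 1)   # fractional part {theta**n}
        yield (((frac << out_bits) + (1 << (shift - 1))) >> shift) / 2.0**out_bits

xs = list(equidistributed(20000))
streams = [xs[i::3] for i in range(3)]   # three interlaced sequences, s = 3
```

The cost grows with $n$, since $\Theta^n$ has $35n$ bits; this is the time-consuming aspect discussed next.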

The reasons for preferring this time-consuming method of generating the sequence $x_n$ are discussed rigorously in [6]. As is discussed in [6], almost every sequence of the form (4.1) has the property of complete equidistribution, which is defined as follows. A sequence $x_n$ is equidistributed if, for any interval $a \le x < b$, where $0 \le a < b \le 1$, the number of $x_1, \ldots, x_N$ lying in the interval, divided by $N$, approaches $(b - a)/1 = b - a$ as $N \to \infty$, i.e.,

(4.3) $\dfrac{1}{N} \sum_{\substack{a \le x_n < b \\ 1 \le n \le N}} 1 \;\to\; b - a \quad \text{as } N \to \infty.$

Although we should expect a truly random sequence to be equidistributed, not every equidistributed sequence looks like a random sequence. For example, as follows from a general result of H. Weyl [7], the numbers $\{n\sqrt{2}/1000\}$ are equidistributed; but they do not look random because for more than 99% of all $n$ (taken sequentially) we have $x_{n+1} > x_n$, whereas $x_{n+1} > x_n$ should occur 50% of the time. Therefore, we require a stronger condition than simple equidistribution (4.3).

Let $k$ be any positive integer. We shall say that a sequence $x_n$ is equidistributed by $k$'s if for every set of $k$ intervals $(a_i, b_i)$, $i = 1, \ldots, k$, where $0 \le a_i < b_i \le 1$, we have

(4.4) $\dfrac{1}{N} \sum_{\substack{a_i \le x_{n+i} < b_i\ (i = 1, \ldots, k) \\ n = 1, \ldots, N}} 1 \;\to\; \prod_{i=1}^{k} (b_i - a_i) \quad \text{as } N \to \infty.$


In other words, considering the sequence of successive groups of $k$ numbers $x_{n+1}, x_{n+2}, \ldots, x_{n+k}$ ($n = 1, 2, \ldots$) to be successive points $P_n$ in the space of $k$ dimensions, we require that the points $P_n$ be equidistributed in the $k$-dimensional unit hypercube. This requirement has many consequences. For example, if a sequence is equidistributed by twos ($k = 2$), then it is simply equidistributed (4.3) and $x_{n+1} > x_n$ 50% of the time. A sequence equidistributed by $k$'s for all $k$ is called completely equidistributed.

Let $x_n = \{\theta^n\}$. It is shown in [6] that, for almost all $\theta > 1$, for every pair of positive integers $k$ and $r$ the successive groups of $r$ numbers $x_{nk}, x_{nk+1}, \ldots, x_{nk+r-1}$ ($n = 1, 2, \ldots$) are equidistributed in the unit hypercube of $r$ dimensions. In particular, since we may set $k = r$ in the preceding sentence, almost all sequences $x_n = \{\theta^n\}$ with $\theta > 1$ are completely equidistributed. It is apparent that these sequences $x_n$ have all of the properties commonly attributed to random sequences. In particular, for the purpose of spectral synthesis it is essential to know that completely equidistributed sequences $x_n$ are white sequences, i.e., sequences for which

(4.5) $\lim_{N \to \infty} \dfrac{1}{N} \sum_{n=1}^{N} \left( x_n - \tfrac12 \right)\left( x_{n+T} - \tfrac12 \right) = \begin{cases} \tfrac{1}{12}, & T = 0, \\ 0, & T > 0. \end{cases}$

Given a sequence $x_n$ of independent samples from the uniform distribution on $0 \le x < 1$, we can construct a sequence $w_n$ of independent samples from the Gaussian distribution with mean 0 and variance 1. We use the method of Box and Muller. The sequence $w_1, w_2, w_3, \ldots$ is obtained from the formulas

(4.6) $w_{2n-1} = (-2 \ln x_{2n-1})^{1/2} \cos 2\pi x_{2n}, \quad n = 1, 2, \ldots,$

$\qquad\ w_{2n} = (-2 \ln x_{2n-1})^{1/2} \sin 2\pi x_{2n}, \quad n = 1, 2, \ldots$

In their paper [9] Box and Muller prove that the independence of the $x_n$ implies the independence of the $w_n$.
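A direct transcription of (4.6) in Python (a sketch; any stream of uniforms on $(0, 1)$, such as the equidistributed sequence above, can be supplied):

```python
import math

def box_muller(uniforms):
    """Turn uniform samples x_1, x_2, ... into unit normals w_1, w_2, ... via (4.6).

    Consumes the x_n in pairs; x_{2n-1} must be nonzero, since its
    logarithm is taken.
    """
    it = iter(uniforms)
    for x_odd, x_even in zip(it, it):      # consecutive pairs (x_{2n-1}, x_{2n})
        rho = math.sqrt(-2.0 * math.log(x_odd))
        yield rho * math.cos(2.0 * math.pi * x_even)   # w_{2n-1}
        yield rho * math.sin(2.0 * math.pi * x_even)   # w_{2n}

ws = list(box_muller(xs))   # xs from the sketch in the previous paragraphs
```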

Suppose that we wish to simulate a sequence of $n$-dimensional vectors $z^{(0)}, z^{(1)}, z^{(2)}, \ldots$, which are independent samples from the $n$-dimensional multivariate Gaussian distribution with positive definite moment matrix $M$. We can do so by means of a one-dimensional sequence $w_1, w_2, \ldots$ of the type discussed in the preceding paragraph. We first define vectors

(4.7) $w^{(0)} = \begin{bmatrix} w_1 \\ \vdots \\ w_n \end{bmatrix}, \quad w^{(1)} = \begin{bmatrix} w_{n+1} \\ \vdots \\ w_{2n} \end{bmatrix}, \quad w^{(2)} = \begin{bmatrix} w_{2n+1} \\ \vdots \\ w_{3n} \end{bmatrix}, \quad \ldots$

These vectors simulate samples from the $n$-dimensional Gaussian distribution with the identity moment matrix $I$. Let $M$ be factored by the Crout factorization $M = TT^*$ discussed in §3. Then we define

(4.8) $z^{(i)} = T w^{(i)}, \quad i = 0, 1, 2, \ldots$

These vectors are the required vectors because $z = Tw$ comes from the multivariate Gaussian distribution with mean zero and moment matrix

(4.9) $[E(z_\alpha z_\beta)] = E(zz^*) = E(Tww^*T^*) = TT^* = M.$
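Equations (4.7)-(4.9) then take only a few lines; a sketch, assuming the crout_factor and box_muller sketches above (np.linalg.cholesky could replace the former):

```python
import itertools
import numpy as np

def gaussian_vectors(M, normals):
    """Yield independent samples z^(i) = T w^(i) with moment matrix M.

    `normals` is a stream of unit normals, e.g. box_muller(...); each
    sample consumes n of them, forming the vectors w^(i) of (4.7).
    """
    M = np.asarray(M, dtype=float)
    T = crout_factor(M)            # Crout factorization M = T T*, Section 3
    n = len(M)
    normals = iter(normals)
    while True:
        w = np.array(list(itertools.islice(normals, n)))
        if len(w) < n:
            return                 # supply of normals exhausted
        yield T @ w                # z = T w, equation (4.8)
```

The verification (4.9) can be checked empirically: the sample covariance of many such $z$ converges to $M$.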

5. Simulation of nonstationary Gaussian random processes. Let $x(t)$ be a random process which is Gaussian with mean zero but which is not necessarily stationary. For all sampling times $t_1 < t_2 < \cdots < t_n$, the samples $x(t_\nu)$, $\nu = 1, \ldots, n$, come from the $n$-dimensional Gaussian distribution. The moment matrix of this distribution is

(5.1) $M = [E(x(t_\alpha)\,x(t_\beta))] = [R(t_\alpha, t_\beta)],$

where $R(s, t)$ is the nonstationary autocorrelation. We suppose that the function $R(s, t)$ is given. Let a finite collection of times $t_1 < t_2 < \cdots < t_n$ be given. We suppose that the matrix $[R(t_\alpha, t_\beta)]$ is symmetric and positive definite, i.e., we suppose that

(5.2) $\sum_{\alpha=1}^{n} \sum_{\beta=1}^{n} R(t_\alpha, t_\beta)\, \lambda_\alpha \lambda_\beta = E\left( \sum_{\alpha=1}^{n} \lambda_\alpha\, x(t_\alpha) \right)^2 > 0$

unless all $\lambda_\alpha = 0$. The moment matrix (5.1) is now factored by the Crout factorization $M = TT^*$. Numbers $w_1, w_2, \ldots, w_n$ are generated simulating independent samples from the Gaussian distribution with mean 0 and variance 1; the generation of these numbers was discussed in §4. Then the required samples $x(t_\alpha)$ are given by the formula $x = Tw$ in an obvious vector notation. Explicitly,

(5.3) $\begin{aligned} x(t_1) &= t_{11} w_1, \\ x(t_2) &= t_{21} w_1 + t_{22} w_2, \\ &\ \ \vdots \\ x(t_n) &= t_{n1} w_1 + t_{n2} w_2 + \cdots + t_{nn} w_n. \end{aligned}$
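The whole procedure of this section fits in a few lines of Python; a sketch, in which numpy's normal generator stands in for the $w_n$ of §4, and the Wiener-process autocorrelation $R(s, t) = \min(s, t)$ serves as a hypothetical example:

```python
import numpy as np

def simulate_nonstationary(R, times, rng=None):
    """One sample path x(t_1), ..., x(t_n) with E x(s)x(t) = R(s, t).

    Builds the moment matrix (5.1), factors it as M = T T*, and
    returns x = T w as in (5.3).
    """
    rng = rng or np.random.default_rng()
    M = np.array([[R(s, t) for t in times] for s in times])   # (5.1)
    T = np.linalg.cholesky(M)              # Crout factorization, Section 3
    w = rng.standard_normal(len(times))    # simulated unit normals
    return T @ w

# Hypothetical example: Wiener-process autocorrelation R(s, t) = min(s, t).
path = simulate_nonstationary(min, np.arange(0.1, 5.0, 0.1))
```

If $M$ is only positive semidefinite, the factorization fails; that is precisely the redundancy discussed next.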

Some discussion of the assumption (5.2) is in order. Suppose this assumption fails. Then for some $\lambda_1, \ldots, \lambda_n$ not all 0, we have

(5.4) $E\left( \sum_{\alpha=1}^{n} \lambda_\alpha\, x(t_\alpha) \right)^2 = 0.$

Without loss of generality, suppose $\lambda_n \ne 0$. Then

(5.5) $E\left( x(t_n) - \sum_{\alpha=1}^{n-1} \lambda_\alpha'\, x(t_\alpha) \right)^2 = 0$

with $\lambda_\alpha' = -\lambda_\alpha/\lambda_n$. In other words, $x(t_n)$ is redundant; $x(t_n)$ should not be sampled, but should be calculated as a linear combination $\sum \lambda_\alpha'\, x(t_\alpha)$ of the other samples. In general, if

(5.6) $\operatorname{rank}\, [R(t_\alpha, t_\beta)] = r < n$

and if columns $\beta_1, \ldots, \beta_r$ of the matrix $[R(t_\alpha, t_\beta)]$ are independent, then the random variables $x(t_\beta)$ for $\beta = \beta_1, \ldots, \beta_r$ have a positive definite moment matrix. The variables $x(t_\beta)$ for $\beta \ne \beta_1, \ldots, \beta_r$ can be calculated as linear combinations of the $x(t_\beta)$ for $\beta = \beta_1, \ldots, \beta_r$.

6. Simulation of stationary Gaussian random processes. A stationary process is, of course, a particular type of nonstationary process. Therefore, we could apply the general method described in the last section to stationary processes. But a stationary process can be simulated in a more efficient manner, which does not become increasingly cumbersome as the number of samples to be computed increases.

In engineering and scientific problems we are usually given, not the autocorrelation $R(\tau)$, but the power spectral density $S(\omega)$, which is a given function of $\omega$ with the properties

(6.1) $S(\omega) \ge 0, \quad S(\omega) = S(-\omega), \quad S(\omega) \to 0 \ \text{as}\ \omega \to \infty.$

We are also given a positive time increment $\Delta t$, which may or may not be small. If $x(t)$ is the corresponding stationary Gaussian random process, we wish to compute indefinitely many samples of $x(t)$ for $t = 0, \Delta t, 2\Delta t, \ldots$

Assuming that S(w) may be represented with sufficient accuracy by a rational function, we now use (2.7) through (2.10). Let (2.9) have the form

(6.2) $\phi^{(n)}(t) + a_1 \phi^{(n-1)}(t) + \cdots + a_n \phi(t) = w(t), \quad -\infty < t < \infty,$

where $w(t)$ is white noise. In order to compute $x(t) = P(D)\phi(t)$, we shall require samples of the state-vector

(6.3) $z(t) = \begin{bmatrix} \phi(t) \\ \phi'(t) \\ \vdots \\ \phi^{(n-1)}(t) \end{bmatrix}, \quad t = 0, \Delta t, 2\Delta t, \ldots$

The vector z satisfies the stochastic differential equation

(6.4) $\dfrac{dz(t)}{dt} = Az(t) + f(t), \quad -\infty < t < \infty,$

where

(6.5) $A = \begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & & & & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_n & -a_{n-1} & -a_{n-2} & \cdots & -a_1 \end{bmatrix}, \qquad f(t) = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \\ w(t) \end{bmatrix}.$

First we shall compute $z(0)$. This vector is a Gaussian random vector with positive definite moment matrix

(6.6) $M = E(z(0)z^*(0)) = E(z(t)z^*(t))$, for any time $t$,

$\phantom{(6.6)\ M} = [E(z_i z_j)] = [E(\phi^{(i-1)}(t)\,\phi^{(j-1)}(t))], \quad i, j = 1, \ldots, n.$

The moment matrix $M$ can be computed without a knowledge of the eigenvalues of $A$. A proof of the following two formulas appears in [8, Theorem 2]. The matrix $M$ has components of the form

(6.7) $m_{ij} = \begin{cases} 0, & i + j \ \text{odd}, \\ (-1)^{(i-j)/2}\, m_{(i+j)/2 - 1}, & i + j \ \text{even}, \end{cases}$

where the numbers $m_0, m_1, \ldots, m_{n-1}$ can be computed by solving the $n$ linear equations

(6.8) $(-1)^k \sum_{k/2 \le q \le (n+k)/2} (-1)^q\, a_{n-2q+k}\, m_q = \begin{cases} 0, & k = 0, \ldots, n-2, \\ \tfrac12, & k = n-1, \end{cases}$

where we define $a_0 = 1$.

Example. For $n = 6$, (6.7) and (6.8) become

$[m_{ij}] = \begin{bmatrix} m_0 & 0 & -m_1 & 0 & m_2 & 0 \\ 0 & m_1 & 0 & -m_2 & 0 & m_3 \\ -m_1 & 0 & m_2 & 0 & -m_3 & 0 \\ 0 & -m_2 & 0 & m_3 & 0 & -m_4 \\ m_2 & 0 & -m_3 & 0 & m_4 & 0 \\ 0 & m_3 & 0 & -m_4 & 0 & m_5 \end{bmatrix},$

where $m_0, \ldots, m_5$ are found by solving

$\begin{bmatrix} a_6 & -a_4 & a_2 & -1 & 0 & 0 \\ 0 & a_5 & -a_3 & a_1 & 0 & 0 \\ 0 & -a_6 & a_4 & -a_2 & 1 & 0 \\ 0 & 0 & -a_5 & a_3 & -a_1 & 0 \\ 0 & 0 & a_6 & -a_4 & a_2 & -1 \\ 0 & 0 & 0 & a_5 & -a_3 & a_1 \end{bmatrix} \begin{bmatrix} m_0 \\ m_1 \\ m_2 \\ m_3 \\ m_4 \\ m_5 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \\ \tfrac12 \end{bmatrix}.$
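The system (6.8) is small and easily solved. As an independent numerical cross-check (not the paper's method), note that the stationary covariance of (6.4) also satisfies the Ljapunov equation $AM + MA^* + C = 0$, which a library solver handles directly; a sketch assuming scipy:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def stationary_moment_matrix(a):
    """Moment matrix M of (6.6) for Q with coefficients a = (a_1, ..., a_n).

    Solves A M + M A* = -C instead of the linear system (6.8).
    """
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                # companion matrix (6.5)
    A[-1, :] = -np.asarray(a, float)[::-1]    # last row: -a_n, ..., -a_1
    C = np.zeros((n, n))
    C[-1, -1] = 1.0                           # white-noise matrix (6.16)
    return solve_continuous_lyapunov(A, -C)

# For a = (2, 5), i.e. n = 2, this returns diag(1/20, 1/4), matching (7.8).
```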

When $M$ has been computed, it can be factored by the Crout factorization, $M = TT^*$. The required initial vector $z(0) = z^{(0)}$ can now be computed from the formula

(6.9) $z(0) = T w^{(0)},$

which is found by setting $i = 0$ in (4.8).

Suppose $z(t)$ has been computed for any $t \ge 0$. We shall now show how to compute the succeeding state-vector, $z(t + \Delta t)$. Let

(6.10) $X(t) = e^{At} = \exp At$

be the solution of the initial-value problem

(6.11) $\dfrac{dX(t)}{dt} = AX(t), \quad t > 0; \qquad X(0) = I.$

We shall need to know the value of the matrix $\exp At$ for the single value of time $t = \Delta t$. If the eigenvalues of $A$ are unknown, the constant matrix $\exp(A\,\Delta t)$ can be computed directly from the initial-value problem (6.11) by any standard technique such as the Runge-Kutta method.

Since z satisfies the inhomogeneous differential equation (6.4), we have

(6.12) $z(t + \Delta t) = e^{A \Delta t}\, z(t) + r,$

where $r = r(t)$ is the Gaussian random vector

(6.13) $r = \int_0^{\Delta t} e^{(\Delta t - s)A}\, f(t + s)\, ds.$

This vector is uncorrelated with the random vector

(6.14) $z(t) = \int_{-\infty}^{t} e^{A(t - \tau)}\, f(\tau)\, d\tau,$

since

(6.15) $E(f(t + s)\, f^*(\tau)) = C\,\delta(t + s - \tau) = 0 \quad \text{for } s > 0 \text{ and } \tau < t,$

where, by (6.5), C is the positive semidefinite matrix

(6.16) $C = \begin{bmatrix} 0 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix},$

i.e., all components of $C$ are zero except the last diagonal component $c_{nn} = 1$.

Since $r$ is uncorrelated with $z(t)$, a sample of $r$ may be computed independently of $z(t)$ in order to compute $z(t + \Delta t)$ from (6.12).

To compute a sample of r, we need to know the moment matrix

(6.17) $M_r = E(rr^*) = E \int_0^{\Delta t}\!\!\int_0^{\Delta t} e^{(\Delta t - s_1)A}\, f(t + s_1)\, f^*(t + s_2)\, e^{(\Delta t - s_2)A^*}\, ds_1\, ds_2 = \int_0^{\Delta t} e^{sA}\, C\, e^{sA^*}\, ds.$

Let the integrand in (6.17) be called J(s). Then

(6.18) $\dfrac{d}{ds} J(s) = AJ(s) + J(s)A^*.$

Integration of (6.18) for $0 < s < \Delta t$ gives

(6.19) $e^{A \Delta t}\, C\, [e^{A \Delta t}]^* - C = AM_r + M_r A^*.$

The eigenvalues of A are the zeros of the polynomial

(6.20) $Q(\zeta) = \zeta^n + a_1 \zeta^{n-1} + a_2 \zeta^{n-2} + \cdots + a_n.$


By the choice of $Q(\zeta)$ in the representation (2.7), the zeros of $Q(\zeta)$ lie in the half-plane $\operatorname{Re} \zeta < 0$. Therefore, the Ljapunov equation (6.19) has a unique solution $M_r$; for a proof of the existence and uniqueness of the solution $M_r$ see [5] or [8].
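Equation (6.19) is exactly the form accepted by standard Ljapunov solvers, so $M_r$ can also be obtained without writing out the component equations below; a sketch, again assuming scipy, with $A$ and $C$ as in (6.5) and (6.16):

```python
from scipy.linalg import expm, solve_continuous_lyapunov

def noise_moment_matrix(A, C, dt):
    """Solve (6.19) for Mr: A Mr + Mr A* = exp(A dt) C exp(A* dt) - C."""
    E = expm(A * dt)               # the matrix exp(A dt) of (6.10)-(6.11)
    return solve_continuous_lyapunov(A, E @ C @ E.T - C)
```

For the example of §7 this yields $M_r$ with entries near $.0003$, $.0040$, $.0811$, essentially reproducing (7.19).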

Note that the matrix $M_r$ depends on $\Delta t$ but not on $t$. Written out, (6.19) becomes $n^2$ linear inhomogeneous equations for the $n^2$ unknown components of $M_r$. If we define

(6.21) $\exp A \Delta t = [e_{ij}], \quad M_r = [\mu_{ij}], \quad i, j = 1, \ldots, n,$

(6.19) becomes

(6.22) $\sum_{k=1}^{n} (a_{ik}\,\mu_{kj} + a_{jk}\,\mu_{ik}) = \begin{cases} e_{in}\,e_{jn}, & \text{if } i < n \text{ or } j < n, \\ e_{nn}^2 - 1, & \text{if } i = n \text{ and } j = n, \end{cases}$

where here $a_{ik}$ denotes the $(i, k)$ component of the matrix $A$.

Since the moment matrix $M_r$ is symmetric, the number of equations can be reduced to $\frac12 n(n+1)$. Let the right-hand side of (6.22) be called $b_{ij}$. Since $\mu_{ij} = \mu_{ji}$ and $b_{ij} = b_{ji}$, we may rewrite (6.22) in the form

(6.23) $\sum_{k=1}^{n} (a_{ik}\,\mu_{kj} + a_{jk}\,\mu_{ik}) = b_{ij}, \quad i, j = 1, \ldots, n.$

This equation is unchanged if $i$ and $j$ are interchanged. Therefore, it is sufficient to use (6.23) for $i \ge j$. We then obtain $\frac12 n(n+1)$ equations in the same number of unknowns:

(6.24) $\sum_{k \le j} a_{ik}\,\mu_{jk} + \sum_{k > j} a_{ik}\,\mu_{kj} + \sum_{k \le i} a_{jk}\,\mu_{ik} + \sum_{k > i} a_{jk}\,\mu_{ki} = b_{ij}, \quad 1 \le j \le i \le n.$

Having computed the matrix $M_r$ from (6.24), we factor it by the Crout factorization $M_r = T_r T_r^*$. We are now ready to compute a sample for $r$. Let $t + \Delta t = \nu\,\Delta t$. Let $w^{(\nu)}$ be one of the vectors defined in (4.7). We then compute

(6.25) $r = T_r\, w^{(\nu)}, \qquad z(t + \Delta t) = e^{A \Delta t}\, z(t) + r.$

This process may be repeated indefinitely for $t = 0, \Delta t, 2\Delta t, \ldots$ We recall that the initial vector $z(0)$ was computed in (6.9).

In the representation (2.7) let the polynomial P have the form

(6.26) $P(\zeta) = b_0 \zeta^m + b_1 \zeta^{m-1} + \cdots + b_m, \quad b_0 \ne 0, \quad m < n.$

The required scalar function $x(t)$, with power spectral density $S(\omega)$, is computed as the linear combination

(6.27) $x(t) = b_0\, z_{m+1}(t) + b_1\, z_m(t) + \cdots + b_m\, z_1(t), \quad t = 0, \Delta t, 2\Delta t, \ldots,$

where $z_1(t), \ldots, z_n(t)$ are the components of the vector $z(t)$; (6.27) is equivalent to the relation $x(t) = P(D)\phi(t)$ applied to the definition (6.3) of $z(t)$.
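Gathering the pieces, the complete recursion of this section is sketched below; it reuses the stationary_moment_matrix and noise_moment_matrix sketches above, with numpy normals standing in for the paper's $w_n$:

```python
import numpy as np
from scipy.linalg import expm

def simulate_stationary(a, b, dt, steps, rng=None):
    """Samples x(0), x(dt), ... for S(w) = |P(iw)/Q(iw)|^2.

    a = (a_1, ..., a_n) are the coefficients of Q; b = (b_0, ..., b_m)
    those of P, with m < n.
    """
    rng = rng or np.random.default_rng()
    n, m = len(a), len(b) - 1
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.asarray(a, float)[::-1]       # companion matrix (6.5)
    C = np.zeros((n, n)); C[-1, -1] = 1.0        # (6.16)
    E = expm(A * dt)
    T = np.linalg.cholesky(stationary_moment_matrix(a))     # M = T T*
    Tr = np.linalg.cholesky(noise_moment_matrix(A, C, dt))  # Mr = Tr Tr*
    z = T @ rng.standard_normal(n)               # z(0), equation (6.9)
    xs = []
    for _ in range(steps):
        xs.append(np.dot(b, z[m::-1]))           # (6.27): b_0 z_{m+1} + ... + b_m z_1
        z = E @ z + Tr @ rng.standard_normal(n)  # (6.25)
    return np.array(xs)
```

Apart from round-off, each step is exact for any $\Delta t$; no truncation error is introduced, as emphasized at the end of §7.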

7. A numerical example. Let the spectral density

(7.1) $S(\omega) = \dfrac{9\omega^2 + 1}{(\omega^2 - 5)^2 + 4\omega^2}$


be prescribed. Let $x(t)$ be the corresponding Gaussian process. We wish to compute samples of $x(t)$ for $t = 0.0, 0.1, 0.2, 0.3, \ldots$

We know that for real $\omega$, $S(\omega)$ has a factorization

(7.2) $S(\omega) = \left| \dfrac{b_0 (i\omega) + b_1}{(i\omega)^2 + a_1 (i\omega) + a_2} \right|^2,$

where the $a$'s and $b$'s are real, and where the zeros of $\zeta^2 + a_1 \zeta + a_2$ lie in the half-plane $\operatorname{Re} \zeta < 0$. Factorization of the numerator and of the denominator in (7.1) yields

(7.3) $b_0 = 3, \quad b_1 = 1; \quad a_1 = 2, \quad a_2 = 5.$
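These coefficients can be checked numerically: $|P(i\omega)|^2 / |Q(i\omega)|^2$ must reproduce (7.1) for all real $\omega$. A quick sketch:

```python
import numpy as np

w = np.linspace(-10.0, 10.0, 2001)
S = (9 * w**2 + 1) / ((w**2 - 5)**2 + 4 * w**2)   # the prescribed density (7.1)
G = (3j * w + 1) / ((1j * w)**2 + 2j * w + 5)     # G = P(iw)/Q(iw) with (7.3)
assert np.allclose(np.abs(G)**2, S)               # |G|^2 = S, cf. (2.5)
```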

We are thus led to the differential equation

(7.4) $\ddot\phi + 2\dot\phi + 5\phi = w(t).$

We require samples of the vector

(7.5) $z = \begin{bmatrix} \phi \\ \dot\phi \end{bmatrix}$

for $t = k(0.1)$. We shall then compute the samples $x(t)$ as linear combinations:

(7.6) $x(t) = 3\dot\phi(t) + \phi(t), \quad t = k(0.1).$

First let us compute the moment matrix

(7.7) M = E[z(0)z*(0)].

According to §6 we have

$M = \begin{bmatrix} m_0 & 0 \\ 0 & m_1 \end{bmatrix},$

where

$\begin{bmatrix} a_2 & -1 \\ 0 & a_1 \end{bmatrix} \begin{bmatrix} m_0 \\ m_1 \end{bmatrix} = \begin{bmatrix} 0 \\ \tfrac12 \end{bmatrix}.$

Therefore, since $a_1 = 2$ and $a_2 = 5$,

(7.8) $M = \begin{bmatrix} \tfrac{1}{20} & 0 \\ 0 & \tfrac14 \end{bmatrix}.$

The Crout factorization of M is M = TT*, where

(7.9) $T = \begin{bmatrix} \tfrac{1}{2\sqrt5} & 0 \\ 0 & \tfrac12 \end{bmatrix}.$

Then the initial vector z(0) is computed as

(7.10) $z(0) = \begin{bmatrix} \tfrac{1}{2\sqrt5}\, w_1 \\[2pt] \tfrac12\, w_2 \end{bmatrix},$


where $w_1, w_2, \ldots$ simulate independent samples from the Gaussian distribution with mean 0 and variance 1, as discussed in §4.

We now require the basic solution $\exp(At)$ for $t = \Delta t = 0.1$. We have

(7.11) $e^{At} = \begin{bmatrix} \phi_1(t) & \phi_2(t) \\ \dot\phi_1(t) & \dot\phi_2(t) \end{bmatrix} = X(t),$

where

$\ddot\phi_j + 2\dot\phi_j + 5\phi_j = 0, \quad j = 1, 2,$

$\phi_1(0) = 1, \quad \dot\phi_1(0) = 0; \qquad \phi_2(0) = 0, \quad \dot\phi_2(0) = 1.$

In fact, these equations are equivalent to the initial-value problem

(7.12) $\dot X = AX = \begin{bmatrix} 0 & 1 \\ -5 & -2 \end{bmatrix} X, \qquad X(0) = I.$

The solution of (7.12) is

(7.13) $e^{At} = e^{-t} \begin{bmatrix} \cos 2t + \tfrac12 \sin 2t & \tfrac12 \sin 2t \\ -\tfrac52 \sin 2t & \cos 2t - \tfrac12 \sin 2t \end{bmatrix}.$

For $t = 0.1$ computation gives

(7.14) $e^{A \Delta t} = \begin{bmatrix} 0.976683 & 0.089882 \\ -0.449409 & 0.796919 \end{bmatrix} = [e_{ij}].$

Then

(7.15) $[e_{i2}\, e_{j2}] - C = [e_{i2}\, e_{j2}] - \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} .00808 & .07163 \\ .07163 & -.3649 \end{bmatrix}.$

Let

(7.16) $M_r = \begin{bmatrix} a & b \\ b & c \end{bmatrix}$

be the moment matrix defined by (6.17). Then (6.19) becomes

(7.17) $\begin{bmatrix} 0 & 1 \\ -5 & -2 \end{bmatrix} \begin{bmatrix} a & b \\ b & c \end{bmatrix} + \begin{bmatrix} a & b \\ b & c \end{bmatrix} \begin{bmatrix} 0 & -5 \\ 1 & -2 \end{bmatrix} = \begin{bmatrix} .00808 & .07163 \\ .07163 & -.3649 \end{bmatrix}.$

This system yields three equations in three unknowns:

(7.18) $\begin{aligned} 2b &= .00808, \\ -5a - 2b + c &= .0716, \\ -10b - 4c &= -.3649. \end{aligned}$

The solution of these equations is

(7.19) $a = .0003, \quad b = .0041, \quad c = .0811.$


We then find the Crout factorization

(7.20) $M_r = \begin{bmatrix} a & b \\ b & c \end{bmatrix} = T_r T_r^* = \begin{bmatrix} t_{11} & 0 \\ t_{21} & t_{22} \end{bmatrix} \begin{bmatrix} t_{11} & t_{21} \\ 0 & t_{22} \end{bmatrix},$

where

(7.21) $t_{11} = .017, \quad t_{21} = .24, \quad t_{22} = .16.$

(Of course, automatic digital computation would conserve more accuracy.) Formula (6.25) now yields

(7.22) $z(t_{k+1}) = e^{A \Delta t}\, z(t_k) + T_r\, w^{(k+1)}, \quad k = 0, 1, \ldots,$

or, by (7.14) and (7.21),

(7.23) $z(t_{k+1}) = \begin{bmatrix} .98 & .09 \\ -.45 & .80 \end{bmatrix} z(t_k) + \begin{bmatrix} .017 & 0 \\ .24 & .16 \end{bmatrix} \begin{bmatrix} w_{2k+3} \\ w_{2k+4} \end{bmatrix},$

where $z(0) = z(t_0)$ is given by (7.10). It should be emphasized that, apart from round-off error, (7.23) is exact; there is no truncation error. Finally, we have the required samples

(7.24) $x(t_k) = z_1(t_k) + 3 z_2(t_k), \quad k = 0, 1, \ldots$
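The whole example can be reproduced with a library Ljapunov solver and matrix exponential; a sketch (with numpy normals in place of the tabulated $w_k$):

```python
import numpy as np
from scipy.linalg import expm, solve_continuous_lyapunov

A = np.array([[0.0, 1.0], [-5.0, -2.0]])    # companion matrix of (7.12)
C = np.array([[0.0, 0.0], [0.0, 1.0]])
dt = 0.1

E = expm(A * dt)                                      # (7.14)
M = solve_continuous_lyapunov(A, -C)                  # (7.8): diag(1/20, 1/4)
Mr = solve_continuous_lyapunov(A, E @ C @ E.T - C)    # (7.17)-(7.19)
T, Tr = np.linalg.cholesky(M), np.linalg.cholesky(Mr)

rng = np.random.default_rng()
z = T @ rng.standard_normal(2)                        # z(0), equation (7.10)
xs = []
for _ in range(1000):
    xs.append(z[0] + 3.0 * z[1])                      # x(t_k), equation (7.24)
    z = E @ z + Tr @ rng.standard_normal(2)           # the recursion (7.23)
```

The sample autocorrelation of the resulting xs can then be compared against the inverse Fourier transform of (7.1).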

Except for the generation of the basic, independent, artificial Gaussian numbers $w_k$, the whole process is so simple that it could be carried out on a desk calculator. The temptation would be to do so, where one would obtain the numbers $w_k$ from any available table of random numbers. This course would be risky because the validity of the calculations depends upon the assumption that the $w_k$ form a white sequence. If a table of $w_k$ is used, the table should be constructed according to the principles discussed in [6].

REFERENCES

[1] J. L. DOOB, Stochastic Processes, John Wiley, New York, 1953.
[2] F. B. HILDEBRAND, Introduction to Numerical Analysis, McGraw-Hill, New York, 1956.
[3] W. B. DAVENPORT, JR., AND W. L. ROOT, An Introduction to the Theory of Random Signals and Noise, McGraw-Hill, New York, 1958.
[4] R. B. BLACKMAN AND J. W. TUKEY, The Measurement of Power Spectra from the Point of View of Communications Engineering, Dover Publications, New York, 1959.
[5] F. R. GANTMACHER, The Theory of Matrices, vols. 1 and 2, transl. K. A. Hirsch, Chelsea, New York, 1959.
[6] J. N. FRANKLIN, Deterministic simulation of random processes, Math. Comp., 17 (1963), pp. 28-59.
[7] H. WEYL, Über die Gleichverteilung von Zahlen modulo Eins, Math. Ann., 77 (1916), pp. 313-352.
[8] J. N. FRANKLIN, The covariance matrix of a continuous autoregressive vector time-series, Ann. Math. Statist., 34 (1963), pp. 1259-1264.
[9] G. E. P. BOX AND MERVIN E. MULLER, A note on the generation of random normal deviates, Ibid., 29 (1958), pp. 610-611.
[10] T. E. HULL AND A. R. DOBELL, Random number generators, this Journal, 4 (1962), pp. 230-254.
