SIAM J. MATH. ANAL. Vol. 18, No. 4, July 1987

(C) 1987 Society for Industrial and Applied Mathematics

ON DETERMINING THE PREDICTOR OF NONFULL-RANK MULTIVARIATE STATIONARY RANDOM PROCESSES*

A. G. MIAMEE†

Abstract. Algorithms for determining the generating function and the predictor for some nonfull-rank multivariate stationary stochastic processes are obtained. In fact, it is shown that the well-known algorithms given by Wiener and Masani [Acta Math., 99 (1958), pp. 93-137] for the full-rank case are valid in certain nonfull-rank cases in exactly the same form.

Key words. nonfull-rank multivariate stationary processes, generating function, best linear predictor

AMS(MOS) subject classification. Primary 60G10

1. Introduction. One of the important problems in the prediction theory of multivariate stationary stochastic processes is to obtain some algorithm for determining the best linear predictor in terms of the past observations. Wiener and Masani [9], [10] solved this problem for the full-rank case, when the spectral density f of the process is bounded above and away from zero, in the sense that there exist positive numbers c and d such that

(1.1)    cI <= f(θ) <= dI.

Masani [2] improved their work substantially, showing that the same algorithm is valid if in lieu of (1.1) one assumes that

(1.2)    (i) f ∈ L_∞;    (ii) f^{-1} ∈ L_1.

Several other authors proved the validity of the same algorithm in more general settings; cf. for example Salehi [8], Pourahmadi [6]. However, all these results are under the severe restriction of full rank, and there has been no extension of Wiener and Masani's algorithm beyond the full-rank case.

The purpose of this note is to show that the algorithm remains valid in exactly the same manner for the nonfull-rank processes which satisfy the following conditions:

(1.3)    (i) the range of f(θ) is constant a.e. with respect to Lebesgue measure,
         (ii) f ∈ L_∞,
         (iii) f^# ∈ L_1,

where A^# stands for the generalized inverse (to be defined later) of the matrix A. In the full-rank case these conditions clearly reduce to the conditions (1.2), and hence our result generalizes Masani's algorithm in [2].

Masani's assumption and approach rest on a characterization [2, Thm. 2.8] of full-rank minimal multivariate stationary stochastic processes. Our motivation and assumptions are based on a characterization of J_0-regularity (which we shall call "purely minimal") due to Makagon and Weron [1]. We will employ Wiener and Masani's algorithm to find the predictor of an associated full-rank process (to be clarified later), which is produced using the technique of Miamee and Salehi [5], and using this we will obtain our algorithm for the nonfull-rank process.

* Received by the editors March 18, 1985; accepted for publication (in revised form) May 20, 1986. This research was partially supported by Air Force Office of Scientific Research grant F49620-82-C-0009.

† Isfahan University of Technology, Isfahan, Iran. Present address: Department of Mathematics, Hampton University, Hampton, Virginia 23668.


In §2 we set down the necessary preliminaries. Section 3 is devoted to establishing our algorithm for determining the generating function, and in §4 we will show the validity of Wiener and Masani's algorithm for the best linear predictor.

2. Preliminaries. In this section we set down notation and preliminaries. Most of these are standard and can be found in [4], [9] and [10]. Let H be a complex Hilbert space and q a positive integer. H^q denotes the Cartesian product of q copies of H, endowed with a Gramian structure as follows: for any two vectors X = (x_1, ..., x_q)^T and Y = (y_1, ..., y_q)^T in H^q (the superscript T stands for the matrix transpose), their Gramian matrix (X, Y) is defined by

(X, Y) = [(x_i, y_j)]_{i,j=1}^q.

It is easy to verify that the Gramian has the following properties:

(X, X) >= 0,
(X, X) = 0 if and only if X = 0,
(Σ_{i=1}^n A_i X_i, Σ_{j=1}^m B_j Y_j) = Σ_{i=1}^n Σ_{j=1}^m A_i (X_i, Y_j) B_j*,

where X_1, ..., X_n, Y_1, ..., Y_m are in H^q, A_i, B_j are constant q x q matrices, and A >= 0 means A is a nonnegative definite matrix. We say that X is orthogonal to Y if (X, Y) = 0. It is well known that H^q is a Hilbert space with the inner product

((X, Y)) = trace (X, Y).

A subset M of H^q is called a subspace if it is closed and AX + BY ∈ M whenever X and Y are in M and A and B are q x q constant matrices. It is easy to see that M is a subspace if and only if M = M_1^q for some subspace M_1 of H. For any X in H^q, (X|M) denotes the projection of X onto M; it is the vector whose kth coordinate is (x_k|M_1), the usual projection of x_k onto the subspace M_1.

A bisequence X_n, n ∈ Z, in H^q is called a q-variate stationary stochastic process if the Gramian (X_m, X_n) depends only on m − n.

It is well known that every q-variate stationary stochastic process X_n has a nonnegative matrix-valued measure F on [0, 2π], called its spectral measure, such that

(X_m, X_n) = (1/2π) ∫_0^{2π} e^{-i(m−n)θ} dF(θ).

f stands for the Radon-Nikodym derivative of the absolutely continuous (a.c.) part of F with respect to the normalized Lebesgue measure dθ, and it is called the spectral density of the process.

To every stationary stochastic process X_n, n ∈ Z, the following subspaces are attached:

M(+∞) = sp(X_n, −∞ < n < ∞), i.e., the subspace of H^q generated by all X_n, n ∈ Z,
M(n) = sp(X_k, −∞ < k <= n),
M(−∞) = ⋂_n M(n),
M′(n) = sp(X_k, k ≠ n).

A q-variate stationary stochastic process is called
(a) nondeterministic if M(+∞) ≠ M(n) for some, and hence all, n in Z;
(b) purely nondeterministic if M(−∞) = 0;
(c) minimal if M′(n) ≠ M(+∞) for some, and hence all, n ∈ Z;
(d) purely minimal if ⋂_n M′(n) = 0.


If X_n is nondeterministic then X_n ∉ M(n − 1) for all n, and hence it has a nonzero one-sided innovation process

g_n = X_n − (X_n | M(n − 1)).

If X_n is minimal then X_n ∉ M′(n) for all n, and hence it has a nonzero two-sided innovation process

ġ_n = X_n − (X_n | M′(n)).

The corresponding one-sided and two-sided prediction error matrices are defined by

G = (g_0, g_0) and Ġ = (ġ_0, ġ_0),

respectively. X̂_ν = (X_ν | M(0)) is called the best linear predictor of lag ν. Clearly X_n is nondeterministic if and only if G ≠ 0 and minimal if and only if Ġ ≠ 0. A nondeterministic (purely nondeterministic) process X_n is said to be nondeterministic (purely nondeterministic) of full rank if G is invertible. The process is called full-rank minimal if it is minimal and its two-sided prediction error matrix Ġ is invertible.

It is useful to note that we have the following inclusions between these variousclasses of processes.

PROPOSITION. The following inclusions are valid:
(a) purely nondeterministic processes ⊆ nondeterministic processes;
(b) purely minimal processes ⊆ minimal processes ⊆ nondeterministic processes;
(c) purely minimal processes ⊆ purely nondeterministic processes.

Proof. (a) This is clear and well known.

(b) If a process is purely minimal, then ⋂_n M′(n) = 0, which clearly implies that M′(n) ≠ M(+∞) for some n, and hence the process must be minimal. This proves the first inclusion in (b). Now if a process is minimal then M′(n) ≠ M(+∞) for some n. Since M(n − 1) ⊆ M′(n), we have M(n − 1) ≠ M(+∞); that is, the process must be nondeterministic.

(c) Since M(n − 1) ⊆ M′(n) we have ⋂_n M(n) ⊆ ⋂_n M′(n), which shows that pure minimality is stronger than pure nondeterminism.

Remarks. (a) Considering the well-known characterizations of minimal and of purely nondeterministic univariate processes, one can see that neither of these two classes is a subset of the other.

(b) All the inclusions stated in the above proposition are proper. For the first inclusion in (b) this is shown via the next example; for the rest it is either well known or can easily be verified.

Example. Let Y_n be an orthonormal sequence, let Z be any vector orthogonal to all the Y_n's, and let X_n = Y_n + Z. Then

(i) X_n is stationary, because (X_m, X_n) = δ_{m−n,0} + (Z, Z) depends only on m − n.

(ii) X_n is minimal. Otherwise X_0 would belong to sp{X_k : k ≠ 0} = M′(0), and hence there would exist a sequence p_n of finite sums of the form

p_n = Σ_{k≠0} a_k^n X_k = Σ_{k≠0} a_k^n (Y_k + Z)

such that X_0 = lim p_n. Now since

p_n = Σ_{k≠0} a_k^n Y_k + (Σ_{k≠0} a_k^n) Z = Q_n + r_n Z and Q_n ⊥ r_n Z,

the convergence of p_n implies the convergence of both sequences Q_n and r_n to some Q and r, respectively. Taking limits on both sides of the last equation, we get

X_0 = Q + rZ,


which gives

Y_0 − Q = (r − 1)Z, Y_0 − Q ⊥ (r − 1)Z.

Hence Y_0 − Q = 0, i.e., Y_0 = Q. But since each Q_n is in sp{Y_k : k ≠ 0}, we get Y_0 ∈ sp{Y_k : k ≠ 0}, which is impossible, because Y_n is an orthonormal sequence.

(iii) X_n is not purely minimal. In fact, its spectral distribution F(θ) = θ + δ_0, where δ_0 is the unit mass concentrated at 0, is not a.c. A purely time domain proof of this fact may also be given. Q.E.D.
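The stationarity computation in part (i) of the example above can be checked numerically. Below is a minimal sketch (with numpy; the finite index range and the choice of Z as a scaled extra basis vector are illustrative assumptions, not from the paper): realize the Y_n as standard basis vectors of a finite-dimensional real Hilbert space, pick Z orthogonal to all of them, and verify that the Gramian of X_n = Y_n + Z depends only on m − n.

```python
import numpy as np

# Realize the example in the finite-dimensional real Hilbert space R^(N+1):
# Y_0, ..., Y_{N-1} are the first N standard basis vectors (orthonormal),
# and Z = c * e_N is orthogonal to every Y_n.
N, c = 6, 0.5
Y = [np.eye(N + 1)[n] for n in range(N)]
Z = c * np.eye(N + 1)[N]
X = [y + Z for y in Y]

# The Gramian (X_m, X_n) = delta_{m,n} + (Z, Z); here (Z, Z) = c^2.
gram = np.array([[xm @ xn for xn in X] for xm in X])
expected = np.eye(N) + c**2   # identity plus the constant (Z, Z)
assert np.allclose(gram, expected)
```

Since the Gramian is a function of the Kronecker delta plus a constant, it indeed depends only on m − n, as claimed in (i).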

It is known that

M(n) = Σ_{k=0}^∞ sp(g_{n−k}) + M(−∞).

Consider G as a linear operator from C^q to C^q, C being the complex plane. Let J be the matrix of the projection of C^q onto the range of G, and put H = (√G + (I − J))^{-1}. The normalized one-sided innovations are defined by h_n = H g_n. One can show that [4]

X_n = Σ_{k=0}^∞ A_k √G h_{n−k} + (X_n | M(−∞)).

Although the A_k's in this decomposition are not unique, the coefficients A_k √G are in fact unique, and this enables us to associate the following function to our process:

Φ(e^{iθ}) = Σ_{k=0}^∞ A_k √G e^{ikθ};

this is called the generating function of the process.

We shall be concerned with the class L_p (1 <= p <= ∞) of all q x q matrix-valued functions g on [0, 2π] whose entries are in the usual Lebesgue space L_p. L_2^+ will denote the subspace of L_2 consisting of those matrix-valued functions whose nth Fourier coefficient vanishes for n < 0, i.e.,

(1/2π) ∫_0^{2π} e^{-inθ} g(θ) dθ = 0 for all n < 0.

For any q x q matrix A its generalized inverse A^# is defined to be

A^# = P_{N(A)^⊥} A^{-1} P_{R(A)},

where N(A) denotes the null space of A, A^{-1} denotes the (many-valued) inverse of A, and R(A) stands for the range of A. One can see that A^# has the following properties [7]:

A A^# A = A,    A^# A A^# = A^#,
(A^# A)* = A^# A,    (A A^#)* = A A^#,
N(A)^⊥ = R(A^#),    R(A)^⊥ = N(A^#).

For ease of reference we state the following two theorems, which are due to Masani [2, Thm. 2.8] and to Makagon and Weron [1], respectively.

THEOREM 1. Let X_n, n ∈ Z, be a q-variate stationary stochastic process with spectral distribution F. X_n is full-rank minimal if and only if f = F′ is invertible a.e. and f^{-1} ∈ L_1.

THEOREM 2. Let X_n, n ∈ Z, be a q-variate stationary stochastic process with spectral measure F. The process X_n is purely minimal if and only if
(i) F is a.c. with respect to Lebesgue measure, with spectral density f,
(ii) R(f(θ)) is constant a.e. with respect to Lebesgue measure,
(iii) f^# ∈ L_1.


3. Determination of the generating function. In this section we give an algorithm for determining the generating function of a (not necessarily full-rank) stationary stochastic process. The result of this section extends Masani's algorithm developed in [2] to the nonfull-rank case. Our technique is essentially that used by Miamee and Salehi in [5], where the following formula for the two-sided prediction error matrix Ġ of a purely minimal process was obtained:

Ġ = [ (1/2π) ∫_0^{2π} f^#(θ) dθ ]^#.

We will continue this work under the assumption that our process is purely minimal, or equivalently, assuming that conditions (i), (ii) and (iii) of Theorem 2 hold. Let h_1, h_2, ..., h_p, h_{p+1}, ..., h_q be an orthonormal basis for the q-dimensional complex Euclidean space C^q such that

R = R(f(θ)) = sp(h_i, 1 <= i <= p) a.e. (dθ)

and

N = R^⊥ = N(f(θ)) = sp(h_i, p + 1 <= i <= q).

Let e_1, e_2, ..., e_q be the standard basis of C^q. Define the unitary operator U on C^q by U h_i = e_i, 1 <= i <= q. Letting R_1 = sp(e_i, 1 <= i <= p), we have R_1^⊥ = sp(e_i, p + 1 <= i <= q). Clearly U maps R onto R_1 and R^⊥ onto R_1^⊥, and U* maps R_1 onto R and R_1^⊥ onto R^⊥. As usual we will identify any linear operator on C^q with its matrix with respect to the standard basis of C^q. By our choice of U we have

(3.1)    U f(θ) U* = [ g(θ) 0 ; 0 0 ],

where g(θ) is a p x p nonnegative matrix-valued function whose rank is a.e. equal to p.

Let

Y_n = U X_n, n ∈ Z,

be a new stationary stochastic process; then we have

(3.2)    (Y_m, Y_n) = U (X_m, X_n) U* = (1/2π) ∫_0^{2π} e^{-i(m−n)θ} U f(θ) U* dθ = (1/2π) ∫_0^{2π} e^{-i(m−n)θ} [ g(θ) 0 ; 0 0 ] dθ.

This shows that, for p + 1 <= k <= q, the kth component y_n^k of Y_n is zero for all n ∈ Z. The p-variate stationary stochastic process Z_n = (y_n^1, ..., y_n^p)^T has spectral density g. Since U takes R onto R_1 and R^⊥ onto R_1^⊥, one can see that

(3.3)    (U f U*)^# = U f^# U*.

Now since X_n is assumed to be purely minimal, Theorem 2 implies that f^#(θ) is integrable. Thus (3.3) implies that g^{-1} is integrable, and hence by Theorem 1, Z_n is full-rank minimal.
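Relation (3.3) rests only on U being unitary, so it can be spot-checked numerically at a single frequency. A minimal sketch (the particular value of f(θ) and the choice of U are illustrative assumptions, not from the paper), using the Moore-Penrose pseudoinverse in the role of the generalized inverse:

```python
import numpy as np

# A rank-2 nonnegative definite 3x3 matrix playing the role of f(theta).
B = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [1.0, 0.0]])
f_val = B @ B.T               # rank 2, nonnegative definite

# Any unitary (here: real orthogonal) U, e.g. from a QR factorization.
U, _ = np.linalg.qr(np.random.default_rng(0).normal(size=(3, 3)))

lhs = np.linalg.pinv(U @ f_val @ U.T)    # (U f U*)^#
rhs = U @ np.linalg.pinv(f_val) @ U.T    # U f^# U*
assert np.allclose(lhs, rhs)
```

The identity fails for a general invertible similarity, which is why the construction insists on U unitary: only then does conjugation by U preserve ranges, null spaces, and orthogonality.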


We are going to utilize Masani's algorithm to obtain the generating function Ψ and predictor Ẑ_ν of this full-rank minimal process Z_n, and then use these to get the generating function Φ and predictor X̂_ν of our process X_n. The following lemma, which reveals the close tie between Ψ and Φ, is crucial in the development of our algorithm.

LEMMA. Let X_n, n ∈ Z, be a purely minimal stationary stochastic process with spectral density f. Let g be the spectral density of the corresponding full-rank minimal process Z_n discussed above. If Φ and Ψ are the generating functions of X_n and Z_n, respectively, then

Φ = U* [ Ψ 0 ; 0 0 ] U,

where U is the unitary matrix obtained above.

Proof. We first note that Φ and Ψ, as generating functions, are optimal factors of f and g, respectively (cf. Lemma 3.7 and Definition 4.1 in [3]); in particular

(3.4)    f = ΦΦ* and g = ΨΨ*.

Now from (3.1) we get

f = U* [ g 0 ; 0 0 ] U = (U* [ Ψ 0 ; 0 0 ] U)(U* [ Ψ 0 ; 0 0 ] U)*,

so f has two factorizations, namely the one in (3.4) and

f = Φ̃Φ̃*,

where both Φ and Φ̃ = U* [ Ψ 0 ; 0 0 ] U belong to L_2^+; to complete the proof it suffices to show that the factor Φ̃ is also optimal (cf. the uniqueness Theorem 4.4 of [3]). To prove this, we first note that since the zeroth Fourier coefficient Ψ_+(0) of Ψ is nonnegative definite and

Φ̃_+(0) = U* [ Ψ_+(0) 0 ; 0 0 ] U,

we have

(3.5)    Φ̃_+(0) >= 0.

On the other hand, if

(3.6)    f = γγ*

is another factorization of f, then

(3.7)    U f U* = (U γ U*)(U γ U*)*,

so that U γ U* is a factor of U f U* = [ g 0 ; 0 0 ]. Since Ψ is the generating function of Z_n, one can prove that the function [ Ψ 0 ; 0 0 ] is the generating function of Y_n. In fact, we know that the generating function Φ of a q-variate stationary stochastic process X_n is given by

Φ = Σ_{n=0}^∞ A_n √G e^{inθ},


where the A_n's are the coefficients in the representation

X_0 = Σ_{n=0}^∞ A_n g_{−n} + (X_0 | M(−∞))

of X_0 in terms of its innovation process

g_n = X_n − (X_n | M(n − 1)),

and G = (g_0, g_0) is the prediction error matrix. Comparing Z_n with Y_n = (Z_n^T, 0)^T, we note that the innovations of Y_n are those of Z_n augmented by zeros, so that

Y_0 = Σ_{n=0}^∞ [ A_n 0 ; 0 0 ] g^Y_{−n} + (Y_0 | M^Y(−∞)),

where now the A_n's are the coefficients and g^Y_n the innovations corresponding to Z_n. Although the coefficients arising in this sum are not unique, they determine the generating function uniquely, and we have

(3.8)    Σ_{n=0}^∞ [ A_n √G 0 ; 0 0 ] e^{inθ} = [ Ψ 0 ; 0 0 ].

Thus [ Ψ 0 ; 0 0 ] is the optimal factor of [ g 0 ; 0 0 ]. Now (3.7) and (3.8), together with the optimality of [ Ψ 0 ; 0 0 ], imply that

[ Ψ_+(0) 0 ; 0 0 ] [ Ψ_+(0) 0 ; 0 0 ]* >= (U γ_+(0) U*)(U γ_+(0) U*)*.

This in turn implies that

Φ̃_+(0) Φ̃_+(0)* >= γ_+(0) γ_+(0)*.

This together with (3.5) shows that Φ̃ is the optimal factor of f. Thus, by the uniqueness theorem mentioned above,

(3.9)    Φ = Φ̃ = U* [ Ψ 0 ; 0 0 ] U.    Q.E.D.


Now we are ready to give the algorithm determining the generating function Φ of our purely minimal q-variate stationary stochastic process X_n.

Step 1. Since f satisfies condition (i) in (1.3) and its range is constant with dimension p, one can easily see that the rank of the covariance matrix Γ_0 = (1/2π) ∫_0^{2π} f(θ) dθ is also p. In fact, it can be shown that a unitary matrix U has the property (3.1) if and only if it reduces Γ_0 to the form

(3.10)    U Γ_0 U* = [ Γ̃_0 0 ; 0 0 ],

with Γ̃_0 a p x p nonsingular matrix. Using this fact, we can find the unitary matrix U of this section by finding a U as in (3.10).
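Numerically, Step 1 amounts to an eigendecomposition of the covariance matrix: if Γ_0 = VΛV* with the eigenvalues in decreasing order, then U = V* reduces Γ_0 to the block form (3.10). A minimal sketch (the helper name, the sample Γ_0, and the rank tolerance tol are illustrative assumptions, not from the paper):

```python
import numpy as np

def reducing_unitary(gamma0, tol=1e-10):
    """Return (U, p): a unitary U with U gamma0 U* = diag(block, 0),
    where the leading p x p block is nonsingular (p = rank of gamma0)."""
    # Hermitian eigendecomposition; eigh returns eigenvalues in ascending order.
    w, V = np.linalg.eigh(gamma0)
    order = np.argsort(w)[::-1]          # reorder eigenvalues descending
    w, V = w[order], V[:, order]
    p = int(np.sum(w > tol))             # numerical rank
    return V.conj().T, p                 # U = V*

# Example: a rank-1 covariance Gamma_0 = v v* for a 2-variate process.
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
gamma0 = np.outer(v, v)
U, p = reducing_unitary(gamma0)
D = U @ gamma0 @ U.conj().T
assert p == 1
assert np.allclose(D, np.diag([1.0, 0.0]))   # (3.10): nonsingular block, then zeros
```

Because Γ_0 is nonnegative definite, sorting the eigenvalues in decreasing order automatically places the p nonzero ones in the leading block, and the rows of U spanning the trailing block annihilate the range of Γ_0.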

Step 2. Taking the unitary matrix U obtained in Step 1, we can form the full-rank density g as in (3.1). Because f satisfies conditions (ii) and (iii) of (1.3), as we saw before, g has the properties (i) and (ii) of (1.2). Thus we can use Masani's algorithm developed in §4 of [2] to find the generating function Ψ of the process Z_n with density g.

Step 3. We can then find the generating function Φ of our original process with density f via the formula (3.9) above.

Remark. One can similarly extend the other available extensions of Masani's algorithm for the full-rank case (such as that in [8]) to obtain corresponding algorithms for the nonfull-rank case.

4. Determination of the predictor. In this section we show that the unique autoregressive series of [2], giving the linear predictor in the full-rank case, can be used to obtain the predictor in our nonfull-rank case. In fact, as we will see, exactly the same formula works in this case as well. We continue to assume that F is a.c. and that the density f of our stationary stochastic process X_n satisfies conditions (1.3). Using the notation and results of §3, we know that

f = U* [ g 0 ; 0 0 ] U,

and the density g satisfies conditions (i) and (ii) of (1.2). Thus, using the technique developed in [2], one can show that

Ẑ_ν = Σ_{k=0}^∞ E_k Z_{−k} in H^p,

where

E_k = Σ_{n=0}^k C_{ν+n} D_{k−n},

with C_k and D_k being the kth Fourier coefficients of Ψ and Ψ^{-1}, respectively. Now one can easily verify that

Ŷ_ν = (Ẑ_ν^T, 0)^T in H^q

and

(4.1)    [ E_k 0 ; 0 0 ] = Σ_{n=0}^k [ C_{ν+n} 0 ; 0 0 ] [ D_{k−n} 0 ; 0 0 ].


Since Y_n = U X_n, one can also verify that

X̂_ν = U* Ŷ_ν.

Hence we have

(4.2)    X̂_ν = U* Σ_{k=0}^∞ [ E_k 0 ; 0 0 ] Y_{−k} = Σ_{k=0}^∞ U* [ E_k 0 ; 0 0 ] U X_{−k} in H^q.

Letting

(4.3)    F_k = U* [ E_k 0 ; 0 0 ] U,

we get the following autoregressive series representation for the best linear predictor X̂_ν:

X̂_ν = Σ_{k=0}^∞ F_k X_{−k}.

Now let us examine the coefficients F_k in (4.3) more carefully. Doing this, we will be able to write F_k in terms of the Fourier coefficients of the generating function Φ of our original process X_n, rather than those of the auxiliary process Z_n. Using (4.1) in (4.3), we have

F_k = Σ_{n=0}^k U* [ C_{ν+n} 0 ; 0 0 ] U U* [ D_{k−n} 0 ; 0 0 ] U.

Thus

F_k = Σ_{n=0}^k M_{ν+n} N_{k−n},

with

(4.4)    M_n = U* [ C_n 0 ; 0 0 ] U,    N_n = U* [ D_n 0 ; 0 0 ] U.

But by the lemma we have

Φ = U* [ Ψ 0 ; 0 0 ] U and Φ^# = U* [ Ψ^{-1} 0 ; 0 0 ] U.

Thus we observe that M_n and N_n are exactly the nth Fourier coefficients of Φ and Φ^#, respectively.

Summarizing, we have shown that the best linear predictor X̂_ν can be written in exactly the same form obtained in [2] for the full-rank processes, i.e., we have

X̂_ν = Σ_{k=0}^∞ ( Σ_{n=0}^k M_{ν+n} N_{k−n} ) X_{−k} in H^q,

where M_n and N_n are the nth Fourier coefficients of Φ and its generalized inverse Φ^# (instead of Φ and its inverse Φ^{-1} in the full-rank case).
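The final formula can be sanity-checked on a simple nonfull-rank example. Take X_n = v ξ_n with v a unit vector in C^2 and ξ_n a scalar AR(1) process ξ_n = a ξ_{n−1} + g_n, |a| < 1. Under this illustrative setup (an assumption for the sketch, not an example from the paper), the generating function is Φ = ψ vv* with ψ(e^{iθ}) = Σ_n a^n e^{inθ}, its pointwise generalized inverse is Φ^# = ψ^{-1} vv* with ψ^{-1}(e^{iθ}) = 1 − a e^{iθ}, and the lag-1 predictor should reduce to X̂_1 = a X_0:

```python
import numpy as np

a = 0.7                                   # AR(1) parameter, |a| < 1
v = np.array([1.0, 1.0]) / np.sqrt(2.0)   # unit vector; rank-one spectral density
P = np.outer(v, v)                        # projection v v*

K = 40  # truncation order for the series
# M_n: Fourier coefficients of Phi = psi * v v*, with psi_n = a^n.
M = [a**n * P for n in range(K + 2)]
# N_n: Fourier coefficients of Phi^# = psi^{-1} * v v*, psi^{-1} = 1 - a e^{i theta}.
N = [P, -a * P] + [np.zeros((2, 2))] * K

# Lag-1 predictor coefficients F_k = sum_{n=0}^{k} M_{nu+n} N_{k-n}, nu = 1.
nu = 1
F = [sum(M[nu + n] @ N[k - n] for n in range(k + 1)) for k in range(K)]

# For AR(1) the predictor is X_1-hat = a X_0: F_0 = a v v* and F_k = 0, k >= 1.
assert np.allclose(F[0], a * P)
assert all(np.allclose(Fk, 0) for Fk in F[1:])
```

The telescoping of the coefficients (each F_k with k >= 1 cancels to zero) mirrors the full-rank scalar computation in [2]; the only change here is that every coefficient carries the rank-one factor vv*.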


REFERENCES

[1] A. MAKAGON AND A. WERON, Wold-Cramér concordance theorem for interpolation of q-variate stationary processes over locally compact abelian groups, J. Multivariate Anal., 6 (1976), pp. 123-137.
[2] P. MASANI, The prediction theory of multivariate stochastic processes, III, Acta Math., 104 (1960), pp. 141-162.
[3] ——, Shift invariant spaces and prediction theory, Acta Math., 107 (1962), pp. 275-290.
[4] ——, Recent trends in multivariate prediction theory, in Multivariate Analysis, P. R. Krishnaiah, ed., Academic Press, New York, 1966, pp. 351-382.
[5] A. G. MIAMEE AND H. SALEHI, On the bilateral prediction error matrix of a multivariate stationary stochastic process, this Journal, 10 (1979), pp. 247-253.
[6] M. POURAHMADI, A matricial extension of the Helson-Szegő theorem and its application in multivariate prediction, J. Multivariate Anal., 16 (1985), pp. 265-275.
[7] J. B. ROBERTSON AND M. ROSENBERG, The decomposition of matrix-valued measures, Michigan Math. J., 15 (1968), pp. 353-368.
[8] H. SALEHI, On determination of the optimal factor of a nonnegative matrix-valued function, Proc. Amer. Math. Soc., 29 (1971), pp. 383-389.
[9] N. WIENER AND P. MASANI, The prediction theory of multivariate stochastic processes, I, Acta Math., 98 (1957), pp. 111-150.
[10] ——, The prediction theory of multivariate stochastic processes, II, Acta Math., 99 (1958), pp. 93-137.
