
SECOND-ORDER MOMENTS OF A STATIONARY

MARKOV CHAIN AND SOME APPLICATIONS

T. W. Anderson

Stanford University

TECHNICAL REPORT NO. 22

February 1989

U. S. Army Research Office

Contract DAAL03-89-K-0033

Theodore W. Anderson, Project Director

Department of Statistics

Stanford University

Stanford, California

Approved for Public Release; Distribution Unlimited.



The view, opinions, and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or decision, unless so designated by other documentation.

REPORT DOCUMENTATION PAGE

1. Report Number: 22
2. Govt Accession No.: N/A
3. Recipient's Catalog Number: N/A
4. Title: Second-Order Moments of a Stationary Markov Chain and Some Applications
5. Type of Report and Period Covered: Technical report
7. Author: T. W. Anderson
8. Contract or Grant Number: DAAL03-89-K-0033
9. Performing Organization Name and Address: Statistics Department, Sequoia Hall, Stanford University, Stanford, CA 94305-4065
11. Controlling Office Name and Address: U.S. Army Research Office, P.O. Box 12211, Research Triangle Park, NC 27709
12. Report Date: February 1989
13. Number of Pages: 13 pp.
15. Security Class. (of this report): Unclassified
16. Distribution Statement (of this report): Approved for public release; distribution unlimited.
17. Distribution Statement (of the abstract entered in Block 20, if different from report): NA
18. Supplementary Notes: The view, opinions, and/or findings contained in this report are those of the author(s) and should not be construed as an official Department of the Army position, policy, or decision, unless so designated by other documentation.
19. Key Words: Stationary Markov chain, second-order moments, estimation of stationary probabilities, simulation, first-order autoregression

20. Abstract.

The i-th state of a finite-state Markov chain can be indicated by a vector with 1 in the i-th position and 0's in the other positions. The vector-valued process defined in this way is autoregressive in the wide sense. The second-order moments, moving average representation, and spectral density of this process are obtained. The numerical-valued chain is a linear function of the vector; a simple condition is derived for it to be first-order autoregressive in the wide sense. The stationary probabilities of the chain can be estimated by the mean of observations on the vector-valued process; this estimator is asymptotically equivalent to the maximum likelihood estimator.



1. Introduction.

A finite-state Markov chain is a stochastic process in which the variable takes on one of a finite

number of values. Although the values may be numerical, they need not be; they may be simply

states or categories. If they are given numerical values, the values are not necessarily the first so

many integers, as in the number of customers waiting in a queue. When the chain is described in

terms of states, it is convenient for many purposes to treat the chain as a vector-valued process. The vector has 1 in the position corresponding to the given state and 0's in the other positions. Then the

vector-valued process is first-order autoregressive in the wide sense when the Markov chain is first-

order. Anderson (1979a), (1979b), (1980) pointed out analogies between Gaussian autoregressive

processes and Markov chains in terms of moments, sufficient statistics, tests of hypotheses, etc.

In this paper the consequences of the autoregressive structure of the vector-valued process are

developed further to yield various second-order moments and the spectral density of the process.

It is shown that the mean of this process, used as an estimator of the stationary probabilities,

is asymptotically equivalent to the maximum likelihood estimator and is asymptotically efficient

(Section 4). The numerical-valued Markov chain is considered as a linear function of the vector-valued process, and a simple condition is obtained for it to be a wide-sense first-order autoregressive

process (Section 5).

2. A Stationary Markov Chain.

A stationary Markov chain {xt} with discrete time parameter and states 1,..., m is defined

by the transition probabilities

(1)   Pr{x_t = j | x_{t-1} = i, x_{t-2} = k, ...} = p_{ij},   i, j, k, ... = 1, ..., m,   t = ..., -1, 0, 1, ...,

where p_{ij} ≥ 0 and Σ_{j=1}^m p_{ij} = 1. Let Pr{x_t = i} = p_i, i = 1, ..., m (p_i > 0 and Σ_{i=1}^m p_i = 1).

Then

(2)   Σ_{i=1}^m p_i p_{ij} = p_j,   j = 1, ..., m.

If P = (p_{ij}) and p = (p_i), a column vector, then the above properties can be written as

(3)   p'P = p',   Pε = ε,   p'ε = 1,

where ε = (1, 1, ..., 1)'. Let ε_i be the m-component vector with 1 in the i-th position and 0's

elsewhere, and let {z_t} be a sequence of m-component random vectors. Then the Markov chain

can be written

(4)   Pr{z_t = ε_j | z_{t-1} = ε_i, z_{t-2} = ε_k, ...} = p_{ij},   i, j, k, ... = 1, ..., m,   t = ..., -1, 0, 1, ....


Note that z_t has 1 in one position and 0's in the others; hence ε'z_t = 1.
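For illustration, here is a minimal numerical sketch (not part of the original report) of the vector representation and of the stationary probabilities p satisfying p'P = p'; the 3-state transition matrix used is an arbitrary choice.

import numpy as np

# Arbitrary illustrative 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Stationary p: left characteristic vector of P for the root 1, normalized so p'eps = 1.
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
p = p / p.sum()

eps = np.ones(3)
print(np.allclose(p @ P, p), np.isclose(p @ eps, 1.0))    # p'P = p' and p'eps = 1

# The vector representation: z_t = eps_i when x_t = i.
rng = np.random.default_rng(0)
x = rng.choice(3, p=p)                       # start in the stationary distribution
z = np.eye(3)[x]                             # 1 in position x, 0's elsewhere
x_next = rng.choice(3, p=P[x])               # Pr{x_{t+1} = j | x_t = i} = p_{ij}
print(z, np.eye(3)[x_next])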

3. Second-order Moments.

It follows from the model that

(5)   E(z_{jt} | z_{t-1} = ε_i, z_{t-2}, ...) = p_{ij},

where z_{jt} is the j-th component of z_t. We can write (5) in vector form as

(6)   E(z_t | z_{t-1}, z_{t-2}, ...) = P'z_{t-1}.

Let

(7)   v_t = z_t - P'z_{t-1}

be the t-th disturbance. Then

(8)   E v_t = E{E[(z_t - P'z_{t-1}) | z_{t-1}, z_{t-2}, ...]}

= 0.

In (8) the outer expectation is with respect to z_{t-1}, z_{t-2}, .... Similarly,

(9)   E v_t z'_{t-s} = E{E(v_t | z_{t-1}, z_{t-2}, ...) z'_{t-s}}

= 0,   s = 1, 2, ....

Since v_{t-s} = z_{t-s} - P'z_{t-s-1},

(10)   E v_t v'_{t-s} = 0,   s = 1, 2, ....

Thus {vt} is a sequence of uncorrelated random vectors.

We can iterate z_t = P'z_{t-1} + v_t to obtain

(11)   z_t = v_t + P'v_{t-1} + ... + (P')^{s-1} v_{t-s+1} + (P')^s z_{t-s}.

Then

(12)   E(z_t | z_{t-s}, z_{t-s-1}, ...) = (P')^s z_{t-s}.

This conditional expected value could alternatively be obtained from the fact that the transition

probabilities from x_{t-s} to x_t are the elements of P^s.


Since {xt} is stationary, {zt} is stationary and zt has a marginal multinomial distribution with

probabilities p_1, ..., p_m. Hence the expectation of z_t is

(13)   E z_t = p,

and the covariance matrix is

(14)   Var(z_t) = D_p - pp' = V,

say, where D_p is a diagonal matrix with i-th diagonal element p_i. From (11) we also find

(15)   E z_t z'_{t-s} = (P')^s E(z_{t-s} z'_{t-s}) = (P')^s D_p,   s = 0, 1, ....

Since E z_t = E z_{t-s} = p and (P')^s p = p, the covariance matrix between z_t and z_{t-s} is

(16)   Cov(z_t, z_{t-s}) = (P')^s V,   s = 0, 1, ....

Thus (14) and (16) determine the second-order moments of {zt}.
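As a check, the following minimal sketch (an illustration, not from the report) compares the closed forms (14) and (16) with sample moments from a long simulated stationary run of the arbitrary 3-state chain introduced above.

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); p /= p.sum()

V = np.diag(p) - np.outer(p, p)              # (14): Var(z_t) = D_p - pp'

# Simulate a long stationary run and form the indicator vectors z_t.
rng = np.random.default_rng(1)
N = 200_000
cumP = np.cumsum(P, axis=1)
x = np.empty(N, dtype=int)
x[0] = rng.choice(3, p=p)
for t in range(1, N):
    x[t] = np.searchsorted(cumP[x[t - 1]], rng.random())
Z = np.eye(3)[x]                             # row t is z_t'

for s in (0, 1, 2):
    sample_cov = (Z[s:] - p).T @ (Z[:N - s] - p) / (N - s)   # sample Cov(z_t, z_{t-s})
    theory = np.linalg.matrix_power(P.T, s) @ V              # (16): (P')^s V
    print(s, np.max(np.abs(sample_cov - theory)))            # small for large N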

The conditional covariance matrix of zt and vt is

(17)   Var(z_t | z_{t-1} = ε_i, z_{t-2}, ...) = Var(v_t | z_{t-1} = ε_i, z_{t-2}, ...)

= D_{p_i} - p_i p_i'

= V_i,

say, where D_{p_i} is a diagonal matrix with p_{ij} as the j-th diagonal element and

(18)   p_i = (p_{i1}, p_{i2}, ..., p_{im})'.

From this conditional variance of vt (which has conditional mean value 0) we find the (marginal)

covariance matrix of vt as

(19)   E v_t v_t' = Σ_{i=1}^m p_i V_i

= Σ_{i=1}^m p_i (D_{p_i} - p_i p_i')

= D_p - Σ_{i=1}^m p_i p_i p_i'

by (2). Note that

(20)   P'VP = P'D_p P - P'pp'P

= Σ_{i=1}^m p_i p_i p_i' - pp'.


Thus

(21)   V = P'VP + Σ_{i=1}^m p_i V_i.

The second-order moments of {z_t} are the second-order moments of a first-order autoregressive process with coefficient matrix P' and disturbance covariance matrix Σ_{i=1}^m p_i V_i. (See Anderson

(1971), Sections 5.2 and 5.3, for example.) However, the conditional covariance matrix of v_t given

z_{t-1} depends on z_{t-1}. This fact shows that v_t and z_{t-1} are dependent, though uncorrelated.
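The decomposition (21) can be verified numerically; the sketch below (illustrative only, with the same arbitrary P as before) computes V, the V_i of (17)-(18), and both sides of (21).

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); p /= p.sum()

V = np.diag(p) - np.outer(p, p)                                  # (14)
V_i = [np.diag(P[i]) - np.outer(P[i], P[i]) for i in range(3)]   # (17)-(18)
E_vv = sum(p[i] * V_i[i] for i in range(3))                      # (19)

# (19) rewritten with (2): D_p - sum_i p_i p_i p_i'
print(np.allclose(E_vv, np.diag(p) - sum(p[i] * np.outer(P[i], P[i]) for i in range(3))))
print(np.allclose(V, P.T @ V @ P + E_vv))                        # (21)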

Let λ_1 = 1, λ_2, ..., λ_m be the characteristic roots of P, let t_1 = ε, t_2, ..., t_m be the corresponding

right-sided (column) characteristic vectors, and let q_1' = p', q_2', ..., q_m' be the corresponding left-sided (row) characteristic vectors. It is assumed that there are m linearly independent right- and

left-sided vectors (that is, that the elementary divisors of P are simple). Let

(22)   Λ = diag(1, λ_2, ..., λ_m) = ( 1    0  ),
                                    ( 0   Λ_2 )

(23)   Q = (p, q_2, ..., q_m) = (p, Q_2),

(24)   T = (ε, t_2, ..., t_m) = (ε, T_2).

Normalize the vectors so that T'Q = I; that is, T' = Q^{-1} and Q' = T^{-1}. Then

(25)   P = TΛQ' = TΛT^{-1},

(26)   P^s = TΛ^s Q' = TΛ^s T^{-1}.

Since ε'V = ε'(D_p - pp') = 0,

(27)   (P')^s V = QΛ^s T'V

= Q_2 Λ_2^s T_2' D_p

= ((P')^s - pε') D_p,   s = 1, 2, ....

We shall assume that the chain is irreducible and aperiodic. Then |λ_i| < 1, i = 2, ..., m. As s

increases, the covariance function decreases as a linear combination of λ_2^s, ..., λ_m^s.


We can write zt in the moving average representation

(28)   z_t = Σ_{s=0}^∞ (P')^s v_{t-s}.

Since (P')^s v_{t-s} = Q_2 Λ_2^s T_2' v_{t-s}, s = 1, 2, ..., (28) converges. The representation (28) is trivial,

however, because

(29)   (P')^s v_{t-s} = (P')^s (z_{t-s} - P'z_{t-s-1})

= (P')^s z_{t-s} - (P')^{s+1} z_{t-s-1}.

The spectral density of {z_t} for λ ≠ 0 is (Hannan (1970), p. 67, for example)

(30)   (I - P'e^{iλ})^{-1} V (I - Pe^{-iλ})^{-1}

= [Q(I - Λe^{iλ})T']^{-1} V [T(I - Λe^{-iλ})Q']^{-1}

= (T')^{-1} (I - Λe^{iλ})^{-1} Q^{-1} V (Q')^{-1} (I - Λe^{-iλ})^{-1} T^{-1}

= Q(I - Λe^{iλ})^{-1} T'VT (I - Λe^{-iλ})^{-1} Q'

= Q_2 (I - Λ_2 e^{iλ})^{-1} T_2' D_p T_2 (I - Λ_2 e^{-iλ})^{-1} Q_2'.

Since |λ_i| < 1, i = 2, ..., m, I - Λ_2 e^{iλ} and I - Λ_2 e^{-iλ} are nonsingular for all real λ.
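A minimal numerical sketch of (30) follows (an illustration, not from the report): it evaluates the spectral density at a few frequencies for the arbitrary P used earlier, both directly and in the reduced form with Q_2, Λ_2, T_2; complex-conjugate roots are handled with ordinary (not conjugate) transposes, as in the algebra above.

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
m = 3
vals, T = np.linalg.eig(P)                   # columns t_j with P t_j = lambda_j t_j
k1 = int(np.argmin(np.abs(vals - 1.0)))
order = [k1] + [k for k in range(m) if k != k1]
vals, T = vals[order], T[:, order].astype(complex)
T[:, 0] = T[:, 0] / T[0, 0]                  # scale t_1 to eps = (1, ..., 1)'
Q = np.linalg.inv(T).T                       # T'Q = I (ordinary transpose), so q_1 = p
p = np.real(Q[:, 0])
Dp, V = np.diag(p), np.diag(p) - np.outer(p, p)
T2, Q2, L2 = T[:, 1:], Q[:, 1:], np.diag(vals[1:])
I2, I3 = np.eye(m - 1, dtype=complex), np.eye(m)

for lam in (0.5, 1.0, 2.5):                  # lambda != 0, as required above
    ep, em = np.exp(1j * lam), np.exp(-1j * lam)
    direct = np.linalg.inv(I3 - P.T * ep) @ V @ np.linalg.inv(I3 - P * em)
    reduced = Q2 @ np.linalg.inv(I2 - L2 * ep) @ T2.T @ Dp @ T2 \
              @ np.linalg.inv(I2 - L2 * em) @ Q2.T
    print(lam, np.max(np.abs(direct - reduced)))   # agreement at each frequency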

4. Estimation of the Stationary Probabilities.

Consider a sequence of observations on the chain, x_1, ..., x_N. These define a sequence z_1, ..., z_N

from the process {zt}. Let

(31)   S = Σ_{t=1}^N z_t.

Then E S = Np. The covariance matrix of S is

(32)   E(S - Np)(S - Np)' = Σ_{t,s=1}^N E(z_t - p)(z_s - p)'

= Σ_{t=2}^N Σ_{s=1}^{t-1} E(z_t - p)(z_s - p)' + Σ_{s=2}^N Σ_{t=1}^{s-1} E(z_t - p)(z_s - p)' + Σ_{t=1}^N E(z_t - p)(z_t - p)'

= Σ_{t=2}^N Σ_{r=1}^{t-1} [E(z_t - p)(z_{t-r} - p)' + E(z_{t-r} - p)(z_t - p)'] + NV

= Σ_{t=2}^N Σ_{r=1}^{t-1} [(P')^r V + V P^r] + NV

= Σ_{t=2}^N Σ_{r=1}^{t-1} [Q_2 Λ_2^r T_2' D_p + D_p T_2 Λ_2^r Q_2'] + NV

= N [Q_2 (I - Λ_2)^{-1} Λ_2 T_2' D_p + D_p T_2 Λ_2 (I - Λ_2)^{-1} Q_2']

- [Q_2 (I - Λ_2)^{-2} Λ_2 (I - Λ_2^N) T_2' D_p + D_p T_2 (I - Λ_2^N) Λ_2 (I - Λ_2)^{-2} Q_2'] + NV.

Then the covariance matrix of √N(p̄ - p), where

(33)   p̄ = N^{-1} S = N^{-1} Σ_{t=1}^N z_t,

is

(34)   Q_2 (I - Λ_2)^{-1} T_2' D_p + D_p T_2 (I - Λ_2)^{-1} Q_2' - D_p + pp'

- N^{-1} [Q_2 (I - Λ_2)^{-2} Λ_2 (I - Λ_2^N) T_2' D_p + D_p T_2 (I - Λ_2^N) Λ_2 (I - Λ_2)^{-2} Q_2'].

Theorem.

(35)   √N(p̄ - p) → N[0, Q_2 (I - Λ_2)^{-1} T_2' D_p + D_p T_2 (I - Λ_2)^{-1} Q_2' - D_p + pp'].

The sum S is the vector of frequencies of the states 1, ..., m being observed, and p̄ is the

vector of relative frequencies. Since E p̄ = p, the vector p̄ is an estimator of p. Grenander (1954),

Rosenblatt (1956), and Grenander and Rosenblatt (1957) showed that in the case of a scalar process

with expected value a linear function of exogenous series and a stationary covariance function the

least squares estimator of this linear function is asymptotically efficient among all linear estimators.

(See also Anderson (1971), Section 10.2.) The result holds as well for vector processes. In particular,

the mean of a set of observations is an asymptotically efficient linear estimator of the constant

mean of a wide-sense stationary process, as is the case here.
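To illustrate the Theorem, the following minimal Monte Carlo sketch (not from the report; the 3-state P is the arbitrary example used earlier) compares the sample covariance of √N(p̄ - p) over replications with the limiting matrix V + Σ_{s≥1}[(P')^s V + V P^s], which equals the covariance matrix appearing in (35).

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); p /= p.sum()
V = np.diag(p) - np.outer(p, p)

# Limiting covariance of sqrt(N)(pbar - p): V + sum_{s>=1} [(P')^s V + V P^s].
Sigma = V.copy()
Ps = np.eye(3)
for s in range(1, 200):                      # truncation; |lambda_j|^s is negligible
    Ps = Ps @ P
    Sigma += Ps.T @ V + V @ Ps

# R replications of a stationary run of length N, simulated in parallel.
rng = np.random.default_rng(2)
N, R = 1_000, 2_000
cumP = np.cumsum(P, axis=1)
x = rng.choice(3, size=R, p=p)               # each chain starts in the stationary law
counts = np.zeros((R, 3))
for t in range(N):
    counts[np.arange(R), x] += 1
    x = (rng.random(R)[:, None] > cumP[x]).sum(axis=1)   # next state of each chain
devs = np.sqrt(N) * (counts / N - p)

print(np.round(Sigma, 3))
print(np.round(np.cov(devs, rowvar=False), 3))   # close to Sigma for large N and R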

An alternative method of estimating p from the data z_1, ..., z_N is to estimate P and find its

left-sided characteristic vector corresponding to the characteristic root 1. If z_1 is given (that is,

fixed), the maximum likelihood estimator of P is

(36)   P̂ = (Σ_{t=2}^N z_{t-1} z'_{t-1})^{-1} Σ_{t=2}^N z_{t-1} z_t'.


(See Anderson and Goodman (1957).) Note that

(37)   Σ_{t=2}^N z_{t-1} z'_{t-1} = diag( Σ_{t=2}^N z_{1,t-1}, ..., Σ_{t=2}^N z_{m,t-1} )

is diagonal. In fact, the diagonal elements of (37) are the components of S - z_N. If there is

no absorbing state, every component of p is positive, the probability that (37) is nonsingular

approaches 1 as N → ∞, and P̂ is well defined. Since P̂ is a consistent estimator of P, as N → ∞

the probability approaches 1 that ε'p̃ = 1 and

(38)   p̃'P̂ = p̃'

have a unique solution for p̃.

We observe that

(39)   (S - z_N)'P̂ = ε' (Σ_{t=2}^N z_{t-1} z'_{t-1}) P̂ = ε' Σ_{t=2}^N z_{t-1} z_t' = Σ_{t=2}^N z_t' = (S - z_1)'.

From (38) and (39) we obtain

(40)   N^{-1} (z_N'P̂ - z_1') = N^{-1} S'(P̂ - I) = p̄'(P̂ - I) = p̄' T̂_2 (Λ̂_2 - I) Q̂_2',

where

(41)   P̂ = (ε, T̂_2) ( 1    0   ) (p̃, Q̂_2)'.
                     ( 0   Λ̂_2 )

Since the left-hand side of (40) is of order N^{-1}, we deduce that

(42)   √N(p̄ - p̃) → 0

in probability. The two estimators of p are asymptotically equivalent. See also Henry (1970).
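The sketch below (illustrative, with the same arbitrary P) computes the maximum likelihood estimator (36) from transition counts, obtains p̃ from (38) as the left characteristic vector of P̂ for the root 1, and compares it with the vector of relative frequencies p̄, illustrating (42).

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); p /= p.sum()

# One realization x_1, ..., x_N started from the stationary distribution.
rng = np.random.default_rng(3)
N = 100_000
cumP = np.cumsum(P, axis=1)
x = np.empty(N, dtype=int)
x[0] = rng.choice(3, p=p)
for t in range(1, N):
    x[t] = np.searchsorted(cumP[x[t - 1]], rng.random())

# (36): P_hat[i, j] = (number of transitions i -> j) / (number of visits to i).
counts = np.zeros((3, 3))
np.add.at(counts, (x[:-1], x[1:]), 1)
P_hat = counts / counts.sum(axis=1, keepdims=True)

# (38): p_tilde is the left characteristic vector of P_hat for the root 1.
vals_h, vecs_h = np.linalg.eig(P_hat.T)
p_tilde = np.real(vecs_h[:, np.argmin(np.abs(vals_h - 1.0))]); p_tilde /= p_tilde.sum()

p_bar = np.bincount(x, minlength=3) / N      # relative frequencies
print(np.round(p, 4), np.round(p_bar, 4), np.round(p_tilde, 4))   # all close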

In many situations a sample x_1, ..., x_N (or equivalently, z_1, ..., z_N) is not drawn from a

stationary process, but from a process starting with some given state. In that case one might

discard enough initial observations to ensure that over the remaining period of observation


the process is stationary or almost stationary. Thus the use of p̄ calculated from the remaining

observations as an estimator of p would waste the initial observations. (If one wanted to run a

simulation and insisted on independent observations, one would start the process with a possibly

random state, let it run until stationarity is achieved, and make one observation. Then one would

repeat the procedure. Clearly this is expensive in computational resources and is unnecessary.)

To estimate P and then p does not require stationarity; one can start from any state. The

estimator of P is a maximum likelihood estimator, and hence the estimator of p (and of Λ_2, Q_2, and

T_2) is maximum likelihood and asymptotically efficient. (Several different sequences from the same

chain can be aggregated: see Anderson and Goodman (1957).)

The investigator will typically want to estimate the asymptotic covariance matrix of the estimator, given in (35), which involves

(43)   D_p T_2 (I - Λ_2)^{-1} Q_2' = D_p Σ_{j=2}^m t_j (1 - λ_j)^{-1} q_j'

= D_p Σ_{j=2}^m Σ_{s=0}^∞ λ_j^s t_j q_j'.

The rate of convergence is governed by the root that is largest in absolute value. The asymptotic

covariance matrix can also be written as

(44)   D_p Σ_{s=0}^∞ (P - εp')^s + [D_p Σ_{s=0}^∞ (P - εp')^s]' - D_p - pp'.

Since

(45)   (P - εp')^s = P^s - εp',   s = 1, 2, ...,

the covariance matrix (44) is

(46)   D_p Σ_{s=0}^∞ (P^s - εp') + [D_p Σ_{s=0}^∞ (P^s - εp')]' - D_p + pp'.

The infinite sum can be approximated by a finite sum since the sum is convergent. For an estimator,

P and p are replaced by P̂ and p̄, respectively.

A possible computational method is to power P until the rows of P^s are similar enough, that

is, until P^s is close to εp'. Then (46) follows. Note that at each step only P^s and Σ_{r=0}^s P^r need

be held in memory.
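A minimal sketch of this powering scheme follows (illustrative only; with data, P and p would be replaced by P̂ and p̄). It accumulates Σ_s (P^s - εp') until P^s agrees with εp', forms (46), and checks the result against a direct truncation of V + Σ_{s≥1}[(P')^s V + V P^s].

import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
vals, vecs = np.linalg.eig(P.T)
p = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))]); p /= p.sum()
Dp, eps = np.diag(p), np.ones(3)

# Accumulate sum_{s=0}^S (P^s - eps p') until the rows of P^s agree with p'.
Ps, acc = np.eye(3), np.zeros((3, 3))
while np.max(np.abs(Ps - np.outer(eps, p))) > 1e-12:
    acc += Ps - np.outer(eps, p)
    Ps = Ps @ P                              # only P^s and the running sum are kept

Sigma = Dp @ acc + (Dp @ acc).T - Dp + np.outer(p, p)    # (46)

# Check against a direct truncation of V + sum_{s>=1} [(P')^s V + V P^s].
V = Dp - np.outer(p, p)
check = V.copy(); Q = np.eye(3)
for s in range(1, 200):
    Q = Q @ P
    check += Q.T @ V + V @ Q
print(np.max(np.abs(Sigma - check)))         # essentially zero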

As a way of simulating a probability distribution on a finite set of outcomes, Persi Diaconis

(personal communication) has suggested setting up a Markov chain with this probability distribution as the stationary probability distribution. An example of particular interest is simulating the

uniform distribution of matrices with nonnegative integer entries and fixed row and column sums

(contingency tables). Diaconis and Efron (1985) have discussed the analysis of two-way tables based on this model.

Diaconis sets up the following procedure to generate a Markov chain in which all possible tables

(with assigned marginals) are the states of the chain. At each step select a pair of rows (g, h) and

columns (i, j) at random; with probability 1 add 1 to the (g,i)-th cell and to the (h,j)-th cell and subtract 1 from the (g,j)-th cell and from the (h,i)-th cell. If a step would lead to a negative entry,

cancel it. Each step leaves the row and column sums as given and so determines the transition

probabilities of a Markov chain with the possible tables as states. Since the matrix of transition

probabilities is doubly stochastic (the probability of going from one state to another is the same as

the probability of the reverse), the stationary probabilities are uniform; that is, p is proportional

to e.
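A minimal sketch of this table-swapping chain follows (not the report's code; the 3x3 starting table, and hence the fixed margins, are arbitrary illustrative choices).

import numpy as np

rng = np.random.default_rng(4)
table = np.array([[2, 1, 0],
                  [0, 2, 1],
                  [1, 0, 2]])                # starting table; its margins stay fixed
row_sums, col_sums = table.sum(axis=1).copy(), table.sum(axis=0).copy()

def step(tab):
    # Pick rows (g, h) and columns (i, j); add 1 at (g,i) and (h,j), subtract 1
    # at (g,j) and (h,i); cancel the move if it would create a negative entry.
    g, h = rng.choice(3, size=2, replace=False)
    i, j = rng.choice(3, size=2, replace=False)
    new = tab.copy()
    new[g, i] += 1; new[h, j] += 1
    new[g, j] -= 1; new[h, i] -= 1
    return new if (new >= 0).all() else tab

for _ in range(10_000):
    table = step(table)
    assert (table.sum(axis=1) == row_sums).all()
    assert (table.sum(axis=0) == col_sums).all()
print(table)                                 # one table drawn (approximately) uniformly

Because the row pair and the column pair are chosen at random, a move and its reverse occur with the same probability, which is the double stochasticity used in the argument above.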

Diaconis and Efron were interested in the distribution of the χ² goodness-of-fit statistic, say

T(z), where z is the vector representation of the two-way table; that is, they want Pr{T(Z) ≤ τ} for arbitrary τ ∈ [0, ∞). Given p_i = Pr{Z = ε_i}, the probability can be calculated as

(47)   Pr{T(Z) ≤ τ} = Σ_{i=1}^m p_i I[T(ε_i) ≤ τ],

where I(·) is the indicator function; that is, I[T(ε_i) ≤ τ] = 1 if T(ε_i) ≤ τ, and = 0 if T(ε_i) > τ.

Given a sample z_1, ..., z_N, the probability (47) is estimated by

(48)   N^{-1} Σ_{t=1}^N I[T(z_t) ≤ τ].

The estimator of the cdf of T(Z) is the empirical cdf of T(Z), although z_1, ..., z_N are not independent. However, the sample variance of I[T(z_t) ≤ τ] divided by N is not an estimator of the

variance of (48). If a subset of z_1, ..., z_N is taken, say z_{t_1}, ..., z_{t_n}, so that min |t_i - t_j| is great

enough that z_{t_1}, ..., z_{t_n} can be considered independent, the sample variance of I[T(z_{t_i}) ≤ τ] provides an estimator of a lower bound to the variance of (48).
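The sketch below (illustrative; it reuses the arbitrary table chain above, and the thresholds τ and the spacing are arbitrary choices) computes the estimator (48) for the chi-square statistic and the variance of a widely spaced subsample, in the spirit of the bound just described.

import numpy as np

rng = np.random.default_rng(5)
table = np.array([[2, 1, 0], [0, 2, 1], [1, 0, 2]])
expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()

def chi_square(tab):                         # T(z): chi-square goodness-of-fit statistic
    return ((tab - expected) ** 2 / expected).sum()

def step(tab):                               # one move of the table-swapping chain
    g, h = rng.choice(3, size=2, replace=False)
    i, j = rng.choice(3, size=2, replace=False)
    new = tab.copy()
    new[g, i] += 1; new[h, j] += 1; new[g, j] -= 1; new[h, i] -= 1
    return new if (new >= 0).all() else tab

N = 50_000
T_vals = np.empty(N)
for t in range(N):
    table = step(table)
    T_vals[t] = chi_square(table)

for tau in np.quantile(T_vals, [0.5, 0.9]):
    ind = (T_vals <= tau).astype(float)
    est = ind.mean()                         # (48)
    spaced = ind[::100]                      # widely spaced, nearly independent draws
    lower = spaced.var(ddof=1) / spaced.size # read as a lower-bound estimate, as in the text
    print(round(float(tau), 2), round(est, 3), round(float(lower), 6))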

If the specified row and column totals are such that each possible value of T can come from only one table, there is a 1-1 correspondence between the statistic and the table. That is, the value of the statistic is simply another label for the state. Then the generation of the statistic

is given by the Markov chain, and its empirical cdf is an asymptotically efficient estimator of the

distribution of T. These facts suggest that even if there is not a 1 - 1 correspondence between the

statistic and the table, the empirical cdf is an asymptotically efficient estimator.

5. Scoring a Markov chain.

Suppose each state is assigned a numerical value a_i, i = 1, ..., m. Let y_t = a_i if and only if

xt is in state i. For example, the state of a queue may be indicated by the number of customers


waiting. In that case it is convenient to label the states 0, 1, ..., N and set a_i = i; here N is the

maximum number of customers who can be waiting. (In many other cases the index of the state

may have no numerical meaning.)

The process {yt} can also be defined by

(49)   y_t = a'z_t,

where a' = (a_1, ..., a_m). The second-order moments of {y_t} can be found from those of {z_t}. The

mean is

(50)   E y_t = a'p = μ,

say; the variance is

(51)   Var(y_t) = a'Va = a'D_p a - (a'p)^2

= a'QT'D_p a - (a'p)^2

= Σ_{j=2}^m (a'q_j)(a'D_p t_j);

and the covariances are given by

(52)   Cov(y_t, y_{t-s}) = a' Cov(z_t, z_{t-s}) a

= a'(P')^s V a = a'Q_2 Λ_2^s T_2' D_p a

= Σ_{j=2}^m (a'q_j)(a'D_p t_j) λ_j^s.

A similar result was derived by Reynolds (1972) in a different way.*

* I am indebted to Jayaram Muthuswamy for calling my attention to this problem and associated literature.
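A minimal numerical sketch of (52) follows (not from the report): a symmetric P is chosen so that the roots are real, and the scores a = (0, 1, 3)' are arbitrary; the matrix form a'(P')^s V a and the expansion over the roots are compared.

import numpy as np

P = np.array([[0.7, 0.2, 0.1],               # symmetric, so the roots are 1, 0.6, 0.4
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
a = np.array([0.0, 1.0, 3.0])                # arbitrary numerical values for the states

vals, T = np.linalg.eig(P)                   # right vectors t_j (columns of T)
vals, T = np.real(vals), np.real(T)
order = np.argsort(-vals); vals, T = vals[order], T[:, order]
T[:, 0] = 1.0                                # take t_1 = eps
Q = np.linalg.inv(T).T                       # T'Q = I, so q_1 = p
p = Q[:, 0]
Dp, V = np.diag(p), np.diag(p) - np.outer(p, p)

for s in range(4):
    direct = a @ np.linalg.matrix_power(P.T, s) @ V @ a
    eigen = sum((a @ Q[:, j]) * (a @ Dp @ T[:, j]) * vals[j] ** s for j in range(1, 3))
    print(s, round(float(direct), 6), round(float(eigen), 6))    # the two forms agree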

The nature of the process {y_t} can be found from the nature of the {z_t} process. Let L be the

lag operator; that is, Lz_t = z_{t-1}. Then the {z_t} process can be written in autoregressive form as

(53)   (I - P'L) z_t = v_t.

Since P = TΛQ' and T'Q = I, multiplication of (53) on the left by T' gives

(54)   (I - ΛL) w_t = u_t,

where

(55)   w_t = T'z_t = ( ε'z_t   ) = ( 1         ),
                     ( T_2'z_t )   ( w_t^{(2)} )



and

(56)   u_t = T'v_t = ( ε'v_t   ) = ( 0         ).
                     ( T_2'v_t )   ( u_t^{(2)} )

We can rewrite the last m - 1 components of (54) as

(57)   (I - Λ_2 L) w_t^{(2)} = u_t^{(2)}.

The first component of (54) is (1 - L)·1 = 0. Then

(58)   w_t^{(2)} = (I - Λ_2 L)^{-1} u_t^{(2)}.

Since z_t = Qw_t, (58) yields

(59)   y_t = a'z_t = a'(p, Q_2) ( 1         ) = a'p + a'Q_2 (I - Λ_2 L)^{-1} u_t^{(2)}
                                ( w_t^{(2)} )

= a'p + a'Q_2 (I - Λ_2 L)^{-1} T_2' v_t.

Multiplication of (59) by |I - Λ_2 L| yields

(60)   Π_{j=2}^m (1 - λ_j L)(y_t - μ) = Σ_{j=2}^m (a'q_j) Π_{h=2, h≠j}^m (1 - λ_h L) u_{jt}.

This equation defines an autoregressive moving average process (in the wide sense) with an autoregressive part of order m - 1 and a moving average part of order at most m - 2. Note that if a_j = 1

for some j and a_i = 0 for i ≠ j, then (60) defines the marginal process of z_{jt}.

The above development is in terms of real roots and vectors. If a pair of roots are complex

conjugate, the corresponding pairs of vectors are complex conjugate and the analysis goes through

as before.

The covariance function (52) will be the covariance function of a first-order autoregression if

there is only one term in the sum on the right-hand side. Let Q_2 = (q_2, Q_3), T_2 = (t_2, T_3), and

(61)   Λ_2 = ( λ_2   0  )
             (  0   Λ_3 )

for a real root λ_2. Then

(62)   Cov(y_t, y_{t-s}) = (a'q_2)(a'D_p t_2) λ_2^s + a'Q_3 Λ_3^s T_3' D_p a.

This is the covariance function of a first-order autoregressive process if λ_2 is real and either a'Q_3 =

0 or a'D_p T_3 = 0. This fact was given by Lai (1978) in his Theorem 2.3 under the assumption

that all of the characteristic roots are real and distinct.


The alternative conditions may be written as

(63)   a'Q = (a'p, a'q_2, 0, ..., 0)

and

(64)   a'D_p T = (a'p, a'D_p t_2, 0, ..., 0).

Multiply (63) on the right by Q^{-1} = T' and (64) by T^{-1} D_p^{-1} = Q'D_p^{-1} to obtain

(65)   a' = με' + (a'q_2) t_2'

and

(66)   a' = με' + (a'D_p t_2) q_2' D_p^{-1}.

The conclusion is that {yt} is a first-order autoregressive process (in the wide sense) if the Markov

chain is irreducible and aperiodic, if there are m linearly independent characteristic vectors, and

if the vector a is a linear combination of e and of either a right-sided characteristic vector of

P corresponding to a real root (other than 1) or a left-sided vector corresponding to a real root

multiplied by D_p^{-1}.
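As an illustration of this conclusion (not from the report), the sketch below uses the symmetric P from the previous sketch and a score vector of the form (65), a = c_1 ε + c_2 t_2 with arbitrary c_1, c_2; the covariance function of {y_t} is then exactly geometric in λ_2, as for a wide-sense first-order autoregression.

import numpy as np

P = np.array([[0.7, 0.2, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.2, 0.7]])
vals, T = np.linalg.eig(P)
vals, T = np.real(vals), np.real(T)
order = np.argsort(-vals); vals, T = vals[order], T[:, order]
T[:, 0] = 1.0                                # t_1 = eps
Q = np.linalg.inv(T).T
p = Q[:, 0]
V = np.diag(p) - np.outer(p, p)
lam2, t2 = vals[1], T[:, 1]

a = 2.0 * np.ones(3) + 5.0 * t2              # a in the span of eps and t_2, as in (65)
cov = [a @ np.linalg.matrix_power(P.T, s) @ V @ a for s in range(5)]
print([round(c / cov[0], 4) for c in cov])   # ratios Cov(s)/Cov(0)
print([round(lam2 ** s, 4) for s in range(5)])   # equal to lambda_2^s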


References

Anderson, T. W. (1971). The Statistical Analysis of Time Series, John Wiley & Sons, Inc., New

York.

Anderson, T. W. (1979a). Panels and time series analysis: Markov chains and autoregressive

processes, Qualitative and Quantitative Social Research: Papers in Honor of Paul F. Lazarsfeld

(Robert K. Merton, James S. Coleman, and Peter H. Rossi, eds.), The Free Press, New York,

82-97.

Anderson, T. W. (1979b). Some relations between Markov chains and vector autoregressive pro-

cesses, International Statistical Institute: Contributed Papers, 42nd Session, December 4-14,

1979, Manila, 25-28.

Anderson, T. W. (1980). Finite-state Markov chains and vector autoregressive processes, Pro-

ceedings of the Conference on Recent Developments in Statistical Methods and Applications,

Directorate-General of Budget, Accounting and Statistics, Executive Yuan, Taipei, Taiwan,

Republic of China, 1-12.

Anderson, T. W., and Leo A. Goodman (1957), Statistical inference about Markov chains. Annals

of Mathematical Statistics, 28, 89-110.

Diaconis, Persi, and Bradley Efron (1985). Testing for independence in a two-way table: New

interpretations of the chi-square statistic. Annals of Statistics, 13, 845-913.

Grenander, Ulf (1954). On the estimation of regression coefficients in the case of an autocorrelated

disturbance. Annals of Mathematical Statistics, 25, 252-272.

Grenander, Ulf, and Murray Rosenblatt (1957). Statistical Analysis of Stationary Time Series.

John Wiley and Sons, Inc., New York.

Hannan, E. J. (1970). Multiple Time Series. John Wiley & Sons, Inc., New York.

Henry, Neil Wylie (1970). Problems in the Statistical Analysis of Markov Chains. Doctoral Disser-

tation, Columbia University Libraries.

Lai, C. D. (1978). First order autoregressive Markov processes. Stochastic Processes and their

Applications, 7, 65-72.

Reynolds, John F. (1972). Some theorems on the transient covariance of Markov chains. Journal

of Applied Probability, 9, 214-218.

Rosenblatt, Murray (1956). Some regression problems in time series analysis. Proceedings of the

Third Berkeley Symposium on Mathematical Statistics and Probability, University of California

Press, Berkeley and Los Angeles, 1, 165-186.


Technical Reports
U.S. Army Research Office

Contracts DAAG29-82-K-0156, DAAG29-85-K-0239, and DAAL03-89-K-0033

1. "Maximum Likelihood Estimators and Likelihood Ratio Criteria for Multivariate EllipticallyContoured Distributions," T. W. Anderson and Kai-Tai Fang, September 1982.

2. "A Review and Some Extensions of Takemura's Generalizations of Cochran's Theorem," GeorgeP.H. Styan, September 1982.

3. "Some Further Applications of Finite Difference Operators," Kai-Tai Fang, September 1982.

4. "Rank Additivity and Matrix Polynomials," George P.H. Styan and Akimichi Takemura,September 1982.

5. "The Problem of Selecting a Given Number of Representative Points in a Normal Populationand a Generalized Mills' Ratio," Kai-Tai Fang and Shu-Dong He, October 1982.

6. "Tensor Analysis of ANOVA Decomposition," Akimichi Takemura, November 1982.

7. "A Statistical Approach to Zonal Polynomials," Akimichi Takemura, January 1983.

8. "Orthogonal Expansion of Quantile Function and Components of the Shapiro-Francia Statis-tic," Akimichi Takemura, January 1983.

9. "An Orthogonally Invariant Minimax Estimator of the Covariance Matrix of a MultivariateNormal Population," Akimichi Takemura, April 1983.

10. "Relationships Among Classes of Spherical Matrix Distributions," Kai-Tai Fang and Han-FengChen, April 1984.

11. "A Generalization of Autocorrelation and Partial Autocorrelation Functions Useful for Identi-fication of ARMA(p,q) Processes," Akimichi Takemura, May 1984.

12. "Methods and Applications of Time Series Analysis Part II: Linear Stochastic Models," T. W.Anderson and N. D. Singpurwalla, October 1984.

13. "Why Do Noninvertible Estimated Moving Averages Occur?" T. W. Anderson and AkimichiTakemura, November 1984.

14. "Invariant Tests and Likelihood Ratio Tests for Multivariate Elliptically Contoured Distribu-tions," Huang Hsu, May 1985.

15. "Statistical Inferences in Cross-lagged Panel Studies," Lawrence S. Mayer, November 1985.

16. "Notes on the Extended Class of Stein Estimators," Suk-ki Hahn, July 1986.

17. "The Stationary Autoregressive Model," T. W. Anderson, July 1986.

18. "Bayesian Analyses of Nonhomogeneous Autoregressive Processes," T. W. Anderson, NozerD. Singpurwalla, and Refik Soyer, September 1986.

19. "Estimation of a Multivariate Continuous Variable Panel Model," Lawrence S. Mayer and KunMaxo Chen, June 1987.

20. "Consistency of Invariant Tests for the Multivariate Analysis of Variance," T. W. Andersonand Michael D. Perlman, October 1987.

21. "Likelihood Inference for Linear Regression Models," T. J. DiCiccio, November 1987.

22. "Second-order Moments of a Stationary Markov Chain and Some Applications," T. W. An-derson, February 1989.