# ARMA Autocorrelation Functions (NC State)

Post on 06-Mar-2018


• ARMA Autocorrelation Functions

For a moving average process, MA(q):

$$x_t = w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q}.$$

So (with $\theta_0 = 1$)

$$\gamma(h) = \operatorname{cov}(x_{t+h}, x_t) = E\left[\left(\sum_{j=0}^{q} \theta_j w_{t+h-j}\right)\left(\sum_{k=0}^{q} \theta_k w_{t-k}\right)\right] = \begin{cases} \sigma_w^2 \sum_{j=0}^{q-h} \theta_j \theta_{j+h}, & 0 \le h \le q \\ 0, & h > q. \end{cases}$$

• So the ACF is

$$\rho(h) = \begin{cases} \dfrac{\sum_{j=0}^{q-h} \theta_j \theta_{j+h}}{\sum_{j=0}^{q} \theta_j^2}, & 0 \le h \le q \\ 0, & h > q. \end{cases}$$

Notes:

In these expressions, $\theta_0 = 1$ for convenience.

$\rho(q) \ne 0$ but $\rho(h) = 0$ for $h > q$. This characterizes MA(q).
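The cutoff formula above is easy to check numerically. A minimal sketch (the helper name `ma_acf` and the MA(1) coefficient 0.5 are illustrative, not from the slides):

```python
import numpy as np

def ma_acf(theta, max_lag):
    """Theoretical MA(q) ACF: rho(h) = sum_j theta_j theta_{j+h} / sum_j theta_j^2."""
    th = np.concatenate(([1.0], np.asarray(theta, dtype=float)))  # theta_0 = 1
    q = len(th) - 1
    denom = np.sum(th ** 2)                         # sum_{j=0}^{q} theta_j^2
    rho = np.zeros(max_lag + 1)
    for h in range(max_lag + 1):
        if h <= q:
            # sum_{j=0}^{q-h} theta_j * theta_{j+h}
            rho[h] = np.sum(th[:q - h + 1] * th[h:]) / denom
    return rho

rho = ma_acf([0.5], max_lag=5)   # MA(1): rho(1) = 0.5/1.25 = 0.4, zero beyond lag 1
```

The cutoff after lag $q$ is exactly the MA(q) signature described in the notes above.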

• For an autoregressive process, AR(p):

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t.$$

So

$$\gamma(h) = \operatorname{cov}(x_{t+h}, x_t) = E\left[\left(\sum_{j=1}^{p} \phi_j x_{t+h-j} + w_{t+h}\right) x_t\right] = \sum_{j=1}^{p} \phi_j \gamma(h-j) + \operatorname{cov}(w_{t+h}, x_t).$$

• Because $x_t$ is causal, $x_t$ is $w_t$ plus a linear combination of $w_{t-1}, w_{t-2}, \ldots$

So

$$\operatorname{cov}(w_{t+h}, x_t) = \begin{cases} \sigma_w^2, & h = 0 \\ 0, & h > 0. \end{cases}$$

Hence

$$\gamma(h) = \sum_{j=1}^{p} \phi_j \gamma(h-j), \quad h > 0,$$

and

$$\gamma(0) = \sum_{j=1}^{p} \phi_j \gamma(j) + \sigma_w^2.$$
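These two equations can be verified numerically for AR(1), where the autocovariance has a known closed form, $\gamma(h) = \sigma_w^2 \phi^h / (1 - \phi^2)$ for $h \ge 0$. A sketch (the values $\phi = 0.7$, $\sigma_w^2 = 1$ are hypothetical):

```python
import numpy as np

# AR(1) autocovariances gamma(0), ..., gamma(5) from the closed form.
phi, sigma2 = 0.7, 1.0
gamma = (sigma2 / (1 - phi ** 2)) * phi ** np.arange(6)

# gamma(h) = phi * gamma(h-1) for h > 0, and gamma(0) = phi * gamma(1) + sigma_w^2.
recursion_ok = np.allclose(gamma[1:], phi * gamma[:-1])
h0_ok = np.isclose(gamma[0], phi * gamma[1] + sigma2)
```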

• If we know the parameters $\phi_1, \phi_2, \ldots, \phi_p$ and $\sigma_w^2$, these equations for $h = 0$ and $h = 1, 2, \ldots, p$ form $p + 1$ linear equations in the $p + 1$ unknowns $\gamma(0), \gamma(1), \ldots, \gamma(p)$.

The other autocovariances can then be found recursively from the equation for $h > p$.

Alternatively, if we know (or have estimated) $\gamma(0), \gamma(1), \ldots, \gamma(p)$, they form $p + 1$ linear equations in the $p + 1$ parameters $\phi_1, \phi_2, \ldots, \phi_p$ and $\sigma_w^2$.

These are the Yule-Walker equations.
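The first direction can be sketched for AR(2): write the $h = 0, 1, 2$ equations as a $3 \times 3$ linear system (using $\gamma(-h) = \gamma(h)$), solve for $\gamma(0), \gamma(1), \gamma(2)$, then extend recursively. The coefficients $\phi_1 = 0.5$, $\phi_2 = 0.3$, $\sigma_w^2 = 1$ are hypothetical:

```python
import numpy as np

phi1, phi2, sigma2 = 0.5, 0.3, 1.0

# The Yule-Walker system in gamma(0), gamma(1), gamma(2):
#   gamma(0) - phi1*gamma(1) - phi2*gamma(2)  = sigma_w^2   (h = 0)
#  -phi1*gamma(0) + (1 - phi2)*gamma(1)       = 0           (h = 1)
#  -phi2*gamma(0) - phi1*gamma(1) + gamma(2)  = 0           (h = 2)
A = np.array([[1.0,   -phi1,       -phi2],
              [-phi1,  1.0 - phi2,  0.0],
              [-phi2, -phi1,        1.0]])
b = np.array([sigma2, 0.0, 0.0])
gamma = list(np.linalg.solve(A, b))

# Remaining autocovariances from gamma(h) = phi1*gamma(h-1) + phi2*gamma(h-2):
for h in range(3, 10):
    gamma.append(phi1 * gamma[h - 1] + phi2 * gamma[h - 2])
```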

• For the ARMA(p, q) model with $p > 0$ and $q > 0$:

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t + \theta_1 w_{t-1} + \theta_2 w_{t-2} + \cdots + \theta_q w_{t-q},$$

a generalized set of Yule-Walker equations must be used.

The moving average models ARMA(0, q) = MA(q) are the only ones with a closed-form expression for $\gamma(h)$.

For AR(p) and ARMA(p, q) with $p > 0$, the recursive equation means that for $h > \max(p, q+1)$, $\gamma(h)$ is a sum of geometrically decaying terms, possibly damped oscillations.

• The recursive equation is

$$\gamma(h) = \sum_{j=1}^{p} \phi_j \gamma(h-j), \quad h > q.$$

What kinds of sequences satisfy an equation like this?

Try $\gamma(h) = z^{-h}$ for some constant $z$.

The equation becomes

$$0 = z^{-h} - \sum_{j=1}^{p} \phi_j z^{-(h-j)} = z^{-h}\left(1 - \sum_{j=1}^{p} \phi_j z^{j}\right) = z^{-h}\phi(z).$$
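This can be checked numerically: find the zeros of $\phi(z)$ with `numpy.roots` and confirm that $z^{-h}$ satisfies the recursion. A sketch for a hypothetical AR(2) with $\phi(z) = 1 - 0.5z - 0.3z^2$:

```python
import numpy as np

phi1, phi2 = 0.5, 0.3

# numpy.roots takes coefficients from the highest power down:
# phi(z) = -phi2*z^2 - phi1*z + 1
roots = np.roots([-phi2, -phi1, 1.0]).astype(complex)

for z in roots:
    h = 5
    lhs = z ** (-h)
    rhs = phi1 * z ** (-(h - 1)) + phi2 * z ** (-(h - 2))
    assert np.isclose(lhs, rhs)   # z^{-h} solves gamma(h) = sum_j phi_j gamma(h-j)
```

Multiplying the recursion by $z^h$ reduces it to $1 = \phi_1 z + \phi_2 z^2$, i.e. $\phi(z) = 0$, which is why each root supplies a solution.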

• So if $\phi(z) = 0$, then $\gamma(h) = z^{-h}$ satisfies the equation.

Since $\phi(z)$ is a polynomial of degree $p$, there are $p$ solutions, say $z_1, z_2, \ldots, z_p$.

So a more general solution is

$$\gamma(h) = \sum_{l=1}^{p} c_l z_l^{-h},$$

for any constants $c_1, c_2, \ldots, c_p$.

If $z_1, z_2, \ldots, z_p$ are distinct, this is the most general solution; if some roots are repeated, the general form is a little more complicated.

• If all $z_1, z_2, \ldots, z_p$ are real, this is a sum of geometrically decaying terms.

If any root is complex, its complex conjugate must also be a root, and these two terms may be combined into geometrically decaying sine-cosine terms.

The constants $c_1, c_2, \ldots, c_p$ are determined by initial conditions; in the ARMA case, these are the Yule-Walker equations.

Note that the various rates of decay are determined by the zeros of $\phi(z)$, the autoregressive operator, and do not depend on $\theta(z)$, the moving average operator.

• Example: ARMA(1, 1)

$$x_t = \phi x_{t-1} + \theta w_{t-1} + w_t.$$

The recursion is

$$\rho(h) = \phi\,\rho(h-1), \quad h = 2, 3, \ldots$$

So $\rho(h) = c\phi^h$ for $h = 1, 2, \ldots$, but $c \ne 1$.

Graphically, the ACF decays geometrically, but with a different value at $h = 0$.
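A numerical sketch with the parameter values used in the plot on the next slide ($\phi = 0.9$, $\theta = 0.5$). The lag-1 value uses the standard ARMA(1,1) closed form for $\rho(1)$, which is not derived in these slides:

```python
import numpy as np

phi, theta = 0.9, 0.5

# Standard ARMA(1,1) result: rho(1) = (phi + theta)(1 + phi*theta) / (1 + 2*phi*theta + theta^2)
rho1 = (phi + theta) * (1 + phi * theta) / (1 + 2 * phi * theta + theta ** 2)

h = np.arange(1, 25)
rho = rho1 * phi ** (h - 1)    # rho(h) = phi * rho(h-1) for h >= 2

c = rho1 / phi                 # rho(h) = c * phi^h; c != 1 here
```

Since $\rho(0) = 1 \ne c$, the geometric decay "starts from the wrong place" at lag 0, which is the feature visible in the plot.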

• [Figure: `acf(ar = 0.9, ma = 0.5, 24)`: theoretical ACF plotted against lag (index 5 to 25), vertical axis 0.2 to 1.0, with legend entries "AR" and "MA".]

• The Partial Autocorrelation Function

An MA(q) can be identified from its ACF: non-zero to lag $q$, and zero afterwards.

We need a similar tool for AR(p).

The partial autocorrelation function (PACF) fills that role.

• Recall: for multivariate random variables $X, Y, Z$, the partial correlation of $X$ and $Y$ given $Z$ is the correlation of:

the residuals of $X$ from its regression on $Z$; and

the residuals of $Y$ from its regression on $Z$.

Here regression means conditional expectation, or best linear prediction, based on population distributions, not a sample calculation.

In a time series, the partial autocorrelations are defined as

$$\phi_{hh} = \text{partial correlation of } x_{t+h} \text{ and } x_t \text{ given } x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}.$$
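The residual-correlation definition can be carried out directly from population autocovariances. A sketch of $\phi_{22}$ for a hypothetical AR(1) with $\phi = 0.7$: regress both $x_{t+2}$ and $x_t$ on $x_{t+1}$ (best linear prediction) and correlate the residuals:

```python
phi = 0.7

def gamma(h):
    # AR(1) autocovariance with sigma_w^2 = 1: gamma(h) = phi^|h| / (1 - phi^2)
    return phi ** abs(h) / (1 - phi ** 2)

# Best linear predictor of x_{t+2} (and, by symmetry, of x_t) given x_{t+1}
# has slope beta = gamma(1)/gamma(0) = phi.
beta = gamma(1) / gamma(0)
cov_resid = gamma(2) - 2 * beta * gamma(1) + beta ** 2 * gamma(0)
var_resid = gamma(0) - beta ** 2 * gamma(0)   # residual variance, same on both sides
phi_22 = cov_resid / var_resid                # 0 for AR(1): PACF cuts off after lag 1
```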

• For an autoregressive process, AR(p):

$$x_t = \phi_1 x_{t-1} + \phi_2 x_{t-2} + \cdots + \phi_p x_{t-p} + w_t.$$

If $h > p$, the regression of $x_{t+h}$ on $x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}$ is

$$\phi_1 x_{t+h-1} + \phi_2 x_{t+h-2} + \cdots + \phi_p x_{t+h-p}.$$

So the residual is just $w_{t+h}$, which is uncorrelated with $x_{t+h-1}, x_{t+h-2}, \ldots, x_{t+1}$ and $x_t$.

• So the partial autocorrelation is zero for $h > p$:

$$\phi_{hh} = 0, \quad h > p.$$

We can also show that $\phi_{pp} = \phi_p$, which is non-zero by assumption.

So $\phi_{pp} \ne 0$ but $\phi_{hh} = 0$ for $h > p$. This characterizes AR(p).
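The PACF can be computed from the ACF by the Durbin-Levinson recursion (a standard algorithm, not derived in these slides). A sketch that confirms the AR(1) cutoff, with a hypothetical $\phi = 0.7$ so that $\rho(h) = 0.7^h$:

```python
import numpy as np

def pacf_from_acf(rho):
    """Durbin-Levinson recursion: returns phi_{hh} for h = 0, ..., len(rho)-1."""
    H = len(rho) - 1
    phi = np.zeros((H + 1, H + 1))
    pacf = np.zeros(H + 1)
    phi[1, 1] = pacf[1] = rho[1]
    for h in range(2, H + 1):
        num = rho[h] - np.dot(phi[h - 1, 1:h], rho[h - 1:0:-1])
        den = 1.0 - np.dot(phi[h - 1, 1:h], rho[1:h])
        phi[h, h] = pacf[h] = num / den
        # update the intermediate regression coefficients
        phi[h, 1:h] = phi[h - 1, 1:h] - phi[h, h] * phi[h - 1, h - 1:0:-1]
    return pacf

rho = 0.7 ** np.arange(6)        # AR(1) ACF
pacf = pacf_from_acf(rho)        # phi_11 = 0.7, phi_hh = 0 for h > 1
```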

• The Inverse Autocorrelation Function

SAS's proc arima also shows the Inverse Autocorrelation Function (IACF).

The IACF of the ARMA(p, q) model

$$\phi(B) x_t = \theta(B) w_t$$

is defined to be the ACF of the inverse (or dual) process

$$\theta(B) x_t^{\text{(inverse)}} = \phi(B) w_t.$$

The IACF has the same property as the PACF: AR(p) is characterized by an IACF that is nonzero at lag $p$ but zero for larger lags.
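A small sketch of the duality, with a hypothetical $\phi = 0.9$: the dual of the AR(1) model $(1 - \phi B)x_t = w_t$ swaps the operators, giving the MA(1) process $x_t = (1 - \phi B)w_t$, so the IACF of AR(1) is an MA(1) ACF with coefficient $-\phi$ (lag-1 value $-\phi/(1+\phi^2)$, the standard MA(1) formula):

```python
import numpy as np

phi = 0.9
lags = np.arange(1, 6)

acf = phi ** lags                                      # AR(1) ACF: tails off
iacf = np.where(lags == 1, -phi / (1 + phi ** 2), 0.0) # IACF = MA(1) ACF: cuts off after lag 1
```

The contrast is the identification tool: the ACF tails off while the IACF cuts off at lag $p = 1$.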

• Summary: Identification of ARMA processes

AR(p) is characterized by a PACF or IACF that is:

nonzero at lag p;

zero for lags larger than p.

MA(q) is characterized by an ACF that is:

nonzero at lag q;

zero for lags larger than q.

For anything else, try ARMA(p, q) with p > 0 and q > 0.


• For $p > 0$ and $q > 0$:

|      | AR(p) | MA(q) | ARMA(p, q) |
|------|-------|-------|------------|
| ACF  | Tails off | Cuts off after lag q | Tails off |
| PACF | Cuts off after lag p | Tails off | Tails off |
| IACF | Cuts off after lag p | Tails off | Tails off |

Note: these characteristics are used to guide the initial choice of a model; estimation and model-checking will often lead to a different model.

• Other ARMA Identification Techniques

SAS's proc arima offers the MINIC option on the identify statement, which produces a table of SBC criteria for various values of p and q.

The identify statement has two other options: ESACF and SCAN.

Both produce tables in which the pattern of zero and non-zero values characterizes p and q.

See Section 3.4.10 in Brocklebank and Dickey.

• SAS code for the varve example:

```sas
options linesize = 80;
ods html file = 'varve3.html';

data varve;
  infile '../data/varve.dat';
  input varve;
  lv = log(varve);
run;

proc arima data = varve;
  title 'Use identify options to identify a good model';
  identify var = lv(1) minic esacf scan;
  estimate q = 1 method = ml;
  estimate q = 2 method = ml;
  estimate p = 1 q = 1 method = ml;
run;
```

proc arima output: http://www.stat.ncsu.edu/people/bloomfield/courses/st730/varve3.html