ARMA Models

Page 1: Arma Model

Moving Average Models MA(q)

In the moving average process of order q, each observation Y_t is generated by a weighted average of random disturbances going back q periods.

It is denoted as MA(q), and the equation is

Y_t = μ + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2} - … - θ_q ε_{t-q}

where the parameters θ_1, …, θ_q may be positive or negative.

Here ε_t is white noise: E(ε_t) = 0, E(ε_t²) = σ_ε², and the covariance E(ε_t ε_{t-k}) = 0 for k ≠ 0.

White noise processes themselves are not very common in practice, but weighted sums of a white noise process can provide a good representation of processes that are nonwhite.

The mean of the moving average process is independent of time, since

E(Y_t) = μ + E(ε_t) - θ_1 E(ε_{t-1}) - … - θ_q E(ε_{t-q}) = μ

The variance: since the ε's are uncorrelated and share the variance σ_ε²,

γ_0 = E[(Y_t - μ)²] = σ_ε² (1 + θ_1² + θ_2² + … + θ_q²)

A stationary process requires the variance of Y_t to be finite, which holds here for any finite q. So we have that the MA(q) process is stationary.

Page 2: Arma Model

Let’s examine some simple moving average processes, calculating the mean, variance, covariance, and autocorrelation function for each. These statistics are important since:

1. They provide information that helps characterize the process;
2. They help us to identify the process when we construct models.

Example 1: the MA(1) process

Y_t = μ + ε_t - θ_1 ε_{t-1}

*mean = E(Y_t) = μ
*variance = γ_0 = σ_ε² (1 + θ_1²)
*covariance for lag one: γ_1 = E[(Y_t - μ)(Y_{t-1} - μ)] = -θ_1 σ_ε², and γ_k = 0 for k > 1

Thus, the process has a covariance of zero when the displacement is more than one period. (It has a memory of only one period.)

*Autocorrelation function: ρ_1 = γ_1/γ_0 = -θ_1/(1 + θ_1²), and ρ_k = 0 for k > 1

[Graph of the autocorrelation function]
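These MA(1) moments are easy to verify by simulation. A minimal NumPy sketch (θ_1 = 0.6, σ_ε = 1, and the sample size are illustrative choices, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
theta1, sigma = 0.6, 1.0
T = 200_000

eps = rng.normal(0.0, sigma, T + 1)           # white noise disturbances
y = eps[1:] - theta1 * eps[:-1]               # MA(1): Y_t = eps_t - theta1*eps_{t-1} (mu = 0)

# Theoretical moments from the text
var_theory = sigma**2 * (1 + theta1**2)       # gamma_0
rho1_theory = -theta1 / (1 + theta1**2)       # rho_1

# Sample counterparts
yc = y - y.mean()
rho1_sample = (yc[1:] * yc[:-1]).mean() / yc.var()
rho2_sample = (yc[2:] * yc[:-2]).mean() / yc.var()

print(round(y.var(), 3), round(var_theory, 3))
print(round(rho1_sample, 3), round(rho1_theory, 3))
print(round(rho2_sample, 3))                  # near zero: memory of one period
```

The sample variance and lag-one autocorrelation match the formulas, and the lag-two autocorrelation is near zero, reflecting the one-period memory.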


Page 3: Arma Model

Example 2: the MA(2) process

Equation: Y_t = μ + ε_t - θ_1 ε_{t-1} - θ_2 ε_{t-2}

*mean = μ
*variance = γ_0 = σ_ε² (1 + θ_1² + θ_2²)
*covariance: γ_1 = σ_ε² (-θ_1 + θ_1 θ_2), γ_2 = -θ_2 σ_ε², and γ_k = 0 for k > 2

*Autocorrelation function is

ρ_1 = (-θ_1 + θ_1 θ_2)/(1 + θ_1² + θ_2²), ρ_2 = -θ_2/(1 + θ_1² + θ_2²), and ρ_k = 0 for k > 2

NOTE: The process has a memory of two periods.

[Show graph of the autocorrelation function]


Page 4: Arma Model

In general, the formula of autocorrelation for a moving average process of order q [MA(q)] is

ρ_k = (-θ_k + θ_1 θ_{k+1} + θ_2 θ_{k+2} + … + θ_{q-k} θ_q)/(1 + θ_1² + … + θ_q²) for k = 1, 2, …, q
ρ_k = 0 for k > q

We can see now why the sample autocorrelation function can be useful in specifying the order of a moving average process: the autocorrelation function for the MA(q) process has q non-zero values and is zero for k > q.
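This cutoff formula can be coded directly. A short sketch (the function name is an illustration; the sign convention follows the MA equation as written above):

```python
import numpy as np

def ma_acf(thetas, k):
    """Autocorrelation rho_k of Y_t = mu + eps_t - theta_1*eps_{t-1} - ... - theta_q*eps_{t-q}."""
    q = len(thetas)
    if k == 0:
        return 1.0
    if k > q:
        return 0.0                             # the ACF cuts off after lag q
    th = np.asarray(thetas, dtype=float)
    num = -th[k - 1] + sum(th[j] * th[j + k] for j in range(q - k))
    return num / (1.0 + float(np.sum(th**2)))

print(ma_acf([0.5], 1))                        # MA(1): -0.5/1.25
print(ma_acf([0.5, 0.3], 1))                   # MA(2): (-0.5 + 0.15)/1.34
print(ma_acf([0.5, 0.3], 3))                   # zero beyond lag q
```

The MA(1) and MA(2) cases reproduce the lag-one and lag-two values from the examples above.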

Invertible MA processes

Consider the MA(1) process with μ = 0: Y_t = ε_t - θ_1 ε_{t-1}. If |θ_1| < 1, the process is invertible, i.e., we can “invert” the series and express the current value Y_t in terms of the current disturbance ε_t and the lagged values of the series:

Y_t = ε_t - θ_1 Y_{t-1} - θ_1² Y_{t-2} - θ_1³ Y_{t-3} - …

This is the so-called autoregressive representation.

Notice that the autoregressive representation exists only if |θ_1| < 1, so that the weights θ_1^j die out.
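A numerical sketch of this inversion (θ_1 = 0.5 and the truncation point J are illustrative choices): because the weights θ_1^j die out, a truncated autoregressive representation recovers Y_t almost exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
theta1, T = 0.5, 500
eps = rng.normal(size=T)
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = eps[t] - theta1 * eps[t - 1]        # MA(1), mu = 0

# Truncated autoregressive representation:
#   Y_t = eps_t - theta1*Y_{t-1} - theta1^2*Y_{t-2} - ...
J = 40                                         # theta1**J is negligible
t = T - 1
approx = eps[t] - sum(theta1**j * y[t - j] for j in range(1, J + 1))
print(abs(approx - y[t]))                      # tiny truncation error
```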


Page 5: Arma Model

Autoregressive Models – AR(p)

In the autoregressive process of order p, the current observation Y_t is generated by a weighted average of past observations going back p periods, together with a random disturbance in the current period.

It is denoted as AR(p), and the equation is:

Y_t = φ_1 Y_{t-1} + φ_2 Y_{t-2} + … + φ_p Y_{t-p} + δ + ε_t

where δ is a constant term which relates to the mean of the series and can be positive or negative.

The properties of autoregressive processes:

If the autoregressive process is stationary, then its mean, denoted by μ, must be invariant with respect to time, i.e., E(Y_t) = E(Y_{t-1}) = … = μ.

The mean of Y_t then is

μ = δ/(1 - φ_1 - φ_2 - … - φ_p)

This formula also gives us a necessary condition for stationarity, i.e., φ_1 + φ_2 + … + φ_p < 1.

Example 1: the AR(1) process

Y_t = φ_1 Y_{t-1} + δ + ε_t

*mean = μ = δ/(1 - φ_1), and the process is stationary if |φ_1| < 1.


Page 6: Arma Model

*covariance (taking δ = 0, so that μ = 0):

γ_1 = E[Y_{t-1}(φ_1 Y_{t-1} + ε_t)] = φ_1 γ_0 = φ_1 σ_ε²/(1 - φ_1²), where γ_0 = σ_ε²/(1 - φ_1²)

Similarly, the covariance for a k-lag displacement is

γ_k = φ_1^k σ_ε²/(1 - φ_1²)

*the autocorrelation function: ρ_k = γ_k/γ_0 = φ_1^k

NOTE: This process has an infinite memory. The current value depends on all past values, although the magnitude of this dependence declines with time.

[Show graph of the autocorrelation function]
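The geometric decay ρ_k = φ_1^k can likewise be checked by simulation. A sketch (φ_1 = 0.7 is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
phi1, sigma = 0.7, 1.0
T = 200_000

eps = rng.normal(0.0, sigma, T)
y = np.empty(T)
y[0] = eps[0]
for t in range(1, T):
    y[t] = phi1 * y[t - 1] + eps[t]            # AR(1) with delta = 0

yc = y - y.mean()
gamma0 = yc.var()

def rho(k):
    return (yc[k:] * yc[:-k]).mean() / gamma0

print(round(gamma0, 3), round(sigma**2 / (1 - phi1**2), 3))   # sample vs sigma^2/(1-phi1^2)
for k in (1, 2, 3):
    print(k, round(rho(k), 3), round(phi1**k, 3))             # rho_k vs phi1^k
```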


Page 7: Arma Model

Example 2: the AR(2) process

Y_t = φ_1 Y_{t-1} + φ_2 Y_{t-2} + δ + ε_t

*mean = μ = δ/(1 - φ_1 - φ_2)

The necessary condition for stationarity is that φ_1 + φ_2 < 1.

*variance and covariances (assuming δ = 0, so that μ = 0):

In general, for k ≥ 1 we have γ_k = φ_1 γ_{k-1} + φ_2 γ_{k-2}.

Now we can solve for γ_0, γ_1, and γ_2 in terms of φ_1, φ_2, and σ_ε².

Starting from

γ_0 = φ_1 γ_1 + φ_2 γ_2 + σ_ε²  (1)
γ_1 = φ_1 γ_0 + φ_2 γ_1        (2)
γ_2 = φ_1 γ_1 + φ_2 γ_0        (3)

From (2), γ_1 = φ_1 γ_0/(1 - φ_2).  (4)

Substituting (4) into (3),

γ_2 = φ_1² γ_0/(1 - φ_2) + φ_2 γ_0  (5)

then substituting (4) and (5) into (1), after rearranging,

γ_0 = (1 - φ_2) σ_ε² / [(1 + φ_2)((1 - φ_2)² - φ_1²)]

then solve for γ_1 and γ_2 from (4) and (5).
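For the AR(2) process the derivation yields γ_0 = (1 - φ_2)σ_ε²/[(1 + φ_2)((1 - φ_2)² - φ_1²)], which can be checked by simulation. A sketch (φ_1 = 0.5, φ_2 = 0.3 are illustrative and satisfy the stationarity conditions):

```python
import numpy as np

rng = np.random.default_rng(3)
phi1, phi2, sigma = 0.5, 0.3, 1.0
T = 400_000

eps = rng.normal(0.0, sigma, T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]   # AR(2), delta = 0

# Variance implied by the rearranged Yule-Walker equations
gamma0_theory = sigma**2 * (1 - phi2) / ((1 + phi2) * ((1 - phi2)**2 - phi1**2))
print(round(y.var(), 3), round(gamma0_theory, 3))
```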


Page 8: Arma Model

*autocorrelation function: dividing the covariance equations by γ_0 gives

ρ_1 = φ_1/(1 - φ_2), ρ_2 = φ_2 + φ_1²/(1 - φ_2)

In general, for k ≥ 2,

ρ_k = φ_1 ρ_{k-1} + φ_2 ρ_{k-2}

We can use these equations to derive the autoregressive parameters from the autocorrelations: the Yule-Walker equations.

Suppose we have a time series which we believe is generated by an AR(2) process. We calculate the sample autocorrelation function, then substitute the values ρ̂_1 and ρ̂_2 into the Yule-Walker equations to solve for φ̂_1 and φ̂_2.

[Show graph of the autocorrelation function]
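For an AR(2), the Yule-Walker equations form a 2×2 linear system. A sketch, including a round-trip check on exact autocorrelations (parameter values are illustrative):

```python
import numpy as np

def yule_walker_ar2(r1, r2):
    """Solve  r1 = phi1 + phi2*r1,  r2 = phi1*r1 + phi2  for (phi1, phi2)."""
    R = np.array([[1.0, r1], [r1, 1.0]])
    return np.linalg.solve(R, np.array([r1, r2]))

# Round trip: autocorrelations implied by phi1 = 0.5, phi2 = 0.3, then solve back
phi1, phi2 = 0.5, 0.3
r1 = phi1 / (1 - phi2)
r2 = phi2 + phi1**2 / (1 - phi2)
phis = yule_walker_ar2(r1, r2)
print(phis)                                    # recovers the original (phi1, phi2)
```

In practice r1 and r2 would be the sample autocorrelations, so the recovered values are estimates φ̂_1, φ̂_2.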


Page 9: Arma Model

The Partial Autocorrelation Function

The partial autocorrelation function can be used to determine the order of autoregressive processes.

The reason why it is called the partial autocorrelation function is because it describes the correlation between Y_t and Y_{t-k} minus the part explained linearly by the intervening lags.

The idea here is to use the Yule-Walker equations to solve for successive values of p, the order of the process.

Example

Suppose we start from the assumption that the autoregressive order is one, i.e., p = 1. Then we have φ_1 = ρ_1, or, using the sample autocorrelation, φ̂_1 = ρ̂_1. If the calculated value φ̂_1 is significantly different from zero, the autoregressive order is at least one (using a_1 to denote this value of φ̂_1).

Now, consider p = 2. Solving the Yule-Walker equations for φ̂_1 and φ̂_2, if φ̂_2 is significantly different from zero, the process is at least of order 2 (denote the value of φ̂_2 by a_2). If φ̂_2 is approximately zero, the order is one.

Repeating the process for p = 3, 4, …, we get a_3, a_4, …. We call a_1, a_2, a_3, … the partial autocorrelation function, and we can determine the order of autoregressive processes from its behavior.

In particular, if the true order of the process is p, then a_k ≈ 0 for k > p.

To test whether a given a_k is zero, we use the fact that it is approximately normally distributed with mean zero and standard error 1/√T. So a test at the 5 percent level is to see whether a_k exceeds 2/√T in magnitude.
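The successive Yule-Walker solves can be sketched in a few lines (the helper function is an illustration, not part of the notes):

```python
import numpy as np

def pacf_from_acf(rho):
    """Partial autocorrelations a_1, a_2, ... from rho = [rho_1, rho_2, ...]:
    solve a k x k Yule-Walker system for each k and keep the last coefficient."""
    rho = np.asarray(rho, dtype=float)
    out = []
    for k in range(1, len(rho) + 1):
        # Toeplitz matrix of autocorrelations rho_{|i-j|}, with rho_0 = 1
        R = np.array([[1.0 if i == j else rho[abs(i - j) - 1]
                       for j in range(k)] for i in range(k)])
        out.append(np.linalg.solve(R, rho[:k])[-1])
    return out

# For an AR(1) with phi1 = 0.6, rho_k = 0.6**k, so the PACF cuts off after lag 1
pacf = pacf_from_acf([0.6**k for k in range(1, 5)])
print([round(a, 6) for a in pacf])
```

The first partial autocorrelation equals ρ_1 = 0.6 and the rest are (numerically) zero, which is exactly the cutoff behavior used to identify the order.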


Page 10: Arma Model

Autoregressive-Moving Average Models – ARMA(p,q)

ARMA(p,q) models combine autoregressive and moving average terms:

Y_t = φ_1 Y_{t-1} + … + φ_p Y_{t-p} + δ + ε_t - θ_1 ε_{t-1} - … - θ_q ε_{t-q}

where ε_t is a white noise disturbance.

Assume that the process is stationary, so that its mean is constant over time:

μ = δ/(1 - φ_1 - … - φ_p)

Notice that this gives a necessary condition for stationarity: φ_1 + φ_2 + … + φ_p < 1.

For the variance and covariances, let us consider the simplest case, ARMA(1,1) with δ = 0:

Y_t = φ_1 Y_{t-1} + ε_t - θ_1 ε_{t-1}

Variance:

γ_0 = E[(φ_1 Y_{t-1} + ε_t - θ_1 ε_{t-1})(φ_1 Y_{t-1} + ε_t - θ_1 ε_{t-1})]
    = φ_1² γ_0 + σ_ε² + θ_1² σ_ε² - 2 φ_1 θ_1 E(Y_{t-1} ε_{t-1})

and

E(Y_{t-1} ε_{t-1}) = σ_ε²

We have

γ_0 = φ_1² γ_0 + σ_ε² (1 + θ_1² - 2 φ_1 θ_1)

or

γ_0 = σ_ε² (1 + θ_1² - 2 φ_1 θ_1)/(1 - φ_1²)

Covariance:

γ_1 = E[Y_{t-1}(φ_1 Y_{t-1} + ε_t - θ_1 ε_{t-1})] = φ_1 γ_0 - θ_1 σ_ε², and γ_k = φ_1 γ_{k-1} for k ≥ 2

Autocorrelation:

ρ_1 = γ_1/γ_0 = (1 - φ_1 θ_1)(φ_1 - θ_1)/(1 + θ_1² - 2 φ_1 θ_1)

For k ≥ 2, ρ_k = φ_1 ρ_{k-1}.

Notice that the autocorrelation function begins at its starting point ρ_1, which is a function of φ_1 and θ_1, and then declines geometrically from that starting value. This reflects the fact that the moving average part of the process has a memory of only one period.
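A simulation sketch of the ARMA(1,1) autocorrelations (φ_1 = 0.8, θ_1 = 0.4 are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
phi1, theta1 = 0.8, 0.4
T = 400_000

eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi1 * y[t - 1] + eps[t] - theta1 * eps[t - 1]   # ARMA(1,1), delta = 0

yc = y - y.mean()

def rho(k):
    return (yc[k:] * yc[:-k]).mean() / yc.var()

rho1_theory = (1 - phi1 * theta1) * (phi1 - theta1) / (1 + theta1**2 - 2 * phi1 * theta1)
print(round(rho(1), 3), round(rho1_theory, 3))
print(round(rho(2), 3), round(phi1 * rho1_theory, 3))       # rho_2 = phi1 * rho_1
```

The sample autocorrelations match ρ_1 from the formula and then decay geometrically at rate φ_1.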


Page 11: Arma Model

[Show graphs]

Determining the Order of an ARMA(p,q) Model

In practice, choosing the orders p and q requires balancing the benefit of including more lags against the cost of additional estimation uncertainty. On the one hand, if, say, the order of an estimated autoregression is too low (p is too low), one will omit potentially valuable information contained in the more distant lagged values. On the other hand, if it is too high, one will be estimating more coefficients than necessary, which in turn introduces additional error into one’s forecasts.

One approach to choosing, say, p, is to start with a model with many lags and to perform hypothesis tests on the final lag, i.e., the F-statistic approach. For example, one might start by estimating an AR(6) model and test whether the coefficient on the sixth lag is significant at the 5% level; if not, drop it, estimate an AR(5) model, test the coefficient on the 5th lag, and so on. The drawback of this method is that it will produce too large a model, at least some of the time: even if the true order is 5, so the 6th coefficient is zero, a 5% test will incorrectly reject this null hypothesis 5% of the time just by chance. Thus, when the true value of p is five, the method will estimate p to be six 5% of the time.

The BIC (Bayes information criterion)

One way around the problem is to estimate p by minimizing an “information criterion”. One such criterion is the BIC, sometimes also referred to as the Schwarz information criterion (SIC), which is defined as:

BIC(p) = ln[SSR(p)/T] + (p + 1) ln(T)/T

where SSR(p) is the sum of squared residuals of the estimated AR(p). The BIC estimator p̂ is the value that minimizes BIC(p) among the possible choices p = 0, 1, …, p_max.


Page 12: Arma Model

The AIC (Akaike information criterion)

AIC(p) = ln[SSR(p)/T] + (p + 1) 2/T

where p is the order of the autoregression and T is the sample size. The AIC penalizes additional lags less heavily than the BIC does.
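A sketch of BIC lag selection for an autoregression, fitting each AR(p) by OLS on a common sample (the data-generating process, sample size, and p_max are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(5)
T = 2_000
eps = rng.normal(size=T)
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * y[t - 2] + eps[t]   # true order is 2

def info_criteria(y, p, pmax):
    """BIC(p) and AIC(p) for an AR(p) fitted by OLS; the first pmax
    observations are dropped so all orders use the same sample."""
    n = len(y) - pmax
    X = np.column_stack([np.ones(n)] +
                        [y[pmax - j: len(y) - j] for j in range(1, p + 1)])
    target = y[pmax:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    ssr = float(np.sum((target - X @ beta) ** 2))
    bic = np.log(ssr / n) + (p + 1) * np.log(n) / n
    aic = np.log(ssr / n) + (p + 1) * 2.0 / n
    return bic, aic

pmax = 6
bics = [info_criteria(y, p, pmax)[0] for p in range(1, pmax + 1)]
p_hat = 1 + int(np.argmin(bics))
print("BIC-selected order:", p_hat)
```

With a true AR(2), the second lag reduces the SSR by far more than the penalty term, so the BIC prefers order 2 over order 1; the penalty then discourages spuriously adding higher lags.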

Non-stationary Process

Very few of the economic time series encountered in practice are stationary. But many can be differenced so that the resulting series will be stationary. The number of times that the original series must be differenced before a stationary series results is called the order of integration.

Example:

If Y_t is a first-order integrated (I(1)) non-stationary series, then

ΔY_t = Y_t - Y_{t-1}

is stationary.

If Y_t is a second-order integrated (I(2)) series, then

Δ²Y_t = ΔY_t - ΔY_{t-1}

would be stationary.

If Y_t is non-stationary, the statistical characteristics of the process are not independent of time anymore.

Example: Random walk Y_t = Y_{t-1} + ε_t, where ε_t is white noise.

The variance of the process: starting from Y_0 = 0, we have Y_t = ε_1 + ε_2 + … + ε_t, so Var(Y_t) = t σ_ε².


Page 13: Arma Model

The variance grows without bound as t approaches infinity. The same is true for the covariance. But ΔY_t = Y_t - Y_{t-1} = ε_t is stationary (white noise), so we have that Y_t is integrated of order one.
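The growing variance is easy to see across simulated paths; a sketch (path count and horizon are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(6)
n_paths, T, sigma = 20_000, 100, 1.0

eps = rng.normal(0.0, sigma, (n_paths, T))
Y = eps.cumsum(axis=1)                       # random walks started at Y_0 = 0

for t in (10, 50, 100):
    print(t, round(Y[:, t - 1].var(), 1))    # Var(Y_t) grows like t*sigma^2

dY = np.diff(Y, axis=1)                      # first difference is just eps_t
print(round(dY.var(), 3))                    # ~ sigma^2, constant: stationary
```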


Page 14: Arma Model

How Can We Decide Whether a Series Is Non-stationary?

1. Autocorrelation function. If the series is stationary, the autocorrelation function should die off quickly.

Example: Nonstationary vs. stationary series.

ARIMA_stationary_nonstationary.sas

2. Unit Root Test

The problem:

Consider the model

Y_t = ρ Y_{t-1} + ε_t

In the random walk case (ρ = 1), the OLS estimation of this equation produces an estimate of ρ that is biased toward 0. The OLS estimate is also biased toward zero when ρ is less than but near 1.

Dickey-Fuller Unit Root Test

Consider the model

Y_t = ρ Y_{t-1} + ε_t

The reduced form is:

ΔY_t = (ρ - 1) Y_{t-1} + ε_t = γ Y_{t-1} + ε_t, where γ = ρ - 1

The equation is said to have a unit root if ρ = 1, i.e., γ = 0.

There are three test statistics, corresponding to the regression estimated with no constant, with a constant, and with a constant and a time trend.

The critical values for them are simulated.


Page 15: Arma Model

The Augmented Dickey-Fuller Test for a Unit Autoregressive Root

The Augmented Dickey-Fuller (ADF) test for a unit autoregressive root tests the null hypothesis H_0: δ = 0 against the one-sided alternative H_1: δ < 0 in the regression

ΔY_t = β_0 + δ Y_{t-1} + γ_1 ΔY_{t-1} + … + γ_p ΔY_{t-p} + u_t

Under the null hypothesis, Y_t has a stochastic trend; under the alternative hypothesis, Y_t is stationary. The ADF statistic is the OLS t-statistic testing δ = 0.

If instead the alternative hypothesis is that Y_t is stationary around a deterministic linear trend, then this trend “t” (the observation number) must be added as an additional regressor, in which case the Dickey-Fuller regression becomes:

ΔY_t = β_0 + α t + δ Y_{t-1} + γ_1 ΔY_{t-1} + … + γ_p ΔY_{t-p} + u_t

where α is an unknown coefficient and the ADF statistic is the OLS t-statistic testing δ = 0. The number of lags p can be determined using the AIC. The ADF statistic does not have a normal distribution, even in large samples. Critical values for the one-sided ADF test are simulated.
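A sketch of the Dickey-Fuller regression with a constant (no augmentation lags), coded directly with OLS; the simulated series and the approximate -2.86 critical value for the constant-only case at the 5% level are illustrative, and in practice one would use a library implementation such as statsmodels:

```python
import numpy as np

def df_tstat(y):
    """OLS t-statistic on delta in  dY_t = beta0 + delta*Y_{t-1} + u_t."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - 2)
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(7)
T = 500
eps = rng.normal(size=T)
walk = eps.cumsum()                          # unit root: rho = 1
ar = np.zeros(T)
for t in range(1, T):
    ar[t] = 0.5 * ar[t - 1] + eps[t]         # stationary: rho = 0.5

print(round(df_tstat(walk), 2))              # typically above -2.86: cannot reject unit root
print(round(df_tstat(ar), 2))                # far below -2.86: reject unit root
```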

Problems with Non-stationary Processes

Suppose we have

Y_t = Y_{t-1} + ε_t

where Y_0 = 0 and ε_t ~ N(0, σ²). The series is non-stationary since it has a trend in its variance.

Issues:
1. Regression of a random walk on time by least squares will produce a high R² value, even when the true process contains no time trend at all.
2. If the random walk has drift, R² will be even higher, and will increase with the sample size, reaching one in the limit.
3. The residuals have on average only about 14% of the true variance.
4. The residuals are highly correlated at lag one, with autocorrelation close to one for large sample size T.
5. Conventional t-tests are not valid.
6. Regression of one random walk variable on another one is strongly subject to the spurious regression phenomenon.
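Issues 1 and 4 can be illustrated by regressing simulated random walks on time and averaging over replications; a sketch (sample size and replication count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(8)
T, reps = 200, 500
X = np.column_stack([np.ones(T), np.arange(T, dtype=float)])  # constant and time trend

r2s, rho1s = [], []
for _ in range(reps):
    y = rng.normal(size=T).cumsum()          # pure random walk: no true time trend
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2s.append(1 - resid.var() / y.var())
    rc = resid - resid.mean()
    rho1s.append((rc[1:] * rc[:-1]).mean() / rc.var())

print(round(float(np.mean(r2s)), 2))         # large on average despite no trend
print(round(float(np.mean(rho1s)), 2))       # residuals highly autocorrelated at lag one
```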
