
Comparison of Efficiencies of Several Estimators for Linear Regressions With Autocorrelated Errors

MASAHITO KOBAYASHI*

Journal of the American Statistical Association, Vol. 80, No. 392 (December 1985), pp. 951-953

This article compares several estimators for linear regression models with first-order autocorrelated errors. Asymptotic variances of the estimators of the regression coefficients are obtained up to order $T^{-2}$ for the maximum likelihood, the iterated Prais-Winsten, and the iterated Cochrane-Orcutt methods. It is shown that the first two methods are equivalent in efficiency and that the last is less efficient. This loss in efficiency is due to the effect of the initial observation of the regressor. Asymptotic biases of the estimators of the autocorrelation coefficient are also given.

KEY WORDS: Cochrane-Orcutt estimator; Prais-Winsten estimator; Maximum likelihood estimator; Asymptotic variance; Trended regressor.

1. INTRODUCTION

We have various estimation methods for linear regression models with first-order autoregressive (AR) errors, for example, the Cochrane-Orcutt method. Several papers have compared their relative efficiencies by Monte Carlo experiments. They have found that the maximum likelihood (ML) estimator and the Prais-Winsten (PW) estimator have smaller mean squared errors than the Cochrane-Orcutt (CO) estimator, and that this difference in efficiency is large when the regressor is trended. For these results, see Rao and Griliches (1969), Beach and MacKinnon (1978), Spitzer (1979), and Park and Mitchell (1980).

This article compares the efficiencies of these estimators analytically: We obtain asymptotic variances of the estimators of the regression coefficients up to order $T^{-2}$ for the ML method, the iterated PW method, and the iterated CO method. Our results agree with the findings of the Monte Carlo experiments just cited. In particular, it is shown that the loss of efficiency of the CO method is due to the effect of the initial observation of the regressor variable.

Some Monte Carlo studies have also reported that the CO estimator and the ML estimator of the correlation coefficient have substantially equal downward biases and that the PW estimator is less biased. Here we obtain asymptotic biases of these estimators up to order $T^{-1}$. Our results support this finding in the case when the error term has positive serial correlation and the regressor has higher serial correlation.

In Section 2 the estimators to be examined are described. Section 3 presents the main results, the asymptotic biases and variances of the estimators, and Section 4 gives the discussion. The algebraic details are only outlined in the Appendix; full details will be sent by me on request. I henceforth denote, for example, the iterated Cochrane-Orcutt estimator simply by CO where there is no fear of ambiguity.

* Masahito Kobayashi is with the Institute of Economic Research, Kyoto University, Sakyo-ku, Kyoto, Japan. This work was originally a part of the author's master's thesis submitted to the Faculty of Economics, University of Tokyo. The author thanks Yukio Suzuki, Kei Takeuchi, Naoto Kunitomo, and anonymous referees for their helpful comments.

2. MODEL, LIKELIHOOD FUNCTION, AND ESTIMATORS

Consider the linear regression model

$$y_t = \beta' x_t + u_t, \qquad t = 1, \ldots, T, \tag{2.1}$$

where $x_t = (1, x_{t2}, \ldots, x_{tp})'$ and $\beta = (\beta_1, \ldots, \beta_p)'$. Assume that the error terms follow a first-order AR process with mean zero; that is,

$$u_t = \rho u_{t-1} + e_t, \qquad t = \ldots, -1, 0, 1, \ldots, \quad -1 < \rho < 1, \tag{2.2}$$

where the $e_t$'s are independently and identically normally distributed with mean zero and variance $\sigma^2$. Then the log-likelihood function is

$$L(\beta, \rho, \sigma^2) = \text{constant} - \tfrac{1}{2} T \log \sigma^2 + \tfrac{1}{2} \log(1 - \rho^2) - \tfrac{1}{2}(1 - \rho^2)(y_1 - \beta' x_1)^2/\sigma^2 - \tfrac{1}{2} \sum_{t=2}^{T} [(y_t - \beta' x_t) - \rho(y_{t-1} - \beta' x_{t-1})]^2/\sigma^2. \tag{2.3}$$

Also assume that the $i$th regressor $(x_{1i}, \ldots, x_{Ti})$ is a realization of an AR(1) process with mean zero, variance $v_i$, and autocorrelation coefficient $\lambda_i$, uncorrelated with the other regressors; this assumption is adopted by several Monte Carlo studies. Then we have

$$\lim_{T \to \infty} \sum_{t=1}^{T} x_{ti}/T = 0, \qquad i = 2, \ldots, p,$$
$$\lim_{T \to \infty} \sum_{t=l+1}^{T} x_{ti} x_{t-l,j}/T = \begin{cases} 1, & i = j = 1, \\ \lambda_i^{\,l} v_i, & i = j \ge 2, \\ 0, & i \ne j, \end{cases} \tag{2.4}$$

where $-1 < \lambda_i < 1$. I suppose that essentially the same arguments could be made for regressors of different types.
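As a concrete illustration, the following is a minimal simulation sketch of this design in Python; the function and parameter names (simulate, rho, lam, v) are illustrative choices, not from the paper, and the sketch assumes a single stochastic regressor plus an intercept ($p = 2$).

```python
# A minimal sketch of model (2.1)-(2.2) under the regressor assumptions above:
# an intercept plus one stationary AR(1) regressor, and stationary AR(1) errors.
import numpy as np

rng = np.random.default_rng(0)

def simulate(T=100, beta=(1.0, 2.0), rho=0.6, lam=0.9, v=1.0, sigma2=1.0):
    """Return (y, X) for y_t = beta'x_t + u_t with AR(1) errors."""
    # AR(1) regressor with mean 0, variance v, autocorrelation lam.
    x = np.empty(T)
    x[0] = rng.normal(scale=np.sqrt(v))
    for t in range(1, T):
        x[t] = lam * x[t - 1] + rng.normal(scale=np.sqrt(v * (1 - lam**2)))
    X = np.column_stack([np.ones(T), x])
    # AR(1) errors: u_t = rho*u_{t-1} + e_t, e_t ~ N(0, sigma2), started stationary.
    u = np.empty(T)
    u[0] = rng.normal(scale=np.sqrt(sigma2 / (1 - rho**2)))
    for t in range(1, T):
        u[t] = rho * u[t - 1] + rng.normal(scale=np.sqrt(sigma2))
    return X @ np.asarray(beta) + u, X

y, X = simulate()
```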

The iterated CO estimator of $(\beta, \rho, \sigma^2)$ is obtained by maximizing

$$L^{(CO)}(\beta, \rho, \sigma^2) = \text{constant} - \tfrac{1}{2}(T - 1) \log \sigma^2 - \tfrac{1}{2} \sum_{t=2}^{T} [y_t - \beta' x_t - \rho(y_{t-1} - \beta' x_{t-1})]^2/\sigma^2, \tag{2.5}$$

which is the logarithm of the conditional likelihood function for given $y_1$. The Hildreth-Lu estimator differs from this estimator only in computational procedure; for details see, for example, Maddala (1977, p. 279).


It can easily be shown that a priori knowledge of $\sigma^2$ does not affect these estimates of $\beta$ and $\rho$.
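A minimal sketch of this iteration in Python follows, continuing the simulation sketch of Section 2; co_fit and its tolerances are illustrative names and choices, not from the paper.

```python
# Iterated Cochrane-Orcutt: alternate OLS on the rho-differenced data
# (t = 2, ..., T) with the first-order-condition update of rho given beta.
import numpy as np

def co_fit(y, X, tol=1e-10, max_iter=200):
    rho = 0.0
    for _ in range(max_iter):
        # Given rho, regress y_t - rho*y_{t-1} on x_t - rho*x_{t-1}; y_1 is dropped.
        ys = y[1:] - rho * y[:-1]
        Xs = X[1:] - rho * X[:-1]
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        # Given beta, the rho maximizing (2.5) solves sum (u_t - rho*u_{t-1}) u_{t-1} = 0.
        u = y - X @ beta
        rho_new = (u[1:] @ u[:-1]) / (u[:-1] @ u[:-1])
        if abs(rho_new - rho) < tol:
            return beta, rho_new
        rho = rho_new
    return beta, rho
```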

The iterated PW estimator is obtained by maximizing the function

$$L^{(PW)}(\beta, \rho, \sigma^2) = \text{constant} - \tfrac{1}{2} T \log \sigma^2 - \tfrac{1}{2}(1 - \rho^2)(y_1 - \beta' x_1)^2/\sigma^2 - \tfrac{1}{2} \sum_{t=2}^{T} [(y_t - \beta' x_t) - \rho(y_{t-1} - \beta' x_{t-1})]^2/\sigma^2. \tag{2.6}$$

This function differs from the exact log-likelihood function (2.3) by the term $\tfrac{1}{2}\log(1 - \rho^2)$, which corresponds to the prior density proposed by Zellner (1971, pp. 216-220). Even if $\sigma^2$ is known, the estimates of $\beta$ and $\rho$ remain the same.
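A matching sketch of the iterated PW estimator follows; it differs from the CO sketch above only in keeping the first observation, rescaled by $\sqrt{1 - \rho^2}$, and in the $\rho$ update, which here solves $\partial L^{(PW)}/\partial \rho = 0$ for fixed $\beta$ (pw_fit is an illustrative name).

```python
# Iterated Prais-Winsten: OLS on all T transformed observations, with rho
# updated from the first-order condition of (2.6) given beta.
import numpy as np

def pw_fit(y, X, tol=1e-10, max_iter=200):
    rho = 0.0
    for _ in range(max_iter):
        # Transform: first row scaled by sqrt(1 - rho^2), the rest rho-differenced.
        w = np.sqrt(1 - rho**2)
        ys = np.concatenate([[w * y[0]], y[1:] - rho * y[:-1]])
        Xs = np.vstack([w * X[:1], X[1:] - rho * X[:-1]])
        beta = np.linalg.lstsq(Xs, ys, rcond=None)[0]
        # Setting d(2.6)/d rho = 0 for fixed beta gives
        # rho = sum_{t=2}^{T} u_t u_{t-1} / sum_{t=2}^{T-1} u_t^2.
        u = y - X @ beta
        rho_new = (u[1:] @ u[:-1]) / (u[1:-1] @ u[1:-1])
        if abs(rho_new - rho) < tol:
            return beta, rho_new
        rho = rho_new
    return beta, rho
```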

The ML estimator is obtained by maximizing the exact likelihood function (2.3). Note that the estimator in the case with $\sigma^2$ unknown differs from that in the case with $\sigma^2$ known. Here we restrict ourselves to the former case, which is of practical interest. A computational procedure for ML is given in Beach and MacKinnon (1978).
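The sketch below is not the Beach-MacKinnon procedure; it simply maximizes (2.3) numerically, with $\sigma^2$ concentrated out and $\rho$ kept inside $(-1, 1)$ by a tanh reparameterization (ml_fit is an illustrative name).

```python
# ML for (beta, rho) by direct numerical maximization of the exact
# log-likelihood (2.3), concentrating out sigma^2.
import numpy as np
from scipy.optimize import minimize

def ml_fit(y, X):
    T, k = X.shape

    def neg_profile_loglik(theta):
        beta, rho = theta[:k], np.tanh(theta[k])
        u = y - X @ beta
        # Weighted sum of squares in (2.3); sigma2_hat = ss/T maximizes over sigma^2.
        ss = (1 - rho**2) * u[0]**2 + np.sum((u[1:] - rho * u[:-1])**2)
        return 0.5 * T * np.log(ss / T) - 0.5 * np.log(1 - rho**2)

    res = minimize(neg_profile_loglik, np.zeros(k + 1), method='BFGS')
    return res.x[:k], np.tanh(res.x[k])
```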

3. MAIN RESULTS

This section presents asymptotic variances up to order $T^{-2}$ for the estimators of the regression coefficients $\beta_i$ and asymptotic biases of order $T^{-1}$ for the estimators of $\rho$ and $\sigma^2$. In the following, $\hat\beta_i^{(CO)}$, for example, denotes the estimate of $\beta_i$ by the CO method.

All the estimators of the $\beta_i$'s are asymptotically unbiased up to order $T^{-1}$; that is,

$$E[\hat\beta_i - \beta_i] = o(T^{-1}), \qquad i = 1, \ldots, p, \tag{3.1}$$

for CO, PW, and ML. The asymptotic variances of the $\hat\beta_i$'s are

$$\operatorname{var}[\hat\beta_i^{(ML)}] = \begin{cases} I^{ii} + o(T^{-2}), & i = 1, \\[4pt] I^{ii} + \dfrac{(1 - \lambda_i)(1 - \rho^2)\sigma^2}{T^2 (1 - 2\rho\lambda_i + \rho^2)^3 (1 - \rho\lambda_i) v_i} + o(T^{-2}), & i \ge 2, \end{cases} \tag{3.2}$$

$$\operatorname{var}[\hat\beta_i^{(PW)}] = \operatorname{var}[\hat\beta_i^{(ML)}] + o(T^{-2}), \tag{3.3}$$

and

$$\operatorname{var}[\hat\beta_i^{(CO)}] = \operatorname{var}[\hat\beta_i^{(ML)}] + x_{1i}^2 (1 - \rho^2)/(\sigma^2 I_{ii}^2) + o(T^{-2}), \tag{3.4}$$

where $I_{ij}$ denotes the $(i, j)$ element of the information matrix and $I^{ij}$ is the $(i, j)$ element of its inverse. Note that $x = o(A_T)$ means that $x/A_T \to 0$ as $T \to \infty$.

The ML estimator of $\rho$ has the same asymptotic bias as CO up to order $T^{-1}$; that is,

$$E[\hat\rho^{(ML)} - \rho] = E[\hat\rho^{(CO)} - \rho] + o(T^{-1}) = -(1 + 3\rho)/T - \sum_{i=2}^{p} \frac{(1 - \rho^2)(\lambda_i - \rho)}{T(1 - 2\rho\lambda_i + \rho^2)} + o(T^{-1}). \tag{3.5}$$

The asymptotic bias of PW differs from those of the other two estimators by $\rho/T$; that is,

$$E[\hat\rho^{(PW)} - \rho] = E[\hat\rho^{(CO)} - \rho] + \rho/T + o(T^{-1}). \tag{3.6}$$

The estimators of $\sigma^2$ are all biased by $-(p + 1)\sigma^2/T$; that is,

$$E[\hat\sigma^2 - \sigma^2] = -(p + 1)\sigma^2/T + o(T^{-1}) \tag{3.7}$$

for ML, PW, and CO.

4. DISCUSSION

We have seen in (3.1) that the $\hat\beta_i$'s are all asymptotically unbiased up to order $T^{-1}$. This result is supported by some Monte Carlo studies.

From (3.3) we see that ML and PW have the same asymptotic efficiency. This fact was found by Park and Mitchell (1980) in their experiment. The loss of efficiency of CO, especially in the case of a trended regressor, has been reported by many authors. We can explain this phenomenon from our results: Because the second term on the right side of (3.4) is always positive and increases in magnitude with $x_{1i}^2$, CO shows low efficiency when the regressor is trended; for a trended regressor the magnitude of $x_{1i}$, measured from the mean, is large.
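The following Monte Carlo sketch makes this comparison concrete; it reuses the simulate, co_fit, and pw_fit sketches above (so it runs only after them), and the replication count and parameter values are arbitrary illustrative choices.

```python
# Compare the small-sample MSEs of the CO and PW slope estimates when the
# regressor is highly autocorrelated (trend-like), as discussed above.
import numpy as np

def mc_compare(n_rep=2000, T=50, beta=(1.0, 2.0), rho=0.6, lam=0.95):
    err_co, err_pw = [], []
    for _ in range(n_rep):
        y, X = simulate(T=T, beta=beta, rho=rho, lam=lam)
        err_co.append(co_fit(y, X)[0][1] - beta[1])  # slope error under CO
        err_pw.append(pw_fit(y, X)[0][1] - beta[1])  # slope error under PW
    return np.mean(np.square(err_co)), np.mean(np.square(err_pw))

mse_co, mse_pw = mc_compare()
print(f"MSE(CO) = {mse_co:.4f}, MSE(PW) = {mse_pw:.4f}")  # expect MSE(CO) > MSE(PW)
```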

From (3.5) and (3.6), the estimators of $\rho$ are downward biased in the case when $\rho$ and $\lambda_i - \rho$ are positive. This assumption seems realistic, since a regressor showing a smooth trend can be regarded as highly serially correlated (that is, $\lambda_i$ is high) and the serial correlation of the error terms is often positive. Beach and MacKinnon (1978) reported this downward bias of $\hat\rho$ in their experiment.

We can easily obtain estimators of $\sigma^2$ that are unbiased up to order $T^{-1}$ by multiplying them by $T/(T - p - 1)$, as the snippet below illustrates.

In some problems, variances of the estimators do not exist, though their asymptotic variances are finite. I suppose that our estimators have finite variances, based on the Monte Carlo experiments.
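A one-line numerical illustration of the $T/(T - p - 1)$ correction mentioned above; the values of sigma2_hat, T, and p below are hypothetical placeholders.

```python
# Bias-correct an estimate of sigma^2 up to order T^{-1} via T/(T - p - 1).
T, p = 100, 3              # sample size; number of regression coefficients
sigma2_hat = 0.92          # a hypothetical ML/PW/CO estimate of sigma^2
sigma2_corrected = sigma2_hat * T / (T - p - 1)
print(sigma2_corrected)    # 0.92 * 100/96 = 0.9583...
```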

Similar results might be obtained for the linear model with higher-order AR errors, since the difference in efficiency is supposed to arise from the first term in (A.5).

APPENDIX

This Appendix gives the general formulas for the asymptotic bias up to order $T^{-1}$ and the asymptotic variance up to order $T^{-2}$ of multiparameter ML or quasi-ML estimators. The algebraic details are so lengthy that only an outline is given here; the full details are available from me on request.

Here we extend the method employed by Nishio (1981), who obtained asymptotic biases and variances of several estimators for AR(1) and MA(1) models up to the second order. Robertson and Fryer (1970) employed a similar procedure in order to obtain asymptotic moments of moment estimators. Shenton and Bowman (1977) gave formulas for the higher-order asymptotic bias and variance of the multiparameter ML estimator for independent and identically distributed observations; for ML, our formulas coincide formally with their results.

Note that our estimators are obtained by maximizing either the likelihood function (LF) or some modified form of the LF. Let the solution of the equations

$$\partial L(\alpha_1, \ldots, \alpha_K)/\partial \alpha_i = 0, \qquad i = 1, \ldots, K, \tag{A.1}$$

be the estimates $\hat\alpha_1, \ldots, \hat\alpha_K$, where $\alpha_1, \ldots, \alpha_K$ are the parameters.


Taylor expansion of (A.1) about the true values of the parameters gives

$$0 = L_i + \sum_j L_{ij} \Delta\alpha_j + \tfrac{1}{2} \sum_j \sum_k L_{ijk} \Delta\alpha_j \Delta\alpha_k + \tfrac{1}{6} \sum_j \sum_k \sum_l L_{ijkl} \Delta\alpha_j \Delta\alpha_k \Delta\alpha_l + \cdots, \qquad i = 1, \ldots, K, \tag{A.2}$$

where $L_{ij}$, for example, is defined as $\partial^2 L(\alpha_1, \ldots, \alpha_K)/\partial\alpha_i \partial\alpha_j$ evaluated at the true values of the parameters, and $J_{ij} = E[L_{ij}]$, $[J^{ij}] = [J_{ij}]^{-1}$, $\Delta\alpha_i = \hat\alpha_i - \alpha_i$; the function $L(\cdot)$ coincides with the log-likelihood function (2.3) for the ML method, with (2.5) for CO, and with (2.6) for PW.

We now invert the series (A.2) in order to express the deviation of the estimator from the parameter value (namely, $\Delta\alpha_i = \hat\alpha_i - \alpha_i$) explicitly in terms of $L_{ij}$, and so forth.
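The inversion can be checked symbolically in a simple case. The sketch below treats a single parameter ($K = 1$) with $J_1 = E[L_1]$ taken to be zero, truncates (A.2) at the quadratic term, and recovers $A_1$ and $A_2$; sympy and the variable names are illustrative choices.

```python
# Invert the one-parameter Taylor expansion 0 = L1 + L11*d + (1/2)*L111*d^2,
# writing d = A1 + A2 with A1 = Om(T^{-1/2}), A2 = Om(T^{-1}) and L11 = J11 + dL11.
import sympy as sp

L1, J11, dL11, J111 = sp.symbols('L1 J11 dL11 J111')
A1, A2 = sp.symbols('A1 A2')

# Om(T^{1/2}) terms: 0 = L1 + J11*A1, so A1 = -J^{11} dL_1.
sol_A1 = sp.solve(sp.Eq(L1 + J11 * A1, 0), A1)[0]

# Om(1) terms: 0 = J11*A2 + dL11*A1 + (1/2)*J111*A1**2.
sol_A2 = sp.simplify(
    sp.solve(sp.Eq(J11 * A2 + dL11 * sol_A1 + J111 * sol_A1**2 / 2, 0), A2)[0]
)

print(sol_A1)  # -L1/J11
print(sol_A2)  # L1*dL11/J11**2 - J111*L1**2/(2*J11**3), the K = 1 case of A2 below.
```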

We define $X$ as a term of $O(T^c)$ if and only if $X/T^c$ is bounded as $T$ approaches infinity, and we define $X_i$ as a term of $O_m(T^{c_i})$ if and only if $E[X_1 \cdots X_p]$ is $O(T^{[c_1 + \cdots + c_p]})$ for any $p$, where $[N]$ is the largest integer smaller than or equal to $N$. Then we have

$$\Delta L_{i_1 \cdots i_p} = L_{i_1 \cdots i_p} - E[L_{i_1 \cdots i_p}] = O_m(T^{1/2}),$$
$$J_{i_1 \cdots i_p} = E[L_{i_1 \cdots i_p}] = O(1), \quad p = 1; \qquad = O(T), \quad p \ge 2, \tag{A.3}$$

which are verified in the supplement available from me on request. Note that the Fisher information matrix coincides with $[-J_{ij}]$ for ML.

Putting $\Delta\alpha_h = A_1^{(h)} + A_2^{(h)} + A_3^{(h)} + \cdots$, where $A_i^{(h)} = O_m(T^{-i/2})$, and equating the terms of $O_m(T^{1/2})$, $O_m(1)$, and $O_m(T^{-1/2})$, respectively, we have

$$A_1^{(h)} = -\sum J^{hi} \Delta L_i,$$
$$A_2^{(h)} = \sum J^{hk} \Delta L_{kl} J^{li} \Delta L_i - \sum J^{hi} J_i - \tfrac{1}{2} \sum J^{hi} J_{ijk} J^{js} J^{kt} \Delta L_s \Delta L_t,$$
$$A_3^{(h)} = -\sum J^{hi} \bigl[ \Delta L_{ij} A_2^{(j)} + J_{ijk} A_1^{(j)} A_2^{(k)} + \tfrac{1}{2} \Delta L_{ijk} A_1^{(j)} A_1^{(k)} + \tfrac{1}{6} J_{ijkl} A_1^{(j)} A_1^{(k)} A_1^{(l)} \bigr],$$

where $A_3^{(h)}$ is written compactly; substituting $A_1$ and $A_2$ expresses it entirely in terms of the $J$'s and $\Delta L$'s,

and where the suffix $h$ is fixed and the other suffixes range from 1 to $K$ under the summations. Note that we need $A_3$ in order to obtain the mean squared error up to order $T^{-2}$, since

$$E[(\Delta\alpha_h)^2] = E[A_1^2] + 2E[A_1 A_2] + E[A_2^2] + 2E[A_1 A_3] + o(T^{-2}).$$

Defining $H_{ij,k,l}$, for example, as $E[\Delta L_{ij}\, \Delta L_k\, \Delta L_l]$, we have the following formulas for the asymptotic bias and variance:

$$E[\hat\alpha_h - \alpha_h] = \sum J^{hi} J^{jk} H_{ij,k} + \tfrac{1}{2} \sum J^{hi} J^{jk} J_{ijk} - \sum J^{hi} J_i + o(T^{-1}), \tag{A.4}$$

and, evaluating $E[A_1^2] + 2E[A_1 A_2] + E[A_2^2] + 2E[A_1 A_3]$ term by term,

$$\operatorname{var}[\hat\alpha_h] = \sum J^{hi} J^{hj} H_{i,j} + \sum J^{hi} J^{hj} J^{kl} [\,\cdots\,] + \sum J^{hi} J^{hj} J^{kl} J^{mn} [\,\cdots\,] + o(T^{-2}), \tag{A.5}$$

where the first bracket collects terms of order $T$ built from $J_{ijkl}$, products $J_{ijk} J_l$, and $H$ quantities with three or four suffixes (e.g., $H_{ik,jl}$), and the second bracket collects terms of order $T^2$ built from products of pairs of $J_{ijk}$'s and $H$'s (e.g., $J_{ijk} J_{lmn}$, $H_{im,k} H_{jl,n}$).

In deriving (A.4) and (A.5) we have assumed that

$$H_{i_1\ldots,\,i_2\ldots,\,i_3\ldots,\,i_4\ldots} = H_{i_1\ldots,\,i_2\ldots} H_{i_3\ldots,\,i_4\ldots} + H_{i_1\ldots,\,i_3\ldots} H_{i_2\ldots,\,i_4\ldots} + H_{i_1\ldots,\,i_4\ldots} H_{i_2\ldots,\,i_3\ldots} + o(T^2),$$
$$H_{i,j} = -J_{ij} + o(T), \tag{A.6}$$

which are verified in the supplement. Then, substituting $H_{ijk}$ and so forth, evaluated for our present model, into (A.4) and (A.5), we obtain the main results.

[Received August 1983. Revised March 1985.]

REFERENCES

Beach, C. M., and MacKinnon, J. G. (1978), "A Maximum Likelihood Procedure for Regression With Autocorrelated Errors," Econometrica, 46, 51-58.

Maddala, G. S. (1977), Econometrics, New York: McGraw-Hill.

Nishio, Atushi (1981), "Parametric Estimation of Time Series Models," unpublished doctoral dissertation, University of Tokyo, Faculty of Engineering (in Japanese).

Park, R. E., and Mitchell, B. M. (1980), "Estimating the Autocorrelated Error Model," Journal of Econometrics, 13, 185-201.

Rao, P., and Griliches, Z. (1969), "Small Sample Properties of Several Two-Stage Regression Methods in the Context of Autocorrelated Errors," Journal of the American Statistical Association, 64, 251-272.

Robertson, C. A., and Fryer, J. D. (1970), "The Bias and Accuracy of Moment Estimators," Biometrika, 57, 57-65.

Shenton, L. R., and Bowman, K. O. (1977), Maximum Likelihood Estimation in Small Samples, London: Charles Griffin.

Spitzer, J. J. (1979), "Small-Sample Properties of Nonlinear Least Squares and Maximum Likelihood Estimators in the Context of Autocorrelated Errors," Journal of the American Statistical Association, 74, 41-47.

Zellner, A. (1971), An Introduction to Bayesian Inference in Econometrics, New York: John Wiley.
