Lecture 2

LIST OF CONTENTS

1. GENERATION OF I(2) VARIABLES. SOME EXAMPLES
2. THE I(1) AND I(2) MODELS AND THEIR SOLUTIONS
3. DIFFERENT PARAMETRIZATIONS OF THE I(1) AND I(2) MODELS
3a. THE MLE PARAMETRIZATION
3b. THE PARUOLO-RAHBEK PARAMETRIZATION
3c. THE I(2) MODEL AS A NONLINEAR REGRESSION MODEL
4. NORMALIZATIONS
5. TRANSFORMATION TO I(1) SPACE
6. WEAK EXOGENEITY
7. AN ALGORITHM FOR ESTIMATING THE I(2) MODEL
8. DETERMINISTIC TERMS
9. TESTING HYPOTHESES IN THE I(2) MODEL


The I(2) models H_{r,s_1}, and their relations to the I(1) models H_r (here p = 4; the column index s_2 = p − r − s_1 counts the I(2) trends):

p−r  r
 4   0   H_{0,0} ⊂ H_{0,1} ⊂ H_{0,2} ⊂ H_{0,3} ⊂ H_{0,4} = H_0
 3   1             H_{1,0} ⊂ H_{1,1} ⊂ H_{1,2} ⊂ H_{1,3} = H_1
 2   2                       H_{2,0} ⊂ H_{2,1} ⊂ H_{2,2} = H_2
 1   3                                 H_{3,0} ⊂ H_{3,1} = H_3
 0   4                                           H_{4,0} = H_4
s_2:       4         3         2         1         0

TESTING HYPOTHESES IN THE I(2) MODEL

TEST OF THE RANKS r AND s_1

H_{r,s_1}: Δ²X_t = α(ρ'τ'X_{t-1} + ψ'ΔX_{t-1}) + α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'ΔX_{t-1} + ε_t,

H_{r,p−r} is the I(1) model H_r, H_p is the unrestricted VAR model, and H_{0,p} is the VAR in differences.
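The nesting of the models can be made concrete by listing the admissible index combinations. A minimal Python sketch (the helper `i2_models` is hypothetical, not from any package):

```python
def i2_models(p):
    """List the I(2) models H(r, s1) for a p-dimensional VAR.

    For each rank r of Pi = alpha*beta', s1 counts the I(1) trends and
    s2 = p - r - s1 the I(2) trends; the model with s2 = 0, i.e.
    H(r, p - r), is the I(1) model H(r).
    """
    return [(r, s1, p - r - s1) for r in range(p + 1)
            for s1 in range(p - r + 1)]

models = i2_models(4)
print(len(models))                        # 15 models when p = 4
print([m for m in models if m[2] == 0])   # the I(1) models H(r, p - r)
```

For p = 4 this reproduces the triangular array above, with the s_2 = 0 diagonal giving the I(1) models H_0, …, H_4.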


THE LIKELIHOOD RATIO TEST OF RANK

LR(H_{r,s_1} | H_p) = L_max(H_{r,s_1}) / L_max(H_p) = L_max(Π = αβ', α_⊥'Γβ_⊥ = ξη') / L_max(Π, Γ unrestricted).

This is not a trace test as in the I(1) model, but "almost" a sum of two trace tests. Let

L_max(H_{r,s_1}) / L_max(H_p) = [L_max(H_r) / L_max(H_p)] × [L_max(H_{r,s_1}) / L_max(H_r)].

The first factor is the usual trace test

L_max(H_r) / L_max(H_p) = L(α̂, β̂, Γ̂, Ω̂) / L(Π̃, Γ̃, Ω̃),

where (α̂, β̂, Γ̂, Ω̂) is the I(1) estimator and (Π̃, Γ̃, Ω̃) is the VAR estimator.


L_max(H_{r,s_1}) / L_max(H_p) = [L_max(H_r) / L_max(H_p)] × [L_max(H_{r,s_1}) / L_max(H_r)].

The second factor is

L_max(H_{r,s_1}) / L_max(H_r) = L(α̌, β̌ = τ̌ρ̌, τ̌, ψ̌, ξ̌) / L(α̂, β̂, Γ̂, Ω̂),

where (α̌, β̌, τ̌, ψ̌, ξ̌) is the I(2) estimator; (α̌, β̌) and (α̂, β̂) are almost the same. Now approximate

L(α̌, β̌ = τ̌ρ̌, τ̌, ψ̌, ξ̌) = max_{α,τ,ρ,ψ,ξ,Ω : α = α̌, τρ = β̌} L(α, τ, ρ, ψ, ξ, Ω)
 ≈ max_{α,τ,ρ,ψ,ξ,Ω : α = α̂, τρ = β̂} L(α, τ, ρ, ψ, ξ, Ω),

replacing (α̌, β̌) by (α̂, β̂), and find for the second factor

L_max(H_{r,s_1}) / L_max(H_r) ≈ max_{α,τ,ρ,ψ,ξ,Ω : α = α̂, τρ = β̂} L(α, τ, ρ, ψ, ξ, Ω) / max_{Γ,Ω : α = α̂, β = β̂} L(α, β, Γ, Ω).


L_max(H_{r,s_1}) / L_max(H_p) = [L_max(H_r) / L_max(H_p)] × [L_max(H_{r,s_1}) / L_max(H_r)].

The first factor is the test for H_r in H_p, and the second is almost the test for reduced rank of α_⊥'Γβ_⊥ when we assume that (α, β) = (α̂, β̂). That is, the reduced rank analysis of

α_⊥'Δ²X_t = ξη'(β̄_⊥'ΔX_{t-1}) + α_⊥'Γβ̄(β'ΔX_{t-1}) + α_⊥'ε_t.

This shows that the test for model H_{r,s_1} is (almost) the sum of two trace tests.
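Each of the two factors is computed from a reduced rank regression, which reduces to a generalized eigenvalue problem in residual product moments. A sketch of that building block (function names are illustrative; this is not the full I(2) algorithm):

```python
import numpy as np

def rrr_eigenvalues(R0, R1):
    """Squared canonical correlations between two residual sets R0 (T x n0)
    and R1 (T x n1): the solutions of |lam*S11 - S10 S00^{-1} S01| = 0,
    the building block of the trace test."""
    T = R0.shape[0]
    S00 = R0.T @ R0 / T
    S11 = R1.T @ R1 / T
    S01 = R0.T @ R1 / T
    L = np.linalg.cholesky(S11)                 # S11 = L L'
    Linv = np.linalg.inv(L)
    M = Linv @ S01.T @ np.linalg.solve(S00, S01) @ Linv.T
    lam = np.linalg.eigvalsh(M)[::-1]           # descending order
    return np.clip(lam, 0.0, 1.0)

def trace_statistic(lam, T, r):
    """-T * sum_{i > r} log(1 - lam_i), the LR trace statistic."""
    return -T * np.sum(np.log(1.0 - lam[r:]))

rng = np.random.default_rng(1)
R0 = rng.standard_normal((200, 3))
R1 = rng.standard_normal((200, 3))
lam = rrr_eigenvalues(R0, R1)
stat = trace_statistic(lam, 200, 0)
```

Running two such reduced rank regressions, one for the rank of Π and one for the rank of α_⊥'Γβ_⊥, gives the two factors discussed above.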


HYPOTHESES ON THE I(2) PARAMETERS

Hypotheses on the cointegrating parameters β, τ, δ in the model H_{r,s_1}:

H_{r,s_1}: Δ²X_t = α(ρ'τ'X_{t-1} + ψ'ΔX_{t-1}) + α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'ΔX_{t-1} + ε_t,

H_{r,s_1}: Δ²X_t = α(ρ', δ')( τ'X_{t-1} ; τ_⊥'ΔX_{t-1} ) + ζτ'ΔX_{t-1} + ε_t.

THE HYPOTHESES ON τ AND β

The usual formulations:

τ = Hφ
τ = (b, φ)
β_i = h_i + H_iφ_i, i = 1, …, r.

Same formulations for τ. The easy way out for β is to use the I(1) analysis to test hypotheses on β.


HYPOTHESES ON δ

β'X_t is stationary when δ = 0. Paruolo (1996) found that the asymptotic distribution of δ̂ is not mixed Gaussian.


Example 7. Let X_t = (p_1t, p_2t, exch_t)' be I(2). Assume further that r = 1, s_1 = 1, s_2 = 1, and that τ_⊥2' = (1, 1, 0), so that p_1t, p_2t ∼ I(2), p_1t − p_2t ∼ I(1), and exch_t ∼ I(1). Then τ and τ_⊥2 are completely known:

τ' = ( 0  0  1 ; 1  −1  0 ),   τ_⊥2' = (1, 1, 0),

and the polynomial cointegrating relation becomes

β_1p_1t + β_2p_2t + β_3exch_t + δ_1Δp_1t + δ_2Δp_2t + δ_3Δexch_t.

Now Δexch_t and Δp_1t − Δp_2t are stationary, so β'X_t is stationary when δ_1 = −δ_2. Thus δ = 0 means δ_1 = −δ_2. If τ is completely specified, δ = 0 is a test on the cointegration relations in an I(1) model.

H_{r,s_1}: Δ²X_t = α(ρ', δ')( τ'X_{t-1} ; τ_⊥'ΔX_{t-1} ) + ζτ'ΔX_{t-1} + ε_t.
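The condition δ_1 = −δ_2 simply says that the coefficients on the differences have no component in the τ_⊥ direction (1, 1, 0)'. A small numerical illustration (the numbers are made up):

```python
import numpy as np

# columns of tau span the I(1) directions: exch and p1 - p2
tau = np.array([[0.0, 1.0],
                [0.0, -1.0],
                [1.0, 0.0]])
tau_perp = np.array([1.0, 1.0, 0.0])   # the I(2) direction p1 + p2

# illustrative coefficients on (dp1, dp2, dexch) in the polynomial relation
coef = np.array([0.3, -0.3, 0.7])      # here delta_1 = -delta_2

# component of the difference coefficients along tau_perp
tau_perp_bar = tau_perp / (tau_perp @ tau_perp)
delta = coef @ tau_perp_bar            # zero iff delta_1 = -delta_2
```

Changing `coef` so the first two entries do not cancel gives a non-zero component along τ_⊥, i.e. the difference terms are genuinely needed for stationarity.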


Example 8. X_t = (m_t, p_t, y_t, R_t)', where

m_t: the log nominal money stock, ∼ I(2)
p_t: the log price level, ∼ I(2)
y_t: log real income, ∼ I(1)
R_t: a long-term interest rate, ∼ I(1).

Assume we have cointegration:

m_t − p_t ∼ I(1) (log real money is I(1))
m_t − p_t − y_t + β_R R_t ∼ I(0) (a stationary money demand relation with unit income elasticity)
R_t − Δp_t ∼ I(0) (the "Fisher effect" holds, i.e. the real interest rate is stationary)


Formulated as

Δm_t − Δp_t ∼ I(0)
m_t − p_t − y_t + β_R R_t ∼ I(0)
R_t − Δp_t ∼ I(0)

we find that τ is 4 × 3, so that r + s_1 = 3. We see that there are two relations which involve levels, so that r = 2:

τ = ( 1 1 0 ; −1 −1 0 ; 0 −1 0 ; 0 0 1 )   (rows ordered m, p, y, R),
τ_⊥ = (1, 1, 0, 0)',
ρ = ( 0 0 ; 1 0 ; 0 1 ),
β = τρ = ( 1 0 ; −1 0 ; −1 0 ; 0 1 ),
δ = ψ'τ̄_⊥ = (δ_1, δ_2)',

β'X_t + δτ̄_⊥'ΔX_t = ( m_t − p_t − y_t + δ_1(Δm_t + Δp_t) ; R_t + δ_2(Δm_t + Δp_t) ).


equivalently (subtracting δ_1/δ_2 times the second relation from the first)

β'X_t + δτ̄_⊥'ΔX_t = ( m_t − p_t − y_t − (δ_1/δ_2)R_t ; R_t + δ_2(Δm_t + Δp_t) ),

compare to

Δm_t − Δp_t ∼ I(0)
m_t − p_t − y_t + β_R R_t ∼ I(0)
R_t − Δp_t ∼ I(0)

and note

R_t + δ_2(Δm_t + Δp_t) = R_t + 2δ_2Δp_t + δ_2(Δm_t − Δp_t).

It is enough to test that δ_2 = −1/2 in the nominal-to-real transformation, i.e. in

H_{r,s_1}: Δ²X_t = α(ρ', δ')( τ'X_{t-1} ; τ_⊥'ΔX_{t-1} ) + ζτ'ΔX_{t-1} + ε_t

to test that the cointegrating relations are

(ρ', δ') = ( ρ_11 ρ_21 ρ_31 δ_1 ; ρ_12 ρ_22 ρ_32 δ_2 ) = ( 0 1 0 δ_1 ; 0 0 1 −1/2 ).
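The matrix algebra of Example 8 can be verified mechanically; a sketch (matrices as in the example, rows ordered (m, p, y, R)):

```python
import numpy as np

tau = np.array([[1.0,  1.0, 0.0],
                [-1.0, -1.0, 0.0],
                [0.0, -1.0, 0.0],
                [0.0,  0.0, 1.0]])
tau_perp = np.array([1.0, 1.0, 0.0, 0.0])   # nominal direction m + p
rho = np.array([[0.0, 0.0],
                [1.0, 0.0],
                [0.0, 1.0]])

# beta = tau * rho picks out the two level relations: m - p - y and R
beta = tau @ rho
expected_beta = np.array([[1.0, 0.0],
                          [-1.0, 0.0],
                          [-1.0, 0.0],
                          [0.0, 1.0]])
```

The checks below confirm that τ_⊥ is orthogonal to all three columns of τ and that β = τρ reproduces the stated level relations.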


Example 3 (continued)

Δ²X_1t = −0.5[X_{1,t-1} + ρ_2X_{2,t-1} + (τ_1 + ρ_2τ_2)X_{3,t-1} + 2ΔX_{1,t-1} + ΔX_{2,t-1} + (ψ_3 + 2τ_1)ΔX_{3,t-1}] + ε_1t,
Δ²X_2t = −ΔX_{2,t-1} − τ_2ΔX_{3,t-1} + ε_2t,
Δ²X_3t = ε_3t.

The MLE parameters are

τ = ( 1 0 ; 0 1 ; τ_1 τ_2 ),   ρ = ( 1 ; ρ_2 ),   β = τρ = ( 1 ; ρ_2 ; τ_1 + ρ_2τ_2 ),   ψ = ( 2 ; 1 ; ψ_3 + 2τ_1 ),

and the true value is τ_1 = τ_2 = ρ_2 = ψ_3 = 0; τ and ρ are normalized on τ̄_0 and ρ̄_0. The parameters α = (−0.5, 0, 0)', Ω = I_3, and ψ'τ̄_0 = (2, 1) do not enter the cointegrating relations.
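At the true value τ_1 = τ_2 = ρ_2 = ψ_3 = 0 the system is easy to simulate from its second-difference form; under these parameter values X_1 turns out stationary, X_2 is a random walk, and X_3 is I(2). A sketch (seed and sample size arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
T = 2000
e = rng.standard_normal((T, 3))
x = np.zeros((T, 3))
dx = np.zeros(3)                       # current Delta X_t
for t in range(1, T):
    # the three equations at the true value tau1 = tau2 = rho2 = psi3 = 0
    d2 = np.array([
        -0.5 * (x[t-1, 0] + 2*dx[0] + dx[1]) + e[t, 0],   # Delta^2 X1
        -dx[1] + e[t, 1],                                 # Delta^2 X2
        e[t, 2],                                          # Delta^2 X3
    ])
    dx = dx + d2                       # Delta X_t = Delta X_{t-1} + Delta^2 X_t
    x[t] = x[t-1] + dx

# sample variances reflect the integration orders of the coordinates
v1, v2, v3 = x[:, 0].var(), x[:, 1].var(), x[:, 2].var()
```

The variances are ordered v1 < v2 < v3, and differencing X_3 twice reduces it to white noise, consistent with Δ²X_3t = ε_3t.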


τ = ( 1 0 ; 0 1 ; τ_1 τ_2 ),   ρ = ( 1 ; ρ_2 ),   β = τρ = ( 1 ; ρ_2 ; τ_1 + ρ_2τ_2 ),   ψ = ( 2 ; 1 ; ψ_3 + 2τ_1 ).

1. The hypotheses τ_2 = 0 or τ_1 = 0. These hypotheses are special cases of a test on a coefficient in τ, but each is also a special case of testing that a given vector is contained in sp(τ). The first gives asymptotic χ² inference, the second does not.

2. The test that ψ_3 = 0. The test gives asymptotic χ² inference.

3. The test that δ = 0, where

δ' = ψ'τ̄_⊥ = (2, 1, ψ_3 + 2τ_1)(−τ_1, −τ_2, 1)' = ψ_3 − τ_2.

We cannot prove that inference is χ², but simulations indicate that it is.


1 Asymptotic distributions

1.1 The linear regression model with I(2) variables

We first discuss the linear regression model with I(0), I(1), and I(2) regressors, to set the scene for the more complicated I(2) model:

Δ²X_t = γ_0'Z_0t + γ_1'Z_1t + γ_2'Z_2t + ε_t, (1)

where ε_t is independent of Z_t = (Z_0t', Z_1t', Z_2t')'. The estimator of γ' = (γ_0', γ_1', γ_2') is the regression of Δ²X_t on Z_t:

γ̂ = ( Σ_{t=1}^T Z_tZ_t' )^{-1} Σ_{t=1}^T Z_tΔ²X_t',

so that

γ̂ − γ = ( Σ_{t=1}^T Z_tZ_t' )^{-1} Σ_{t=1}^T Z_tε_t'.
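The estimator is ordinary least squares, so a toy version of model (1) with one regressor of each integration order is enough to see the different convergence rates at work. A sketch (all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
Z0 = rng.standard_normal(T)              # I(0) regressor
Z1 = np.cumsum(rng.standard_normal(T))   # I(1) regressor: partial sums
Z2 = np.cumsum(Z1)                       # I(2) regressor: double partial sums
Z = np.column_stack([Z0, Z1, Z2])
gamma = np.array([1.0, 0.5, -0.2])
y = Z @ gamma + rng.standard_normal(T)   # stand-in for Delta^2 X_t

gamma_hat, *_ = np.linalg.lstsq(Z, y, rcond=None)
err = np.abs(gamma_hat - gamma)
# errors shrink at rates T^{1/2}, T, T^2 respectively, so the I(2)
# coefficient is estimated far more precisely than the I(0) one
```

The I(2) coefficient error is orders of magnitude smaller than the I(0) coefficient error, anticipating the normalizations T^{1/2}, T, T² in Theorem 1 below.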

We want the asymptotic distribution of γ̂ − γ, suitably normalized, and for that we need


many limit results from the theory of weak convergence of stochastic processes. We define (n_0, n_1, n_2) = (1/2, 1, 2) and introduce the product moments

M_εj = T^{-n_j} Σ_{t=1}^T ε_tZ_jt',   M_ij = T^{-n_i-n_j} Σ_{t=1}^T Z_itZ_jt', (2)

which are normalized to converge. We next assume that the errors and regressors satisfy

T^{-1/2} ( Σ_{t=1}^{[Tu]} ε_t ; Z_1[Tu] ; T^{-1}Z_2[Tu] ) →w ( W(u) ; H_1(u) ; H_2(u) ),   u ∈ [0, 1], (3)

where W(u) is Brownian motion with variance Ω, and H_1 and H_2 are defined as the limits of T^{-1/2}Z_1[Tu] and T^{-3/2}Z_2[Tu] respectively; in the application to the I(2) model they will be a Brownian motion and an integrated Brownian motion respectively.

Thus the first result states that a random walk normalized by T^{1/2} converges to a Brownian motion, and the next result states that an I(1) variable, Z_1t, normalized the same way, also converges to a Brownian motion, H_1; finally, an I(2) variable, Z_2t, needs to be normalized by T^{3/2} to get convergence. It turns out, however, that the limit


is an integrated Brownian motion in the I(2) model below.
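The choice (n_0, n_1, n_2) = (1/2, 1, 2) in (2) can be illustrated by simulation: with Z_1t a random walk and Z_2t its partial sum, the normalized moments T^{-2}ΣZ_1t² and T^{-4}ΣZ_2t² stabilize, with means near E∫_0^1 B(u)²du = 1/2 and E∫_0^1(∫_0^u B)²du = 1/12. A sketch (replication count arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
T, reps = 500, 300
m11, m22 = [], []
for _ in range(reps):
    eps = rng.standard_normal(T)
    z1 = np.cumsum(eps)                   # I(1): partial sums of the errors
    z2 = np.cumsum(z1)                    # I(2): double partial sums
    m11.append((z1 ** 2).sum() / T**2)    # -> integral of B(u)^2
    m22.append((z2 ** 2).sum() / T**4)    # -> integral of (int_0^u B)^2
m11_bar, m22_bar = np.mean(m11), np.mean(m22)
# expected values of the limits: 1/2 and 1/12 respectively
```

Without the T^{-2} and T^{-4} factors the sums diverge, which is exactly why the product moments in (2) carry the normalizations n_i.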

Theorem 1. Under the assumptions above the unrestricted estimators of the regression model (1) have an asymptotic distribution given by

T^{1/2}(γ̂_0 − γ_0) →w N(0, Σ^{-1}),

( T(γ̂_1 − γ_1) ; T²(γ̂_2 − γ_2) ) →w [ ∫_0^1 (H_1 ; H_2)(H_1 ; H_2)'du ]^{-1} ∫_0^1 (H_1 ; H_2)(dW)',

and T^{1/2}(γ̂_0 − γ_0) and (T(γ̂_1 − γ_1), T²(γ̂_2 − γ_2)) are asymptotically independent.

2 The asymptotic distribution of the estimators in the I(2) model

We saw above that the I(2) model is a submodel of the linear regression model


with I(2) regressors defined as

Z_0t' = (X_{t-1}'τρ + ΔX_{t-1}'ψ, ΔX_{t-1}'τ, Δ²X_{t-1}', …, Δ²X_{t-k+1}'),
Z_1t' = (ΔX_{t-1}'τ̄_⊥2, X_{t-1}'β_⊥1),
Z_2t' = X_{t-1}'τ_⊥2,

of dimensions (2r + s_1, p − r, s_2) respectively (not counting the lagged second differences in Z_0t). The regressors are constructed so that the order of integration of Z_it is I(i), i = 0, 1, 2. Corresponding to this choice of regressors we find that the I(2) model is given by the parametrization

γ_0' = (α, α(ψ − ψ_0)'τ̄ + α_⊥(α_⊥'α_⊥)^{-1}ξ'),
γ_1' = (α(ψ − ψ_0)'τ̄_⊥2 + α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'τ̄_⊥2, αρ'τ'β̄_⊥1),
γ_2' = αρ'τ'τ̄_⊥2.

These are obviously not freely varying and it is convenient to introduce the freely varying parameters

A = α(ψ − ψ_0)'τ̄ + α_⊥(α_⊥'α_⊥)^{-1}ξ',
B_0 = τ̄_⊥2'(ψ − ψ_0),   B_1 = β̄_⊥1'β,   B_2 = τ̄_⊥2'β,
C = τ̄_⊥2'τρ_⊥,


so that

γ_0' = (α, A),
γ_1' = (αB_0' + α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'τ̄_⊥2, αB_1') = γ_1'(γ_0, B_0, B_1, C, B_2),
γ_2' = αB_2' = γ_2'(γ_0, B_2). (4)

Note that at the true value we have B_0 = 0, B_1 = 0, B_2 = 0, C = 0, and hence γ_1' = 0 and γ_2' = 0. To show that γ_1 = γ_1(γ_0, B_0, B_1, C, B_2) we need to find ρ as a linear function of B_1:

ρ = τ̄_0'τ_0ρ = τ̄_0'τρ = τ̄_0'β
 = τ̄_0'(β_0β̄_0' + β_0⊥1β̄_0⊥1' + β_0⊥2β̄_0⊥2')β
 = τ̄_0'β_0(β̄_0'β) + τ̄_0'β_0⊥1(β̄_0⊥1'β) + τ̄_0'β_0⊥2(β̄_0⊥2'β),

so that, using τ̄_0'β_0⊥2 = 0,

ρ = ρ_0 + τ̄_0'β_0⊥1B_1,

and hence α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'τ̄_⊥2 is a function of γ_0, B_1, B_2, and C:

α_⊥(α_⊥'α_⊥)^{-1}ξ'τ'τ̄_⊥2
 = α_⊥(α_⊥'α_⊥)^{-1}ξ'ρ̄ρ'τ'τ̄_⊥2 + α_⊥(α_⊥'α_⊥)^{-1}ξ'ρ̄_⊥ρ_⊥'τ'τ̄_⊥2
 = α_⊥(α_⊥'α_⊥)^{-1}ξ'ρ̄(B_1)B_2' + α_⊥(α_⊥'α_⊥)^{-1}ξ'ρ̄_⊥(B_1)C'.


Thus a rather complicated nonlinear parameter restriction is found in terms of (A, B, C). It turns out, see below, that T^{1/2}(γ̂_0 − γ_0), T(B̂_0, B̂_1, Ĉ), and T²B̂_2 are bounded in probability, and we therefore call γ_0 an I(0) parameter, B_0, B_1, and C the I(1) parameters, and finally B_2 the I(2) parameter.

In order to describe the asymptotic distributions and the limits of the regressors, we define the independent Brownian motions

W_1(u) = (α'Ω^{-1}α)^{-1}α'Ω^{-1}W(u),
W_2(u) = (ᾱ_⊥'Ωᾱ_⊥)^{-1}ᾱ_⊥'W(u),   ᾱ_⊥ = α_⊥(α_⊥'α_⊥)^{-1},

and the processes H_1 = (H_1C', H_1B')' and H_2, defined as the limits of the regressors:

T^{-1/2}Z_1[Tu] = T^{-1/2}( τ̄_⊥2'ΔX_{[Tu]-1} ; β_⊥1'X_{[Tu]-1} ) →d ( τ̄_⊥2'C_2W(u) ; β_⊥1'C_1W(u) ) = ( H_1C(u) ; H_1B(u) ),
T^{-3/2}Z_2[Tu] = T^{-3/2}τ_⊥2'X_{[Tu]-1} →d τ_⊥2'C_2 ∫_0^u W(s)ds = H_2(u).

Based on these we define the mixed Gaussian distribution of B^1 = (B_0^1', B_1^1', B_2^1')' and C^1:

B^1 = [ ∫_0^1 HH'dt ]^{-1} ∫_0^1 H(dW_1)', (5)


where H = (H_1C', H_1B', H_2')' and

C^1 = [ ∫_0^1 H_1CH_1C'dt ]^{-1} ∫_0^1 H_1C(dW_2)'. (6)

With this notation we can find the asymptotic distribution of the estimators to be

Theorem 2. When τ is normalized on τ̄_0 and ρ is normalized on ρ̄_0, the asymptotic distributions of the matrices ψ̂, β̂, and τ̂ are given by

T τ̄_⊥2'(ψ̂ − ψ) →w B_0^1, (7)
T β̄_⊥1'(β̂ − β) →w B_1^1, (8)
T² τ̄_⊥2'(β̂ − β) →w B_2^1, (9)
T τ̄_⊥2'(τ̂ − τ)ρ_⊥ →w C^1. (10)

Further, for

γ_0' = (α, A, Γ_1, …, Γ_{k-2}), (11)

we find

T^{1/2}(γ̂_0 − γ_0) →d N(0, Σ^{-1}), (12)

where

Σ = Var( ρ'τ'X_{t-1} + ψ'ΔX_{t-1}, τ'ΔX_{t-1}, Δ²X_{t-1}, …, Δ²X_{t-k+2} ).

This result is taken from Johansen (1997).

3 The asymptotic distribution of the test on the cointegration parameters of the I(2) model

We here discuss hypotheses on the cointegration parameters, expressed as (B_0, B_1, B_2, C) = (B_0, B_1, B_2, C)(θ_0, θ_1, θ_2), where θ_1 = (θ_1B, θ_1C). We have seen above that although we basically get asymptotic mixed Gaussian inference, we should be careful when C is mixed up with the B parameters, and we therefore split θ_1 into parameters identified from (B_0, B_1) and those identified from C; see the rank condition (15). In order to prove asymptotic χ² inference for tests on the parameters of the I(2) model we introduce some assumptions on the parametrization of (B_0, B_1, B_2, C) in terms of


the parameters (θ_0, θ_1 = (θ_1B, θ_1C), θ_2). All conditions are evaluated at the true value, where θ_1 = 0 and θ_2 = 0.

Weak dependence: ∂B_2/∂θ_1 = 0, ∂²B_2/∂θ_1² = 0. (13)

Separation: ∂(B_0, B_1)/∂θ_1C = 0, ∂C/∂θ_1B = 0. (14)

Full rank: rank(∂(B_0, B_1)/∂θ_1B) = q_1B, rank(∂C/∂θ_1C) = q_1C, rank(∂B_2/∂θ_2) = q_2. (15)

Theorem 3. Under the assumptions that the parameters B_0, B_1, B_2, C are smoothly parametrized by the continuously identified parameters θ_0, θ_1, and θ_2, of dimensions q_1 and q_2, which for θ_1 = 0 and θ_2 = 0 satisfy (13), (14) and (15), the likelihood function is locally asymptotically quadratic. Furthermore the asymptotic distributions of Tθ̂_1C and (Tθ̂_1B, T²θ̂_2) are mixed Gaussian, so that with K_C = ∂C^v/∂θ_1C^v', of dimension n_C × q_1C, we


have

Tθ̂_1C^v →w [ K_C'( ∫_0^1 H_1CH_1C'du ⊗ Ω_2^{-1} )K_C ]^{-1} K_C'( ∫_0^1 H_1CH_1C'du ⊗ Ω_2^{-1} )(C^1)^v,

where Ω_2 denotes the variance of W_2,

and with the (n_B × (q_1B + q_2)) matrix K_B defined by

K_B = ( ∂(B_0^v', B_1^v')'/∂θ_1B^v'   0 ; 0   ∂B_2^v/∂θ_2^v' )

we have

(Tθ̂_1B, T²θ̂_2)^v →w [ K_B'( ∫_0^1 HH'du ⊗ Ω_1^{-1} )K_B ]^{-1} K_B'( ∫_0^1 HH'du ⊗ Ω_1^{-1} )(B^1)^v,

with Ω_1 the variance of W_1.

The asymptotic distribution of the estimates of the remaining parameters

γ_0' = (α, A, Γ_1, …, Γ_{k-2})

is Gaussian N(0, Σ^{-1}), and the likelihood ratio test for the hypothesis (B_0, B_1, B_2, C) = (B_0, B_1, B_2, C)(θ) is asymptotically χ²(n_B + n_C − (q_1 + q_2)).

This result is taken from Johansen (2007).


4 A simulation example

Example 7. We consider again the example

Δ²X_1t = −0.5[X_{1,t-1} + ρ_2X_{2,t-1} + (τ_1 + ρ_2τ_2)X_{3,t-1} + 2ΔX_{1,t-1} + ΔX_{2,t-1} + (ψ_3 + 2τ_1)ΔX_{3,t-1}] + ε_1t,
Δ²X_2t = −ΔX_{2,t-1} − τ_2ΔX_{3,t-1} + ε_2t,
Δ²X_3t = ε_3t.

The example is so constructed that the cointegration parameters are

τ = ( 1 0 ; 0 1 ; τ_1 τ_2 ),   ρ = ( 1 ; ρ_2 ),   β = τρ = ( 1 ; ρ_2 ; τ_1 + ρ_2τ_2 ),   ψ = ( 2 ; 1 ; ψ_3 + 2τ_1 ),

and the true value is taken to be τ_1 = τ_2 = ρ_2 = ψ_3 = 0, so that τ and ρ are normalized on τ̄_0 and ρ̄_0. We have fixed the parameters α = (−0.5, 0, 0)', Ω = I_3, and ψ'τ̄_0 = (2, 1), as these parameters do not enter the cointegrating relations. The new parameters (B_0, B_1, B_2, C) become

B_0 = ψ_3 + 2τ_1,   B_1 = ρ_2,   B_2 = τ_1 + ρ_2τ_2,   C = −ρ_2τ_1 + τ_2,


which are variation free. This shows that the model can be tested by an asymptotic χ² test, since the conditions of Theorem 3 are satisfied. We now consider four hypotheses and for each check the conditions of Theorem 3. At the end of this section we have conducted a small simulation experiment to illustrate what may happen when we cannot prove asymptotic χ² inference.
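The closed forms for (B_0, B_1, B_2, C) can be checked numerically against the definitions B_0 = τ̄_⊥2'(ψ − ψ_0), B_1 = β̄_⊥1'β, B_2 = τ̄_⊥2'β, C = τ̄_⊥2'τρ_⊥ as reconstructed above, which at the true-value directions of this example reduce to projections on the unit vectors e_2 and e_3. A sketch with arbitrary parameter values:

```python
import numpy as np

t1, t2, r2, p3 = 0.3, -0.2, 0.4, 0.7      # tau_1, tau_2, rho_2, psi_3

tau = np.array([[1, 0], [0, 1], [t1, t2]], dtype=float)
rho = np.array([1.0, r2])
beta = tau @ rho
psi = np.array([2.0, 1.0, p3 + 2*t1])
psi0 = np.array([2.0, 1.0, 0.0])           # psi at the true value
rho_perp = np.array([-r2, 1.0])
e2, e3 = np.eye(3)[1], np.eye(3)[2]        # beta_perp1 and tau_perp2 at the truth

B0 = e3 @ (psi - psi0)                     # = psi_3 + 2*tau_1
B1 = e2 @ beta                             # = rho_2
B2 = e3 @ beta                             # = tau_1 + rho_2*tau_2
C = e3 @ tau @ rho_perp                    # = -rho_2*tau_1 + tau_2
```

All four projections agree exactly with the closed forms on the slide, and all vanish at the true value τ_1 = τ_2 = ρ_2 = ψ_3 = 0.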

4.0.1 The hypothesis τ_2 = 0

The hypothesis τ_2 = 0 is equivalent to C = −B_1B_2. We find ∂C/∂B_1 = −B_2, which is zero at the true value, so that inference is asymptotically χ²(1) by Theorem 3. The hypothesis is a special case of a test on a coefficient in τ, but it is also a special case of testing that a given vector is contained in sp(τ).

4.0.2 The hypothesis τ_1 = 0

The hypothesis τ_1 = 0 is equivalent to B_2 = B_1C. In this case we have ∂B_2/∂B_1 = C and ∂B_2/∂C = B_1, which are both zero at the true value. We also find, however, that ∂²B_2/∂C∂B_1 = 1, so that condition (13) is not satisfied. The simulation in Table 2 indicates that indeed the asymptotic distribution is not χ²(1), as the mean, variance, and 95% quantile are all too small. The hypothesis is a special case of a test on a coefficient in τ, but it is also a special


case of testing that a given vector b is contained in sp(τ), and the example has been constructed so that b = β_0, so that the condition for asymptotic χ² inference is not satisfied.

4.0.3 The test that ψ_3 = 0

The test that ψ_3 = 0 is equivalent to B_0 = 2(B_2 − CB_1)/(1 + B_1²). In this case we have to check the derivative ∂B_0/∂C = −2B_1/(1 + B_1²), which is zero at the true value. Thus we get asymptotic χ² inference.

4.0.4 The test that δ = 0

Finally we consider the test that δ = ψ'τ̄_⊥ = ψ_3 − τ_2 = 0. The test is equivalent to B_0 = (2(B_2 − CB_1) + B_1B_2 + C)/(1 + B_1²). The conditions of Theorem 3 are not satisfied, as we find ∂B_0/∂C = (−2B_1 + 1)/(1 + B_1²), which is non-zero at the true value. Note, however, that the simulations in Table 1 indicate that we nevertheless get an asymptotic distribution that is very close to that of a χ²(1). Thus the conditions given in Theorem 3 are probably not necessary. The possibility of asymptotic χ² inference based on simulations was also pointed out by Paruolo, see Paruolo (1995), in connection with the derivation of the distribution.
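The four equivalences between parameter hypotheses and restrictions on (B_0, B_1, B_2, C) are easy to confirm numerically from the closed forms B_0 = ψ_3 + 2τ_1, B_1 = ρ_2, B_2 = τ_1 + ρ_2τ_2, C = −ρ_2τ_1 + τ_2. A sketch (parameter values arbitrary):

```python
import numpy as np

def bparams(t1, t2, r2, p3):
    """Map (tau_1, tau_2, rho_2, psi_3) to (B0, B1, B2, C)."""
    return p3 + 2*t1, r2, t1 + r2*t2, -r2*t1 + t2

t1, t2, r2, p3 = 0.3, 0.25, 0.4, 0.7

# 4.0.1: tau_2 = 0  <=>  C = -B1*B2
B0, B1, B2, C = bparams(t1, 0.0, r2, p3)
assert np.isclose(C, -B1 * B2)

# 4.0.2: tau_1 = 0  <=>  B2 = B1*C
B0, B1, B2, C = bparams(0.0, t2, r2, p3)
assert np.isclose(B2, B1 * C)

# 4.0.3: psi_3 = 0  <=>  B0 = 2*(B2 - C*B1)/(1 + B1^2)
B0, B1, B2, C = bparams(t1, t2, r2, 0.0)
assert np.isclose(B0, 2*(B2 - C*B1) / (1 + B1**2))

# 4.0.4: delta = psi_3 - tau_2 = 0  <=>  B0 = (2*(B2 - C*B1) + B1*B2 + C)/(1 + B1^2)
B0, B1, B2, C = bparams(t1, t2, r2, t2)    # impose psi_3 = tau_2
assert np.isclose(B0, (2*(B2 - C*B1) + B1*B2 + C) / (1 + B1**2))
```

The derivative conditions of Theorem 3 can be checked on the same closed forms, which is exactly what Sections 4.0.1-4.0.4 do analytically.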


Hypothesis   E(−2 log LR)   Var(−2 log LR)   95% quantile
τ_1 = 0      0.731          1.119            2.84
δ = 0        0.993          1.926            3.85

The simulation has T = 500 and 10,000 replications. The table shows the estimated mean, variance, and 95% quantile of the log likelihood ratio test statistic for the hypotheses τ_1 = 0 and δ = 0. For the χ²(1) we have E(χ²(1)) = 1, Var(χ²(1)) = 2, and χ²(1)_{0.95} = 3.84.