
RS – EC2 - Lecture 17

Lecture 17: Multivariate Time Series

VAR & SVAR

• A vector series consists of multiple single series.

• We motivated time series models by noting that simple univariate ARMA models forecast very well. Why, then, do we need multiple series?

- To be able to understand the relationship between several variables, allowing for dynamics.

- To be able to get better forecasts.

Example: Stock price surprises in one market (equity, NYSE) can spread easily to another market (options, Tokyo SE). Thus, a joint dynamic model may be needed to understand dynamic interrelations and may do a better forecasting job.

Vector Time Series Models


• Consider an m-dimensional time series Y_t = (Y_{1t}, Y_{2t}, …, Y_{mt})′.

• The series Y_t is weakly stationary if its first two moments are time invariant and the cross-covariances between Y_{it} and Y_{js}, for all i and j, are functions of the time difference (s − t) only.

• The mean vector:

μ = E(Y_t) = (μ_1, μ_2, …, μ_m)′

• The covariance matrix function:

Γ(k) = Cov(Y_t, Y_{t+k}) = E[(Y_t − μ)(Y_{t+k} − μ)′] = [γ_ij(k)]

an m×m matrix whose (i, j) element is γ_ij(k) = Cov(Y_{it}, Y_{j,t+k}).

• The correlation matrix function:

ρ(k) = D^{−1/2} Γ(k) D^{−1/2} = [ρ_ij(k)]

where D is a diagonal matrix whose i-th diagonal element is the variance of the i-th process, i.e.

D = diag(γ_11(0), γ_22(0), …, γ_mm(0))

• The covariance and correlation matrix functions are positive semi-definite.

• {a_t} ~ WN(0, Σ) if and only if {a_t} is stationary with mean vector 0 and

Γ(k) = Σ if k = 0, and Γ(k) = 0 otherwise.


• {Y_t} is a linear process if it can be expressed as

Y_t = μ + ∑_{j=0}^∞ Ψ_j a_{t−j},   a_t ~ WN(0, Σ)

where {Ψ_j} is a sequence of m×m matrices whose entries are absolutely summable. That is,

∑_{j=0}^∞ |ψ_{il,j}| < ∞ for i, l = 1, 2, …, m.

• For a linear process, E(Y_t) = μ and

Γ(k) = ∑_{j=0}^∞ Ψ_j Σ Ψ′_{j+k},   k = 0, ±1, ±2, …

Vector Time Series Models - MA Representation

• Let {Y_t} be a linear process:

Y_t = μ + Ψ(L) a_t,   where Ψ(L) = ∑_{s=0}^∞ Ψ_s L^s, Ψ_0 = I.

• For the process to be stationary, the Ψ_s should be square summable, in the sense that each of the m×m sequences ψ_ij,s is square summable.

• This is the Wold representation.


Vector Time Series Models - AR Representation

• Let {Y_t} be a linear process:

Π(L)(Y_t − μ) = a_t,   where Π(L) = ∑_{s=0}^∞ Π_s L^s, Π_0 = I.

• For the process to be invertible, the Π_s should be absolutely summable.

VARMA Representation

• Let {Y_t} follow a VARMA(p, q) linear process:

Φ_p(L)(Y_t − μ) = Θ_q(L) a_t

where

Φ_p(L) = I − Φ_1 L − Φ_2 L² − … − Φ_p L^p
Θ_q(L) = I − Θ_1 L − Θ_2 L² − … − Θ_q L^q

• Special cases: Φ_p(L) = I gives a VMA(q); Θ_q(L) = I gives a VAR(p).

• The VARMA process is stationary if the zeros of |Φ_p(L)| are outside the unit circle.

• The VARMA process is invertible if the zeros of |Θ_q(L)| are outside the unit circle.


• If the VARMA process is stationary (zeros of |Φ_p(L)| outside the unit circle), we can write:

Y_t = μ + Φ_p(L)^{−1} Θ_q(L) a_t

• If the VARMA process is invertible (zeros of |Θ_q(L)| outside the unit circle), we can write:

Θ_q(L)^{−1} Φ_p(L)(Y_t − μ) = a_t

• Identification problem: multiplying both sides by some arbitrary matrix polynomial may give an identical covariance matrix. Then the VARMA(p, q) model is not identifiable (p and q are not unique).

VARMA – Identification Problem

Example: Consider the bivariate VAR(1) with a nilpotent coefficient matrix,

Y_t = Φ Y_{t−1} + a_t,   Φ = [0 φ; 0 0]

Since Φ² = 0, (I − ΦL)^{−1} = I + ΦL, so

Y_t = (I − ΦL)^{−1} a_t = a_t + Φ a_{t−1}

which is a VMA(1). Here MA(∞) = VMA(1): the same covariance structure is generated by a VAR(1), a VMA(1), or a mixed VARMA(1,1) with common factors, so (p, q) is not unique.


• To eliminate this problem, there are three methods suggested by Hannan (1969, 1970, 1976, 1979):

• From each class of equivalent models, choose the minimum MA order q and AR order p. The resulting representation is unique if Rank(Φ_p(L)) = m.

• Represent Φ_p(L) in lower triangular form. If the order of φ_ij(L) is less than or equal to the order of φ_ii(L) for i, j = 1, 2, …, m, then the model is identifiable.

• Represent Φ_p(L) in the form Φ_p(L) = φ_p(L) I, where φ_p(L) is a univariate AR(p) polynomial. The model is identifiable if φ_p ≠ 0.


VAR(1) Process

• (I − ΦL) Y_t = a_t,  i.e.  Y_t = Φ Y_{t−1} + a_t.

• Y_{i,t} depends not only on the lagged values of Y_{it} but also on the lagged values of the other variables.

- Always invertible.

- Stationary if the zeros of |I − ΦL| are outside the unit circle. (Let λ = L^{−1}; then |I − ΦL| = 0 becomes |Φ − λI| = 0 up to a nonzero factor.)

• The zeros of |I − ΦL| are therefore related to the eigenvalues of Φ. Hence, the VAR(1) process is stationary if the eigenvalues λ_i, i = 1, 2, …, m, of Φ are all inside the unit circle.

• The autocovariance matrix (taking μ = 0 for simplicity, with Γ(k) = E[Y_t Y′_{t+k}]): for k ≥ 1,

Γ(k) = E[Y_t (Φ Y_{t+k−1} + a_{t+k})′] = Γ(k−1) Φ′

so Γ(k) = Γ(0) (Φ′)^k.

• For k = 0:

Γ(0) = E[(Φ Y_{t−1} + a_t)(Φ Y_{t−1} + a_t)′] = Φ Γ(0) Φ′ + Σ

• For k = 1:  Γ(1) = Γ(0) Φ′.


• Then, using the vec operator and the Kronecker product ⊗,

vec(ABC) = (C′ ⊗ A) vec(B)

we can solve Γ(0) = Φ Γ(0) Φ′ + Σ for Γ(0):

vec(Γ(0)) = (I − Φ ⊗ Φ)^{−1} vec(Σ)

e.g., vec stacks the columns of a matrix: vec([a c; b d]) = (a, b, c, d)′;
A ⊗ B = [a_11 B … a_1n B; … ; a_m1 B … a_mn B].

Example: Consider the bivariate VAR(1)

Y_t = Φ Y_{t−1} + a_t,   Φ = [1.1 −0.3; 0.6 0.2]

|Φ − λI| = (1.1 − λ)(0.2 − λ) + 0.18 = λ² − 1.3λ + 0.4 = 0  ⇒  λ_1 = 0.8, λ_2 = 0.5

• Both eigenvalues are inside the unit circle ⇒ the process is stationary.
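The stationarity check and the vec/Kronecker solution for Γ(0) can be verified numerically. A minimal numpy sketch, assuming Φ = [1.1 −0.3; 0.6 0.2] (the off-diagonal sign is an assumption consistent with the stated eigenvalues 0.8 and 0.5) and an illustrative innovation covariance Σ:

```python
import numpy as np

# VAR(1) coefficient matrix from the example
phi = np.array([[1.1, -0.3],
                [0.6,  0.2]])

# stationarity: all eigenvalues of phi inside the unit circle
print(sorted(np.abs(np.linalg.eigvals(phi))))   # moduli: 0.5 and 0.8

# Gamma(0) solves Gamma0 = phi Gamma0 phi' + Sigma; vectorizing with
# vec(ABC) = (C' kron A) vec(B) gives vec(Gamma0) = (I - phi kron phi)^{-1} vec(Sigma)
sigma = np.array([[1.0, 0.3],
                  [0.3, 1.0]])                  # illustrative, not from the lecture
vec_gamma0 = np.linalg.solve(np.eye(4) - np.kron(phi, phi), sigma.flatten('F'))
gamma0 = vec_gamma0.reshape(2, 2, order='F')

# check the fixed-point equation
assert np.allclose(gamma0, phi @ gamma0 @ phi.T + sigma)
```

For larger systems the same discrete Lyapunov equation can be solved with `scipy.linalg.solve_discrete_lyapunov`, avoiding the m²×m² Kronecker system.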


VMA(1) Process

• Y_t = μ + a_t − Θ a_{t−1},  where a_t ~ WN(0, Σ).

• Y_{i,t} depends not only on its own lagged errors but also on those of the other variables.

- Always stationary.

- The autocovariance function (with Γ(k) = Cov(Y_t, Y_{t+k})):

Γ(0) = Σ + Θ Σ Θ′;  Γ(1) = −Σ Θ′;  Γ(−1) = −Θ Σ;  Γ(k) = 0 otherwise.

- The autocovariance matrix function cuts off after lag 1.

• Thus, the VMA(1) process is invertible if the eigenvalues λ_i, i = 1, 2, …, m, of Θ are all inside the unit circle.
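The autocovariance cutoff and the invertibility condition can be checked numerically; a short sketch with illustrative Θ and Σ (not values from the lecture):

```python
import numpy as np

# VMA(1): Y_t = a_t - theta a_{t-1}
theta = np.array([[0.4, 0.1],
                  [0.2, 0.3]])
sigma = np.array([[1.0, 0.2],
                  [0.2, 1.0]])

# autocovariances under the convention Gamma(k) = Cov(Y_t, Y_{t+k})
gamma0 = sigma + theta @ sigma @ theta.T     # k = 0
gamma1 = -sigma @ theta.T                    # k = 1
gamma_m1 = -theta @ sigma                    # k = -1; Gamma(k) = 0 for |k| > 1

# invertibility: eigenvalues of theta inside the unit circle
assert np.max(np.abs(np.linalg.eigvals(theta))) < 1
print(gamma0)
```

Note that Γ(−1) = Γ(1)′, as required of any autocovariance matrix function.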

VARMA – Identification

• Same idea as in the univariate case. We define the Sample Correlation Matrix Function (SCMF): given an m-dimensional vector series of T observations, the sample correlation matrix function is

ρ̂(k) = [ρ̂_ij(k)]

where the ρ̂_ij(k) are the sample cross-correlations of the i-th and j-th component series.

• It is very useful to identify a VMA(q).

• Tiao and Box (1981) proposed using +, −, and · signs to show the significance of the cross-correlations:
+ (−) sign: the value is greater (less) than 2 times the estimated SE.
· sign: the value is within 2 times the estimated SE.


Partial Autoregression or Partial Lag Correlation Matrix Function

• These are useful for identifying the VAR order. The partial autoregression matrix function was proposed by Tiao and Box (1981), but it is not a proper correlation coefficient.

• Heyse and Wei (1985) then proposed the partial lag correlation matrix function, which is a proper correlation coefficient.

• Both can be used to identify a VARMA(p, q).

Granger Causality

• In principle, the concept is as follows: if X causes Y, then changes in X happen first and are followed by changes in Y.

• Then, if X causes Y, two conditions must be satisfied:

1. X can help in predicting Y (a regression of Y on lagged X has a high R²).

2. Y cannot help in predicting X.

• In most regressions, it is hard to discuss causality. For instance, the significance of the coefficient β in the regression

y_i = α + β x_i + ε_i

only tells us there is a relationship between x and y, not that x causes y.


• Vector autoregressions allow a test of 'causality' in the previous sense. The test was first proposed by Granger (1969) and later by Sims (1972); it is therefore called Granger (or Granger-Sims) causality.

• We will restrict our discussion to a system of two variables, x and y.
-- y is said to Granger-cause x if current or lagged values of y help to predict future values of x.

-- On the other hand, y fails to Granger-cause x if, for all s > 0, the MSE of a forecast of x_{t+s} based on (x_t, x_{t−1}, …) is the same as that of a forecast based on both (y_t, y_{t−1}, …) and (x_t, x_{t−1}, …).

• Restricting ourselves to linear functions, y fails to Granger-cause x if

MSE[Ê(x_{t+s} | x_t, x_{t−1}, …)] = MSE[Ê(x_{t+s} | x_t, x_{t−1}, …, y_t, y_{t−1}, …)]


• Equivalently, we can say that x is exogenous in the time series sense with respect to y, or y is not linearly informative about future x.

• A variable X is said to Granger-cause another variable Y if Y can be better predicted from the past of X and Y together than from the past of Y alone, other relevant information being used in the prediction (Pierce, 1977).



Granger Causality – VAR Formulation

• In the VAR equation, the example we proposed above (x Granger-causes y, but y does not Granger-cause x) implies lower triangular coefficient matrices:

x_t = c_1 + φ_11(1) x_{t−1} + … + φ_11(p) x_{t−p} + a_1t
y_t = c_2 + φ_21(1) x_{t−1} + φ_22(1) y_{t−1} + … + φ_21(p) x_{t−p} + φ_22(p) y_{t−p} + a_2t

i.e., no lagged y appears in the x equation.

• Or, using the MA representation:

[x_t; y_t] = [ψ_11(L) 0; ψ_21(L) ψ_22(L)] [a_1t; a_2t]

where ψ_ij(L) = ψ_ij(0) + ψ_ij(1) L + ψ_ij(2) L² + …, with ψ_11(0) = ψ_22(0) = 1 and ψ_21(0) = 0.

Granger Causality - Test

• Consider a linear projection of y_t on past, present and future x's,

y_t = c + ∑_{j=0}^∞ b_j x_{t−j} + ∑_{j=1}^∞ d_j x_{t+j} + ε_t

where E[ε_t x_τ] = 0 for all t and τ. Then, y fails to Granger-cause x iff d_j = 0 for j = 1, 2, …

• Steps:
1) Check that both series are stationary in mean, variance and covariance (if not, transform the data via differences, logs, etc.).
2) Estimate AR(p) models for each series. Make sure the residuals are white noise. F-tests and/or AIC, BIC can be used to determine p.
3) Re-estimate both models, now including all the lags of the other variable.
4) Use F-tests to determine whether, after controlling for past Y, past values of X can improve forecasts of Y (and vice versa).


Granger Causality – Setting up the F-Test

• Causality model:

x_t = c + ∑_{i=1}^{p_x} α_i x_{t−i} + ∑_{j=1}^{p_y} β_j y_{t−j} + u_t

H0 (y does not Granger-cause x): β_1 = β_2 = … = β_{p_y} = 0.

• Steps in practice:

1) Once the lag structures are determined, estimate the causality model. Keep RSS_U.

2) Estimate a restricted regression (without the y's). Keep RSS_R.

3) Construct the F-test as usual:
F = [(T − k)/q] × [(RSS_R − RSS_U)/RSS_U]

where k = 1 + p_x + p_y is the number of parameters in the unrestricted model U, and q = p_y is the number of restrictions (the parameters dropped in the restricted model R).
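The recipe above can be run end-to-end on simulated data. A numpy-only sketch (the DGP and all names are illustrative): x Granger-causes y by construction, so the first F statistic should be large and the second small:

```python
import numpy as np

rng = np.random.default_rng(42)
T, p = 500, 2

# simulate: x is AR(1); y depends on lagged x (x Granger-causes y, not vice versa)
x = np.zeros(T); y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t-1] + rng.normal()
    y[t] = 0.3 * y[t-1] + 0.8 * x[t-1] + rng.normal()

def granger_F(target, other, p):
    """F-test of H0: lags of `other` do not help predict `target` (lag length p)."""
    Teff = len(target) - p
    Y = target[p:]
    own = np.column_stack([target[p-i:len(target)-i] for i in range(1, p+1)])
    oth = np.column_stack([other[p-i:len(other)-i] for i in range(1, p+1)])
    const = np.ones((Teff, 1))
    XR = np.hstack([const, own])            # restricted: own lags only
    XU = np.hstack([const, own, oth])       # unrestricted: add the other series' lags
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0])**2)
    RSS_R, RSS_U = rss(XR), rss(XU)
    k = XU.shape[1]                         # 1 + 2p parameters in model U
    q = p                                   # number of restrictions
    return ((RSS_R - RSS_U) / q) / (RSS_U / (Teff - k))

F_x_causes_y = granger_F(y, x, p)   # expect large
F_y_causes_x = granger_F(x, y, p)   # expect close to 1 under the null
print(F_x_causes_y, F_y_causes_x)
```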

Granger Causality - Outcomes

• There are 4 possible conclusions from the test:

1. X Granger-causes Y, but Y does not Granger-cause X.

2. Y Granger-causes X, but X does not Granger-cause Y.

3. X Granger-causes Y and Y Granger-causes X --i.e., there is a feedback system or bidirectional causality.

4. X does not Granger-cause Y and Y does not Granger-cause X.


Granger Causality - Example

• From Chuang and Susmel (2010): bivariate analysis of the relation between stock returns and volume in Taiwan.

V_ij,t: detrended trading volume of portfolio ij; R_m,t: return on a value-weighted Taiwanese market index; R_ij,t: return of portfolio ij; DAVR_m,t: detrended absolute value of market returns; DMAD_ij,t: detrended mean absolute portfolio return deviation. Portfolio ij: portfolio of size i and institutional ownership j.

• The system is

V_ij,t = α_1,ij + ∑_a γ_ij,a DAVR_m,t−a + ∑_b δ_ij,b DMAD_ij,t−b + ∑_d β_ij,11d V_ij,t−d + ∑_d β_ij,12d R_m,t−d + ε_1,ij,t
R_m,t = α_2,ij + ∑_d β_ij,21d V_ij,t−d + ∑_d β_ij,22d R_m,t−d + ε_2,ij,t

• Estimation: SUR.
• Granger causality tests (Wald tests):
- For any portfolio ij, we test H0: β_ij,12d = 0 for all d.
=> Market returns do not Granger-cause portfolio volume.
- Sign of causality: if the sum of the β_ij,12d coefficients is significantly positive => positive causality from market returns to trading volume.

- For any portfolio ij, we test H0: β_ij,21d = 0 for all d.
=> Portfolio volume does not Granger-cause market returns.

- W-D statistic: Granger causality test --it follows a χ²(D).
- W-1: sum of the lagged coefficients is equal to zero (identifies the sign of the causality) --it follows a χ²(1).


Panel A: Size-institutional ownership portfolios

| Portfolio | Hypothesis 1 | Does causality exist? (W-D statistic) | Sum of lagged coefficients | Hypothesis 2: sign of causality (W-1 statistic) |
|---|---|---|---|---|
| P1l | β_1l,12d = 0 for all d | Yes (19.4919***) | 0.0382 | Positive (19.4514***) |
| P1l | β_1l,21d = 0 for all d | No (0.1566) | 0.0466 | |
| P1h | β_1h,12d = 0 for all d | Yes (21.2543***) | 0.0285 | Positive (21.1123***) |
| P1h | β_1h,21d = 0 for all d | No (0.0658) | 0.0559 | |
| P2l | β_2l,12d = 0 for all d | Yes (15.8748***) | 0.0446 | Positive (15.7221***) |
| P2l | β_2l,21d = 0 for all d | No (0.7614) | 0.1864 | |
| P2h | β_2h,12d = 0 for all d | Yes (11.2518***) | 0.0150 | Positive (11.1957***) |
| P2h | β_2h,21d = 0 for all d | No (1.9206) | 0.2649 | |
| P3l | β_3l,12d = 0 for all d | Yes (39.4826***) | 0.0569 | Positive (35.7789***) |

• H0: β_ij,12d = 0 for all d => rejected for all size-institutional ownership portfolios (shown in the previous table) and all volume-institutional ownership portfolios (not shown), respectively.
- The cumulative effect of lagged market returns on portfolio volume is positive --i.e., ∑_d β_ij,12d > 0-- and significant.

• H0: β_ij,21d = 0 for all d => cannot be rejected for any size-institutional ownership portfolio (shown) or any volume-institutional ownership portfolio (not shown), respectively.

• No feedback relation between portfolio volume and market returns (consistent with the sequential information arrival or the positive feedback trading hypotheses).


Granger Causality - Chicken or Egg?

• This causality test can also be used to address which comes first: the chicken or the egg. More specifically, the test can check whether the existence of eggs causes the existence of chickens, or vice versa.

• Thurman and Fisher (1988) did this study using yearly data on chicken and egg production in the US from 1930 to 1983.

• The results:
1. Eggs cause chickens.
2. There is no evidence that chickens cause eggs.

Granger Causality - Remarks

• Granger causality is not the same as what we usually mean by causality.

• Even if x1 does not cause x2, it may still help to predict x2, and thus Granger-cause x2, if changes in x1 precede those of x2 for some reason (usually because of a third variable, missing from the model).

• Example: a dragonfly flies much lower before a rain storm, due to the lower air pressure. We know that dragonflies do not cause rain storms, but they do help to predict them, and thus Granger-cause rain storms.


Granger Causality - Exogeneity

• When x1 does not Granger-cause x2, we say that x2 is strongly exogenous (in the time series sense) with respect to x1.

Structural VAR (SVAR)

• It is a simultaneous equations model, used to describe dynamic effects in a multivariate system. For example,

B Y_t = Γ_0 + Γ_1 Y_{t−1} + Γ_2 Y_{t−2} + … + Γ_p Y_{t−p} + ε_t

where ε_t ~ i.i.d. D(0, Σ).

• Note:

- ε_1t, ε_2t, …, ε_nt are called structural errors. Σ is a diagonal matrix.

- In general, Cov(y_it, ε_jt) ≠ 0 for all i, j.

- All variables are endogenous - OLS is not appropriate.

• From this model, we can move to a reduced form, say

Y_t = Φ_0 + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p} + a_t

• The a_t's are called reduced form errors, a linear combination of the ε_t's: a_t = B^{−1} ε_t.


• As in SEM, we have identification issues. To recover the structural parameters (B, Γ, Σ) we need to impose restrictions.

• Many applications in finance:

- The effect of financial news on stock prices (or returns):

- [r, y, π, d, c → P (or ΔP)] - see Chen et al. (JB, 1986)

- Analysis of policy effects (Ms, G) on the stock market.

- Relative importance of markets (stock vs. options markets).

• Long history in economics. Formalized by Sims (Econometrica, 1980) as a generalization of univariate analysis to a vector of random variables. Sims analyzed money supply (Z), interest rates (X) & income (V) in reduced form. With Y_t = (Z_t, X_t, V_t)′:

Y_t = c + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p} + a_t   => VAR(p)

E[a_t] = 0;  E[a_t a_τ′] = Σ if t = τ, 0 otherwise.

The Φ_i are matrices. For the 1st lag matrix:

Φ_1 = [φ_11(1) φ_12(1) φ_13(1); φ_21(1) φ_22(1) φ_23(1); φ_31(1) φ_32(1) φ_33(1)]

• A typical equation of the system is:

Z_t = c_1 + φ_11(1) Z_{t−1} + φ_12(1) X_{t−1} + φ_13(1) V_{t−1} + … + φ_11(p) Z_{t−p} + φ_12(p) X_{t−p} + φ_13(p) V_{t−p} + a_1t

Note: each equation has the same regressors.


SVAR – Multivariate Models

• VARMAX models: like a VAR, but allowing exogenous variables X_t:

Φ_p(L) Y_t = m + G(L) X_t + Θ_q(L) a_t

• Structural VAR models:

B Y_t = Γ_0 + Γ_1 Y_{t−1} + Γ_2 Y_{t−2} + … + Γ_p Y_{t−p} + ε_t

where ε_t ~ i.i.d. D(0, Σ), with Σ a diagonal matrix.

- Some theory determines Y_t, but all variables are endogenous.

• VAR models (reduced form):

Y_t = Φ_0 + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p} + a_t

where the error term is a WN vector:

E[a_t a_s′] = Σ_a if t = s, 0 otherwise.

• Y_t is a function of predetermined variables (the Y_{t−j}'s) and the errors are well behaved: OLS is possible.

SVAR – VAR(1)

• Consider a bivariate Y_t = (y_t, x_t)′ first-order structural VAR:

y_t = b_10 − b_12 x_t + γ_11 y_{t−1} + γ_12 x_{t−1} + ε_yt
x_t = b_20 − b_21 y_t + γ_21 y_{t−1} + γ_22 x_{t−1} + ε_xt

• The error terms (structural shocks) ε_yt and ε_xt are uncorrelated WN innovations with standard deviations σ_y and σ_x (& zero covariance).

• Note:

- y & x are endogenous: ε_yt affects y directly and x indirectly (through the contemporaneous term).

- Many parameters to estimate: 10.

• The structural VAR is not a reduced form. In a reduced form representation, y and x are just functions of lagged y and x.


SVAR – VAR(1): Reduced Form

• To get a reduced form, write the structural VAR in matrix form:

[1 b_12; b_21 1] [y_t; x_t] = [b_10; b_20] + [γ_11 γ_12; γ_21 γ_22] [y_{t−1}; x_{t−1}] + [ε_yt; ε_xt]

i.e.  B Y_t = Γ_0 + Γ_1 Y_{t−1} + ε_t

• Premultiplication by B^{−1} allows us to obtain a standard VAR(1):

Y_t = B^{−1} Γ_0 + B^{−1} Γ_1 Y_{t−1} + B^{−1} ε_t = Φ_0 + Φ_1 Y_{t−1} + a_t

• This is the reduced form to estimate (by OLS, equation by equation). Now, we have only 9 (6 mean, 3 variance) parameters.
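The structural-to-reduced-form mapping can be sketched with illustrative numbers (none taken from the lecture); note that the reduced-form error covariance is no longer diagonal:

```python
import numpy as np

# bivariate structural VAR(1): B Y_t = Gamma0 + Gamma1 Y_{t-1} + eps_t
B      = np.array([[1.0, 0.5],
                   [0.2, 1.0]])          # contemporaneous relations
Gamma0 = np.array([1.0, 0.5])
Gamma1 = np.array([[0.7, 0.1],
                   [0.3, 0.4]])
Sigma_eps = np.diag([1.0, 2.0])          # structural shocks: diagonal covariance

Binv = np.linalg.inv(B)
Phi0 = Binv @ Gamma0                     # reduced-form intercept
Phi1 = Binv @ Gamma1                     # reduced-form slope matrix
Sigma_a = Binv @ Sigma_eps @ Binv.T      # covariance of a_t = B^{-1} eps_t

# each a_t mixes both structural shocks, so Sigma_a is generally not diagonal
print(np.round(Sigma_a, 3))
assert abs(Sigma_a[0, 1]) > 0
```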

SVAR – Stability Conditions

• From the reduced form

Y_t = c + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p} + a_t

we can write

(I_n − Φ_1 L − Φ_2 L² − … − Φ_p L^p) Y_t = c + a_t,  i.e.  Π(L) Y_t = c + a_t

where Π(L) is an n×n matrix polynomial in L, with (i, j) element π_ij(L) = δ_ij − π_ij(1) L − π_ij(2) L² − …, where δ_ij = 1 if i = j and 0 otherwise.

• A VAR(p) for Y_t is stable if the p×n roots of the characteristic polynomial are outside the unit circle.
- The characteristic polynomial:

|I_n − Φ_1 z − Φ_2 z² − … − Φ_p z^p| = 0

• Then, the mean is defined through the constant: μ = [I_n − Φ_1 − … − Φ_p]^{−1} c.


• If the VAR is stable, then an MA(∞) representation exists:

Y_t = μ + a_t + Ψ_1 a_{t−1} + Ψ_2 a_{t−2} + … = μ + Ψ(L) a_t

where Ψ(L) = [I_n − Φ_1 L − Φ_2 L² − …]^{−1} and μ = Π(1)^{−1} c.

• This representation will be the "key" to studying the impulse response function (IRF) of a given shock.

Example: for the VAR(1), multiply both sides of the reduced form by (I − Φ_1 L)^{−1}. Then,

Ψ(L) = (I − Φ_1 L)^{−1}  =>  Ψ_0 = I,  Ψ_k = Φ_1^k.

Structural MA (SMA) Representation

• The SMA of Y_t is based on an infinite moving average of the structural innovations ε_t. Using a_t = B^{−1} ε_t in the Wold representation gives

Y_t = μ + Ψ(L) a_t = μ + Ψ(L) B^{−1} ε_t = μ + Θ(L) ε_t,  where Θ(L) = Ψ(L) B^{−1}

That is (for the VAR(1) case),

Θ_0 = B^{−1} ≠ I
Θ_1 = Ψ_1 B^{−1} = Φ_1 B^{−1}
…
Θ_k = Ψ_k B^{−1} = Φ_1^k B^{−1}


SVAR – VAR(p) to VAR(1)

• Re-writing the system in deviations from its mean:

(Y_t − μ) = Φ_1 (Y_{t−1} − μ) + Φ_2 (Y_{t−2} − μ) + … + Φ_p (Y_{t−p} − μ) + a_t

- Stack the vectors as ξ_t = [(Y_t − μ)′, (Y_{t−1} − μ)′, …, (Y_{t−p+1} − μ)′]′ and write the VAR(1):

ξ_t = F ξ_{t−1} + v_t,   v_t = (a_t′, 0, …, 0)′

where F is the (np)×(np) companion matrix

F = [Φ_1 Φ_2 … Φ_{p−1} Φ_p; I_n 0 … 0 0; 0 I_n … 0 0; …; 0 0 … I_n 0]

and

E[v_t v_τ′] = H if t = τ, 0 otherwise,   H = [Σ 0 … 0; 0 0 … 0; …; 0 0 … 0]

- Dimensions: ξ_t and v_t are (np)×1; F and H are (np)×(np).
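The stacking can be sketched for a VAR(2) with illustrative lag matrices; the top-left n×n block of F^k reproduces the MA weights Ψ_k:

```python
import numpy as np

# companion (VAR(1)) form of a VAR(2): xi_t = F xi_{t-1} + v_t
n = 2
Phi1 = np.array([[0.5, 0.1],
                 [0.2, 0.3]])
Phi2 = np.array([[0.2, 0.0],
                 [0.1, 0.1]])

F = np.zeros((2 * n, 2 * n))
F[:n, :n] = Phi1
F[:n, n:] = Phi2
F[n:, :n] = np.eye(n)                   # identity block shifts the state down

# stability: all eigenvalues of F inside the unit circle
assert np.max(np.abs(np.linalg.eigvals(F))) < 1

# MA weights: Psi_k is the upper-left n x n block of F^k
Psi = [np.linalg.matrix_power(F, k)[:n, :n] for k in range(4)]
# check against the recursion Psi_k = Phi1 Psi_{k-1} + Phi2 Psi_{k-2}
assert np.allclose(Psi[3], Phi1 @ Psi[2] + Phi2 @ Psi[1])
```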

SVAR – Estimation

• We assume normality for the errors. Then, we use the conditioning trick to write down the joint likelihood. That is,

f(Y_T, …, Y_{p+1} | Y_p, …, Y_1; θ) = ∏_{t=p+1}^T f(Y_t | Y_{t−1}, …, Y_{t−p}; θ)

Y_t | Y_{t−1}, …, Y_{t−p} ~ N(c + Φ_1 Y_{t−1} + … + Φ_p Y_{t−p}, Σ)

• Collect the regressors and coefficients as

X_t = [1, Y′_{t−1}, …, Y′_{t−p}]′   ((np+1)×1)
Π′ = [c, Φ_1, …, Φ_p]   (n×(np+1))

so that Y_t = Π′ X_t + a_t.

• The log-likelihood is

ℓ(θ) = −(Tn/2) log(2π) + (T/2) log|Σ^{−1}| − (1/2) ∑_t (Y_t − Π′X_t)′ Σ^{−1} (Y_t − Π′X_t)


• Under the previous assumptions, OLS = MLE. That is,

Π̂_mle = Π̂_ols = [∑_t Y_t X_t′][∑_t X_t X_t′]^{−1}

• Proof sketch: write Y_t − Π′X_t = â_t + (Π̂_ols − Π)′X_t, where â_t = Y_t − Π̂′_ols X_t are the OLS residuals. Then

∑_t (Y_t − Π′X_t)′ Σ^{−1} (Y_t − Π′X_t) = ∑_t â_t′ Σ^{−1} â_t + ∑_t [(Π̂_ols − Π)′X_t]′ Σ^{−1} [(Π̂_ols − Π)′X_t]

since the cross term vanishes by the OLS orthogonality conditions (∑_t â_t X_t′ = 0). Because Σ^{−1} is p.d., the second term is nonnegative, and the smallest value is achieved when Π = Π̂_ols.

• The MLE of the covariance matrix is Σ̂ = (1/T) ∑_t â_t â_t′.

• Then, the (reduced form) VAR can be estimated equation by equation by OLS.
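A quick numerical check that system-wide OLS and equation-by-equation OLS coincide (simulated VAR(1) with illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 300

# simulate a stationary VAR(1)
Phi = np.array([[0.5, 0.2],
                [0.1, 0.4]])
c = np.array([1.0, -0.5])
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = c + Phi @ Y[t-1] + rng.normal(size=n)

# X_t = (1, Y_{t-1}')'; Pi' = [c, Phi] is estimated system-wide by
# Pi_hat' = (sum Y_t X_t')(sum X_t X_t')^{-1}
X = np.hstack([np.ones((T - 1, 1)), Y[:-1]])    # (T-1) x (np+1)
Yt = Y[1:]
Pi_hat = (Yt.T @ X) @ np.linalg.inv(X.T @ X)    # n x (np+1)

# equation-by-equation OLS gives the same answer (same regressors everywhere)
b_eq0 = np.linalg.lstsq(X, Yt[:, 0], rcond=None)[0]
assert np.allclose(Pi_hat[0], b_eq0)
print(np.round(Pi_hat, 2))
```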


SVAR – Testing

• Testing proceeds as usual; for example, an LR test in a VAR. We need to estimate the restricted model (under H0) and the unrestricted model (under H1). The maximized log-likelihood is

ℓ(Π̂, Σ̂) = −(Tn/2) log(2π) + (T/2) log|Σ̂^{−1}| − Tn/2

since

∑_t â_t′ Σ̂^{−1} â_t = trace(Σ̂^{−1} ∑_t â_t â_t′) = trace(Σ̂^{−1} T Σ̂) = trace(T I_n) = Tn.

• Suppose we want to test the lag length:

H0: VAR(p_0)
H1: VAR(p_1),  p_1 > p_0

• First, we estimate the model under H0, with p_0 lags. The estimation consists of n OLS regressions of each variable on a constant and p_0 lags of all variables:

ℓ(Π̂_0, Σ̂_0) = −(Tn/2) log(2π) + (T/2) log|Σ̂_0^{−1}| − Tn/2

• Second, we estimate the model under H1, with p_1 lags:

ℓ(Π̂_1, Σ̂_1) = −(Tn/2) log(2π) + (T/2) log|Σ̂_1^{−1}| − Tn/2

• Construct the LR test as usual:

LR = 2(ℓ_1 − ℓ_0) = T(log|Σ̂_0| − log|Σ̂_1|) ~ χ²(m)

m = n²(p_1 − p_0) = number of restrictions

(each equation has n(p_1 − p_0) restrictions -- one per excluded lag of each variable -- in each of the n equations).
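A sketch of the LR test on simulated data (illustrative DGP; note both models are fit on the same effective sample, which guarantees LR ≥ 0):

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 2, 400

# data generated from a VAR(1), so H0: VAR(1) should survive against H1: VAR(2)
Phi = np.array([[0.5, 0.1],
                [0.2, 0.3]])
Y = np.zeros((T, n))
for t in range(1, T):
    Y[t] = Phi @ Y[t-1] + rng.normal(size=n)

def sigma_hat(Y, p, skip):
    """Residual covariance of a VAR(p) fit by OLS, dropping `skip` initial obs."""
    X = np.hstack([np.ones((T - skip, 1))] +
                  [Y[skip - i:T - i] for i in range(1, p + 1)])
    Yt = Y[skip:]
    resid = Yt - X @ np.linalg.lstsq(X, Yt, rcond=None)[0]
    return resid.T @ resid / len(Yt)

# same effective sample (drop 2 obs) under both hypotheses
S0, S1 = sigma_hat(Y, 1, 2), sigma_hat(Y, 2, 2)
Teff = T - 2
LR = Teff * (np.log(np.linalg.det(S0)) - np.log(np.linalg.det(S1)))
m = n * n * (2 - 1)                  # n^2 (p1 - p0) restrictions
print(LR, 'vs chi2 with', m, 'df')
assert LR >= 0
```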

• Construct LR, as usual:

Page 26: Lecture 17 Multivariate Time Series VAR & SVAR · RS – EC2 - Lecture 17 1 1 Lecture 17 Multivariate Time Series VAR & SVAR • A vector series consists of multiple single series

RS – EC2 - Lecture 17

26

• In general, linear hypotheses can be tested directly as usual; their asymptotic distribution follows from the next result. Let π̂_T = vec(Π̂_T) denote the (nk×1) vector (with k = 1 + np the number of parameters estimated per equation) of coefficients resulting from OLS regressions of each of the elements of Y_t on X_t for a sample of size T:

Π̂_T = [∑_{t=1}^T Y_t X_t′][∑_{t=1}^T X_t X_t′]^{−1}

• The asymptotic distribution of π̂_T is

√T (π̂_T − π) → N(0, Σ ⊗ M^{−1}),  where M = plim (1/T) ∑_t X_t X_t′

and, for the coefficients of the i-th regression,

√T (π̂_iT − π_i) → N(0, σ_i² M^{−1}).

SVAR – IC

• In the same way as in univariate AR(p) models, Information Criteria (IC) can be used to choose the p in a VAR:

AIC = ln|Σ̂| + 2(n²p + n)/T
SBC = ln|Σ̂| + (n²p + n) ln(T)/T

• Similar consistency and efficiency results to the ones obtained in the univariate world apply here.

• The main difference is that as the number of variables gets bigger, it is less likely that the AIC ends up overparameterizing --see Gonzalo and Pitarakis (2002).


SVAR – Granger Causality

• Testing for Granger causality, as usual. Assume a lag length p. Then,

x_t = c_1 + α_1 x_{t−1} + α_2 x_{t−2} + … + α_p x_{t−p} + β_1 y_{t−1} + β_2 y_{t−2} + … + β_p y_{t−p} + a_t

• Estimate the model by OLS and test the hypothesis

H0: β_1 = β_2 = … = β_p = 0 (y does not Granger-cause x)
H1: β_i ≠ 0 for at least one i

• Get the RSS for the restricted and unrestricted models. Then, calculate the F-test:

F = [(T − 2p − 1)/p] × [(RSS_R − RSS_U)/RSS_U]

SVAR – IRF

• Goal: analyze the reaction of the system to a shock:

Y_t = c + Φ_1 Y_{t−1} + Φ_2 Y_{t−2} + … + Φ_p Y_{t−p} + a_t

If the system is stable,

Y_t = μ + Ψ(L) a_t = μ + a_t + Ψ_1 a_{t−1} + Ψ_2 a_{t−2} + …,  Ψ(L) = [Π(L)]^{−1}

Redating at time t + s:

Y_{t+s} = μ + a_{t+s} + Ψ_1 a_{t+s−1} + Ψ_2 a_{t+s−2} + … + Ψ_s a_t + …

• Thus,

∂Y_{t+s}/∂a_t′ = Ψ_s  (n×n),   ∂y_{i,t+s}/∂a_{jt} = ψ_ij(s)

the reaction of the i-th variable to a unit change in innovation j (multipliers).


• Impulse-response function: the response of y_{i,t+s} to a one-time impulse in y_{jt}, with all other variables dated t or earlier held constant:

∂y_{i,t+s}/∂a_{jt} = ψ_ij(s),  plotted against s = 1, 2, 3, …

• We start at a point of equilibrium (t < 0). Then, at t = 0 we shock one variable in the system; usually the size of the shock is one standard deviation.

Example: at time t = 0, we shock the VAR(1) model below by creating a_20 = 1:

[y_1t; y_2t] = [φ_11 φ_12; φ_21 φ_22] [y_{1,t−1}; y_{2,t−1}] + [a_1t; a_2t]

t = 0:  a_10 = 0, a_20 = 1 (y_2 increases by 1 unit)
t > 0:  no more shocks occur.

• Reaction of the system:

[y_10; y_20] = [0; 1]  (impulse)
[y_11; y_21] = [φ_11 φ_12; φ_21 φ_22] [0; 1] = [φ_12; φ_22]
[y_12; y_22] = Φ_1² [0; 1]
…
[y_1s; y_2s] = Φ_1^s [0; 1]


• If you work with the MA representation:

[y_1t; y_2t] = μ + ∑_{s=0}^∞ Ψ_s [a_{1,t−s}; a_{2,t−s}],  Ψ_s = Φ_1^s

• In this example, the variance-covariance matrix of the innovations is not diagonal --i.e., σ_12 ≠ 0. There is contemporaneous correlation between shocks, so the impulse

[a_10; a_20] = [0; 1]

is not very realistic: a shock to a_2 typically comes with a nonzero a_1.

• To avoid this problem, the variance-covariance matrix has to be diagonalized (the shocks have to be orthogonal), and here is where the serious problems appear.

• Reminder: if A is a p.d. symmetric matrix, there exists a matrix Q such that Q A Q′ = I.

• Apply this to the MA representation with A = Σ:

Y_t = μ + ∑_{i=0}^∞ Ψ_i a_{t−i} = μ + ∑_{i=0}^∞ Ψ_i Q^{−1} Q a_{t−i} = μ + ∑_{i=0}^∞ M_i w_{t−i}

where M_i = Ψ_i Q^{−1} and w_t = Q a_t. Then

E[w_t w_t′] = E[Q a_t a_t′ Q′] = Q E[a_t a_t′] Q′ = Q Σ Q′ = I

so w_t has components that are all uncorrelated, with unit variance.

• Orthogonalized IRF:

∂Y_{t+s}/∂w_t′ = M_s = Ψ_s Q^{−1}

Problem: Q is not unique.
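One standard choice of Q is the inverse of the Cholesky factor of Σ; any orthogonal rotation of it works equally well, which is exactly the non-uniqueness problem. A sketch with an illustrative Σ:

```python
import numpy as np

# Sigma = P P'  =>  Q = P^{-1} satisfies Q Sigma Q' = I
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])          # illustrative innovation covariance
P = np.linalg.cholesky(Sigma)           # lower triangular
Q = np.linalg.inv(P)

assert np.allclose(Q @ Sigma @ Q.T, np.eye(2))   # orthonormal shocks w_t = Q a_t

# non-uniqueness: any rotation R (R R' = I) gives another valid Q
angle = 0.7
R = np.array([[np.cos(angle), -np.sin(angle)],
              [np.sin(angle),  np.cos(angle)]])
Q2 = R @ Q
assert np.allclose(Q2 @ Sigma @ Q2.T, np.eye(2))
```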


• Contribution of the j-th orthogonalized innovation to the MSE of the s-period ahead forecast:

MSE(Ŷ_t(s)) = E[(Y_{t+s} − Ŷ_t(s)) (Y_{t+s} − Ŷ_t(s))']

e_t(s) = Y_{t+s} − Ŷ_t(s) = a_{t+s} + Ψ_1 a_{t+s−1} + ... + Ψ_{s−1} a_{t+1}

E[e_t(s) e_t(s)'] = Σ + Ψ_1 Σ Ψ_1' + ... + Ψ_{s−1} Σ Ψ_{s−1}'

Since Q Σ Q' = I implies Σ = Q^{-1} Q^{-1}':

MSE(s) = Q^{-1}Q^{-1}' + Ψ_1 Q^{-1}Q^{-1}' Ψ_1' + ... + Ψ_{s−1} Q^{-1}Q^{-1}' Ψ_{s−1}'
       = M_0 M_0' + M_1 M_1' + ... + M_{s−1} M_{s−1}'

(recall that M_i = Ψ_i Q^{-1}, so M_0 = Q^{-1} since Ψ_0 = I)

• Contribution of the first orthogonalized innovation to the MSE: the terms involving the first column of each M_i. (Do this for a two-variables VAR model!)

SVAR – Variance Decomposition
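The suggested exercise ("Do this for a two-variables VAR model!") can be checked numerically. A sketch under assumed (hypothetical) parameter values: the contribution of innovation j is built from the j-th column of each M_i, and the two contributions must add up to MSE(s):

```python
import numpy as np

# Hypothetical bivariate VAR(1) with assumed coefficients and covariance.
A1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
P = np.linalg.cholesky(Sigma)          # M_0 = P; M_i = Psi_i P = A1^i P

s = 4
M = [np.linalg.matrix_power(A1, i) @ P for i in range(s)]

# Total s-step forecast MSE: MSE(s) = sum_i M_i M_i'
mse = sum(Mi @ Mi.T for Mi in M)

# Contribution of the j-th orthogonalized innovation: sum_i (M_i e_j)(M_i e_j)'
contrib = [sum(np.outer(Mi[:, j], Mi[:, j]) for Mi in M) for j in range(2)]

print(np.allclose(contrib[0] + contrib[1], mse))   # True: contributions add up
```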

Example: Variance decomposition in a two-variable (y, x) VAR. The s-step ahead forecast error for variable y is:

y_{t+s} − E_t y_{t+s} = M_0(1,1) ε_{y,t+s} + M_1(1,1) ε_{y,t+s−1} + ... + M_{s−1}(1,1) ε_{y,t+1}
                      + M_0(1,2) ε_{x,t+s} + M_1(1,2) ε_{x,t+s−1} + ... + M_{s−1}(1,2) ε_{x,t+1}

SVAR – Variance Decomposition

- Denote the variance of the s-step ahead forecast error of y_{t+s} as σ_y(s)²:

σ_y(s)² = σ_y² [M_0(1,1)² + M_1(1,1)² + ... + M_{s−1}(1,1)²]
        + σ_x² [M_0(1,2)² + M_1(1,2)² + ... + M_{s−1}(1,2)²]

- The forecast error variance decompositions are proportions of σ_y(s)².


SVAR – Relative Importance of Variables

- The forecast error variance decompositions are proportions of σ_y(s)²:

Proportion due to shocks to y: σ_y² [M_0(1,1)² + M_1(1,1)² + ... + M_{s−1}(1,1)²] / σ_y(s)²

Proportion due to shocks to x: σ_x² [M_0(1,2)² + M_1(1,2)² + ... + M_{s−1}(1,2)²] / σ_y(s)²

Note: We can use these proportions to measure the relative importance of a variable in the system.

Q: Which market's shocks are more relevant to the system? Are stock or options markets more important?
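These proportions can be computed directly from the M_i matrices. A sketch with assumed (hypothetical) parameter values; since the orthogonalized shocks have unit variance here, the σ² weights are already absorbed into the columns of M_i:

```python
import numpy as np

# Hypothetical bivariate VAR(1); variable y is row 0, x is row 1.
A1 = np.array([[0.5, 0.2],
               [0.1, 0.4]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])
P = np.linalg.cholesky(Sigma)

s = 8
M = [np.linalg.matrix_power(A1, i) @ P for i in range(s)]

# s-step forecast error variance of y, split by orthogonalized shock:
# sigma_y(s)^2 = sum_i M_i(1,1)^2 + sum_i M_i(1,2)^2 (unit-variance w shocks)
var_due_y = sum(Mi[0, 0] ** 2 for Mi in M)
var_due_x = sum(Mi[0, 1] ** 2 for Mi in M)
total = var_due_y + var_due_x

share_y = var_due_y / total    # proportion of sigma_y(s)^2 due to y shocks
share_x = var_due_x / total    # proportion due to x shocks

print(round(share_y + share_x, 10))   # 1.0: the proportions sum to one
```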

• Consider the bivariate SVAR(1):

y_t = b10 − b12 x_t + γ11 y_{t−1} + γ12 x_{t−1} + ε_yt
x_t = b20 − b21 y_t + γ21 y_{t−1} + γ22 x_{t−1} + ε_xt

• We started with a SVAR model and transformed it into the reduced form, or standard VAR, for estimation purposes.

• Q: Is it possible to recover the parameters of the SVAR from the estimated parameters of the standard VAR? No!!

• There are 10 parameters in the bivariate SVAR(1) and only 9 estimated parameters in the standard VAR(1). Thus, the SVAR is underidentified. We need restrictions (exclusion, linear restrictions, etc.).

• If one parameter in the bivariate SVAR above is restricted, say excluded (order condition met!), the SVAR is exactly identified.

• Sims (1980) suggests a recursive system to identify the model, letting b21 = 0.

SVAR – Identification in VAR(1)


• With b21 = 0, the SVAR becomes

y_t = b10 − b12 x_t + γ11 y_{t−1} + γ12 x_{t−1} + ε_yt
x_t = b20 + γ21 y_{t−1} + γ22 x_{t−1} + ε_xt

and the standard (reduced-form) VAR is

y_t = a10 + a11 y_{t−1} + a12 x_{t−1} + e_1t
x_t = a20 + a21 y_{t−1} + a22 x_{t−1} + e_2t

• The recursive model imposes the restriction that yt does not have a contemporaneous effect on xt.

• Now, the parameters of the SVAR are identified: we have 9 equations. b21 = 0 implies

a10 = b10 − b12 b20      var(e_1) = σ_y² + b12² σ_x²
a11 = γ11 − b12 γ21      var(e_2) = σ_x²
a12 = γ12 − b12 γ22      cov(e_1, e_2) = −b12 σ_x²
a20 = b20      a21 = γ21      a22 = γ22

SVAR – Identification in VAR(1)
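The nine identifying equations can be verified in code. A sketch that starts from assumed (hypothetical) structural parameter values, builds the implied reduced-form parameters, and then recovers the structural ones, confirming exact identification under b21 = 0:

```python
import numpy as np

# Assumed "true" structural parameters of the recursive SVAR (b21 = 0).
b10, b12, g11, g12 = 0.3, 0.4, 0.5, 0.2
b20, g21, g22 = 0.1, 0.3, 0.6
sig_y2, sig_x2 = 1.0, 0.5          # var(eps_y), var(eps_x)

# Implied reduced-form (standard VAR) parameters from the 9 equations.
a10 = b10 - b12 * b20
a11 = g11 - b12 * g21
a12 = g12 - b12 * g22
a20, a21, a22 = b20, g21, g22
var_e1 = sig_y2 + b12 ** 2 * sig_x2
var_e2 = sig_x2
cov_e12 = -b12 * sig_x2

# Recover the structural parameters from the reduced form.
b12_hat = -cov_e12 / var_e2
sig_x2_hat = var_e2
sig_y2_hat = var_e1 - b12_hat ** 2 * var_e2
b10_hat = a10 + b12_hat * a20
g11_hat = a11 + b12_hat * a21
g12_hat = a12 + b12_hat * a22

print(np.allclose([b12_hat, sig_y2_hat, b10_hat, g11_hat, g12_hat],
                  [b12, sig_y2, b10, g11, g12]))   # True: exactly identified
```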

• Note both structural shocks can now be identified from the residuals of the standard VAR.

• b21=0 implies y does not have a contemporaneous effect on x.

• This restriction manifests itself in the contemporaneous effects: both ε_yt and ε_xt affect y_t contemporaneously, but only ε_xt affects x_t contemporaneously.

• The residuals e_2t are due to pure shocks to x.

• The SVAR representation based on a recursive causal ordering may be computed using the Cholesky factorization of Σ.

• Other restrictions could have been used. For example, b12 + b21 = 1.

• There are other methods used to identify models, like the Blanchard and Quah (1989) decomposition.

SVAR – Identification in VAR(1)


• A VAR model can be a good forecasting model, but in a sense it is an atheoretical model (as all reduced-form models are).

• To calculate the IRF, the order matters: Q is not unique. The Cholesky decomposition (Q is lower triangular) imposes an order in the recursive causal structure of the VAR. It is not a trivial issue.

• Results are sensitive to the lag selection.

• Dimensionality problem: the number of parameters grows quickly with the number of variables and lags.

SVAR – Criticisms

• Ordering of the variables in Yt determines the recursive causal structure of the structural VAR.

• This identification assumption is not testable.

• Sensitivity analysis is performed to determine how the structural analysis based on the IRFs and FEVDs (forecast error variance decompositions) is influenced by the assumed causal ordering.

• This sensitivity analysis is based on estimating the structural VAR for different orderings of the variables.

• If the IRFs and FEVDs change considerably for different orderings of the variables in Yt, then it is clear that the assumed recursive causal structure heavily influences the structural inference.

SVAR – Sensitivity Analysis
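One way to see the ordering dependence directly: factor the same Σ under both orderings of a bivariate system and compare the implied impact matrices. A numpy sketch with an assumed Σ:

```python
import numpy as np

# Assumed innovation covariance with nonzero contemporaneous correlation.
Sigma = np.array([[1.0, 0.8],
                  [0.8, 2.0]])

# Ordering (y, x): Cholesky factor of Sigma as given.
P_yx = np.linalg.cholesky(Sigma)

# Ordering (x, y): permute variables, factor, permute back.
perm = np.array([[0, 1],
                 [1, 0]])
P_xy = perm @ np.linalg.cholesky(perm @ Sigma @ perm) @ perm

# Both factorizations reproduce Sigma exactly...
print(np.allclose(P_yx @ P_yx.T, Sigma))   # True
print(np.allclose(P_xy @ P_xy.T, Sigma))   # True

# ...but the implied impact effects differ whenever sigma_12 != 0,
# so IRFs and FEVDs change with the assumed causal ordering.
print(np.allclose(P_yx, P_xy))             # False
```

If the correlation in Σ were zero, the two factors would coincide and the ordering would not matter; the sensitivity analysis in the slides amounts to repeating the structural analysis across such orderings.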