Source: eprints.qut.edu.au/53138/1/Christopher_Coleman-Fenn_Thesis.pdf

Forecasting Volatility and Correlation: The Role of

Option Implied Measures

Christopher Andrew Coleman-Fenn

B. Bus. Hons. (Banking and Finance)

Grad. Dip. Sci. (Mathematics)

Supervisor: Professor Adam Clements

March 6, 2012

Submitted as partial requirement

for the degree of Doctor of Philosophy

Queensland University of Technology

School of Economics and Finance

Brisbane, Australia


Keywords

• Volatility risk premium

• Implied volatility

• Implied correlation

• Model confidence set

• Intraday volatility

• Equicorrelation

• Realised equicorrelation


Abstract

Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. Firstly, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded value of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectation of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. Firstly, it is demonstrated that adjusting implied volatility for the volatility risk premium yields statistically significant benefits for univariate volatility forecasting. Secondly, high-frequency option implied measures are shown to produce superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option implied measures of equicorrelation are shown to dominate measures based on daily returns.


Contents

1 Introduction 4

1.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.2 Key Research Questions . . . . . . . . . . . . . . . . . . . . . . . 6

1.3 Thesis Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

1.4 The Contributions of this Thesis . . . . . . . . . . . . . . . . . . 11

2 Literature Review 13

2.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 Defining and Measuring Volatility . . . . . . . . . . . . . . . . . . 14

2.2.1 Realised Volatility . . . . . . . . . . . . . . . . . . . . . . 16

2.2.2 Stylised Facts of Volatility . . . . . . . . . . . . . . . . . . 28

2.3 The Value of an Option . . . . . . . . . . . . . . . . . . . . . . . 35

2.3.1 Black-Scholes-Merton Model . . . . . . . . . . . . . . . . 38

2.3.2 Black-Scholes-Merton Implied Volatility . . . . . . . . . . 42

2.3.3 Challenges to the Black-Scholes-Merton Model . . . . . . 44

2.4 The Volatility Index . . . . . . . . . . . . . . . . . . . . . . . . . 50

2.5 Forecast Performance of Implied Volatility . . . . . . . . . . . . . 55

2.6 Univariate Time-series Forecasts . . . . . . . . . . . . . . . . . . 65


2.6.1 GARCH class conditional volatility models . . . . . . . . 66

2.6.2 Stochastic volatility models . . . . . . . . . . . . . . . . . 74

2.6.3 Models for forecasting Realised Volatility . . . . . . . . . 78

2.6.4 Hybrid Models . . . . . . . . . . . . . . . . . . . . . . . . 81

2.7 Multivariate Time-Series Forecasts . . . . . . . . . . . . . . . . . 84

2.7.1 Multivariate GARCH models . . . . . . . . . . . . . . . . 85

2.7.2 Multivariate Stochastic Volatility Models . . . . . . . . . 92

2.7.3 Multivariate Realised Volatility Models . . . . . . . . . . 93

2.8 Implied Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . 97

2.9 Comparing Forecast Performance . . . . . . . . . . . . . . . . . . 100

2.9.1 Regression based measures . . . . . . . . . . . . . . . . . 100

2.9.2 Statistical loss functions . . . . . . . . . . . . . . . . . . . 101

2.9.3 Distinguishing relative forecast performance . . . . . . . . 104

3 Implied Volatility and the Volatility Risk Premium 111

3.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

3.2 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

3.3 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

3.3.1 Model based forecasts . . . . . . . . . . . . . . . . . . . . 117

3.3.2 A risk-adjusted VIX forecast . . . . . . . . . . . . . . . . 119

3.3.3 Estimation of the Volatility Risk-Premium . . . . . . . . 121

3.3.4 Evaluating forecasts . . . . . . . . . . . . . . . . . . . . . 123

3.4 Empirical results . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

3.4.1 22-day-ahead forecasts . . . . . . . . . . . . . . . . . . . . 127


3.4.2 5-day-ahead forecasts . . . . . . . . . . . . . . . . . . . . 130

3.4.3 1-day-ahead forecasts . . . . . . . . . . . . . . . . . . . . 132

3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

4 Forecasting Intraday Volatility: The Role of VIX Futures 136

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

4.2 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

4.2.1 An intraday volatility framework . . . . . . . . . . . . . . 146

4.2.2 A semi-parametric framework . . . . . . . . . . . . . . . . 155

4.3 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

4.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169

4.4.1 In-Sample Results . . . . . . . . . . . . . . . . . . . . . . 169

4.4.2 Out-of-Sample Results . . . . . . . . . . . . . . . . . . . . 174

4.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

5 Forecasting Equicorrelation 182

5.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

5.2 General Framework and Models Considered . . . . . . . . . . . . 190

5.2.1 The Linear Dynamic Equicorrelation Model . . . . . . . . 193

5.2.2 Incorporating Implied Equicorrelation . . . . . . . . . . . 201

5.3 Forecast Evaluation . . . . . . . . . . . . . . . . . . . . . . . . . 205

5.3.1 Generating Forecasts . . . . . . . . . . . . . . . . . . . . . 206

5.3.2 Statistical Evaluation of Forecasts . . . . . . . . . . . . . 208

5.4 Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209

5.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 212


5.5.1 In-Sample Estimation Results . . . . . . . . . . . . . . . . 213

5.5.2 Out-of-sample Forecast Results . . . . . . . . . . . . . . . 221

5.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 223

6 Conclusion 227

A Appendix 232

A.1 Decomposition of Expectation of Squared Realised Volatility . . 232

A.2 Statistical Loss Functions and their Derivatives . . . . . . . . . . 233

A.3 Model Confidence Set results for remaining equicorrelation measures, loss functions, and test statistics . . . . . . . . . . . . . . . 234


List of Tables

3.1 MCS results for 22-day-ahead daily volatility forecasts . . . . . . 129

3.2 MCS results for 5-day-ahead daily volatility forecasts . . . . . . . 131

3.3 MCS results for 1-day-ahead daily volatility forecasts . . . . . . . 133

4.1 In-sample estimation results for intraday volatility models . . . . 173

4.2 MCS results for forecasts of q_{t,i} . . . . . . . . . . . . . . . . . . 176

4.3 MCS results for forecasts of r^2_{t,i} . . . . . . . . . . . . . . . . . 179

5.1 X_t measures descriptive statistics . . . . . . . . . . . . . . . . . . 212

5.2 In-sample estimation results of equicorrelation models . . . . . . 215

5.3 Vuong statistics for X_t measures . . . . . . . . . . . . . . . . . . 216

5.4 Vuong statistics for restricted against unrestricted models . . . . 218

5.5 MCS results for ρ_t forecasts; MSE, TR, X_t = REC . . . . . . . . 226

A.1 Statistical loss functions and their derivatives . . . . . . . . . . . 233

A.2 MCS results for ρ_t forecasts; MSE, TSq, X_t = REC . . . . . . . . 235

A.3 MCS results for ρ_t forecasts; QLIKE, TR, X_t = REC . . . . . . . 236

A.4 MCS results for ρ_t forecasts; QLIKE, TSq, X_t = REC . . . . . . . 237

A.5 MCS results for ρ_t forecasts; MSE, TR, X_t = DREC . . . . . . . 238


A.6 MCS results for ρ_t forecasts; MSE, TSq, X_t = DREC . . . . . . . 239

A.7 MCS results for ρ_t forecasts; QLIKE, TR, X_t = DREC . . . . . . 240

A.8 MCS results for ρ_t forecasts; QLIKE, TSq, X_t = DREC . . . . . . 241

A.9 MCS results for ρ_t forecasts; MSE, TR, X_t = SREC . . . . . . . . 242

A.10 MCS results for ρ_t forecasts; MSE, TSq, X_t = SREC . . . . . . . 243

A.11 MCS results for ρ_t forecasts; QLIKE, TR, X_t = SREC . . . . . . 244

A.12 MCS results for ρ_t forecasts; QLIKE, TSq, X_t = SREC . . . . . . 245

A.13 MCS results for ρ_t forecasts; MSE, TR, X_t = u_t . . . . . . . . . 246

A.14 MCS results for ρ_t forecasts; MSE, TSq, X_t = u_t . . . . . . . . 247

A.15 MCS results for ρ_t forecasts; QLIKE, TR, X_t = u_t . . . . . . . . 248

A.16 MCS results for ρ_t forecasts; QLIKE, TSq, X_t = u_t . . . . . . . 249


List of Figures

2.1 Sample volatility of the S&P 500 Index . . . . . . . . . . . . . . 29

2.2 Sample autocorrelation of squared daily returns . . . . . . . . . . 31

2.3 Sample autocorrelation of realised volatility . . . . . . . . . . . . 32

2.4 Volatility asymmetry . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.5 S&P 500 Index implied volatility smirk . . . . . . . . . . . . . . . 45

3.1 Sample Volatility Index and Realised Volatility levels . . . . . . . 116

3.2 Unadjusted VIX and target realised volatility . . . . . . . . . . . 125

3.3 Risk-adjusted VIX and target realised volatility . . . . . . . . . . 126

4.1 Autocorrelation of S&P 500 Index 5-minute log-returns . . . . . 162

4.2 Autocorrelation of S&P 500 Index 5-minute squared log-returns . 163

4.3 S&P 500 Index intraday periodicity . . . . . . . . . . . . . . . . . 164

4.4 S&P 500 Index intraday periodicity excluding opening period . . 165

4.5 Intraday periods’ mean level of the VIX . . . . . . . . . . . . . . 166

4.6 Trading volume in VIX futures . . . . . . . . . . . . . . . . . . . 167

4.7 Order flow in VIX futures . . . . . . . . . . . . . . . . . . . . . . 168

4.8 Target q_{t,i} . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175


5.1 Sample levels of equicorrelation measures . . . . . . . . . . . . . 210

5.2 In-sample fitted equicorrelations . . . . . . . . . . . . . . . . . . 220

22 March 2012

Acknowledgements

Completing this dissertation has been similar to a road trip across the desert to a fabled oasis of wonder and enlightenment. There have been detours, road blocks, pot holes, almost interminably slow periods of non-progress, breakdowns, and fits of rage along the way; one may even question whether I should ever have been given the keys in the first place. However, with the destination within sight and the journey nearly complete, it must be said that it has all been worth it. Along the way, there have been many important people acting as guides whose help has been invaluable.

Firstly, thanks go to my parents for funding my adventure! Your moral support and unerring faith have kept my spirits up when I otherwise felt doomed to fail. None of this would have been possible without you and I will forever be appreciative. To my sister, Kate, thank you as well for listening to my rants and talking sense into me when I needed it most. Many successes lie ahead of you and I will be there to ride your coat tails; and if you ever need my help, I will be there for you as you have unerringly been there for me.

I would never have begun this undertaking had it not been for Peter Whelan showing faith in my abilities and encouraging me to complete my Honours year. During that year I had the pleasure of sharing a windowless room with Andrew Blackman, James Bond(io), Katrina Brooks, Brooke Cowper, and Stephen Thiele¹. All of you were fantastic fun as well as being great sources of knowledge as I struggled through the year, and you all continue to be dear friends. I also learned a great deal in this year from Robert Bianci, Michael Drew, and Evan Reedman.

¹ Hey Steve, since I came up with the road trip analogy, I’ve been referring to the thesis as the Thundercougarfalconbird, thought you’d like that!


After finishing Honours, I took an eighteen-month-long detour to UQ. This diversion was made all the more enjoyable and productive by Elliot Dovers; without his help, PDEs would have been impenetrable and as dull as rugby league without SBW. You remain a true friend and I wish you all the luck in your chosen career, whether that ends up being as a statistician or a surfing instructor.

In my time completing this thesis, the support (and patience!) of my supervisor Adam Clements has been invaluable. I must thank him for all of the time and effort he has put in over the last few years to help me learn and develop the academic skills that I will rely on throughout my career. I know I should have done more, and done it earlier, but thank you for keeping the faith that it would get done eventually! You have taught me a great deal, as well as simply being a good mate; thank you.

Special thanks must also go to Professors Stan Hurn and Daniel Smith. Your insights and probing questions always left me slightly dazed as I queried how much I really knew of my topic. However, it was all for the better, as I learned to think and analyse from new perspectives. It was an honour to learn from you, even if it was damaging for the ego.

My fellow students have also been a great source of wisdom in my time completing this journey. In particular, I have spent many hours learning from Mark Doolan, Doureige Jurdi, Andrew McClelland, and Ayesha Scott. Ayesha deserves special thanks for having the patience to share an office with me for such an extended period of time.

Of course, all of the other academic and professional staff of the School of Economics and Finance have provided fantastic help and support over the years, as well as many fun times. The School would not be the same without the institution that is John Polichronis, whose wisdom is highly valued. Joanne Fuller has also been a delight to work with, and I was glad to find someone else as into Stargate as me! Special mentions must go to Stefan Trueck and Nedda Cecchinato for some fun nights out. Finally, I have received a great deal of inspiration from two external academics, Professor Hubert J. Farnsworth and Dr. Ogden Wernstrom; thanks go to them for their exciting new approaches to existing ideas and for being a great source of entertainment.

Aside from academic guidance, there are many people along the way who have given me support. I cannot name them all, nor can I give due credit for their patience in hearing me vent my frustrations regarding university. In their own ways, each has contributed to me finally making it to the end of this journey; I owe them all a great deal, even if some of us have since gone our separate ways. These are all true friends with whom I have shared many valued memories and outstanding times: Leon Andrews, Enzo Anselmo, Cam Ayre, Peter Brauer, Jackie Campbell, Shannon Cowper, Theodora Dupont-Courtade, Chris Gracey, Ayla Jade Davis, Martin Fitzgerald, Katerina Gagliostro, Nick Gilchrist, Zara Gray, Claire Greenwell, Kirsty Knox, Pauline Lez, Heather Loveband, Sam MacAulay, Reece Mclean, Daniel Moffet, Emily O’Neill, Nathan Robertson, Mark Sherwood, Ben Smith, Tiana Speter, Blair Studley, Andrew Swain, Geoffery Tolo, Tim Trudgian, and Darren Wells.

I also would like to acknowledge the funding received from the Australian Government through the Australian Postgraduate Award, and the School of Economics and Finance at the Queensland University of Technology for their Top-Up scholarship and for funding several trips to valued conferences and workshops, both domestically and internationally.


Chapter 1

Introduction

1.1 Overview

Many financial decisions of economic agents are influenced by the level of risk involved in each of their potential investments; the typical investor demands a higher level of return if they are to take on a higher level of risk. Further, the interrelationships between investment options, including how their risks co-vary, will also affect their investment decisions. For these reasons, an accurate understanding of risk is central to an understanding of finance. Consequently, the ability to generate reliable predictions of the level of risk of individual assets, and of the relationships between assets, leads to more informed choices in financial applications and a more efficient allocation of scarce financial resources.

It is common in the financial economics and econometrics literature to quantify the level of risk of an asset by its variance or volatility; doing so allows financial economists to draw upon a broad range of results in the statistics literature. Applying these results to financial data helps to understand, model, and forecast the level of volatility of financial assets for use in important practical applications, such as optimal portfolio allocation decisions. Over the last few decades, increasingly sophisticated time-series models have been proposed to capture the salient features of volatility that have been revealed through such statistical analysis. These models generate forecasts by applying a weighting scheme to historical measurements of volatility; prior observations are statistically evaluated for their relevance to future volatility over the horizon of interest.
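The weighting-scheme idea can be illustrated with a minimal sketch. The exponentially weighted moving average below is one of the simplest such schemes, not a model used in this thesis; the decay factor `lam` is an illustrative assumption:

```python
# Exponentially weighted moving average (EWMA) variance forecast:
# a simple example of a statistical weighting scheme applied to
# historical squared returns. lam is an illustrative decay factor.
def ewma_variance_forecast(returns, lam=0.94):
    """One-step-ahead variance forecast from a sequence of past returns."""
    var = returns[0] ** 2  # initialise with the first squared observation
    for r in returns[1:]:
        # Recent squared returns receive geometrically larger weights.
        var = lam * var + (1.0 - lam) * r ** 2
    return var

forecast = ewma_variance_forecast([0.01, -0.02, 0.015, -0.01])
```

More elaborate schemes (GARCH-type models, discussed in Chapter 2) estimate the weights from the data rather than fixing them in advance.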

While these time-series volatility models have been impressive in their complexity and continued improvement in forecast performance, the development of trading in financial contracts known as options has led to an alternative approach to generating predictions of future volatility. This alternative approach is based on filtering volatility-related information from the market-traded value of option contracts.

The value of an option depends on the likelihood of a future event occurring, and the market values of a series of options priced across a spectrum of possible events may be used to calculate an expectation of the volatility of the relevant asset. This expectation of volatility is then a competing forecast to the expectation generated by the time-series model approach, or the two approaches may be combined.
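As a concrete illustration of filtering volatility information from an option price, the sketch below backs out a Black-Scholes implied volatility from a European call price by bisection. All the inputs are hypothetical, and this single-option inversion is only illustrative; the model-free Volatility Index methodology used later in the thesis aggregates a whole spectrum of strikes:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, T, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-4, hi=5.0, tol=1e-8):
    """Invert the pricing formula by bisection: the call price is
    monotonically increasing in sigma, so bisection converges."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, r, T, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Hypothetical market data: an at-the-money call, three months to expiry.
iv = implied_vol(price=4.0, S=100.0, K=100.0, r=0.01, T=0.25)
```

The recovered `iv` is the market's risk-neutral volatility expectation embedded in that one option price.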

If options markets are efficient in determining the fair value of these financial assets, market participants should incorporate not only the historical information on which time-series models are estimated but also their own predictions of future events. That is, the information set used in pricing options should be larger than that on which time-series models are based. Efficient use of this larger information set would imply that options-based forecasts may be superior to those from time-series models that utilise historical information alone.


The relative merits of these two broad categories of forecasting have been studied in great detail over the last three decades. However, despite all of this progress, there are still some interesting and practical avenues for research that invite exploration. This thesis provides new insights into three empirical problems in the forecasting of volatility and correlation, focusing on the role of option implied measures. Broadly, the problems analysed are (i) how the premium demanded for accepting additional volatility risk influences options market based volatility forecasts, (ii) whether intraday volatility levels may be predicted by movements in intraday options trading, and (iii) a multivariate case of forecasting the mean level of interrelatedness (mean correlation) of a large stock market index.

The remainder of this Chapter completes the introduction to the thesis. Section 1.2 outlines the key research questions that will be examined. Section 1.3 outlines the structure of the thesis, including the literature review, research chapters, and conclusion. Finally, Section 1.4 highlights the key contributions of this thesis.

1.2 Key Research Questions

The overarching theme of this dissertation is an examination of whether information from derivatives markets allows for improved forecast performance for the volatility of, and correlation between, various financial assets. This problem is addressed in three empirical exercises where novel methodologies for generating forecasts are tested against existing benchmarks in the literature. The specific key research questions that are investigated here, and that inform the general question of the value of derivatives markets’ information for the purposes of forecasting volatility and correlation, are now detailed.


Does taking into account the volatility risk premium improve the forecast performance of implied volatility?

There are myriad investigations into the relative forecast performance of time-series models of volatility and the implied volatility from options markets, including whether a combination of their information sets may yield superior forecasts to either used independently. However, it is well known that implied volatility is a risk-neutral forecast of volatility, while the object of interest is typically the volatility under the physical measure. This discrepancy between the measure under which the forecasts are generated and that under which the target is evaluated may be removed if one accounts for the volatility risk premium. Hence, the first question investigated in this dissertation is whether adjusting the risk-neutral implied volatility for the volatility risk premium improves forecast performance over, firstly, the unadjusted implied volatility and, secondly, existing time-series models of volatility.
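The adjustment at issue can be sketched in stylised form: if implied variance embeds a premium over the physical expectation, an estimate of that premium can be stripped out before the implied measure is used as a forecast. The additive form and the sample-average estimator below are illustrative assumptions only, not the estimator developed in Chapter 3:

```python
def estimate_vrp(implied_vars, realised_vars):
    """Estimate the volatility risk premium as the average gap between
    implied variance and subsequently realised variance (illustrative)."""
    gaps = [iv - rv for iv, rv in zip(implied_vars, realised_vars)]
    return sum(gaps) / len(gaps)

def risk_adjusted_forecast(implied_var, vrp):
    """Strip the estimated premium from today's implied variance to
    approximate a forecast under the physical measure."""
    return max(implied_var - vrp, 0.0)

# Hypothetical history: implied variance typically exceeds realised variance.
vrp = estimate_vrp([0.040, 0.050, 0.045], [0.030, 0.042, 0.038])
forecast = risk_adjusted_forecast(0.048, vrp)
```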

Do intraday movements in a publicly available volatility index, or the trading activity in futures on that index, predict intraday volatility?

In addressing the previous research question, it is noted that numerous models exist for forecasting the level of volatility over daily, weekly, or monthly horizons. However, few alternatives exist for forecasting volatility at an intraday level. This is surprising, as accurate forecasts of volatility over such short time horizons may be of use to derivatives traders and portfolio managers in managing time-varying hedge ratios, and are relevant to the increasing level of algorithmic trading. It is then worth investigating whether an alternative and novel semi-parametric framework that has had success at forecasting volatility over the daily horizon may also offer improvement at the intraday level.


Further, prior research has demonstrated that movements in both the options and futures markets lead spot market prices, and are related to subsequent volatility. However, to the best of the author’s knowledge, these intraday movements have not been used directly to forecast volatility at high frequencies. Hence, the second question of this thesis addresses whether high-frequency movements in derivatives market variables lead intraday volatility in the spot market: specifically, whether movements in the intraday level of the Volatility Index, or shocks to the trading activity of futures written on the Volatility Index, lead the spot market volatility of the S&P 500 Index.

Does the use of implied or realised measures lead to superior forecast performance of mean correlation?

Again, while there are many studies into the merits of utilising implied volatility in a univariate framework, there is a dearth of research into the use of option implied information in a multivariate setting. Some recent time-series models of conditional covariance matrices (which are of use in, for example, portfolio allocation problems) make the simplifying assumption that the correlation between all assets is equal. This assumption may also be applied in the options market to calculate a measure of implied mean correlation. Similar to the case of including implied volatility in univariate volatility models, this implied mean correlation may be used in conditional covariance models as an alternative to traditional time-series models that use historical returns alone. Further, as realised volatility is now typically preferred to daily squared returns as a proxy measure of univariate volatility, the realised covariance or correlation between assets based on intraday data may be preferred to daily measures of correlation as a proxy for the interrelationship between financial assets. Whether these two alternatives, implied and realised mean correlation, offer improved forecast performance over existing time-series models is investigated.
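The equicorrelation assumption can be made concrete with a small sketch: given a correlation matrix for n assets, the equicorrelation is simply the average of the off-diagonal elements. The matrix below is hypothetical; how the realised and implied variants of this quantity are constructed is detailed in Chapter 5:

```python
def equicorrelation(corr):
    """Average off-diagonal element of an n x n correlation matrix."""
    n = len(corr)
    off_diag = [corr[i][j] for i in range(n) for j in range(n) if i != j]
    return sum(off_diag) / len(off_diag)

# Hypothetical 3-asset correlation matrix.
corr = [
    [1.0, 0.30, 0.50],
    [0.30, 1.0, 0.40],
    [0.50, 0.40, 1.0],
]
rho = equicorrelation(corr)  # mean pairwise correlation
```

Replacing every pairwise correlation with this single number is what makes the equicorrelated covariance models tractable for large portfolios.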


1.3 Thesis Structure

How the research questions just outlined will be addressed is now discussed. To begin, some of the prior literature that informs the current work is reviewed in Chapter 2, which contains four main components. Firstly, Section 2.2 defines the concept of volatility and how it may be measured. Secondly, as the main theme of this thesis is the role of options based information in generating forecasts, Section 2.3 discusses how option prices may be distilled to form predictions of volatility and correlation, with previous research into this approach’s forecast performance also presented. Thirdly, the competing framework of statistical time-series models of volatility and correlation is documented in Section 2.6 through a broad range of the more popular specifications in this field; how the information sets of the two competing forecasting methodologies may be combined is also discussed. Finally, how the relative statistical performance of the out-of-sample forecasts from the two competing approaches may be evaluated is detailed in Section 2.9.

Chapter 3 conducts an examination of the potential forecasting benefits of incorporating the volatility risk premium into the implied volatility from options markets. There exist many prior studies that compare the relative forecast performance of univariate time-series models of volatility with option implied volatility. However, the implied volatility is a risk-neutral forecast while the target is the volatility under the physical measure; no prior studies were found that accounted for this discrepancy in measures between forecast and evaluation. To fill this gap in the literature, this Chapter presents a methodology for estimating the volatility risk premium, which may then be used to transform the risk-neutral implied volatility into its risk-adjusted equivalent. This risk-adjusted forecast from the options market may then be compared to the risk-neutral forecast as well as several time-series models of volatility. It is found that this risk-adjusted implied volatility cannot be separated from the best time-series models unconditionally, and dominates in low-volatility periods.

Chapter 4 assesses the relevance of high-frequency measures from derivatives

markets in forecasting intraday volatility. While there exist many models of

volatility for the daily horizon or longer, far fewer models are available for

forecasting volatility over smaller time intervals such as 5-minutes. This Chap-

ter analyses a recently proposed framework specifically designed to capture

the dynamics of intraday volatility. Further, a novel semi-parametric frame-

work of volatility forecasting is investigated as an alternative method of intra-

day volatility forecasting. In both of these frameworks, the addition of high-

frequency measures of implied volatility and trading activity on this level of

implied volatility is examined for its usefulness in forecasting volatility over

small time horizons. This Chapter finds that while the inclusion of these measures leads to forecasts that rank above the standard specification, the improvement is not statistically significant for forecasting high-frequency returns.

Chapter 5 is motivated by recent research that demonstrates the need for accurate forecasts of the mean level of correlation of a portfolio; such forecasts are of use in portfolio allocation decisions and in predicting market returns. Chapter

5 contributes to this literature in two ways. Firstly, an existing specification for

modelling the evolution of this variable is adapted to include the level of mean

correlation implied by options markets. Secondly, it is investigated whether

realised measures of the mean correlation are superior to an existing measure

based on daily returns. This is analogous to the use of implied and realised

volatility in the univariate context. Both of the proposed amendments lead to

superior in-sample fit relative to the existing model. Further, the use of realised

measures dominate in out-of-sample forecasting.


Finally, Chapter 6 concludes by summarising the main results of this thesis in the context of the key research questions outlined above; some potential directions for future research are also identified.

1.4 The Contributions of this Thesis

By addressing the key research questions previously outlined, this thesis con-

tributes to the volatility and correlation forecasting literature in the following

ways.

Prior studies into the forecast performance of implied volatility have some-

times been critical of the efficiency of the options market for being unable to

generate forecasts of equal or superior predictive accuracy relative to time-series

univariate models. However, these prior studies have neglected to account for

the volatility risk premium, which results in the implied volatility generally

producing upwardly biased forecasts. This thesis shows that accounting for an

unconditional level of volatility risk premium leads to a risk-adjusted implied

volatility forecast that dominates time-series models in low volatility periods

and cannot be separated from the top time-series models unconditionally. This

result is then an argument for the efficiency of options markets, contrary to

some of the prior literature in this field.

In addressing the second key research question, some important results regard-

ing intraday volatility forecasting emerge. The majority of models designed

to capture the intraday periodicity of volatility do not update their forecasts

within the trading day; two alternative frameworks that are able to update

their forecasts within the trading day are analysed. Further, derivatives market

information measures of future volatility are incorporated into these two frame-

works to analyse their potential forecasting benefit over historical information


alone. After decomposing intraday returns into their daily, diurnal, and intra-

day stochastic components, it is shown that accurate forecasts of the intraday

stochastic component lead to superior forecasts of total volatility. That is, those

models that do not update their forecasts within the trading day are inferior to

those that do. It is also found that the alternative semi-parametric framework

of volatility forecasting that is successful at the daily horizon cannot repeat

this success for intraday periods; the traditional time-series model specification

is preferred. Finally, the derivatives market based information measures lead

to superior in-sample fit and their forecasts outrank those models that use his-

torical information alone. Further, the improvement in out-of-sample forecast

performance is statistically significant. That is, the derivatives market mea-

sures are of significant additional value for the purposes of forecasting intraday

volatility.

The final contributions of this thesis relate to forecasting the conditional correlation matrix. An existing framework is adapted to include a

measure of implied mean correlation, and three alternative measures of histori-

cal mean correlation are proposed to replace a measure based on daily returns.

This is analogous to the use of implied and realised volatility in the univariate

volatility literature. All of the proposed adaptations lead to superior in-sample

fit, with the main result being that the use of implied mean correlation com-

pletely subsumes the information content of historical measures. This result

may be used to argue in favour of the efficiency of options markets. Further,

the implied mean correlation dominates for out-of-sample forecasts relative to

the daily returns based measure. However, when utilising the realised measures

of mean correlation, the implied level of mean correlation was not of incremental

value in out-of-sample forecasting.


Chapter 2

Literature Review

2.1 Introduction

This Chapter of the thesis discusses some of the prior literature that informs the

research questions of this dissertation. As the concepts of volatility and corre-

lation are of great importance in a wide range of practical problems in finance,

a great deal of research has been conducted previously in this field. Further,

the literature concerning the development of options pricing and the informa-

tion implicit in market prices of options is also vast. Finally, there has been

a significant amount of previous work done on the evaluation of the forecast

performance of various models and how to separate out their relative accuracy.

Therefore, it is beyond the scope of the current work to extensively review all

three of these distinct avenues of research. Rather, the seminal works in these

fields will be discussed as they inform the current work, together with a review of the most recent results and developments in these fields.

The structure of this Chapter is to first define the concepts of volatility and

correlation as they are central to the current work. How these variables have

previously been modelled or forecast by utilising historical information alone,


through time-series models, will then be discussed. An alternative to the use of

time-series models is to filter information relevant to these variables from the

options market; how this is conducted will then be discussed. Finally, a review

of the statistical methods of discerning the superior forecasting technology will

be presented.

2.2 Defining and Measuring Volatility

Volatility can be thought of as a measure of uncertainty of the prices of assets.

Forecasts of volatility are important factors in myriad financial problems such

as asset pricing, portfolio construction, risk management, and the pricing of

financial derivatives. Indeed, derivatives with volatility as the underlying asset

are now actively traded on exchanges such as the Chicago Board Options Exchange (CBOE). Accurate forecasts of volatility are then a necessity for the pricing of

such derivatives, as is a forecast of the volatility of volatility itself. Further,

as highlighted by Poon & Granger (2003), the first Basel Accord of 1996 has

effectively made the forecasting of volatility a compulsory exercise for finan-

cial institutions. This accord requires that such institutions have a minimum

capital reserve of at least three times their Value-at-Risk (VaR), a quantile-based measure of potential loss that depends on the assumed distribution of returns and hence on expected volatility. Forecasting of volatility has also been shown to

be an important input to portfolio allocation problems with strategies based on

volatility timing outperforming static models or those based on market timing

(Johannes, Polson and Stroud, 2002). It is then clear that accurate forecasts

of volatility are of high importance in a practical setting and much theoretical

and empirical work has previously been conducted on how such forecasts may

be generated. However, before one can begin discussing the generation of fore-

casts, the concept of volatility must be more accurately defined.


Andersen, Bollerslev, Christoffersen, and Diebold (2006, p.1), hereafter ABCD,

define volatility as “the variability of the random (unforeseen) component of a

time series”. A complicating feature of measuring and forecasting volatility

is that, unlike the price of an asset which is directly observable in a market,

volatility is a latent variable. While the true level of volatility is unobserved,

it is possible to construct measures of volatility based on historical information

which act as volatility proxies. A popular proxy for the daily level of volatility

is the squared daily return on an asset, for reasons now outlined. The work of

Engle (1982) defines returns, rt, to follow the process

\[
r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = h_t^{1/2} z_t, \qquad z_t \sim N(0, 1), \tag{2.1}
\]

where µ is the deterministic drift term for the returns process, and εt is the

random component of the time series referred to by ABCD (2006) in their quote

from the previous paragraph. An important term in the above definition is ht,

which represents the variability of the random component; it is this conditional

variability that is the object of interest and is referred to as the volatility of the

asset. It is possible to estimate this quantity from squared daily returns if, as

is common in the literature, it is assumed that the drift term is inconsequential


for short time horizons such as a day; this allows the following analysis

\[
\begin{aligned}
r_t &= \varepsilon_t = h_t^{1/2} z_t, \\
r_t^2 &= h_t z_t^2, \\
\mathrm{E}\left[r_t^2 \mid \mathcal{F}_{t-1}\right] &= \mathrm{E}\left(h_t \mid \mathcal{F}_{t-1}\right) \mathrm{E}\left(z_t^2 \mid \mathcal{F}_{t-1}\right), \\
\mathrm{E}\left[r_t^2 \mid \mathcal{F}_{t-1}\right] &= \mathrm{E}\left(h_t \mid \mathcal{F}_{t-1}\right),
\end{aligned} \tag{2.2}
\]

where F_{t−1} is the information set available at time t − 1. Alternatively, one could recall that E[X²] = E[X]² + Var[X]. In the present case it is assumed that E[X] = 0 and Var[X] = h_t; it is then trivial that E[r_t²] = h_t.
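This proxy argument can be checked with a short simulation; the sketch below draws returns from the zero-drift process of Equation 2.1 with an arbitrary assumed variance level h_t and averages their squares.

```python
import numpy as np

# Simulate r_t = h_t^{1/2} z_t, z_t ~ N(0,1), with the conditional variance
# h_t held fixed; averaging r_t^2 over many draws recovers h_t, while any
# single squared return is a very noisy estimate of it.
rng = np.random.default_rng(0)

h_t = 0.04                       # assumed (latent) conditional variance
z = rng.standard_normal(1_000_000)
r = np.sqrt(h_t) * z             # simulated de-meaned returns

proxy = r ** 2                   # squared returns as the volatility proxy
print(proxy.mean())              # close to h_t = 0.04 on average
print(proxy.std())               # roughly h_t * sqrt(2): the proxy is noisy
```

The large dispersion of the proxy relative to its mean is precisely the noise that motivates the search for more accurate volatility measures.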

The above decomposition demonstrates that squared returns may act as a

proxy for the conditional volatility, which is otherwise unobservable. While

daily squared returns have been a traditionally popular measure of the latent

level of volatility, they are increasingly being replaced by an alternative measure of

volatility. The realised volatility of an asset is now the standard measure of

volatility and its development and properties are now discussed; as this quan-

tity is central to this thesis, a substantial amount of detail will go into this

discussion.

2.2.1 Realised Volatility

An alternative estimator for latent volatility, the Realised Volatility (RV), has

become increasingly popular over the last decade or so, even though the idea

dates back to as early as Merton (1980)1. It has been shown theoretically that

1 This delay is primarily driven by the fact that high-frequency databases were not readily accessible until the mid-1990s.


RV provides the natural, model-free benchmark measure of the latent volatil-

ity process and is a less noisy volatility proxy than its alternatives (see, for

example, Andersen, Bollerslev, Christoffersen, and Diebold (2006)). Further,

Patton (2011) demonstrates that this less noisy proxy for the true conditional

variance increases the power in tests of volatility forecast comparison, a result

particularly relevant for this thesis. While there is quite some detail in the

construction of the properties of the RV measure, particularly if one takes into

account market microstructure effects, the concept is actually quite straightfor-

ward. This Section briefly details the properties of RV, firstly under the simple

setting of continuous price evolution before discussing market microstructure

effects and jumps and then how it may be used in evaluating volatility forecasts.

For more technical detail, see Andersen, Bollerslev, Christoffersen, and Diebold

(2006) for an introduction; Hansen and Lunde (2006) provide a more rigorous account that involves market microstructure effects; and McAleer and Medeiros (2008) offer a comprehensive survey.

Defining Realised Volatility

To begin with, define the concept of integrated variance, IVar, as a measure of

the continuous component of total volatility over the period t− h through to t,

given by the cumulation of the instantaneous volatility over that time horizon,

\[
IVar(t, h) = \int_{t-h}^{t} \sigma_s^2 \, ds. \tag{2.3}
\]

The IVar is a natural measure of the total amount of volatility over the horizon of interest: it simply takes the level of volatility within each instant of that horizon and sums these amounts. However, as the instantaneous volatility is

not observable, the IVar may not be used directly; fortunately, a measure exists

that provides an asymptotically unbiased approximation to this quantity; that measure is the RV. The link between RV and IVar is now presented, and in the


process the importance of IVar is also reinforced2.

Assume that the log-price process, p_t, is given by the following stochastic differential equation (SDE)

\[
dp_t = \mu_t \, dt + \sigma_t \, dW_t, \qquad t \in [0, T], \tag{2.4}
\]

where µt is a drift term that is a function of time, σt is the time-varying volatility

of the process, and Wt is a standard Brownian motion. Then the return over a

small time horizon3, ∆, may be approximated by

\[
r_{t,\Delta} = p_t - p_{t-\Delta} \approx \mu_{t-\Delta} \cdot \Delta + \sigma_{t-\Delta} \cdot \Delta W_t, \tag{2.5}
\]

where ∆Wt ∼ N(0,∆). These dynamics give rise to the following properties

\[
\begin{aligned}
r_{t,\Delta} &\overset{approx}{\sim} N\!\left(\mu_{t-\Delta} \cdot \Delta, \; \sigma^2_{t-\Delta} \cdot \Delta\right), \\
\mathrm{E}\left(r_{t,\Delta} \mid \mathcal{F}_{t-\Delta}\right) &= \mu_{t-\Delta} \cdot \Delta, \\
\mathrm{E}\left(r^2_{t,\Delta} \mid \mathcal{F}_{t-\Delta}\right) &= \mu^2_{t-\Delta} \cdot \Delta^2 + \sigma^2_{t-\Delta} \cdot \Delta \\
&\approx \sigma^2_{t-\Delta} \cdot \Delta + O\!\left(\Delta^2\right) \\
&\approx \mathrm{Var}\left(r_{t,\Delta} \mid \mathcal{F}_{t-\Delta}\right).
\end{aligned} \tag{2.6}
\]

As ∆ is assumed to be small and ∆² is of higher order, the ∆² term may be ignored. The above result says that the expectation of the squared return is

2 The following discussion benefited greatly from a series of lectures and discussions with Andrew Patton at the 2010 Financial Integrity Research Network Masterclass; some results are taken from those lectures.

3 To be clear, in the notation that follows t − ∆ represents one period of length ∆ before the period t; the notation t − 1 represents one full trading day before period t.


approximately equal to the conditional variance of that period’s return. This result is similar to the earlier discussion for daily returns, albeit achieved through an alternative method.

Assuming that ∆ is a time period of fixed length, then one trading day may be

broken up into 1/∆ intraday periods of equal length. Similarly, a discretisation

of the integral in Equation 2.3 may be executed by summing 1/∆ estimates of

a period’s variance to yield the estimate of the day’s variance. Combining these

facts with the results in Equation 2.6 yields

\[
\begin{aligned}
\sum_{j=1}^{1/\Delta} \mathrm{E}\left[r^2_{t-1+j\cdot\Delta,\Delta} \,\middle|\, \mathcal{F}_{t-1+(j-1)\cdot\Delta}\right] &\approx \sum_{j=1}^{1/\Delta} \sigma^2_{t-1+(j-1)\cdot\Delta} \cdot \Delta \\
&\approx \int_{t-1}^{t} \sigma_s^2 \, ds \equiv IVar(t, 1).
\end{aligned} \tag{2.7}
\]

Hence, summing one-step-ahead expected squared returns over the number of

periods in a trading day approximates the integrated variance of that day. Using the final result in Equation 2.6, it can be shown that the conditional variance of the daily return is approximately equal to the expected integrated variance


for that day

\[
\begin{aligned}
\mathrm{Var}\left(r_{t,1} \mid \mathcal{F}_{t-1}\right) &\approx \mathrm{E}\left(r^2_{t,1} \mid \mathcal{F}_{t-1}\right) \\
&\approx \mathrm{E}\left[\left(\sum_{j=1}^{1/\Delta} r_{t-1+j\cdot\Delta,\Delta}\right)^{\!2} \,\middle|\, \mathcal{F}_{t-1}\right] \\
&\approx \mathrm{E}\left[\sum_{j=1}^{1/\Delta} r^2_{t-1+j\cdot\Delta,\Delta} \,\middle|\, \mathcal{F}_{t-1}\right] \\
&\approx \mathrm{E}\left[\sum_{j=1}^{1/\Delta} \mathrm{E}\left[r^2_{t-1+j\cdot\Delta,\Delta} \mid \mathcal{F}_{t-1+(j-1)\cdot\Delta}\right] \,\middle|\, \mathcal{F}_{t-1}\right] \\
&\approx \mathrm{E}\left[IVar(t, 1) \mid \mathcal{F}_{t-1}\right].
\end{aligned} \tag{2.8}
\]

In the process above, it is assumed in going from the second to third line that

there is a zero covariance between lagged returns (justified by the assumption

that the log-price return follows a Brownian motion), and the step from the

third to fourth line relies on the Law of Iterated Expectations.

The result in Equation 2.8 is quite important. By establishing a link between

the conditional variance of returns and the expected integrated variance, two

distinct lines of research are found to be related. That is, financial economists

have long studied the importance of the conditional variance of returns while

probability theorists have studied the notion of integrated variance. Given that

Var(r_{t,1}|F_{t−1}) ≈ E[IVar(t,1)|F_{t−1}], properties of the IVar known to probability theorists may prove quite useful in the forecasting of conditional variance for financial economists. It turns out that an approximation to

the IVar has yielded a particularly productive line of research for econometri-

cians.


It has already been noted that while IVar is an object of interest, the instanta-

neous volatility is unobservable which renders direct evaluation of the integral

in Equation 2.3 infeasible. However, given the discussion above, it should be

clear that a readily available estimator of IVar exists; this estimator is termed the realised volatility and is defined as

\[
RV_\Delta(t, 1) \equiv RV_t^{(m)} \equiv \sum_{j=1}^{m} r^2_{t-1+j\cdot\Delta,\Delta}, \tag{2.9}
\]

where m = 1/∆, the number of periods within a trading day. This quantity

is merely the sum of squared intraday returns, which is easily calculated. It

has been shown that the RV measure provides an asymptotic approximation to

the IVar as ∆ → 0 (see, for example, Andersen, Bollerslev, Christoffersen, and Diebold (2006)),

\[
RV_t^{(m)} \overset{p}{\to} IVar(t, 1), \qquad m \to \infty. \tag{2.10}
\]
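A minimal simulation shows both properties at once: the sum of squared intraday returns is centred on the integrated variance, and it tightens around it as the sampling interval shrinks. The spot-variance path below is an assumed deterministic U-shape, for illustration only.

```python
import numpy as np

# Sketch of Equations 2.9-2.10: realised volatility, the sum of squared
# intraday returns, estimates the integrated variance and tightens around
# it as the number of intraday periods m grows.
rng = np.random.default_rng(1)

def realised_volatility(m: int) -> float:
    """Simulate one day of m intraday returns and sum their squares."""
    s = np.linspace(0.0, 1.0, m, endpoint=False)     # intraday time grid
    sigma2 = 0.5 + (s - 0.5) ** 2                    # assumed spot variance path
    dW = rng.standard_normal(m) * np.sqrt(1.0 / m)   # Brownian increments
    r = np.sqrt(sigma2) * dW                         # dp = sigma dW, no drift
    return float(np.sum(r ** 2))

ivar = 0.5 + 1.0 / 12.0            # exact integral of the variance path
coarse = np.array([realised_volatility(13) for _ in range(2000)])
fine = np.array([realised_volatility(780) for _ in range(2000)])
print(coarse.mean(), fine.mean())  # both close to ivar
print(coarse.std(), fine.std())    # the finer-sampled RV is far less dispersed
```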

As the integral of the instantaneous volatility provides a natural measure of the

latent volatility, it is clear that as the RV approximates the IVar, the RV also

provides a natural measure of the latent volatility. However, it must be noted

that RV is an estimator of IVar and is measured with error. The magnitude

of this error may be measured by the variance of the RV and is a function

of the quantity known as the integrated quarticity4 (IQ) which was discussed

as early as Jacod (1994) but more formally analysed in the current context

by Barndorff-Nielsen and Shephard (2002). All of this may be summarised as

4 A full discussion of integrated quarticity is beyond the scope of this discussion. Suffice to say here that it is similar to the integrated variance, but relies on the 4th power rather than the 2nd. The integrated quarticity may be estimated by the realised quarticity, IQ(t, 1) \equiv \int_{t-1}^{t} \sigma_s^4 \, ds \approx RQ_t^{(m)} \equiv \sum_{j=1}^{m} r^4_{t-1+j\cdot\Delta,\Delta}.


follows

\[
\begin{aligned}
\mathrm{Var}(r_t \mid \mathcal{F}_{t-1}) &\approx \mathrm{E}\left[IVar(t, 1) \mid \mathcal{F}_{t-1}\right], \\
IVar(t, 1) &= \mathrm{E}\left[IVar(t, 1) \mid \mathcal{F}_{t-1}\right] + \eta_t, \qquad \mathrm{E}\left[\eta_t \mid \mathcal{F}_{t-1}\right] = 0, \\
RV_t^{(m)} &= IVar_t + \zeta_t^{(m)}, \qquad \zeta_t^{(m)} \sim N\!\left(0, \, 2IQ_t/m\right).
\end{aligned} \tag{2.11}
\]

These relationships may be used to imply that

\[
\begin{aligned}
RV_t^{(m)} &\approx \mathrm{Var}(r_t \mid \mathcal{F}_{t-1}) + \varepsilon_t^{(m)}, \\
\varepsilon_t^{(m)} &\equiv \eta_t + \zeta_t^{(m)}, \\
\mathrm{E}\left[\varepsilon_t^{(m)} \mid \mathcal{F}_{t-1}\right] &= \mathrm{E}\left[\eta_t \mid \mathcal{F}_{t-1}\right] + \mathrm{E}\left[\zeta_t^{(m)} \mid \mathcal{F}_{t-1}\right] = 0.
\end{aligned} \tag{2.12}
\]

That is, the realised volatility may be thought of as a conditionally unbiased

proxy for the true conditional variance. However, it has already been discussed

that the squared daily returns are also a conditionally unbiased proxy, so what

motivates the extra technical detail required to come to the same conclusion

for RV? The fact is that RV is a much more accurate proxy for the conditional

variance. This may be seen by realising that the squared daily return is the RV

with just one period for the trading day. Recalling that the variance of RV is

given as a function of IQ,

\[
\mathrm{Var}\left[RV_t^{(m)} - IVar(t, 1)\right] = \mathrm{Var}\left[\zeta_t^{(m)}\right] \approx \frac{2IQ_t}{m}, \tag{2.13}
\]

comparing the relative accuracy of daily returns with 78 five-minute returns for

one trading day yields

\[
\sqrt{\frac{\mathrm{Var}\left[\zeta_t^{(1)}\right]}{\mathrm{Var}\left[\zeta_t^{(78)}\right]}} = \sqrt{\frac{2IQ_t}{2IQ_t/78}} = \sqrt{78} \approx 8.83. \tag{2.14}
\]


That is, while both the 1- and 78-period RV estimates of the IVar are con-

ditionally unbiased, the variance of the 78-period estimate is nearly 9 times

smaller than the daily estimate. This is an important result in the context of

this thesis as the aim is to distinguish between competing forecasts of volatility,

or measures which are derived from volatility. By having a less noisy measure

of the target variable, tests of empirical accuracy are more powerful as shown

in a study by Patton (2011). Hence, the RV-based measure is preferred in this thesis to the noisier measure of volatility provided by squared daily returns.
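The comparison in Equation 2.14 is easy to verify by Monte Carlo; the sketch below assumes constant intraday volatility at an arbitrary level and builds daily RV from one and from 78 intraday returns.

```python
import numpy as np

# Monte Carlo check of Equation 2.14: with constant intraday volatility,
# the measurement error of daily RV built from 78 five-minute returns is
# sqrt(78) (about 8.8) times smaller than that of the squared daily return.
rng = np.random.default_rng(2)

sigma2 = 1.0                 # assumed constant daily variance
n_days = 20_000

# One-period "RV": the squared daily return.
rv_1 = (rng.standard_normal(n_days) * np.sqrt(sigma2)) ** 2

# 78-period RV: sum of squared five-minute returns over each day.
intraday = rng.standard_normal((n_days, 78)) * np.sqrt(sigma2 / 78)
rv_78 = (intraday ** 2).sum(axis=1)

ratio = rv_1.std() / rv_78.std()
print(ratio)                 # close to sqrt(78), about 8.83
```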

Market Microstructure Effects and Jumps

The above discussion has been based on the SDE in Equation 2.4 which as-

sumes a continuous price evolution. However, it has been observed that the

sample paths of stocks do not move continuously but occasionally “jump” (for

example, see Andersen, Bollerslev, Diebold and Vega (2007), Johannes and

Dubinsky (2006) for jumps around company-specific events such as earnings reports; Tauchen and Zhou (2006) find evidence that jumps occur every 2 days in the Brazilian real, 4 days for S&P 500 Index futures, 5 days for the Treasury-bond index, and 10 days for Microsoft).

market microstructure effects; transactions take place with a minimum price

movement such as one 32nd of a dollar which precludes continuous movement.

Further, as would be expected in an informationally efficient market, the ar-

rival of certain news events may cause prices to jump in adjustment to the new

information. Measuring volatility in the more realistic case is now dealt with,

where market microstructure effects and jumps are included in the discussion5.

It shall be shown that with some adjustments, the RV is still the most appro-

priate volatility measure for this thesis.

5 Again, the following discussion benefited greatly from a series of lectures and discussions with Andrew Patton at the 2010 Financial Integrity Research Network Masterclass; some results are taken from those lectures.


Beginning with the idealised scenario where the volatility is constant within

a given trading day, returns are equally spaced, the drift term is zero, and there

are no jumps

\[
\begin{aligned}
r_t &= dp_t = \sigma_t \, dW_t, \\
\sigma_\tau &= \sigma_t \quad \forall \; \tau \in (t-1, t], \\
r_{i,m,t} &\equiv \int_{(i-1)/m}^{i/m} r_\tau \, d\tau = \sigma_t \int_{(i-1)/m}^{i/m} dW_\tau, \\
\{r_{i,m,t}\}_{i=1}^{m} &\overset{i.i.d.}{\sim} N\!\left(0, \frac{\sigma_t^2}{m}\right),
\end{aligned} \tag{2.15}
\]

where ri,m,t is the return from period i − 1 to i on day t and is a function of

the number of trading periods in a day, m. In this idealised scenario,

\[
\begin{aligned}
RV_t^{(m)} &\equiv \sum_{i=1}^{m} r^2_{i,m,t} \to IVar(t, 1), \qquad m \to \infty, \\
\mathrm{E}\left[RV_t^{(m)} \,\middle|\, \mathcal{F}_{t-1}\right] &\equiv \mathrm{E}\left[\sum_{i=1}^{m} r^2_{i,m,t} \,\middle|\, \mathcal{F}_{t-1}\right] = \sum_{i=1}^{m} \mathrm{E}\left[r^2_{i,m,t} \mid \mathcal{F}_{t-1}\right] = \sigma_t^2.
\end{aligned} \tag{2.16}
\]

Hence, the RV is an unbiased estimator for the level of daily variance for all

m, including m = 1. As mentioned, the RV is measured with error so will

not always provide the true level of volatility. The discrepancy between the


estimated and true value may be measured through the MSE

\[
\begin{aligned}
MSE\left[RV_t^{(m)} \,\middle|\, \mathcal{F}_{t-1}\right] &\equiv \mathrm{E}\left[\left(RV_t^{(m)} - \sigma_t^2\right)^2 \,\middle|\, \mathcal{F}_{t-1}\right] \\
&= \mathrm{E}\left[\left(RV_t^{(m)}\right)^2 \,\middle|\, \mathcal{F}_{t-1}\right] - \sigma_t^4 \\
&= \frac{2+m}{m}\,\sigma_t^4 - \sigma_t^4 \\
&= \frac{2}{m}\,\sigma_t^4.
\end{aligned} \tag{2.17}
\]

The above decomposition6 implies that for a given level of IQ, the MSE of the

RV estimator is decreasing in the number of intraday periods. That is, the more

frequently the return path is sampled, the closer the estimate is to the true level

of volatility; one should then sample as often as possible to achieve the most

accurate estimate. However, it has already been said that actual transactions

do not take place under these idealised conditions.

In an important paper, Hansen and Lunde (2006) discuss the treatment of

a more realistic scenario. In their setting, one is unable to observe the true

efficient price of an asset, rather one may only observe the transacted prices.

That is, the efficient price of an asset is measured with noise. Noise has been

discussed in the finance literature since at least Black (1976) and many poten-

tial sources of noise exist7. As the RV is constructed from the observed noisy

prices, this noise flows through to the estimate of RV. In the case of i.i.d. noise,

6 The details of the step from line two to three are given in Appendix A.1.
7 For example, the discreteness of data (Harris, 1990, 1991), and properties of the trading mechanism (Black, 1976). More detail is available in Hasbrouck (2004).


Hansen and Lunde (2006) show that

\[
\mathrm{E}\left[RV_t^{(m)}\right] = IVar(t, 1) + 2m\omega^2, \tag{2.18}
\]

where ω2 is the variance of the noise process. The key observation to be made

in the above formula is that RV is now a biased estimator of the IVar and that

this bias is increasing in m. To reiterate, in the case of i.i.d. noise, the more often the return path is sampled, the worse the estimate of the IVar becomes.
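The bias formula in Equation 2.18 can be reproduced in simulation; in the sketch below the efficient price is a Brownian motion and i.i.d. noise of variance omega2 is added to every observation (all parameter values are assumptions for illustration).

```python
import numpy as np

# With i.i.d. microstructure noise of variance omega^2 on each observed
# price, E[RV] = IVar + 2*m*omega^2 (Hansen and Lunde, 2006): sampling
# more finely makes the RV estimate more, not less, biased.
rng = np.random.default_rng(3)

ivar = 1.0        # integrated variance of the efficient price over the day
omega2 = 0.01     # variance of the i.i.d. noise process

def noisy_rv(m: int, n_days: int = 5000) -> float:
    """Average RV over many simulated days of m noisy intraday returns."""
    dW = rng.standard_normal((n_days, m)) * np.sqrt(ivar / m)
    p = np.hstack([np.zeros((n_days, 1)), np.cumsum(dW, axis=1)])  # m+1 prices
    obs = p + rng.standard_normal((n_days, m + 1)) * np.sqrt(omega2)
    r = np.diff(obs, axis=1)        # m observed intraday returns
    return float((r ** 2).sum(axis=1).mean())

for m in (13, 78, 390):
    print(m, round(noisy_rv(m), 3), ivar + 2 * m * omega2)  # bias grows with m
```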

There is no contradiction in the results of Equations 2.17 and 2.18; while the

first result suggests sampling as often as possible and the second result suggests

that sampling too frequently in the presence of noise will lead to inaccurate

measurements8, a compromise is generally reached in the empirical literature.

Typically, econometricians will make use of high-frequency data to the point

that noise does not interfere with their estimates. By sampling at an appro-

priate frequency, it is hoped that the RV estimator provides more accurate

measures of latent volatility than squared daily returns without sampling too

frequently to become biased by the presence of noise.

To be concrete, Hansen and Lunde (2006) find that for the Dow Jones Indus-

trial Average stocks, noise may be ignored when intraday returns are sampled

at frequencies of 20 minutes or lower. This result coincides with the paper of

Andersen, Bollerslev, Diebold, and Labys (2000) who introduced the volatility

signature plot. This plot is a diagnostic tool whereby the RV of an asset is

calculated for a varying number of sampling frequencies within a day, the idea

being that a plot of the estimated RV should stabilise at approximately the

true level of volatility. They find that sampling too frequently captures exces-

sive amounts of noise in the RV estimator and results in an artificially high

8 In the limit of sampling at infinitesimally small intervals, the estimate of RV will explode.


calculated RV. Empirically, their plot of RV stabilises at approximately the

20-minute return interval, and they suggest that a sampling interval of 20 minutes

“represents a reasonable tradeoff between minimising microstructural bias and

minimising sampling error”. To allow for a round number of trading periods

within one trading day, this dissertation uses 30-minute trading windows in its

construction of any realised volatility measures.
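The diagnostic can be mimicked on simulated data: compute average RV at several sampling intervals and watch it stabilise once noise stops dominating. All parameter values below are assumptions for illustration.

```python
import numpy as np

# Sketch of a volatility signature plot: average RV across sampling
# intervals.  With i.i.d. noise, RV is inflated at the finest intervals
# and settles near the true level at coarser ones.
rng = np.random.default_rng(4)

ivar, omega2, n_days = 1.0, 0.005, 3000
base_m = 390                          # one-minute grid for a 6.5-hour day

dW = rng.standard_normal((n_days, base_m)) * np.sqrt(ivar / base_m)
p = np.hstack([np.zeros((n_days, 1)), np.cumsum(dW, axis=1)])
obs = p + rng.standard_normal((n_days, base_m + 1)) * np.sqrt(omega2)

signature = {}
for k in (1, 5, 15, 30):              # sampling interval in minutes
    sampled = obs[:, ::k]             # keep every k-th observed price
    r = np.diff(sampled, axis=1)
    signature[k] = float((r ** 2).sum(axis=1).mean())

print(signature)  # average RV shrinks toward ivar as the interval widens
```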

The above discussion shows that even in the presence of noise, the RV is still

the best estimator of latent volatility for use in this dissertation; however, the

issue of jumps has still not been discussed. Fortunately, the seminal work of

Barndorff-Nielsen and Shephard (2006) demonstrates that in the presence of jumps, while the RV is no longer an asymptotically unbiased estimator of the

integrated variance, it is an asymptotically unbiased estimator of the quadratic

variation or total volatility.

It is well documented that asset prices are occasionally observed to have discontinuities in their price evolution; a more general SDE that can accommodate this fact is given below

\[
\begin{aligned}
dp_t &= \mu_t \, dt + \sigma_t \, dW_t + \kappa_t \, dq_t, \\
p_t &= \int_0^t \mu_s \, ds + \int_0^t \sigma_s \, dW_s + \sum_{j=1}^{N_t} \kappa_j,
\end{aligned} \tag{2.19}
\]

where N_t = \int_0^t dq_s is a counting process, and \kappa_t is the size of the jump, conditional on the jump occurring. If it is again assumed that the drift term is

negligible, the evolution of the asset price is able to be decomposed into its


continuous and discontinuous parts

\[
p_t = \int_0^t \sigma_s \, dW_s + \sum_{j=1}^{N_t} \kappa_j \equiv p_t^c + p_t^d. \tag{2.20}
\]

Define the quadratic variation (QV) of pt as

\[
\begin{aligned}
QV(t, 1) \equiv [p_t] &\equiv \lim_{m\to\infty} \sum_{j=1}^{m} r^2_{t,j} \\
&= [p^c]_t + [p^d]_t \\
&= \int_0^t \sigma_s^2 \, ds + \sum_{j=1}^{N_t} \kappa_j^2 \\
&\equiv IVar(t, 1) + J^2.
\end{aligned} \tag{2.21}
\]

That is, the total volatility, QV, of the price process of an asset is the sum of

the continuous volatility, IVar, and the squared jumps in price, J². Barndorff-Nielsen and Shephard (2006) show that the realised volatility converges to the quadratic variation

\[
RV_t^{(m)} \equiv \sum_{j=1}^{m} r^2_{t,j} \overset{p}{\to} [p]_t = \int_0^t \sigma_s^2 \, ds + \sum_{j=1}^{N_t} \kappa_j^2 \equiv QV(t, 1). \tag{2.22}
\]

If no jumps occur, then the QV is equal to the IVar which is equal to the RV.

Hence, even in the presence of jumps, the realised volatility estimator is still

the natural choice as the measure of the latent volatility of an object of interest.
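The decomposition in Equations 2.21 and 2.22 can be seen directly in simulation; the sketch below adds one jump of assumed size 0.5 to each simulated day and checks that RV centres on IVar plus the squared jump.

```python
import numpy as np

# With jumps, RV estimates the quadratic variation: integrated variance
# plus the sum of squared jumps.  Jump timing and size are assumptions.
rng = np.random.default_rng(5)

m, ivar, n_days = 390, 1.0, 4000
r = rng.standard_normal((n_days, m)) * np.sqrt(ivar / m)  # continuous part

jump_size = 0.5
cols = rng.integers(0, m, size=n_days)      # one random jump time per day
r[np.arange(n_days), cols] += jump_size     # add the jump to that return

rv = (r ** 2).sum(axis=1)
print(rv.mean())   # close to ivar + jump_size**2 = 1.25, not ivar alone
```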

2.2.2 Stylised Facts of Volatility

The above discussion details how despite being an unobservable variable, accu-

rate proxies for the level of latent volatility may be found. Given the substan-


tial amount of empirical and theoretical research conducted on the concept of

volatility, there are several salient facts about this quantity that are well known

in the literature. In order to generate accurate forecasts of volatility for use

in empirical applications, any model of volatility must be able to capture the

stylised facts that are now described.

Even though it has been argued that the realised volatility is a more accurate

measure of the latent level of volatility, early results regarding the properties

of volatility were in the context of using squared daily returns. Hence, the

discussion begins with plots of the estimated level of volatility for the S&P

500 Index from both the squared daily returns and realised volatility; these

plots both demonstrate the similarities and highlight the differences between

the measures.

Figure 2.1: The sample volatility of the S&P 500 Index as measured by squared daily returns (top panel) and realised volatility (bottom panel) using 30-minute intervals. The y-axis represents the annualised standard deviations. [Plots cover 2/01/1990 to 31/12/2007; both panels range from 0 to 40.]


The top panel of Figure 2.1 plots the estimated volatility of the S&P 500 Index

from the squared daily returns, while the bottom panel is the estimate from

the RV. The period examined extends from the beginning of 1990 through to

the end of 2007, a period which includes crises such as the Russian default, the

Asian financial crisis, and the technology bubble; the recent Global Financial

Crisis is not included in this plot as the levels of volatility reached there over-

whelm the remainder of the plot, rendering it uninterpretable.

The first point to note is that volatility is clearly time-varying. Figure 2.1

demonstrates that heteroscedasticity is an empirical fact and that any assump-

tion of a constant level of conditional variance is unrealistic. It can be observed

that both measures follow a very similar pattern, rising and falling around the

same periods; this result is unsurprising as RV nests squared daily returns if

only one period per trading day is used. However, it is clear that the squared

daily returns are a noisier measure and reach higher peaks relative to the RV.

This empirically verifies the theoretical findings outlined previously, where the

use of intraday information leads to less noisy estimates of the latent volatility.

Some of the more important characteristics of volatility are now discussed.

From the plots of estimated volatility in Figure 2.1, it is clear the volatility

tends to cluster. Periods of high (low) volatility are generally followed by an-

other period of high (low) volatility; this is an important feature of volatility

and is known as volatility persistence. To illustrate the persistence present in

volatility, the autocorrelation of both measures of volatility is plotted in Figures

2.2 and 2.3.

The plots of the sample autocorrelations of squared daily returns, Figure 2.2,

and realised volatility, Figure 2.3, highlight the persistent nature of volatility;

both measures of latent volatility have significant autocorrelations at over 50 lags, or over two trading months.

Figure 2.2: Sample autocorrelation of squared daily returns. [Plot: sample autocorrelation against lag, 0 to 50; values range from roughly -0.2 to 0.8.]

It should also be noted from Figures 2.2 and

2.3 that the sample autocorrelation of the RV decays at a much steadier rate than that of the squared daily returns; this fact again reflects the excess

noise of squared daily returns relative to the RV.
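The persistence visible in Figures 2.2 and 2.3 can be quantified with the sample autocorrelation function. A minimal sketch is given below; the AR(1)-in-logs process is a hypothetical stand-in for a persistent volatility series, not the data or model used in this thesis.

```python
import numpy as np

def sample_acf(x, max_lag=50):
    """Sample autocorrelation of a series for lags 0..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:len(x) - k], x[k:]) / denom
                     for k in range(max_lag + 1)])

# Illustrative persistent series: an AR(1) in logs with coefficient 0.97.
rng = np.random.default_rng(1)
n = 20_000
log_v = np.zeros(n)
for t in range(1, n):
    log_v[t] = 0.97 * log_v[t - 1] + 0.2 * rng.standard_normal()
acf = sample_acf(np.exp(log_v))
print(acf[1], acf[50])   # slow decay: still clearly positive after 50 lags
```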

The final empirical fact that is highlighted graphically in this Section is that of

volatility asymmetry in equity returns. That is, volatility responds differently

to positive and negative shocks to the returns process. Typically, a negative

shock to the returns process will lead to a level of volatility higher than if a

similarly sized positive shock had occurred; this asymmetry may be explained

by either the leverage effect or through volatility feedback.

Figure 2.3: Sample autocorrelation of realised volatility. [Plot: sample autocorrelation function (ACF) against lag, 0 to 50; values range from roughly -0.2 to 0.8.]

The leverage effect, first discussed in Black (1976), posits that as the market price of equity drops and the level of debt remains constant, the financial leverage of the firm increases, which results in the firm being riskier and hence the

volatility of equity prices increases. Alternatively, volatility feedback is advo-

cated by Campbell and Hentschel (1997) as an explanation for the observed

volatility asymmetry. Upon the release of unanticipated good news, a positive shock to returns will typically occur; volatility increases, and the feedback yields a decrease in prices, dampening the initial positive return. Conversely, unanticipated bad news will typically lead to a negative shock to returns; volatility

will again increase leading to an additional decrease in prices, amplifying the

initial negative shock. This results in volatility asymmetry as the volatility

feedback amplifies negative news events and dampens positive ones. This fea-

ture of volatility is depicted in Figure 2.4. In this plot, the mean level of the

RV is given conditional on the previous period’s return and its distance from

the mean return scaled by the standard deviation of returns. For instance,

the mean RV is calculated for those periods where the previous period’s return


was between -1 and -2 standard deviations from the mean return. It may be

observed that the mean level of the RV is substantially higher for negative shocks to the returns process relative to a similarly sized positive shock.

Figure 2.4: The mean level of the realised volatility conditional on the previous day's standardised return. The means are calculated by grouping returns on the integer number of standard deviations away from the mean daily return. [Plot "Asymmetric Response of Volatility to Returns": mean RV_t from 0 to 25 against standardised returns from -4 to +4.]
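The conditioning exercise behind Figure 2.4 is straightforward to reproduce. The sketch below groups an RV series by the integer band of the previous day's standardised return; the data here are simulated, with a leverage-type effect deliberately built in, purely to exercise the grouping logic.

```python
import numpy as np

def conditional_mean_rv(returns, rv):
    """Mean of RV_t grouped by the integer band of the previous day's
    standardised return (e.g. band -2 covers [-2, -1) standard deviations)."""
    z = (returns - returns.mean()) / returns.std()
    bands = np.floor(z).astype(int)
    out = {}
    for b in np.unique(bands[:-1]):
        mask = bands[:-1] == b          # condition on the *previous* day's band
        out[b] = rv[1:][mask].mean()
    return out

# Hypothetical data: RV rises after negative standardised returns
# and falls after positive ones, mimicking volatility asymmetry.
rng = np.random.default_rng(2)
r = rng.standard_normal(5000)
z = (r - r.mean()) / r.std()
rv = np.empty_like(r)
rv[0] = 1.0
rv[1:] = np.maximum(0.05, 1.0 - 0.4 * z[:-1] + 0.1 * rng.standard_normal(4999))
means = conditional_mean_rv(r, rv)
print(means[-2], means[1])   # mean RV after a -2 to -1 sigma day vs a +1 to +2 sigma day
```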

The vast literature regarding univariate volatility has found a number of im-

portant features of volatility, some of which have been presented graphically

in this Section. While more detailed reviews of the features of volatility are

available in, for example, Poon & Granger (2003) or Taylor (2005), four main

points regarding volatility are given here.

1. Volatility is not constant. The level of volatility varies through time, a

property known as heteroscedasticity.

2. While volatility varies through time, it tends to revert to its long-run mean.

3. Volatility tends to cluster. That is, periods of high (low) volatility are gen-

erally followed by further periods of higher (lower) than normal volatility.

This property is known as volatility persistence, as illustrated by the plots

of the autocorrelation function in Figures 2.2 and 2.3.

4. For stocks, shocks in volatility are typically negatively correlated with

shocks in asset price. This fact is known as volatility asymmetry or the

leverage effect.

Having defined the concept of volatility and discussed several of its salient

features, attention now turns to its importance in derivatives pricing, and also

how its level may be forecast for use in numerous financial applications.


2.3 The Value of an Option

Volatility is a vital input into any accurate pricing model of an option con-

tract; the level of volatility affects the range of possible outcomes as well as

the probability of tail events. For an out-of-the-money option, higher levels of volatility result in a higher probability that the option will actually expire in the money, so the option is worth more. In fact, the vega of an option, which

is the sensitivity of the option price to changes in volatility, is strictly positive;

increasing the level of volatility always leads to an increase in the option value.

Hence, an accurate understanding of volatility, and the ability to generate accurate forecasts of it, is central to the options market. This Section discusses some important relationships between volatility, options and their prices.

Since the concept of option contracts became widely studied in the 1960s after

first being proposed in 1900 within the Doctoral dissertation of Louis Bachelier9,

options have grown to be one of the most heavily traded financial instruments.

One of the main reasons for their popularity is their ability to manage uncer-

tainty; i.e. options allow for protection against downside risk while retaining

the potential for unlimited gains. Even though options are widely used as an

instrument for hedging and speculating, not just on the price of an asset but

also its volatility, the accurate pricing of these financial derivatives is still a

matter of debate. Some of the fundamental properties of options are now de-

scribed, as well as a discussion of attempts to price these instruments.

An option is a contractual agreement between two parties regarding the po-

tential future transaction of an underlying asset. Specifically, a call (put) op-

tion is the right, but not the obligation, to purchase (sell) the underlying asset

for a predetermined price, known as the exercise or strike price. Calls (puts)

9 Reproduced in English in Cootner (1964).


will only be exercised by rational investors if the market price is above (below)

the exercise price. Options range from the plain or “vanilla” style, such as

European options where the transaction may only be executed at maturity, to

path-dependent exotic options.
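The exercise rule for vanilla calls and puts follows directly from the definition above; a minimal sketch with illustrative strike and spot values:

```python
def call_payoff(S_T, K):
    """Terminal payoff of a European call: (S_T - K)^+."""
    return max(S_T - K, 0.0)

def put_payoff(S_T, K):
    """Terminal payoff of a European put: (K - S_T)^+."""
    return max(K - S_T, 0.0)

# A call is exercised only when the market price ends above the strike,
# a put only when it ends below.
print(call_payoff(110.0, 100.0), call_payoff(90.0, 100.0))   # 10.0 0.0
print(put_payoff(90.0, 100.0), put_payoff(110.0, 100.0))     # 10.0 0.0
```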

As noted in Wilmott, Howison, and Dewynne (2005), since the option con-

fers on its holder a right with no obligation, it has value. Conversely, the writer has a potential obligation: they must sell the asset if the holder of a call chooses

to exercise, and must be compensated for assuming this potential obligation.

The central problem in option pricing is determining the value of this right at

the time the contract is entered into. At its core this is not a difficult problem,

the fair price of an option today is given by the present value of its expected

payoff

\[
V_t = e^{-r\tau}\,E^{Q}(V_T), \tag{2.23}
\]

where $E^Q$ is the expectation under the risk-neutral measure. As the time-to-

maturity, τ , is specified by the option contract and the relevant risk-free rate

of return, r, is assumed to be fixed over the life of the option in most models,

the difficulty in option pricing arises in generating the expectation of $V_T$. From statistics it is known that $E(X) = \int_{-\infty}^{\infty} x f(x)\,dx$, where $f(x)$ is the probability density function of $x$. This may be combined with the value of the call10 option at expiry, $C_T = (S_T - K)^+$, where $C_T$ is the terminal value of a European call, $S_T$ is the terminal value of the underlying asset and $K$ is the relevant exercise price, to give the following equation for pricing a call option at initiation:

10 This analysis is for a European call option; it applies equally to put options. For more complicated payoff functions, the treatment is more involved and not relevant to the current discussion.


\[
\begin{aligned}
C_t = e^{-r\tau} E^{Q}(C_T) &= e^{-r\tau} E^{Q}\!\left[(S_T - K)^+\right] \\
&= e^{-r\tau} \int_{-\infty}^{\infty} (S_T - K)^+ f(S_T)\,dS_T \\
&= e^{-r\tau} \int_{K}^{\infty} (S_T - K) f(S_T)\,dS_T. \tag{2.24}
\end{aligned}
\]

The difficulty in determining the value of the option arises in generating the

density function, f(ST ), over which Equation 2.24 must be integrated; this den-

sity function is the risk-neutral distribution of the value of the underlying at

expiry11.
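For the Gaussian-returns case, where the risk-neutral density $f(S_T)$ is log-normal, Equation 2.24 can be evaluated numerically. The following sketch, with illustrative parameters, checks a direct quadrature of the integral against a Monte Carlo estimate of the same discounted expectation.

```python
import numpy as np

# Illustrative inputs: spot, strike, risk-free rate, volatility, maturity.
S0, K, r, sigma, tau = 100.0, 105.0, 0.05, 0.2, 0.5

def lognormal_density(s):
    """Risk-neutral log-normal density of S_T implied by GBM with drift r."""
    mu = np.log(S0) + (r - 0.5 * sigma**2) * tau
    sd = sigma * np.sqrt(tau)
    return np.exp(-(np.log(s) - mu) ** 2 / (2 * sd**2)) / (s * sd * np.sqrt(2 * np.pi))

# Equation (2.24): discount the integral of (S_T - K) f(S_T) over [K, inf).
grid = np.linspace(K, 8 * S0, 200_000)
step = grid[1] - grid[0]
call_quadrature = np.exp(-r * tau) * np.sum((grid - K) * lognormal_density(grid)) * step

# Cross-check: Monte Carlo estimate of the discounted expected payoff.
rng = np.random.default_rng(3)
ST = S0 * np.exp((r - 0.5 * sigma**2) * tau
                 + sigma * np.sqrt(tau) * rng.standard_normal(500_000))
call_mc = np.exp(-r * tau) * np.maximum(ST - K, 0.0).mean()
print(call_quadrature, call_mc)
```

The two estimates should agree to within Monte Carlo error; both are estimates of the same risk-neutral expectation.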

Prior to the work of Black and Scholes (1973) and Merton (1973), hereafter

referred to as BSM, financial economists worked under the assumption that

investor preferences were necessary in the determination of an option’s value.

As f(ST ) is implicitly a distribution of returns, which require an expectation

and standard deviation12, if investor beliefs of the expected return of the asset

and the level of uncertainty of that return are heterogeneous, distributional

assumptions of the asset’s value at expiry would also be heterogeneous, leading

to multiple “fair” values for the option. The no-arbitrage argument introduced

by BSM removed investor preferences and expectations of returns on the un-

derlying from the valuation problem and provided the first closed form solution

for the value of an option.

11 An alternative method for determining the expected payoff is to numerically simulate a stochastic differential equation a large number of times and take the mean; this method is not relevant for this thesis.

12 Further moments would be required if the distribution is believed to be non-Gaussian, but we limit the analysis to the Gaussian case for the moment.


2.3.1 Black-Scholes-Merton Model

While there are numerous approaches in deriving the BSM model, the origi-

nal partial differential equation (PDE) approach from the Black and Scholes

(1973) paper shall be discussed here. This approach is chosen as the pricing of options occurs in the risk-neutral environment, and the PDE derivation most clearly demonstrates their no-arbitrage argument: the choice of appropriate coefficients on the risk terms in their PDE removes uncertainty, leading to a risk-neutral pricing framework.

This derivation of the BSM option pricing model is based on the insight that in-

vestors are able to create a portfolio of shares and options to generate a hedged

position that must earn the risk-free rate of return in equilibrium. From this

equilibrium, Black and Scholes (1973) derive a theoretical valuation formula

under the following assumptions13:

1. The short-term discount factor is known and constant through time

2. The returns process of the underlying follows a continuous random walk,

or Geometric Brownian Motion (GBM) with drift, with a constant vari-

ance rate proportional to the square of the stock price. The assumption

of constant variance implicitly leads to the assumption of a zero volatility

risk premium, as there is no volatility risk present. The returns process

is given by

\[
\frac{dS}{S} = \mu\,dt + \sigma\,dW, \tag{2.25}
\]

where µ is a constant drift term or mean return, scaled by the time step,

dt; under the risk-neutral measure this is the risk-free rate of return r, σ

is the constant rate of volatility per unit of time, and dW is a standard Brownian Motion. This process implies the stock price at maturity has a log-normal distribution.

13 Merton (1973), who is co-credited with the development of the BSM model, provides a more rigorous proof under more relaxed assumptions. Here, however, we are interested in the theoretical exposition rather than the mathematics.

3. The stock pays no dividends or any other distributions

4. The option is European

5. The market is frictionless: there are no transaction costs in buying or

selling either the option or its underlying; it is possible to borrow any

fraction of the security’s price to purchase or hold it at the short-term

interest rate; there are no penalties to short-selling.

Black and Scholes (1973) show that it is possible for investors to construct a

portfolio of a long position in the underlying and a short position in the option

that, if continuously re-balanced, will be hedged from changes in the value of

the underlying. Provided the hedge ratio is correct, as both the call and share

have strictly positive derivatives with respect to the share price, any continuous

movement in the value of the underlying will be balanced by the opposing po-

sitions in the two assets such that the overall value of the portfolio will remain

unchanged.
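This hedging insight can be checked numerically: with the hedge ratio set from the call's sensitivity to the underlying, a small move in $S$ leaves the portfolio value almost unchanged, while the unhedged option moves materially. The sketch below uses the BSM call value of Equation 2.31 as the pricing function, with illustrative parameters.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, r, sigma, tau):
    """BSM value of a European call (Equation 2.31); used here only as a
    convenient pricing function for the hedging illustration."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))

S, K, r, sigma, tau = 100.0, 105.0, 0.05, 0.2, 0.5

# Hedge ratio eta = 1/C_S, with C_S estimated by a central finite difference.
eps = 0.01
C_S = (bsm_call(S + eps, K, r, sigma, tau)
       - bsm_call(S - eps, K, r, sigma, tau)) / (2 * eps)

def portfolio(s):
    """Value of the hedged position: long one share, short 1/C_S calls."""
    return s - bsm_call(s, K, r, sigma, tau) / C_S

dS = 0.5
unhedged_move = bsm_call(S + dS, K, r, sigma, tau) - bsm_call(S, K, r, sigma, tau)
hedged_move = portfolio(S + dS) - portfolio(S)
print(unhedged_move, hedged_move)   # the hedged move is far smaller
```

The residual move in the hedged portfolio is second-order in $dS$, which is why the hedge must be re-balanced continuously as $S$ and $t$ evolve.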

Defining C(S, τ) as the value of a call option when the underlying is valued

at S and there are τ days to maturity, Black and Scholes (1973) demonstrate

that the required hedge ratio for a call is given by $\eta = 1/C_S$, where $\eta$ is the number of options that must be sold short against one long stock and $C_{(\cdot)}$ is the

hedge ratio is maintained continuously then the return on the hedged position

will be independent of the change in S and certain.

As the hedged portfolio with value Π is comprised of a unit long position of the

underlying and short η units of a call option, its value and associated change


in value over a short time interval, ∆t, is given by:

Π = S − ηC = S − C/CS ,

∆Π = ∆S - ∆C/CS .(2.26)

Expanding ∆C using Ito calculus14 yields the stochastic differential equation

(SDE)

\[
\Delta C = C_S\,\Delta S + \tfrac{1}{2} C_{SS}\,\sigma^2 S^2\,\Delta t + C_t\,\Delta t. \tag{2.27}
\]

Substituting this result into Equation 2.26 describes the PDE for the change in

value of the portfolio

\[
\Delta \Pi = -\left(\tfrac{1}{2} C_{SS}\,\sigma^2 S^2 + C_t\right)\Delta t / C_S. \tag{2.28}
\]

Note that through the judicious choice of the hedge ratio the change in the value

of the portfolio is now independent of the change in the value of the underlying.

Since the portfolio is hedged from changes in the value of the underlying, its

return is riskless and must in equilibrium earn a return equal to r scaled by the

length of time, ∆t. Hence

\[
\begin{aligned}
(S - C/C_S)\,r\,\Delta t &= -\left(\tfrac{1}{2} C_{SS}\,\sigma^2 S^2 + C_t\right)\Delta t / C_S \\
\Rightarrow\quad C_t &= rC - rSC_S - \tfrac{1}{2}\sigma^2 S^2 C_{SS}, \tag{2.29}
\end{aligned}
\]

14 More details on stochastic calculus are available from, for example, Shreve (2004).


which is the Black-Scholes PDE. From the rational theory of option pricing two

boundary conditions can be imposed on this PDE:

\[
C(S, 0) =
\begin{cases}
S - K, & S \geq K, \\
0, & S < K.
\end{cases} \tag{2.30}
\]

Black and Scholes (1973) note that there is a unique solution of the PDE described in Equation 2.29 subject to the boundary conditions in Equation 2.30, and

with some substitution recognise that the transformed differential equation and

its associated boundary conditions constitute the heat equation from physics, which has

a known solution. Substituting this known solution into their transformed dif-

ferential equation and simplifying yields their now famous, Nobel-prize winning,

formula:

\[
\begin{aligned}
C(S, \tau) &= S\,N(d_1) - K e^{-r\tau} N(d_2), \\
d_1 &= \frac{\ln\!\left(S/K\right) + \left(r + \tfrac{1}{2}\sigma^2\right)\tau}{\sigma\sqrt{\tau}}, \\
d_2 &= \frac{\ln\!\left(S/K\right) + \left(r - \tfrac{1}{2}\sigma^2\right)\tau}{\sigma\sqrt{\tau}} = d_1 - \sigma\sqrt{\tau}, \\
N(x) &= \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} \exp\!\left(-\frac{u^2}{2}\right) du. \tag{2.31}
\end{aligned}
\]

It can be seen from the formulation above that the value of the call depends

on five parameters: (i) the value of the underlying, S; (ii) the exercise price, K;

(iii) time-to-maturity, τ ; (iv) the risk-free rate, r; and (v) the volatility of the

underlying, σ. Two of these, K and τ , are specified by the terms of the option;

the value of the underlying is readily observed in the market; so too is the


risk-free rate, which Black and Scholes (1973) assume will be constant over the

life of the option15. Although the fifth parameter, σ, is the only unobservable

in the model, Merton (1973) posits that it is easily estimated from histori-

cal standard deviations. Hence, their model allows an option to be priced with

variables agreed to by all investors, regardless of their beliefs or risk preferences.
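Equation 2.31 translates directly into code; a minimal sketch using only the standard library, with illustrative inputs:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF, N(x), via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, r, sigma, tau):
    """Black-Scholes-Merton value of a European call, Equation 2.31."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d2)

# The price depends only on the five inputs S, K, r, sigma, tau.
print(bsm_call(100.0, 105.0, 0.05, 0.2, 0.5))
```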

What is notable in the above specification is what it does not depend on. BSM

provided the first closed-form option pricing model which does not require ex-

pectations of the returns of the underlying nor depend on the risk-aversion level of the individual investor; it is applicable across all stocks and all investors. The

drift term of the SDE given in Equation 2.25, µ, for the return of the underly-

ing does not appear in the BSM model; it has been replaced by the observable

risk-free rate. Thus, different investors may possess widely varying expecta-

tions of the rate of return for the underlying yet agree upon a fair price of the

option. Further, as the risk of the portfolio may be hedged by taking opposing

positions in two assets that are positively related to the value of the underly-

ing, the level of risk-aversion of individual investors is irrelevant for pricing an

option as the risk has been removed. In sum, despite the value of the option

being the discounted value of the expectation (in the real-world this may vary

across investors) of the future payoff (which depends on the return of the un-

derlying) of the option, the hedging approach of BSM provides, subject to their

assumptions, a universal option valuation formula. For this achievement, My-

ron Scholes and Robert C. Merton were awarded the Nobel Prize for Economics

in 1997.

2.3.2 Black-Scholes-Merton Implied Volatility

As previously mentioned, the BSM option valuation formula relies on five vari-

ables; four of these are easily observed, while the volatility of the underlying's return over the life of the option is not.

15 Merton (1973) shows that this assumption can be relaxed.

While Merton (1973) states that one may

estimate this parameter from historical standard deviations, determining the

σ parameter in the BSM model is worth a more thorough examination than a

simple historical mean.

Given that options are actively traded on open exchanges, the market is implic-

itly pricing an equilibrium level of volatility through the trading activity. It is

possible to solve for the level of volatility that equates the BSM price, $C_{BSM}$, to that of the market, $C_M$; this level of volatility is known as the implied volatility

(IV). If the BSM model holds in practice, then this IV will provide an unbiased

estimate of the equilibrium volatility being priced by the market.

The IV of an option is calculated through a one-dimensional numerical op-

timisation over the σ parameter in the BSM model such that

\[
C_{M}(S, \tau, K, r) = C_{BSM}(S, \tau, K, r, IV). \tag{2.32}
\]

Because the option's vega, the first derivative of the BSM price with respect to volatility, is strictly positive, a solution, whenever it exists, is unique.
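Because the price is monotone in σ, the one-dimensional inversion in Equation 2.32 can be done by simple bisection. The sketch below round-trips an illustrative "market" price generated at σ = 0.2.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bsm_call(S, K, r, sigma, tau):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    return S * norm_cdf(d1) - K * exp(-r * tau) * norm_cdf(d1 - sigma * sqrt(tau))

def implied_vol(C_market, S, K, r, tau, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert Equation 2.32 by bisection: because vega is strictly positive,
    bsm_call is monotone increasing in sigma, so any root is unique."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, r, mid, tau) > C_market:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: generate a 'market' price at sigma = 0.2, then recover it.
price = bsm_call(100.0, 105.0, 0.05, 0.2, 0.5)
print(implied_vol(price, 100.0, 105.0, 0.05, 0.5))
```

In practice a derivative-based method such as Newton's method, using the vega, converges faster; bisection is used here purely for its robustness and transparency.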

The calculated IV, if the assumptions of the BSM model hold true in the market,

can be thought of as the equilibrium risk-neutral market expectation of the con-

stant volatility of the return on the underlying over the life of the option. This

expected volatility provides an alternative volatility forecasting methodology

to the time-series models of volatility discussed shortly in Section 2.6. Rather

than relying solely on historical information in generating volatility forecasts,

as is the case with time-series methods, it is possible to predict volatility based

on the information implied by options markets.


As options are priced with reference to a future dated payoff, market par-

ticipants should include in their trading activities their expectations of all in-

formation relevant to this delayed payoff, including information pertinent to

future volatility. Further, in an informationally efficient options market any

historical information relevant to derivatives pricing should also be included in

the prices of options. As volatility is known to be highly persistent, the prior

history of volatility would be an important input to options traders' beliefs of

future volatility. Hence, theoretical arguments exist for preferring option im-

plied forecasts over time-series forecasts due to the historical information being

a subset of the information utilised in determining the IV. This theory has been

supported empirically and shall be discussed in detail in Section 2.5; first, some

of the deficiencies of, and extensions to, the BSM model are discussed.

2.3.3 Challenges to the Black-Scholes-Merton Model

Following the theoretical development of the BSM model, several challenges

emerged from the empirical literature16. Prominent among these was the volatil-

ity smile or smirk; a plot of the IV inverted from market prices across a range

of strike prices yielded a distinctly non-flat, non-linear curve. Given that the

volatility parameter of the BSM model is the constant volatility of the return

on the underlying over the life of the option, all options on the same underlying

of the same maturity should return the same IV if the model holds true in a

practical setting. An empirical example of the volatility smirk is plotted in

Figure 2.5 below.

For European options on equity indexes, such as the S&P 500 Index, plots of IV

such as Figure 2.5 produce a steady decrease across strike prices as the respec-

tive options move from out-of-the-money (for puts) through to in-the-money,

16 See, for example, Rubinstein (1985, 1994) and Dumas et al. (1998).


Figure 2.5: Volatility smirk of S&P 500 Index European call and put option implied volatilities with time-to-maturity 35 trading days on 4/6/2007. The IV is the annualised percentage standard deviation that, when substituted into the Black-Scholes-Merton option pricing model, matches the observed market value for that strike. The vertical line denotes the closing value of the S&P 500 Index on that day. [Plot: IV against strike K from 1300 to 1650, declining from roughly 0.26 to 0.08.]

resembling a smirk17. Due to this anomaly, academic interest turned to the

driver of the volatility smile or smirk, which appeared to invalidate the founda-

tions of the Black-Scholes-Merton option pricing model; a brief review of this

literature is now provided.

Several of the assumptions that underpin the BSM model, though unrealis-

tic, were shown to have no effect on their theoretical results. Merton (1973)

extended the proof to include stochastic interest rates and the payment of div-

idends, and shows that as long as the underlying has a continuous sample path

with constant volatility, the no-arbitrage argument is valid; Thorp (1973) shows

17 Initially, the observed anomaly was more symmetrical than depicted in Figure 2.5 and more of a smile than a smirk. Following the market correction of October 1987, however, the smirk has been more commonly observed.


that restrictions on short-sales have no impact on the derivation of the model;

and Ingersoll (1975) demonstrates that differential taxes are not a factor. Fur-

ther, while trades do not occur continuously but in discrete time “ticks”, Merton

and Samuelson (1974) demonstrated that the continuous-hedge BSM approach

is an acceptable asymptotic approximation to a discrete-time problem; again

subject to the condition that the underlying follows a continuous sample path

with constant volatility.

Given that these particular assumptions were shown to have no impact on

the main result of the Black-Scholes (1973) paper, the assumptions that the

returns process followed a continuous random walk with drift, and that this

random walk possessed a constant variance (implying log-normally distributed

returns) were the remaining potential explanations to the observed empirical

anomaly. A considerable body of work had been published prior to the devel-

opment of the BSM model demonstrating that returns were not normally, nor

log-normally, distributed. Empirical studies such as Mandelbrot (1963), Fama

(1965) and Praetz (1972) provided evidence that returns distributions exhibited

excess kurtosis and negative skewness; these empirical studies are related to the

volatility smile or smirk.

The excess kurtosis implies that observed distributions of asset returns pos-

sess heavier tails than are theoretically allowed under the assumption of log-

normality. A higher probability of a tail event leads to a higher value of the

option, i.e. an increased probability of a large positive return will lead to a call

being priced higher18. As the only free parameter in calculating the BSM IV

18 Even though a symmetric increase in the probability of a tail event leaves the expectation unchanged at the risk-free rate and means a large loss is now also more likely, the value of the call increases. As can be seen in Equation 2.24, the value of the call is found through an integration over the positive component of the payoff only, multiplied by its density. As more weight is now in the tails and the density must still integrate to unity, less weight is given to values closer to the mean return. That is, more weight is given to the larger payoffs in the tail and less weight to the smaller payoffs; negative payoffs are still given zero weight. The overall result is an increase in the value of the call; the argument may be reversed for puts.


is the volatility term, and the vega of the option is strictly positive,

the only way the BSM model can ensure its assumed log-normal density can

match the empirically documented leptokurtic distributions is through a higher

level of volatility. By construction, deep in- or out-of-the-money options are

more strongly related to the probability of tail events than those closer to being

at-the-money. Hence, the BSM IVs of these options will be biased upwards by

the fact that the model is not flexible enough; this inflexibility is related to the upward-sloping IV curve as options move away from being ATM.

A similar argument applies to the negative skewness typically found in em-

pirical studies. As the density in the left-hand tail of the distribution is thicker than at a similar magnitude in the right-hand tail, the BSM IV of an option priced with regard to the lower tail will be higher than that of an option at a similar magnitude on the opposite tail, partially explaining the asymmetric

nature of the volatility smirk. As has been noted, the smirk shape is more

commonly observed after the market correction of October 1987. This result

is evidence of an increase in the risk-neutral skewness since the correction (see

Singleton (2006), Bates (2000), and Jackwerth (2000)).

Another assumption of the BSM model that has been empirically falsified is

a zero volatility risk premium19. The volatility risk premium is the expected

excess return demanded by an equilibrium investor for holding an asset whose

value will change if volatility changes; this risk premium can be time-varying

(Taylor, 2005). In markets with non-zero volatility risk premium, one can ex-

pect option prices to be higher as these assets are susceptible to volatility risk20.

Again due to the strictly positive vega of the option price, those pricing models

making the erroneous assumption of zero volatility risk premium will require a

higher level of IV to match observed prices as they lack the flexibility to have a

19 See, for example, Bollerslev, Tauchen, and Zhou (2009).
20 See Bates (1996) for references.


lower estimate of volatility and let the remainder of the price depend on the price

of volatility risk; such IVs will, therefore, be upwardly biased estimates of future

volatility21.

Another empirical fact that counters the BSM assumptions is that the sam-

ple paths of stocks are observed to not move continuously but to occasionally

“jump” (see, for example, Andersen, Bollerslev, Diebold, and Vega (2007) and Johannes and Dubinsky (2006) for jumps around company-specific events such as earnings reports; Tauchen and Zhou (2006) find evidence that jumps occur every 2 days in the Brazilian real, every 4 days for S&P 500 Index futures, every 5 days for the Treasury-bond index, and every 10 days for Microsoft). The assumption of

GBM implies that the asset price evolves through tiny continuous movements

across infinitesimally small time ticks. However, market microstructure effects

clearly prevent such a scenario; transactions take place with a minimum price

movement such as one 32nd of a dollar. Further, as would be expected in an

informationally efficient market, the arrival of certain news events may cause

prices to jump in adjustment to the new information; the mean of these jumps

is typically negative as markets tend to react to bad news in greater magnitude

than good news. These jumps are important for two reasons. Firstly, they violate the GBM assumption of BSM, and their presence implies that the hedging portfolio argument of Black and Scholes (1973) is no longer valid, as the portfolio is subject to “jump risk”, which cannot be hedged.

Secondly, the typically negative mean for the jump size implies a negatively

skewed distribution, again demonstrating that the assumption of log-normally

distributed returns is invalid.

Further doubt was cast on the real-world validity of the BSM model assump-

tions through a growing literature demonstrating the non-constant nature of

21 Poon and Granger (2003) discuss the exact nature and extent of this problem in significant detail in Section 6.3 of their article.


volatility, which exhibited mean-reversion and a tendency to cluster; see Section 2.2 and the references therein. The publication of time-series models of

volatility in the early- to mid-1980s which allowed for heteroscedasticity, excess

kurtosis, and skewness to be built into the distribution of returns led to numer-

ous attempts in the years immediately following to translate these advances,

the stochastic volatility model in particular, into the option pricing literature.

These initial attempts22 primarily relied on the numerical solution of two-

dimensional partial differential equations, an intensive computational exercise.

Two closed-form solutions were proposed by Jarrow and Eisenberg (1991), and

Stein and Stein (1991) which extended the BSM model to allow for non-constant

variance with the restriction that the volatility of the underlying was uncorre-

lated with its level. This unrealistic assumption was relaxed by Heston (1993)

with his introduction of the characteristic function inversion approach. The

Heston (1993) Stochastic Volatility (SV) model of option valuation also al-

lows for a non-zero volatility risk-premium to be priced, and the volatilities

found through backward induction from the Heston model display a far less

pronounced volatility smile. Indeed, the work of Poteshman (2000) finds that

half of the bias in S&P 500 futures option implied volatility was removed

when utilising the more sophisticated Heston (1993) model.

Heston’s introduction of the characteristic function inversion methodology to

option pricing led to a more flexible method of valuing derivatives, allowing more complicated dynamics to be built into the stochastic process of the underlying, while still retaining analytical and numerical tractability. If one is

able to obtain the characteristic function of a random variable, it is possible to

recapture the density function of that variable through Fourier inversion; each

characteristic function defines a unique density function. The characteristic

22 See Johnson and Shanno (1987), Wiggins (1987), Scott (1987), and Hull and White (1987, 1988).


function itself is determined by the specification of the dynamics of the under-

lying asset. The approach adopted by Heston (1993) can then be modified to

suit the desired underlying dynamics, resulting in different risk-neutral density

functions.

Given the large body of academic research into various time-series models of

volatility, myriad dynamic structures exist that may be translated into op-

tion pricing models. Many models from both the ARCH and SV classes, discussed in Section 2.6, have been used to incorporate the empirical facts of

heteroscedasticity, kurtosis, skewness, and jumps into option valuation models;

a non-exhaustive list includes the SV model of Heston (1993); SV-Jump (SVJ)

model of Bates (1996); SV-Double Jump (SVJJ) model of Duffie, Pan & Sin-

gleton (2000); the GARCH model of Heston & Nandi (2000); and the GARCH

with Conditional Skewness model of Christofferson, Heston & Jacobs (2006).

In a similar procedure to IV, each of these more advanced models has allowed

for a more general and complex distribution to be implied from options market

data; from these distributions volatility specific parameters may be inferred,

which allow for forecasts of the volatility of the underlying. Therefore, one may

substitute the IV found from the restrictive BSM model with a forecast from

the option pricing model that possesses the return dynamics most suited to

their view of its stochastic process. However, it is unlikely that any of these

models fully capture the underlying dynamics.

2.4 The Volatility Index

The Volatility Index (VIX) is a model-free estimate of future volatility derived

from a basket of options: out-of-the-money puts and calls with near-term expiry

are used to interpolate an at-the-money IV for a 22-trading-day horizon. Its

model-free nature is a continuation of the relaxation of assumptions of option


pricing models that have preceded it, as has just been detailed. A brief moti-

vation for and history of the VIX is now given.

As was discussed previously, market prices for options contracts may be in-

verted to calculate the Black-Scholes-Merton implied volatility; this is a simple

one dimensional optimisation. The more complicated models may be inverted

to provide information regarding the long-term mean of volatility, the speed

of mean reversion, the volatility of volatility, and the probability of a jump in

volatility, all of which are patently important in volatility forecasting. Reli-

able estimates of this parameter vector would ostensibly ensure more reliable

forecasts of future volatility given the more realistic nature of the underly-

ing stochastic process. However, the process of backing out these values is

more complicated and involves a multi-dimensional optimisation with at least

as many different strike prices as parameters required for the problem to be

tractable.
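To make the simpler case concrete, the one-dimensional BSM inversion can be sketched in Python as follows. This is a minimal illustration: the spot, strike, rate, and maturity values are hypothetical, and bisection is just one of several root-finding schemes that could be used.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_call(S, K, r, sigma, T):
    """Black-Scholes-Merton price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, r, T, lo=1e-6, hi=5.0, tol=1e-8):
    """Invert the BSM formula by bisection; valid because vega > 0 makes
    the call price strictly increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bsm_call(S, K, r, mid, T) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round-trip check with hypothetical inputs: price an option at sigma = 0.25,
# then recover that volatility from the price alone.
price = bsm_call(100.0, 105.0, 0.03, 0.25, 22 / 252)
print(round(implied_vol(price, 100.0, 105.0, 0.03, 22 / 252), 4))  # → 0.25
```

Because vega is strictly positive, the call price is monotone in volatility, so any bracketing root-finder converges to the unique BSM IV.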

Further, while these advancements in option pricing are impressive in their

technical details, it is overwhelmingly likely that the true dynamics of the asset

price will never be known. Indeed, it is also probable that the true dynamics

may not be analytically or numerically tractable even if they were known, given

the inherent complexities of asset price processes. Therefore, it is probable that

any option pricing model, regardless of its level of sophistication, will be mis-

specified. As a result, any implied parameters filtered from market prices will

be biased estimators of the true level of IV being factored into option prices

by market participants. To circumvent this model misspecification problem,

a model-free estimate of market IV has become widely used in the academic

literature, and has also recently become a traded asset in its own right; this

measure is known as the VIX. A brief review of the history of the development

of the VIX is now given.


After the volatility smirk was discovered by Latane and Rendleman

(1978), a variety of authors proposed various weighting schemes of BSM IVs

to generate a weighted-average IV that may be used for forecasting23. Flem-

ing, Ostdiek, and Whaley (1995) developed the first version of the VIX from

BSM IVs to find an ATM IV with a fixed forecast horizon of 22-trading-days.

Methodologies for inverting market data to find a level of IV began to move away

from BSM IVs at approximately the same time as the original VIX specification

was being discussed. Similar to the more complicated option pricing models,

these new methods for finding IV were reliant on less restrictive assumptions of

the underlying asset’s return dynamics. The first such method discussed here

is that of local volatility, pioneered by Dupire (1994), and Derman and Kani

(1994). They present findings based on the SDE

\[ \frac{dS}{S} = r\,dt + \sigma(S,t)\,dW. \tag{2.33} \]

Note that this formulation is identical to the original SDE assumed by Black-

Scholes-Merton, but that the volatility term is now a deterministic function of

the current asset price and time, rather than a constant. Dupire (1994), and

Derman and Kani (1994) note the result of Breeden and Litzenberger (1978)

that the second derivative of the value of an option with respect to its strike price

yields the risk-neutral density, φ(K,T ;S0). This density evolves according to

the Fokker-Planck equation, which may be integrated to give the Dupire equation

for the evolution of the call price through time

\[ C_\tau = \tfrac{1}{2}\sigma^2 K^2 C_{KK}. \tag{2.34} \]

Re-arranging the Dupire equation results in the local volatility being given by:

23 More detail on this strand of literature follows in the next Section.


\[ \sigma^2(K,\tau,S_0) = \frac{C_\tau}{\tfrac{1}{2}K^2 C_{KK}}. \tag{2.35} \]

Both of the partial derivatives of the call price are able to be estimated nu-

merically using a finite-difference scheme applied to observed market prices of

European options.
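The finite-difference estimation of the local volatility in 2.35 can be sketched as follows. This is an illustration under the simplifying assumption of a zero interest rate (so that the Dupire equation holds in the form given above), and the call prices are generated from a constant-volatility BSM surface rather than observed market data, so the recovered local volatility should come out flat at the input level.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call(S, K, sigma, T):
    """BSM call with zero interest rate (so equation 2.34 applies as stated)."""
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    return S * norm_cdf(d1) - K * norm_cdf(d1 - sigma * math.sqrt(T))

def local_vol(S0, K, T, sigma, dK=0.5, dT=1e-3):
    """sigma^2 = C_tau / (0.5 * K^2 * C_KK), both partial derivatives of the
    call price estimated by central finite differences."""
    c_tau = (call(S0, K, sigma, T + dT) - call(S0, K, sigma, T - dT)) / (2 * dT)
    c_kk = (call(S0, K + dK, sigma, T) - 2 * call(S0, K, sigma, T)
            + call(S0, K - dK, sigma, T)) / dK**2
    return math.sqrt(c_tau / (0.5 * K**2 * c_kk))

# Prices generated from a constant 20% volatility surface should yield a
# roughly flat local volatility of 0.20 at every strike.
for K in (90.0, 100.0, 110.0):
    print(K, round(local_vol(100.0, K, 0.5, 0.20), 3))
```

Applied to real market prices, the same differencing would be performed on an interpolated call-price surface across observed strikes and maturities.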

In an important paper, Britten-Jones and Neuberger (2000) allow for stochastic

volatility in their more general SDE

\[ \frac{dS}{S} = r\,dt + \sqrt{V(\cdot,t)}\,dW, \tag{2.36} \]

where V (·, t) is an unknown stochastic process. Britten-Jones and Neuberger

(2000) show that the risk-neutral expectation of the integrated variance, dis-

cussed in Section 2.2.1, is given by

\[ E^{Q}\left[\int_t^T V(\cdot,s)\,ds\right] = 2\left(\int_0^{S_t}\frac{P(K,\tau)}{K^2}\,dK + \int_{S_t}^{\infty}\frac{C(K,\tau)}{K^2}\,dK\right). \tag{2.37} \]

It is important to note that, in calculating the risk-neutral expectation of integrated variance in the above formulation, the stochastic process of volatility is never defined and no option pricing model is required; the only inputs are the prices of calls and puts and their respective strike prices, which are all market

observables. That is, it is a model-free method of inverting options market

prices to find the level of volatility implicitly being priced by market partici-

pants.
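A minimal sketch of this model-free calculation follows, again assuming a zero interest rate and using synthetic constant-volatility BSM prices in place of market quotes; under these assumptions the discretised integral of out-of-the-money option prices should recover an integrated variance of approximately σ²(T − t).

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bsm_price(S, K, sigma, T, kind):
    """BSM call/put with zero interest rate, used only to generate test prices."""
    d1 = (math.log(S / K) + 0.5 * sigma**2 * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    c = S * norm_cdf(d1) - K * norm_cdf(d2)
    return c if kind == "call" else c - S + K  # put-call parity with r = 0

def model_free_variance(S, sigma, T, k_lo=1.0, k_hi=400.0, dK=0.1):
    """Riemann-sum discretisation of
    2 * [ int_0^S P(K)/K^2 dK + int_S^inf C(K)/K^2 dK ],
    using out-of-the-money puts below the spot and calls above it."""
    total, K = 0.0, k_lo
    while K < k_hi:
        kind = "put" if K < S else "call"
        total += bsm_price(S, K, sigma, T, kind) / K**2 * dK
        K += dK
    return 2.0 * total

# Under constant volatility the expected integrated variance is sigma^2 * T,
# so the annualised volatility recovered from option prices alone should be
# roughly 0.20 -- no pricing model was inverted to obtain it.
T = 22 / 252
var = model_free_variance(100.0, 0.20, T)
print(round(math.sqrt(var / T), 3))
```

In practice only a finite strip of traded strikes is available, so truncation and discretisation of the two integrals are themselves sources of (small) error, a point examined by Jiang and Tian (2005).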

This approach is still not completely general, however, as asset prices are ob-

served to occasionally jump, for example in response to the release of signifi-

cant news events; this fact was noted previously in Section 2.3.3. The results

of Britten-Jones and Neuberger (2000) rely on an SDE that is purely diffusive


in nature, their model does not allow for these empirically observed jumps.

Fortunately, their result still holds approximately when jumps are allowed as

shown by Jiang and Tian (2005); in fact, they generalise the Britten-Jones and

Neuberger (2000) result to all martingale processes. For consistency with the

preceding discussion, the SDE on which their results are premised is now given

\[ \frac{dS}{S} = \left(r - \lambda(t)E^{Q}[k(S)]\right)dt + \sqrt{V(\cdot,t)}\,dW + k(S)\,dN(t), \tag{2.38} \]

where V(·, t) is again an unknown stochastic process, N(t) is a pure jump process with time-varying intensity λ(t), i.e. E^Q[dN(t)] = λ(t)dt, and k(S) has an unknown distribution F(·) with expectation E^Q[k(S)]. This SDE is known as

a compensated process as its specification ensures that the deterministic drift

terms maintain the expected return of the underlying at the risk-free rate of

return, as is required under the risk-neutral measure.
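The role of the compensator can be illustrated with a short Monte Carlo sketch; the jump-size distribution, intensity, and all parameter values below are hypothetical choices, since F(·) is left unspecified in the text.

```python
import math
import random

def simulate_terminal(S0, r, sigma, lam, mu_j, sd_j, T, n_steps, rng):
    """Euler scheme for the compensated jump-diffusion of equation 2.38.
    Jump sizes k are drawn N(mu_j, sd_j) -- a hypothetical choice."""
    dt = T / n_steps
    S = S0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))
        jump = rng.gauss(mu_j, sd_j) if rng.random() < lam * dt else 0.0
        # the drift is compensated by lam * E[k], so E[dS/S] = r dt each step
        S += S * ((r - lam * mu_j) * dt + sigma * dW + jump)
    return S

rng = random.Random(42)
S0, r, T = 100.0, 0.05, 1.0
paths = [simulate_terminal(S0, r, 0.2, 0.5, -0.02, 0.1, T, 50, rng)
         for _ in range(20000)]
mean_ST = sum(paths) / len(paths)
# Under the risk-neutral measure the mean should be close to S0*exp(rT) ≈ 105.13
print(round(mean_ST, 1))
```

Because the drift subtracts λE[k] each step, the simulated mean grows at the risk-free rate despite the on-average-negative jumps, which is exactly what "compensated" means here.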

The above discussion demonstrates that an alternative method for finding the

level of volatility being priced by the options market exists. One option is a

potentially multi-dimensional numerical optimisation of pricing errors to esti-

mate the risk-neutral parameters of a particular choice of SDE; the estimated

parameters may then be substituted into the particular SDE to be integrated

analytically or numerically simulated multiple times to yield an expectation

of the future volatility. The alternative option is to integrate scaled option

prices into an at-the-money, 22-trading-day, implied volatility where no as-

sumptions regarding the correct option valuation method or dynamics of the

asset’s volatility are required. The forecast performance of both methodologies

is now reviewed.


2.5 Forecast Performance of Implied Volatility

This Section briefly recounts some of the prior literature that has examined

the out-of-sample forecasting ability of various measures of implied volatility.

It is important to note that this is not a complete review of all prior volatility

forecasting literature that involves IV, just the articles that this author believes

are most relevant or most influential; a more thorough review is provided by

Poon and Granger (2003).

The first published article examining IV forecast performance was the seminal work of Latane and Rendleman (1978), who were also the first

to state that option market participants’ views of future asset price uncertainty

may be reflected in option prices. Further, their calculation of BSM IVs led

them to discover the volatility smile. As was stated in the previous Section

on the development of the VIX, Latane and Rendleman (1978) weighted the

BSM IV of individual US stocks by the option’s vega to find a mean level of IV.

In a cross-sectional study, this weighted BSM IV was found to be more highly

correlated with future volatility, measured by the standard deviation of daily

returns, than historical sample variances.

More advanced tests of forecasting ability followed through time-series based

tests, including conducting Mincer-Zarnowitz (1969) style regressions which

take the following form

\[ V(t,T) = \alpha_i + \beta_i h_i(t,T) + \varepsilon_i, \tag{2.39} \]

where V (t, T ) is a measure of the actual volatility for the time period t through

to T , and hi(t, T ) is the future volatility forecast of model i over the same time

period. If model i produces unbiased forecasts, then E[α_i] = 0 and E[β_i] = 1.

For the purposes of measuring relative forecast performance, the models may


be ranked according to the R2 from the Mincer-Zarnowitz regression. Given

the discussion of the volatility risk premium earlier, it is unlikely that

BSM IVs would be unbiased, though they may still produce the best forecasts.
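A simulated illustration of the regression in 2.39 can be sketched in Python; the data-generating process below is hypothetical, constructed so that the forecast is unbiased and the OLS estimates should therefore land close to α = 0 and β = 1.

```python
import random

def mincer_zarnowitz(actual, forecast):
    """OLS of actual volatility on a constant and one forecast (equation 2.39),
    returning the estimates (alpha, beta)."""
    n = len(actual)
    mx = sum(forecast) / n
    my = sum(actual) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(forecast, actual))
    sxx = sum((x - mx) ** 2 for x in forecast)
    beta = sxy / sxx
    return my - beta * mx, beta

# Hypothetical data: the forecast equals the conditional mean of volatility,
# and the realised value adds mean-zero measurement noise.
rng = random.Random(0)
h = [0.1 + 0.05 * rng.random() for _ in range(5000)]
v = [hi + rng.gauss(0.0, 0.01) for hi in h]
alpha, beta = mincer_zarnowitz(v, h)
print(round(alpha, 2), round(beta, 2))
```

Ranking models by the R2 of this regression, as described above, simply replaces `forecast` with each competing model's forecast series in turn.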

Employing a slightly different averaging scheme to Latane and Rendleman

(1978), Chiras and Manaster (1978) weight BSM IVs by their price elastic-

ity to the option vega and evaluate the forecasting performance through the

regression style just mentioned. The reported R2 value when the weighted

BSM IV was the regressor was superior to its competitors, and the authors

contend that this result is empirical support for the superior predictive ability

(SPA) of BSM IVs relative to simpler historical standard deviations. In a

similar vein of research, Beckers (1981) evaluates the forecast performance of

several weighting schemes of BSM IVs. His research found that the ATM BSM

IV was the optimal forecasting tool, but that alternative weighting schemes

also yielded SPA to historical standard deviations; this result was later verified

empirically by Gemmill (1986), and theoretically explained by Hull and White

(1988).

Similar to the Mincer-Zarnowitz regression defined earlier, encompassing re-

gressions were employed by more recent empirical studies; these regressions

typically take the following form

\[ V(t,T) = \alpha_{ij} + \beta_i h_i(t,T) + \beta_j h_j(t,T) + \varepsilon_{ij}, \qquad i \neq j. \tag{2.40} \]

By placing pairwise competing forecasts within the same regression, it may be

tested whether the information content of one model subsumes the information

in a competitor. If model i provides a minimum variance, unbiased forecast of

the target level of volatility, then E[α_ij] = 0, E[β_i] = 1, and E[β_j] = 0.
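The pairwise test in 2.40 can be sketched in the same simulated style; in this hypothetical example the second forecast is constructed to contain no incremental information, so its estimated co-efficient should be close to zero.

```python
import random

def encompassing(actual, h_i, h_j):
    """OLS of actual volatility on a constant and two competing forecasts
    (equation 2.40); returns (alpha, beta_i, beta_j) by solving the 2x2
    normal equations in demeaned form."""
    n = len(actual)
    mx, mz, my = sum(h_i) / n, sum(h_j) / n, sum(actual) / n
    x = [a - mx for a in h_i]
    z = [a - mz for a in h_j]
    y = [a - my for a in actual]
    sxx = sum(a * a for a in x)
    szz = sum(a * a for a in z)
    sxz = sum(a * b for a, b in zip(x, z))
    sxy = sum(a * b for a, b in zip(x, y))
    szy = sum(a * b for a, b in zip(z, y))
    det = sxx * szz - sxz * sxz
    bi = (szz * sxy - sxz * szy) / det
    bj = (sxx * szy - sxz * sxy) / det
    return my - bi * mx - bj * mz, bi, bj

# Hypothetical data: forecast i tracks true volatility, forecast j is
# independent noise, so the information in j should be subsumed.
rng = random.Random(1)
h_i = [0.1 + 0.05 * rng.random() for _ in range(5000)]
h_j = [0.1 + 0.05 * rng.random() for _ in range(5000)]
v = [a + rng.gauss(0.0, 0.01) for a in h_i]
alpha, bi, bj = encompassing(v, h_i, h_j)
print(round(bi, 2), round(bj, 2))
```

With an IV series as h_i and a time-series forecast as h_j, an insignificant β_j is the informational-efficiency outcome described in the following paragraphs.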


Given the argument outlined earlier that options market participants should

include historical information in generating their forecasts, an encompassing

regression with both IV and historical measures as regressors should have a

statistically insignificant co-efficient for the historical measure if the options

market is informationally efficient. If both βi and βj are statistically distin-

guishable from zero, then the regression results would suggest that a combina-

tion of the forecasting models yields SPA to either of the individual forecasts.

As the forecast from a time-series model is readily observable to options mar-

ket participants, that information should already be incorporated into market

prices; this information would then be reflected in the IV backed out of these

market prices. Hence, historical information should not contain explanatory power over and above IV. If historical information does contain

explanatory power over and above IV, then evidence exists that the options

market is not informationally efficient. Alternatively, given the volatility risk

premium, it may be that IV provides the best forecast of future volatility, but

the co-efficient acting on it may not be unity while the co-efficient on the his-

torical measure is not significant.

The first encompassing regression analysis performed on IV forecasting abil-

ity comes from Day and Lewis (1993). The weighting scheme they chose was a

single level of BSM IV that minimised the volume-weighted BSM pricing errors

over the spectrum of available strikes on a given day; the rationale for volume-

weighting is that more actively traded options may be regarded as being more

informationally efficient than deep out- or in-the-money options that may be

illiquid. They examined the relative forecasting ability of this weighted BSM IV

to time-series models of volatility such as the GARCH and EGARCH models;

their results were that their weighted BSM IV did not subsume the information

content of historical information as represented by those GARCH models. This


result was countered by the work of Lamoureux and Lastrapes (1993), who used

ATM BSM IVs rather than a specific weighting scheme. Curiously, while Lam-

oureux and Lastrapes (1993) found that ATM BSM IV outperformed advanced

time-series models of volatility, it did not subsume the information in simple historical standard deviations.

Outside of the equity market, Jorion (1995) applied the encompassing regression

framework to currency markets and found that the ATM BSM IV generated

the highest R2 in forecasting future volatility and subsumed the information

in both time-series models of volatility and historical standard deviations. An-

other encompassing regression study is that of Canina and Figlewski (1993) who

offered a set of results that suggested that BSM IV was of little use in volatility

forecasting. The R2 of ATM BSM IV in forecasting future volatility was found

to be very low, and the encompassing regression results again found that ATM

BSM IV did not subsume the information in historical measures of standard

deviation. These results are refuted by the study of Christensen and Prabhala

(1998) who conducted an updated analysis of the same market as Canina and

Figlewski (1993) but with a larger dataset, using non-overlapping time periods,

and examining volatility at the lower, monthly frequency. Chris-

tensen and Prabhala (1998) found that IV does forecast future volatility both

in isolation and when the history of volatility is included in an encompassing

regression.

It is perhaps confounding that, on one hand, BSM IVs have been shown to outperform advanced time-series models of volatility, which are known to outperform historical standard deviations, but that

BSM IVs could not outperform historical standard deviations; the transitive

property appears not to apply to empirical econometrics! However, given that

Christensen and Prabhala (1998) employ various robustness checks of their re-


sults, and directly address their difference in findings to prior studies, it is

believed that their conclusion of IV subsuming the information contained in

historical information should be given more weight.

A complicating factor in dissecting the findings of prior studies is that the

weighting scheme chosen for averaging the BSM IVs is not consistent. It is

plausible that some of the variety in conclusions may be attributable to different authors choosing different weighting schemes. The development of the first version of the VIX went some way to removing the uncertainty over which weighting scheme to utilise in empirical tests.

As discussed previously, Fleming, Ostdiek, and Whaley (1995) detailed a weight-

ing scheme of BSM IVs from common maturity dates to produce a vector of

pseudo ATM BSM IVs; these are then interpolated to find a 22-trading-day

ahead forecast of volatility. Further, they outlined a new testing procedure of

forecast ability using Generalised Method of Moments (GMM). Briefly, if IVs

subsume the information contained in competing models based on historical in-

formation alone, then the errors in volatility forecasting should be independent

of the volatility forecasts of time-series models; this yields the moment condition

\[ E\left[\left(V(t,T) - \alpha_{IV} - \beta_{IV} h_{IV}\right) h_{MBF}\right] = 0, \tag{2.41} \]

where MBF denotes a Model Based Forecast from a time-series model of volatil-

ity. Fleming, Ostdiek, and Whaley (1995) found that the early version of the

VIX was not informationally efficient under this testing framework. Their results were updated by Fleming (1998), who found that the old version of the VIX generated higher forecast performance relative to a GARCH(1,1) model and that the forecast errors were in fact independent of the volatility forecasts of competing models. These updated results should be given greater weight


due to Fleming (1998) attempting to minimise the errors in measuring IV and

also for utilising more sophisticated econometric techniques, such as providing

GMM standard error corrections for telescopic-fixed expiry date maturity pat-

terns.

The development of the realised volatility literature, see Section 2.2.1, allowed

for more powerful tests of forecasting accuracy given the improved measurement

of true volatility that RV yields. Blair, Poon and Taylor (2001), hereafter BPT,

analysed the forecast performance of historical volatility, time-series models of

volatility, and the original VIX specification in predicting RV. The IV measure was statistically superior to its competitors, achieving the highest R2s. In the encompassing regression, the co-efficient on the GARCH(1,1) forecast of volatility was not statistically significant, a result that suggests informational efficiency of the previous version of the VIX. An over-arching test of predictive power

across markets was conducted by Szakmary, Ors, Kim and Davidson (2003) in

an analysis of 35 futures markets, where it was found that historical volatility

and GARCH based models possessed little incremental information above the

IV. They find that:

“Although IV was not a completely unbiased predictor of future volatility, it

performed well in a relative sense. For an overwhelming majority of the 35

commodities studied, IV outperformed historical volatility as a predictor of the

subsequent RV in the underlying futures prices over the remaining life of the

option. Furthermore, in most markets, historical volatility does not appear to

contain any information that is not already incorporated in the IV. These re-

sults appear to be robust across differing terms to maturity, and for S&P 500

options, across sample periods as well. When we replace simple 30 day moving

average historical volatilities with recursive forecasts from GARCH models, we

find little difference in predictive power.”, (pp. 2173, Szakmary, Ors, Kim and

Davidson, 2003).


While these two papers find in favour of IV over the GARCH class of models, other research found that IV could not match the forecasting ability of

time-series models of RV. In the currency market, Pong, Shackleton, Taylor, and

Xu (2004) examined the forecast performance of near-the-money IVs relative to

several advanced time-series models. The family of time-series models studied

had by this time expanded to include auto-regressive specifications of RV, such

as ARMA and ARFIMA models, in addition to the familiar GARCH class of

models which utilise daily returns alone. This paper was interesting in that it

again highlighted the superior forecast performance of IV over the traditional

GARCH class of models, but that IV forecasts were dominated by these newer

AR models of RV. Their findings were supported by Martens and Zein (2002)

who found that adding the volatility forecasts from ARMA and ARFIMA models of RV to the old version of the VIX led to substantial gains in forecast accuracy.

So far, the discussion of prior empirical analysis has referred, perhaps cum-

bersomely, to the ‘old’ or ‘early’ or ‘previous’ version of the VIX. These distinctions have been important, as the IVs examined in those studies were still dependent on the Black-Scholes-Merton model of option pricing. As discussed earlier, the current methodology for calculating the VIX is independent of any option pricing model; it is ‘model-free’. The forecasting performance of the current VIX, hereafter simply referred to as the VIX, is now reviewed.

In their study generalising the results of Britten-Jones and Neuberger (2000),

Jiang and Tian (2005) compare the forecast performance of the VIX over the 22-trading-day period for which it is constructed24 with that of the ATM BSM IV and a one-day lag of RV. Aside from the reported high correlations

24 Equation 2.37 allows for the implied volatility to be calculated for the general time period from date t through to T; in practice, the CBOE calculates the VIX for the 22-trading-day period as used by Fleming, Ostdiek, and Whaley (1995).


of each of these measures with each other and future volatility, the VIX was found to have SPA over the two competing alternatives. In the encompassing regressions performed by Jiang and Tian (2005), the VIX was found to

subsume the information content of the BSM ATM IV; this was a promising

result given the success in previous studies of the ATM BSM IV outperforming

a range of time-series models, primarily the GARCH class of models.

In a study across several financial markets, Lynch and Panigirtzoglou (2003)

demonstrate that the model-free measure of IV possesses greater information

content than historical RV for the purposes of forecasting future RV for the

S&P 500 Index, the FTSE 100 Index, Eurodollar futures, and also short ster-

ling futures. Further, Giot and Laurent (2007) conduct a study under a similar framework to Christensen and Prabhala (1998); their encompassing regression results support the hypothesis that the V IX subsumes the information con-

tained in RV, even after decomposing RV into its diffusive and jump compo-

nents. In a recent piece of empirical research, Taylor, Yadav, and Zhang (2010) find that, in the context of individual equities, model-free IV is inferior to a simple ARCH model estimated on daily returns over a 1-day horizon. However, at longer horizons matching option maturities, the IV is superior. This result makes intuitive sense, as the V IX is constructed as an average volatility over the following 22 trading days, while the majority of time-series models are designed to decay to the long-term mean.

Utilising a different approach to testing orthogonality of the V IX to MBF

of volatility, Becker, Clements, and White (2007) decompose the V IX into the

information content that may be captured by the competing MBF and that

which cannot. The information content not captured by the MBF may be

thought of as the information incremental to purely historical information; if

this component of the V IX is useful in forecasting future volatility, then it


may be thought of as evidence of the superior forecasting ability of the options

market. However, Becker, Clements, and White (2007) found that the V IX

component orthogonal to the MBF was primarily noise, evidence against the

SPA of the V IX.

In another finding against the informational efficiency of the options market,

Becker and Clements (2008) examined the forecast performance of individual

models, as well as combination forecasts. Their methodology is more involved

than the encompassing regression described previously as all competing fore-

casts are combined in one regression to find the optimal weighting scheme.

Becker and Clements (2008) conclude that

“a combination of model based forecasts is the dominant approach, indicating

that the IV cannot simply be viewed as a combination of various model based

forecasts. Therefore, while often viewed as a superior volatility forecast, the

IV is in fact an inferior forecast of S&P 500 volatility relative to model-based

forecasts.” (Becker and Clements, 2008, p. 122).

In a novel piece of research, Becker, Clements, and McClelland (2006) contend

that the V IX may be more important in the forecasting of spikes in volatility

rather than total volatility. This argument is based on the empirical fact that

volatility is highly persistent, and time-series models of volatility are largely

designed to capture this persistence. Option market participants, however, are

able to forecast volatility with reference to known future events that may cause

volatility to spike, events such as earnings announcements. Executing a similar

decomposition of the V IX, and decomposing RV into its diffusive and jump

components as discussed previously, Becker, Clements, and McClelland (2006)

report that the orthogonal component of the V IX does contain information

relevant to the future jump component of volatility, and that it is incremental


to the information contained in MBF.

A broad range of prior results has just been canvassed; a brief summary of this information is now given. Initially, the forecasting ability of measures of IV was studied by comparing the correlations and R²s of linear regressions of future volatility on IV and historical standard deviations; the IV examined depended on the weighting scheme chosen for the BSM IVs over a range of strike prices. This style of empirical analysis consistently

found in favour of the superior forecasting ability of the BSM IVs across vari-

ous weighting schemes. However, the results were initially not as strong when

encompassing regression tests began being conducted.

Early encompassing regression studies generally used ATM BSM IVs or some

other weighting scheme of BSM IVs which were compared with historical stan-

dard deviations or GARCH style models of volatility. The reported findings

of these early papers were mixed, with evidence for and against IV subsuming

the information in historical returns being reported in roughly equal measure.

The variety of results may be attributable to the different weighting schemes

of the BSM IVs, the type of GARCH model chosen and the lag length used

in calculating historical standard deviations. More recent encompassing regres-

sion results that utilised more advanced econometric techniques, such as those

based on the more robust measure of future volatility that is provided by RV

or the model-free measure of IV provided by the V IX, typically found that

the information content of IV subsumed the information in historical returns

and also particular GARCH type models. However, it must be noted that sev-

eral studies found that this was not the case when compared to auto-regressive

models of RV.

An alternative route for examining the informational efficiency of the options


market is to test for orthogonality between the forecast errors from an options

market forecast and the forecast from time-series models of volatility. Again,

early results that examined the previous version of the V IX found that it was

not informationally efficient. However, an updated paper that made use of more

advanced GMM techniques found that even the old V IX was informationally

efficient. Finally, in a decomposition of the information content of the V IX

into that which can be matched by MBF and that which is orthogonal, it was

found that the orthogonal component was not useful in forecasting the overall

level of future volatility but it was useful in forecasting the jump component of

volatility.

Overall, while several studies have found against the use of IV as a forecasting tool for future volatility, there is substantial prior evidence that the information implied by options markets has significant explanatory power in forecasting future volatility. Encouragingly for advocates of utilising

IV, many of the earlier results finding against IV forecasting power have been

overturned by more recent papers that make use of improved econometric meth-

ods or larger datasets. As well as the articles surveyed here, Poon and Granger

(2003) report that 44 of 53 studies comparing the forecasting ability of IV with historical standard deviations or time-series models of volatility find in favour of the IV measure. Hence, the collection of prior academic

research, both theoretical and empirical, for the most part suggests that the

information implied by options markets is superior to purely time-series based

models of volatility.

2.6 Univariate Time-series Forecasts

As has been discussed, the two main methods for generating forecasts of uni-

variate volatility are through either the use of implied volatilities or time-series


econometric models. Traditionally, these time-series models have belonged to

either the Autoregressive Conditional Heteroscedasticity (ARCH) or Stochas-

tic Volatility (SV) family of models, both of which attempt to capture the

properties of volatility discussed in Section 2.2. More recently, however, au-

toregressive models of realised volatility (RV) have become increasingly popular. Further, some models that combine multiple measures of volatility within the same estimation framework have also been developed. As the univariate volatility

literature is vast, this dissertation cannot hope to survey the entire breadth

of available models. Instead, this Section describes several commonly used

univariate time-series models, and some very recent extensions, that shall be

utilised in this dissertation as benchmarks for the option implied measures to

match. More in-depth discussion is available in a general setting in Andersen, Bollerslev, Christoffersen, and Diebold (ABCD) (2006); Terasvirta (2009) focuses on GARCH models, and Shephard and Andersen (2009) concentrate on SV models.

2.6.1 GARCH class conditional volatility models

The pioneering work of Engle (1982) introduced the ARCH class of models

which allow the variance of a time-series process to fluctuate as a function

of the lagged values of that process; as the conditioning process is a linear

function, it is termed autoregressive. In an ARCH specification, the one-step

ahead forecast of volatility is a deterministic function of observable information;

multi-period ahead forecasts are available through an iterative procedure. That

is, future volatility is assumed to be known with certainty. To fix ideas, Engle

(1982) specified returns, rt, to follow the process

r_t = \mu + \varepsilon_t, \qquad \varepsilon_t = h_t^{1/2} z_t, \qquad z_t \sim N(0, 1), \qquad (2.42)


where µ is the deterministic drift term for the returns process. The ARCH

model attempts to forecast the amount of uncertainty, as measured by volatility,

of the above returns process as a function of the residuals of the mean equation;

the conditional variance, ht, follows an ARCH(q) process if

h_t = \omega + \sum_{i=1}^{q} \alpha_i (r_{t-i} - \mu)^2 = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2, \qquad (2.43)

with \omega > 0 and 0 < \sum_{i=1}^{q} \alpha_i < 1 for the process to be stationary and have a finite, positive, unconditional variance, \bar{h} = \omega / (1 - \sum_{i=1}^{q} \alpha_i). Large (small)

shocks to the returns process, either positive or negative, result in higher (lower)

than normal levels of volatility in the next period.
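To make the recursion concrete, the ARCH(q) variance of Equation 2.43 can be sketched in a few lines of Python; the function name and the choice to initialise the first q values at the unconditional variance are illustrative, not part of the original specification:

```python
import numpy as np

def arch_variance(returns, mu, omega, alphas):
    """Conditional variances h_t of an ARCH(q) process (Equation 2.43):
    h_t = omega + sum_i alpha_i * (r_{t-i} - mu)^2.
    The first q values are initialised at the unconditional variance."""
    q = len(alphas)
    eps2 = (np.asarray(returns, dtype=float) - mu) ** 2  # squared residuals
    h_bar = omega / (1.0 - sum(alphas))                  # unconditional variance
    h = np.full(len(returns), h_bar)
    for t in range(q, len(returns)):
        h[t] = omega + sum(a * eps2[t - 1 - i] for i, a in enumerate(alphas))
    return h
```

A large squared residual at t − 1 raises h_t above its unconditional level, matching the description of volatility clustering above.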

As the return is given by the square root of the conditional variance multiplied by a draw

from a standard Gaussian distribution, the density of the conditional return

is then also Gaussian, but the variance of these densities is allowed to fluctu-

ate. This mixing of conditionally Gaussian distributions with heterogeneous

variances allows for the unconditional distribution to be non-Gaussian; it can

match the excess kurtosis observed in empirical studies (again, see Mandelbrot

(1963), Fama (1965) and Praetz (1972) for early results).

An important characteristic of the ARCH class of models is that the parameters

of the model are available through the numerical optimisation of a closed form

equation; specifically, the parameters are estimated through maximum likeli-

hood. As was discussed in Section 2.2, the squared daily return may be used as

a proxy variable for the level of latent volatility and it is these squared returns

that ARCH models are traditionally estimated upon. This method of parameter

estimation has advantages over the method for the stochastic volatility class of


models, which involves a far more complicated process and is discussed shortly.

Parameters of an ARCH model are chosen such that the log-likelihood of the

model matching empirical observations is maximised; the following function of

T returns is maximised

\ln L = \sum_{t=1}^{T} l_t, \qquad l_t = -\frac{1}{2}\left[\ln(2\pi) + \ln h_t + \frac{\varepsilon_t^2}{h_t}\right]. \qquad (2.44)

To be clear, all of the variations of the ARCH model discussed in the following

are estimated utilising quasi-maximum likelihood over the same log-likelihood

function as just described, unless specifically stated otherwise.
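As a sketch, the Gaussian quasi-log-likelihood of Equation 2.44 can be written as a short Python function; in practice a numerical optimiser would maximise this quantity (or minimise its negative) over the model parameters. The function name is an illustrative choice:

```python
import numpy as np

def gaussian_loglik(eps, h):
    """Log-likelihood of Equation 2.44 for residuals eps_t = r_t - mu
    and conditional variances h_t produced by an ARCH-type model."""
    eps = np.asarray(eps, dtype=float)
    h = np.asarray(h, dtype=float)
    # per-observation contribution l_t, then summed over t = 1..T
    lt = -0.5 * (np.log(2.0 * np.pi) + np.log(h) + eps ** 2 / h)
    return lt.sum()
```

Parameter vectors implying variances that fit the observed residuals poorly receive a lower value of this sum, which is what the optimiser exploits.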

For an ARCH(1) process, the forecast return volatility is determined entirely by

the previous period’s return. Given the stylised fact of persistence in volatility,

one may wish to include multiple lags of returns to incorporate more informa-

tion; the number of lags may be chosen through a criterion such as the Akaike

or Schwarz Information Criterion. A possible dilemma with this approach is

that the number of parameters to be estimated may become large. As an alter-

native, an efficient method of extending the ARCH(1) model to include a larger

information set is the Generalised ARCH (GARCH) (p, q) model of Bollerslev

(1986) and Taylor (1986), where p is the number of lags of past values of ht.

Such a specification is equivalent to an ARCH(∞) if all the parameters on

lagged values of ht are positive. The process given by Bollerslev (1986) is

h_t = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j h_{t-j}, \qquad (2.45)

with \omega > 0, \alpha_i \geq 0, \beta_j \geq 0, and \sum \alpha_i + \sum \beta_j < 1 for a stationary variance process with a positive, finite, unconditional variance. These parameters are


estimated similarly to the maximum likelihood framework outlined in Equation

2.44.

Recalling that when the drift term is assumed to be zero, r_t^2 \approx h_t, the GARCH model may be used to easily generate multi-step-ahead forecasts of volatility. Defining the unconditional variance by h_0 = \omega/(1 - \alpha - \beta), the recursive forecast from a GARCH(1,1) specification may be written as

h_{t+k} = h_0\left[1 - (\alpha + \beta)^{k-1}\right] + (\alpha + \beta)^{k-1} h_{t+1}. \qquad (2.46)

From the above formula, it may be recognised that the forecast variance is a

weighted average of the unconditional variance and the one-step ahead forecast,

with the weightings a function of the sum of α and β. The closer this sum is to

unity, recalling that it cannot equal or exceed unity, the more weight is placed

on the one-step ahead forecast relative to the unconditional variance. Further,

it may be observed that as the length of the forecast horizon tends to infinity,

the forecast variance will decay to its unconditional level at an exponential rate.
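The recursion is straightforward to implement; a minimal sketch (the function name is an illustrative choice) is:

```python
def garch_forecast(omega, alpha, beta, h_next, k):
    """k-step-ahead variance forecast from a GARCH(1,1).

    h_next is the one-step-ahead forecast h_{t+1}; the forecast decays
    towards the unconditional variance h0 at rate (alpha + beta)."""
    h0 = omega / (1.0 - alpha - beta)   # unconditional variance
    phi = alpha + beta                  # persistence
    return h0 * (1.0 - phi ** (k - 1)) + phi ** (k - 1) * h_next
```

At k = 1 the forecast is simply h_{t+1}, and as k grows the weight (α + β)^{k−1} shrinks towards zero exponentially, reproducing the decay to the unconditional level described above.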

The preceding discussion highlighted the GARCH model’s properties of mean-

reversion, potential for heavy tails, relatively straightforward closed-form esti-

mation procedure and the parsimonious nature of the model. These factors,

and solid empirical performance,25 have resulted in the GARCH(1,1) specifica-

tion being one of the most widely used volatility models for financial time-series

data; indeed, the review by Andersen, Bollerslev, Christoffersen, and Diebold

(2006) refers to the GARCH(1,1) model as the workhorse of GARCH models. These attractive properties are generally inherited by other members of the GARCH family; an important example of one of these variations is now discussed.

25See, for example, Hansen and Lunde (2005).


While the early empirical evidence in favour of the standard GARCH model

was strong, later studies demonstrated caveats under which the model faltered,

and where alternative specifications offered improvements. For the modeling of

stock returns, the asymmetric relationship between return shocks and volatility

was found by Pagan and Schwert (1990), among others, to be better captured

by the Exponential GARCH model of Nelson (1991). An alternative specifica-

tion which allows for the negative correlation between shocks and volatility to

be modelled explicitly, the GJR-GARCH of Glosten, Jagannathan and Runkle

(1993), was found by Brailsford and Faff (1996) and Taylor (2001) to outper-

form the original GARCH specification for equity indices. The GJR-GARCH

model for volatility is defined as

h_t = \alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \alpha_2 I(\varepsilon_{t-1} < 0)\,\varepsilon_{t-1}^2 + \beta h_{t-1}, \qquad (2.47)

where I(·) is the indicator function; this process nests the standard GARCH model when α2 = 0. Typical estimates of α2 are positive, indicating that a negative shock to returns will increase the forecast value of ht. Many of the

properties previously discussed for the GARCH model also hold for the GJR-

GARCH model, although the recursive substitution is slightly more involved.
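A minimal sketch of the GJR-GARCH recursion in Equation 2.47 follows; the function name and the initialisation at the implied unconditional variance are illustrative choices:

```python
import numpy as np

def gjr_variance(returns, mu, a0, a1, a2, beta):
    """Conditional variances from a GJR-GARCH(1,1), Equation 2.47;
    setting a2 = 0 recovers the standard GARCH(1,1)."""
    eps = np.asarray(returns, dtype=float) - mu
    h = np.empty(len(eps))
    # E[I(eps < 0)] = 1/2 under a symmetric shock, hence the 0.5 * a2 term
    h[0] = a0 / (1.0 - a1 - 0.5 * a2 - beta)
    for t in range(1, len(eps)):
        leverage = a2 * eps[t - 1] ** 2 if eps[t - 1] < 0 else 0.0
        h[t] = a0 + a1 * eps[t - 1] ** 2 + leverage + beta * h[t - 1]
    return h
```

With a2 > 0, a negative shock raises next-period variance by more than a positive shock of the same magnitude, which is the leverage effect described in the text.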

As the GJR-GARCH specification incorporates the stylised fact of the leverage

effect, and is a standard in the academic literature, it is believed that the GJR-GARCH provides a sufficient benchmark against which implied-volatility-based forecasts may be compared. While more complicated dynamic structures exist for modelling the conditional volatility, it is not the focus here to discuss or empirically investigate them all; such studies already exist in the literature. Instead,

the focus is placed on prominent models that provide readily interpretable re-

sults relevant to well-known standards. The fact that IV may outperform an


obscure and highly complicated time-series model may not as easily convey the

importance of the result as if it had outperformed a well known benchmark

such as the GJR-GARCH. That is, an enormous amount of candidate models

exist but the focus here is on the major advances in the literature.

Further Extensions to the GARCH model

To begin, alternative specifications that accommodate leverage effects are avail-

able and may be argued to have advantages over the GJR-GARCH used here.

An example is the Exponential GARCH (EGARCH) model of Nelson (1991),

which models the logarithm of the conditional variance. Taking the exponent of

the modeled values ensures positivity of the conditional variance which means

that there are no non-negativity constraints enforced on the estimated parame-

ters. However, as the EGARCH model involves an absolute value in its specifi-

cation, this value is non-differentiable at zero which means the model is difficult

to estimate and analyse numerically, it is also more difficult to use in multi-step

ahead forecasting, ABCD (2006). The asymmetric GARCH (Engle and Ng,

1993) and quadratic GARCH of Sentana (1995) also are viable alternatives.

The GJR-GARCH model and its like yield abrupt changes in volatility depend-

ing on whether the last return was positive or negative; in the GJR-GARCH

the response is a function of either α1 or α1 + α2 and nothing in between.

However, some non-linear models exist that are known as smooth transition GARCH (STGARCH), which allow the transition between those two states to be governed by a smooth function of the previous return, such as a logistic function, so the transition between the two regimes is smoothed; such models may be found in Hagerud

(1997), Gonzales-Rivera (1998), and Anderson, Nam, and Vahid (1999). These

STGARCH models may even be modified to be a function of time. The premise

for such a modification is that the parameters of the GARCH model may vary

through time, particularly over quite long time series. Some STGARCH mod-


els may then have time-varying parameters to match the regime they are in as a function of time rather than just the sign of the previous period's return (Terasvirta, 2009).

It has also been argued that, occasionally, such a large shock to the returns

process occurs that a fundamental break in the dynamics of volatility takes

place; a standard ARCH model is not designed to handle such occurrences. For

events such as the crash of October 1987, Hamilton and Susmel (1994) recom-

mend the Markov-switching ARCH model; for each of the k regimes modeled

there are separate parameter vectors to capture the dynamics of that regime.

If shocks are believed to be persistent, then the Markov-switching ARCH spec-

ification could be extended to its GARCH equivalent. However, Terasvirta (2009) notes that the generalised specification is completely path dependent and practically impossible to estimate; some workarounds have been proposed by

Gray (1996), Klaassen (2002), and Haas, Mittnik, and Paolella (2004), although

the estimation is still complicated. Given the estimation difficulties and the fact

that the time period examined in this dissertation is relatively short, it is not

believed that time-varying parameters or regime-switching models are required

here.

It has been discussed that in the standard GARCH model, conditional volatil-

ity decays exponentially to its long term mean. However, models exist where

the decay occurs hyperbolically, resulting in long-memory models of volatility,

or where no long term mean exists. The first such model is the integrated

GARCH (IGARCH) model of Engle and Bollerslev (1986) where the restriction

that α+β = 1 is imposed. The motivation for their research partially came from

the result of Mandelbrot (1963) who found that volatility may be non-stationary

as it did not converge to an unconditional level. While the model is referred to

as integrated and is not weakly stationary, Nelson (1991) demonstrates that the


process is strongly stationary. A potential theoretical disadvantage of utilising

the IGARCH model is that the unconditional variance of the process does not

exist. Further, the original IGARCH model of Engle and Bollerslev (1986) was

designed to accommodate the fact that, typically, the sum of the estimated

α and β GARCH parameters were close to unity. As Diebold (1986) argues

that this may also occur if a switch in the intercept occurs, the motivation

for using the IGARCH model may then not be as strong. Fractionally inte-

grated GARCH (FIGARCH) models were introduced by Baillie, Bollerslev, and

Mikkelsen (1996) to explain the long-memory features of daily squared returns.

However, Terasvirta (2009) notes that the probabilistic properties of that class

of models are an open question and that they are quite complex. Alternatively,

the long-memory properties of volatility may be captured using a component

structure as in Engle and Lee (1999). In the context of this thesis where the

maximum forecast horizon examined is one trading month, the evidence on

the performance of long memory models over such shorter horizons is mixed.

Hence, while some long memory models may be of use in forecasting over a

month horizon, this thesis only considers specifications with wider empirical

support in the prior literature.26

Finally, it should be noted that the above specifications have relied on the

assumption of a conditionally Gaussian distribution for returns; some models

relax this assumption. The specification of the model may remain the same,

but the likelihood function over which it is estimated changes; an example is

Bollerslev (1987), where conditional returns are assumed to follow a Student t-distribution. The use of non-Gaussian densities is known to occasionally result

in a bias-variance tradeoff, where consistency is no longer ensured although

efficiency gains may occur (the estimate is marginally biased but the number

26 Some important considerations in utilising a long memory process include: if the Data Generating Process (DGP) follows a FIGARCH process, IGARCH will be spuriously detected (Baillie et al., 1996); even if the DGP contains occasional breaks, fractional models often outperform correctly specified models (Diebold and Inoue, 2001; Baillie and Morana, 2009).


of observations required for an accurate estimate is reduced). While such a

change may better facilitate the heavy tails commonly observed in empirical

studies, Pagan (1996) notes that only the intercept term differed significantly

when a Student t-distribution was utilised in place of a Gaussian; this implies

that forecasts will differ only marginally, and even then only for longer term

forecast horizons than considered here.

2.6.2 Stochastic volatility models

Whereas the GARCH class of models assume volatility is a deterministic func-

tion of observable variables, stochastic volatility (SV) models treat volatility as

a random latent variable. SV models posit that the volatility on any given day

is a function of random events on that day, which may be unobservable. This

assumption gives rise to the Mixture of Distributions Hypothesis, which dates

back to Clark (1973), where the returns process is driven by an information

based clock rather than on calendar time. This hypothesis may be motivated

by the following two points as outlined in Taylor (2005):

1. Volatility is proportional to the number of news arrivals and as some

news is undoubtedly not scheduled, volatility will have some unpredictable

component.

2. Shocks to volume, either as transaction counts or trading volume, will

impact on volatility and as the trading clock runs at different speeds over

different days, volatility will have some unpredictable component.

SV models are characterised by two properties, the first of which is that there

is an unpredictable component of the volatility σt, that is var(σt|Ft−1) > 0, and


secondly that excess returns are given by27

r_t - \mu = \sigma_t z_t, \qquad \sigma_t > 0, \qquad z_t \sim \text{i.i.d.}, \quad E(z_t) = 0, \quad \text{Var}(z_t) = 1. \qquad (2.48)

Again assuming that the drift term is negligible for small time horizons such

as a day, the log-normal AR(1) SV model of Taylor (1986) may be written in

discrete-time as

r_t = \sigma_t z_t = \exp\!\left(\frac{x_t}{2}\right) z_t, \qquad x_t = \ln(\sigma_t^2) = \alpha + \beta x_{t-1} + u_t, \quad u_t \sim \text{i.i.d.}(0, \xi_u^2). \qquad (2.49)

The additional innovation term in the dynamics of the conditional variance

given above allows the SV model more flexibility than that of the GARCH class

of models (Poon and Granger, 2003). To illustrate, the correlation between the

two shocks is allowed to be non-zero, ρ = corr(ut, zt) ≠ 0. If ρ < 0, then an asymmetric relationship between volatility and returns is naturally built into the SV model; this contrasts with the GJR-GARCH model, where the asymmetric effect must explicitly be built into the model. This basic SV model is

impacted by two random shocks at each period of time via zt and ut; the shock

from ut impacts on returns through its partial determination of the value of xt.

Hence, in this SV model there are two random shocks but only one observable,

the return. As a result, it is impossible to deduce the actual values of xt and zt

from the process rt. Volatility is then a latent and unobservable variable, lead-

ing to the SV class of models being dynamic nonlinear latent variable models

which possess an analytically intractable likelihood function. Following White

(2006), we give a brief description as to why this is so.

27 The description that follows is a similar treatment to Andersen, Bollerslev, Christoffersen, and Diebold (2006).


As SV models belong to the general class of dynamic latent variable models,

the likelihood function of the return series {rt}Tt=1 with parameter vector θ is

given by

L(r \mid \theta) = \int \cdots \int f\!\left(\{r_t\}_{t=1}^{T} \mid \{x_t\}_{t=1}^{T}, \theta\right) f\!\left(\{x_t\}_{t=1}^{T} \mid \theta\right) dx_1 \cdots dx_T. \qquad (2.50)

It can be seen from Equation 2.50 that for the likelihood function of returns,

the volatility must be integrated out at every point in time. This results in a T

dimensional integration problem and as such is analytically intractable. While

the Kalman filter may be applied in the case where rt is a linear function of xt

with Gaussian errors, SV models are definitely nonlinear and so we must deal

with Equation 2.50 directly. As is typically done with likelihood functions, it

is possible to take the log of Equation 2.50, resulting in

\log L(r \mid \theta) = \sum_{t=1}^{T} \log\!\left(\int_{-\infty}^{\infty} f(r_t \mid x_t, \theta)\, f(x_t \mid \theta)\, dx_t\right). \qquad (2.51)

Hence, the problem has been reduced from a T dimensional integral into the

sum of T one-dimensional integrals; one must then solve the integral in Equation 2.51 for all t. The difficulty in evaluating the integral is that, as volatility is unob-

served, the density of volatility, f(xt|θ), is unknown. This variable must then

be integrated out of the equation either numerically, such as the approach of

Watanabe (1999), or through a Bayesian procedure such as Jacquier, Polson,

and Rossi (2004); more detail on estimation procedures for SV models may be

found in ABCD (2006).
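The latent structure that makes this likelihood intractable is easy to see by simulating the discrete-time SV model of Equation 2.49; the following sketch is illustrative (the function name, the initialisation at the stationary mean, and the correlated-shock construction are choices made here, not part of the original model statement):

```python
import numpy as np

def simulate_sv(T, alpha, beta, xi, rho=0.0, seed=0):
    """Simulate the log-normal AR(1) SV model of Equation 2.49:
    x_t = alpha + beta * x_{t-1} + u_t,  r_t = exp(x_t / 2) * z_t,
    with corr(u_t, z_t) = rho to allow a leverage effect.  Only the
    returns r_t are observable; the log-variance x_t remains latent."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho * xi], [rho * xi, xi ** 2]]    # Cov of (z_t, u_t)
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=T)
    z, u = shocks[:, 0], shocks[:, 1]
    x = np.empty(T)
    x[0] = alpha / (1.0 - beta)                     # stationary mean of x_t
    for t in range(1, T):
        x[t] = alpha + beta * x[t - 1] + u[t]
    return np.exp(x / 2.0) * z, x
```

Two shocks (zt, ut) drive each period but only rt is observed, so xt cannot be recovered exactly from the return series; this is precisely why the likelihood in Equation 2.50 requires the latent volatility to be integrated out.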


Extensions to the standard SV model

SV models are especially popular in the options pricing literature as the mod-

els are quite naturally written in continuous time, much like the SDEs that

underpin a lot of derivatives work. Indeed, since the numerical work of Hull

and White (1987) and theoretical work of Heston (1993), the vast majority

of closed form option pricing models are based upon the solution of SDEs for

volatility, with the underlying dynamics becoming increasingly complicated and flexible. As discussed previously in Section 2.5, a non-exhaustive

list includes the Stochastic Volatility (SV) model of Heston (1993); SV-Jump

(SVJ) model of Bates (1996) where the jump occurs in the asset price; the SV-

Double Jump (SVJJ) model of Duffie, Pan & Singleton (2000) where the jumps

may occur independently in the asset price, in the volatility, or be simultaneous

and correlated; and a multi-factor SV model with long- and short-run volatility

components in Christoffersen, Heston and Jacobs (2009).

For the purposes of univariate volatility forecasting, similar to the ARCH based

models, amendments to the standard SV model of Taylor (1986) have been put

forth to accommodate some of the stylized facts of volatility outlined in Section

2.2.2, as the standard model cannot match, for example, the phenomena of

heavy tails and leverage (Liesenfeld and Jung, 2000); much more detail about

these extensions is available in ABCD (2006), Shephard and Andersen (2009),

Hurvich and Soulier (2009), and White (2006) and the references within.

Possible extensions include the model of Harvey, Ruiz and Shephard (1994)

which, rather than the Gaussian distribution assumption of the standard model,

models returns with a standardised Student t-distribution, or the generalised

error distribution as used in Watanabe and Asai (2001); such specifications al-

low for returns to have a heavy-tailed density function.


With regard to accounting for leverage effects, several alternatives exist. To

begin with, So, Li and Lam (2002) propose having two parameter vectors for

the SV process, with the choice of vector determined by whether the previous

period’s return was positive or negative. Alternatively, Harvey and Shephard

(1996) and Yu (2005) allow for correlation between the return and log-variance

innovations to allow for asymmetry; Asai and McAleer (2003) demonstrate that

the correlation approach is preferred to using the threshold approach in a vari-

ety of empirical settings.

Finally, long-memory models of SV exist in both discrete time, see Breidt,
Crato, and de Lima (1998) and Harvey (1994), and continuous time, see Comte
and Renault (1998) and Barndorff-Nielsen (2001); more detail on long-memory
SV models is available in Hurvich and Soulier (2009).

For the purposes of this dissertation, it is believed that the standard SV model
provides a solid benchmark in the academic literature against which forecasts
of volatility from implied measures may be contrasted. While several no-

table extensions have been discussed, it is unfortunately true that the majority

of research in the SV field is theoretical in nature, and there is a lack of clear

consensus in the empirical literature to guide which of these extensions is to be

preferred. In light of this fact, the standard model is used as the benchmark

SV model.

2.6.3 Models for forecasting Realised Volatility

The above discussion demonstrates several desirable properties of RV; it is an

asymptotically unbiased estimator of the total variation of an asset that is ro-

bust to the presence of jumps, the presence of noise may be disregarded by an


appropriate choice of the number of sampling periods within a day, is easily

estimated from observable returns, and is a less noisy measure of the latent

volatility than squared daily returns which allows for more powerful tests of

volatility forecasting ability. Section 2.6.1 details how the GARCH class of
models is designed using squared daily returns; as a more accurate measure
of volatility exists, it is plausible that time-series models built around this more
accurate measure will then provide more accurate forecasts. Indeed, several

models have been proposed in the academic literature based on this premise,

one of which is utilised in this dissertation; that model is now defined and dis-

cussed.

Aside from being a more accurate proxy variable of volatility, forecasts of uni-

variate volatility may also be generated by directly applying time series models

to daily measures of RV, RV_t^{(m)}. In early empirical work on time series mod-

els of RV, Andersen, Bollerslev, Diebold, and Labys (2003) and Koopman,

Jungbacker, and Hol (2005) utilise an ARMA(2,1) process where parameter

estimates reflect the common feature of volatility persistence. More recently,

a MIxed Data Sampling (MIDAS) forecasting scheme advocated by Ghysels,

Santa-Clara and Valkanov (2006) is also directly applied to RV . A forecast

under this approach is based on the following specification,

RV_{t+q} = \sum_{k=0}^{k_{max}} b(k, \theta) RV_{t-k} + \varepsilon_t,     (2.52)

where q is the forecast horizon. The maximum lag length kmax can be cho-

sen rather liberally as the weight parameters b (k, θ) are tightly parameterised.

That is, the polynomial lag parameters b (k, θ) are parameterised to be a func-

tion of θ; this allows for an extended period of history28 to be included in the

information set without a proliferation of parameters to be estimated (Ghysels,

28 All subsequent analysis is based on kmax = 50 as Ghysels, Santa-Clara and
Valkanov (2006) find there is no benefit to extending the lag structure beyond this
horizon.

Santa-Clara and Valkanov, 2006). Here the weights are determined by means

of a beta density function and normalised such that \sum_{k=1}^{k_{max}} b(k, \theta) = 1. A beta
distribution function is fully specified by the 2 × 1 parameter vector θ. To
provide more technical detail,

b(k, \theta) = \frac{f(k/k_{max}, \theta_1; \theta_2)}{\sum_{j=1}^{k_{max}} f(j/k_{max}, \theta_1; \theta_2)},
\quad f(z, a, b) = \frac{z^{a-1}(1 - z)^{b-1}}{\beta(a, b)},
\quad \beta(a, b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)},     (2.53)

where \Gamma(n) = \int_0^{\infty} x^{n-1} e^{-x} dx.
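To make the weighting scheme concrete, a minimal Python sketch of the beta lag polynomial in Eq. (2.53) follows; the function names and parameter values are illustrative only, and the β(a, b) normalising constant is omitted since it cancels when the weights are rescaled to sum to one.

```python
import numpy as np

def beta_weights(kmax, theta1, theta2):
    """Beta-density lag weights b(k, theta), normalised to sum to one.

    The beta-function denominator cancels in the normalisation and is
    therefore omitted; theta2 > 1 gives weights that decline with lag k.
    """
    z = np.arange(1, kmax + 1) / kmax                 # relative lag k / kmax
    f = z ** (theta1 - 1.0) * (1.0 - z) ** (theta2 - 1.0)
    return f / f.sum()

def midas_forecast(rv_history, theta1, theta2):
    """MIDAS forecast as in Eq. (2.52): a weighted sum of past RV.

    rv_history is ordered most recent first (RV_t, RV_{t-1}, ...).
    """
    w = beta_weights(len(rv_history), theta1, theta2)
    return float(w @ np.asarray(rv_history))
```

With θ1 = 1 and θ2 > 1 the weights decline monotonically with the lag, the shape typically used for volatility data.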

It is also emphasised in Ghysels, Santa-Clara and Valkanov (2006) that while

MIDAS regressions may utilise the past history of volatility, they are not de-

signed to be an autoregressive model of volatility. This is due to the fact that

the data on the left and right hand sides of the specification may be sampled at

differing frequencies and the best predictor of future total volatility may not be

lagged total volatility but some other measure of past fluctuations in returns.

That is, the best forecast of future monthly volatility may not be generated

by lagged realisations of monthly volatility but by, say, the last week of daily

volatility.

Given that the economic and statistical improvement from utilising realised

measures of volatility in time-series models is often found to be significant, see

Christoffersen, Feunou, Jacobs and Meddahi (2010) as an example, it is believed

that the MIDAS model may offer a strong competitor to the GARCH and SV

models discussed previously.


2.6.4 Hybrid Models

The preceding discussion has canvassed four broad classes of univariate volatility

forecasting models. The GARCH class of models is an autoregressive specifica-

tion for squared daily returns and treats future volatility as a deterministic func-

tion of observable variables. In contrast, the SV family of models treats volatil-

ity as a latent variable driven by multiple shocks that may come from sources

such as unscheduled news events; these multiple shocks lead to a much more

complicated estimation procedure. The MIDAS model, a time-series model of

volatility that utilises the asymptotically unbiased and less noisy measure of

volatility that is RV, was then presented. Further, Section 2.5 reviewed articles

that have used implied volatility in forecasting future stock market volatility.

Some prior research has investigated the possibility of combining several of these

measures into the same volatility model, some of these studies are now reviewed.

The first major study this author is aware of that combines the measures just

discussed into one coherent model is Blair, Poon, and Taylor (BPT, 2001). Their very

general specification is now given29; specifying returns as

r_t = \mu + \varepsilon_t, \quad \varepsilon_t = h_t^{1/2} z_t, \quad z_t \sim i.i.d.(0, 1),     (2.54)

the conditional variance of these returns may be modeled as

h_t = \frac{\alpha_0 + \alpha_1 \varepsilon_{t-1}^2 + \alpha_2 s_{t-1} \varepsilon_{t-1}^2}{1 - \beta L} + \frac{\gamma RV_{t-1}}{1 - \beta_I L} + \frac{\delta VIX_{t-1}^2}{1 - \beta_V L},     (2.55)

where L is the lag operator and s_{t-1} is a dummy variable given by I(\varepsilon_{t-1} < 0).

Placing certain restrictions on this flexible specification nests some models rel-

evant to this dissertation30:

29 The original specification included a dummy variable for the crash on the 19th
of October 1987, but as this period is not included in the sample data, it is not
included in the specification given here.

30 Other restrictions are possible, but are not relevant.

1. γ = δ = βI = βV = 0 yields the GJR-GARCH(1,1) model of Glosten,

Jaganathan and Runkle (1993) discussed in Section 2.6.1.

2. α2 = δ = βV = 0 yields a model denoted GARCH-RV, which combines

information from both daily squared returns and the RV measure.

3. α2 = γ = βI = 0 yields a model denoted GARCH-VIX, which combines

information from both daily squared returns and the VIX.

4. δ = βV = 0 yields a model denoted GJR-RV, which combines informa-

tion from both daily squared returns and the RV measure and allows for

asymmetric effects.

5. γ = βI = 0 yields a model denoted GJR-VIX, which combines information

from both daily squared returns and the VIX and allows for asymmetric

effects.
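The nesting structure of these restrictions can be made explicit in a short sketch; the parameter names below are simply labels for the coefficients of Eq. (2.55), and the mapping is illustrative only.

```python
# Hypothetical illustration: which parameters of Eq. (2.55) are fixed at
# zero to recover each nested model from the general BPT specification.
NESTED_MODELS = {
    "GJR-GARCH": ["gamma", "delta", "beta_I", "beta_V"],
    "GARCH-RV":  ["alpha2", "delta", "beta_V"],
    "GARCH-VIX": ["alpha2", "gamma", "beta_I"],
    "GJR-RV":    ["delta", "beta_V"],
    "GJR-VIX":   ["gamma", "beta_I"],
}

def restrict(params, model):
    """Return a copy of `params` with the model's zero restrictions imposed."""
    out = dict(params)
    for name in NESTED_MODELS[model]:
        out[name] = 0.0
    return out
```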

Some of these models have been canvassed previously in Section 2.5 where

the parameters on the VIX term were tested for statistical significance; if the
parameter was found to be significant, this demonstrated that the options

market contains information incremental to that contained in historical data

alone.

This broader range of models also allows for conclusions to be drawn on whether

the theoretically more precise proxy variable for volatility, RV, yields superior

forecasts to models based on squared daily returns. Indeed, BPT find that

for the GJR-RV model the parameter α_1 has a robust t-ratio of 1.56, which is
insignificant at conventional significance levels; this suggests that the informa-
tion in RV supersedes the information in squared daily returns. While this is
an interesting result that lends credence to theoretical results, further investi-
gation of this line of research is not pursued here as the interest is in the benefit of using

option implied information. Broadly, the work of BPT provides some benchmark
models with which to compare forecasts generated from the information

implicit in options markets.

A recent innovation in time-series models of volatility is the Realised GARCH

(RGARCH) model of Hansen, Huang, and Shek (2010). They argue that a sin-

gle return is unable to provide an accurate signal about the current, latent, level

of volatility. As the GARCH family of models are estimated on such returns,

the implication is that GARCH models are poorly suited for situations where

volatility changes rapidly. Further, the standard GARCH model is slow to catch

up and will take many periods for the conditional variance to reach its new level.

The RGARCH model incorporates a realised measure of volatility, with RV as

an example, which is more informative about the latent level of volatility. How-

ever, unlike the recent proposals of the Multiplicative Error Model of Engle and

Gallo (2006), and the high frequency based volatility (HEAVY) model of Shep-

hard and Sheppard (2010), which also utilise realised measures of volatility, the

RGARCH model retains the single volatility-factor structure of the traditional

GARCH framework. The log-linear Realized GARCH model with p lags of the

conditional variance and q lags of the realised measure is defined here as the

LRGARCH(p,q) and is specified as

r_t = h_t^{1/2} z_t,
\log h_t = \omega + \sum_{i=1}^{p} \beta_i \log h_{t-i} + \sum_{j=1}^{q} \gamma_j \log x_{t-j},
\log x_t = \xi + \psi \log h_t + \tau(z_t) + u_t,     (2.56)

where r_t is the return, z_t = r_t / \sqrt{h_t} \sim i.i.d.(0, 1), u_t \sim i.i.d.(0, \sigma_u^2), z_t and u_t
are mutually independent, and h_t = var(r_t | F_{t-1}). The variable x_t is the realised

measure of the conditional variance of returns; in this thesis the realised measure
will be the RV defined previously. The function τ(z_t) is a leverage function

and captures the stylised fact of asymmetric reactions of volatility to positive

and negative returns of the same magnitude. Hansen et al. (2010) propose the

use of Hermite polynomials and use the quadratic form, τ(z_t) = τ_1 z_t + τ_2(z_t^2 − 1),

as their baseline choice. Hansen et al. (2010) remark that the measurement

equation does not require xt to be an unbiased measure of ht. As an example,

the RV over the 6.5 hour trading day may be utilised, which only spans a frac-

tion of the 24 hour period that daily returns are calculated over.
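A minimal simulation sketch of this log-linear system may help fix ideas; the parameter values below are purely illustrative assumptions, not estimates from the thesis data, and τ(z) is the quadratic leverage function just described.

```python
import numpy as np

def simulate_lrgarch(T, omega, beta, gamma, xi, psi, tau1, tau2, sigma_u, seed=0):
    """Simulate the log-linear Realised GARCH(1,1) of Eq. (2.56).

    tau(z) = tau1*z + tau2*(z**2 - 1) is the quadratic leverage function
    of Hansen et al. (2010); all parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    log_h = np.empty(T)
    log_x = np.empty(T)
    r = np.empty(T)
    # start at the approximate unconditional level of log h_t
    log_h[0] = (omega + gamma * xi) / (1.0 - beta - gamma * psi)
    for t in range(T):
        z = rng.standard_normal()
        u = sigma_u * rng.standard_normal()
        r[t] = np.exp(0.5 * log_h[t]) * z                       # return equation
        log_x[t] = xi + psi * log_h[t] + tau1 * z + tau2 * (z ** 2 - 1.0) + u
        if t + 1 < T:
            log_h[t + 1] = omega + beta * log_h[t] + gamma * log_x[t]
    return r, np.exp(log_h), np.exp(log_x)
```

Note that persistence is governed by β + γψ, which must be below one for the log-variance to be stationary.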

This new class of what are termed here hybrid models, models that combine
multiple measures of volatility, is distinct from the combination forecasts dis-
cussed in Becker and Clements (2008) referred to in Section 2.5. That is, rather

than separate estimation of models that utilise only one measure of volatility

and then taking some average of the forecasts, these hybrid models make direct

use of multiple measures of volatility within the one estimation framework.

To conclude this Section on univariate volatility forecasting models, the main

point that needs to be reinforced is that in evaluating the forecasting per-

formance of option implied measures, a broad range of competing candidate

models exist. While the entire spectrum of univariate volatility models has not

been canvassed here, it is argued that by including the well-studied and com-
monly used GARCH, SV, RV, IV, and hybrid models of univariate volatility, a
high standard of benchmark models has been set for the option implied mea-
sures to outperform, or fail to match, as the case may be.

2.7 Multivariate Time-Series Forecasts

It may perhaps be argued that forecasts of covariances are of more practical

use in finance than univariate forecasts of variance; an example is the depen-
dence of asset allocation problems on finding minimum (conditional) variance

portfolios. A complicating factor is that the econometrics in generating these

multivariate forecasts can become quite challenging. Further, representation of

the object of interest is at times infeasible when dealing with large portfolios,

and in trying to differentiate forecasts it may be difficult to find statistics that

accurately measure performance. Each of the three broad categories of univari-

ate time-series models, GARCH, SV, and RV-based specifications, discussed

above have been extended into their multivariate analogues along with some of

their desirable properties. While some of these extensions are quite straightfor-

ward to develop analytically, they may run into severe problems when it comes

to estimating these models in practical situations. While the following Section

discusses some of the proposed specifications in the multivariate literature, it is

not as detailed as the univariate Section. This is mostly due to the fact that
much of the groundwork for the multivariate models was covered in the previous

Section, and that multivariate models are not as central to this thesis as the

univariate literature. For more theoretical detail, good reviews are Silvennoinen

and Terasvirta (2009) and Bauwens, Laurent, and Rombouts (2006) for mul-

tivariate GARCH models, Chib, Omori, and Asai (2009) for multivariate SV

models; Laurent, Rombouts, and Violante (2010) conduct a thorough empirical

examination of multivariate GARCH models; it must be noted that the fol-

lowing discussion is inspired by the aforementioned review articles, particularly

Silvennoinen and Terasvirta (2009).

2.7.1 Multivariate GARCH models

Following Silvennoinen and Terasvirta (2009), the framework for discussing mul-

tivariate GARCH models is given by considering a stochastic process {rt} with

dimension N × 1 such that E [rt] = 0. Then assume that {rt} is conditionally


heteroscedastic

r_t = H_t^{1/2} \eta_t,     (2.57)

given F_{t-1}, the information set generated by the observed {r_t} available at
time t − 1, where the N × N matrix H_t = [h_{i,j,t}] is the conditional covariance
matrix of {r_t} and \eta_t is an i.i.d. vector error process such that
E[\eta_t \eta_t'] = I. This framework ensures no linear dependence in the

returns, which are typically considered log-returns in an empirical setting.

The first multivariate GARCH model discussed is the VEC-GARCH model of

Bollerslev, Engle, and Wooldridge (1988). This is an extremely general model

that allows the conditional covariances to be a function of all lagged condi-

tional covariances, lagged squared daily returns, and cross-products of returns;

the process is described by

vech(H_t) = c + \sum_{j=1}^{q} A_j vech(r_{t-j} r_{t-j}') + \sum_{j=1}^{p} B_j vech(H_{t-j}),     (2.58)

where vech(·) is an operator that stacks the N(N + 1)/2 unique elements in

the lower triangular part of a symmetric matrix into a column vector, c is an

N(N+1)/2×1 vector and Aj and Bj are N(N+1)/2×N(N+1)/2 parameter

matrices. This specification may easily be extended to include asymmetric

effects, may be used to generate multi-step ahead forecasts of the conditional

covariance matrix, and similar to the univariate GARCH, decays exponentially

to its unconditional covariance (ABCD, 2006). While allowing the conditional

covariance matrix to make direct use of such a large information set introduces

a large degree of flexibility into the VEC-GARCH model, it is nearly impossible

to use in a practical setting given the potentially large number of parameters

to estimate; further, unless restrictions are enforced, it is unlikely that the

forecast covariance matrix will be positive definite. To illustrate, the VEC-

GARCH model requires the estimation of (p + q)(N(N + 1)/2)² + N(N + 1)/2

parameters. For a VEC-GARCH(1,1) model with 5 assets this equates to 465

parameters, 10 assets yields 6105 parameters, and for the 30 assets in the Dow

Jones Industrial Average, one would need to estimate 432915 parameters; this is

the curse of dimensionality. These parameters are estimated in a similar manner

to the univariate GARCH models discussed previously, through a maximum

likelihood estimation procedure over the following likelihood

\sum_{t=1}^{T} l_t(\theta) = c - \frac{1}{2} \sum_{t=1}^{T} \ln |H_t| - \frac{1}{2} \sum_{t=1}^{T} r_t' H_t^{-1} r_t.     (2.59)

The optimisation of the above log-likelihood is computationally burdensome

and potentially inaccurate, as the optimisation process requires finding the in-

verse and determinant of potentially very large matrices. While alternatives

may be found for taking the inverse, such as Gauss-Jordan elimination, no such

alternative pathways exist for the determinant. As this procedure must be re-

peated at each time step for each iteration of the optimisation algorithm, the

estimation procedure can quickly become cumbersome. The combination of a

large number of parameters with a cumbersome likelihood function for optimi-

sation raises the following salient point.

As discussed in Silvennoinen and Terasvirta (2009), desirable features of a mul-

tivariate GARCH model are that it be flexible enough to match the dynamics

of the conditional covariances yet parsimonious enough to avoid the curse of

dimensionality. Further, these parameters should be relatively easy to esti-

mate and have readily interpretable values as in the α (weight to be placed on

the shock) and β (weight to be placed on persistence terms) in the univariate

GARCH model. It appears that there is a tradeoff in the multivariate liter-

ature between the flexibility of a model, and the feasibility of implementing

it. A focus of research in the multivariate literature, therefore, has been in

finding a compromise between flexibility and practical use. Indeed, the work


of Baba-Engle-Kraft-Kroner that is represented in the BEKK model described

in Engle and Kroner (1995) is a restricted version of the VEC-GARCH model,

which may then be further restricted to yield the diagonal BEKK model; while

these models lose flexibility they require the estimation of fewer parameters,

(p + q)KN² + N(N + 1)/2 and (p + q)KN + N(N + 1)/2 parameters respec-

tively, which is still a significant number of parameters for any practical purpose.
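These parameter counts are easy to verify with a short calculation; the helper functions below are illustrative and assume p = q = 1 unless stated, reproducing the figures quoted above for the VEC-GARCH model.

```python
def vec_garch_params(N, p=1, q=1):
    """Number of parameters in a VEC-GARCH(p, q) model for N assets."""
    m = N * (N + 1) // 2          # unique elements of a symmetric N x N matrix
    return (p + q) * m ** 2 + m

def bekk_params(N, K=1, p=1, q=1, diagonal=False):
    """Number of parameters in a (diagonal) BEKK(p, q, K) model for N assets."""
    m = N * (N + 1) // 2
    per_matrix = N if diagonal else N ** 2
    return (p + q) * K * per_matrix + m
```

For the Dow Jones case of N = 30, the VEC-GARCH(1,1) count of 432915 parameters falls dramatically under the BEKK restrictions, illustrating the flexibility-parsimony trade-off discussed here.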

Rather than increasingly adding restrictions, an alternative way to reduce the

dimensionality of the problem is to separate out the covariance matrix into its

variances and its correlations.

In the constant conditional correlation GARCH (CCC-GARCH) model of Boller-

slev (1990), the temporal variation in the covariances is assumed to be driven

solely by the temporal variation in the conditional standard deviations while

the correlation remains constant31 (ABCD, 2006); the conditional covariance

matrix may be decomposed into

H_t = D_t P D_t,     (2.60)

where D_t = diag(h_{1,t}^{1/2}, ..., h_{N,t}^{1/2}) is the diagonal matrix of conditional standard
deviations and P = [\rho_{i,j}] is a positive definite matrix of correlations which is

time-invariant. That is, only the conditional standard deviations vary through

time. Typically, the estimation procedure is to model each of the individual

returns series through a separate GARCH specification, and then combine these

univariate volatility forecasts into a multivariate conditional covariance matrix

through scalar multiplication

[H_t]_{i,j} = h_{i,t}^{1/2} h_{j,t}^{1/2} \rho_{i,j}.     (2.61)

31 Tests for constant correlations exist: Tse (2000) uses Lagrange multipliers, Bera
and Kim (2002) test for constant correlation in the bivariate case, and Silvennoinen
and Terasvirta (2005) develop a test that includes a test of constant correlation
against their Smooth Transition Conditional Correlation model, which is discussed
shortly.

In this way, the estimation problem has been broken down into (p + q)N condi-

tional volatility parameters and N(N − 1)/2 unconditional correlation param-

eters. This is true even if one implements the Extended CCC-GARCH model

introduced by Jeantheau (1998) which allows the conditional variances to be a

function of lagged squared returns and variances of other assets.
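A minimal sketch of the scalar-multiplication step of Eqs. (2.60)-(2.61), assuming the univariate GARCH stage has already produced the conditional variances, is the following; the numbers in the accompanying test are purely illustrative.

```python
import numpy as np

def ccc_covariance(h, P):
    """CCC-GARCH covariance H_t = D_t P D_t.

    h holds the N conditional variances from the univariate GARCH stage;
    P is the constant N x N correlation matrix.
    """
    s = np.sqrt(np.asarray(h))          # conditional standard deviations
    return np.outer(s, s) * np.asarray(P)
```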

It must again be noted that a trade-off between flexibility and parsimony has
been made in the CCC-GARCH model32; it is a priori unlikely that the cor-
relations between assets are not time-varying. A model which allows quite a
large degree of flexibility yet requires the estimation of only a small number of
parameters was introduced by Engle (2002) by way of the Dynamic Conditional

Correlation GARCH (DCC-GARCH) model. As this model is discussed in more

detail later in Chapter 5, only its context within the multivariate GARCH lit-

erature is discussed here.

The DCC-GARCH model of Engle (2002) follows Bollerslev (1990) in decom-

posing the conditional covariance matrix into a matrix of conditional standard

deviations, and a correlation matrix. In this case, however, the correlation ma-

trix is dynamic (as the name implies), with the dynamics given by a model that

is similar to the univariate GARCH specification

Q_t = \bar{Q}(1 − \alpha − \beta) + \alpha r_{t-1} r_{t-1}' + \beta Q_{t-1},     (2.62)

where \bar{Q} is the unconditional correlation matrix, and r are volatility standard-

ised returns. That is, similar to Bollerslev (1990), each of the individual assets

within the portfolio of interest have a univariate GARCH model estimated upon

their past history of returns, yielding a vector of fitted volatilities. The returns

vectors are then standardised by the vectors of fitted univariate volatilities; the

32 In estimating the CCC-GARCH, the correlation matrix must only be inverted
once per iteration.

interrelationships between the volatility standardised returns are then modelled

by the matrix Qt. It is important to note, however, that Qt is not itself a

correlation matrix but one is found through the rescaling

H_t = D_t R_t D_t,
R_t = \tilde{Q}_t^{-1/2} Q_t \tilde{Q}_t^{-1/2},     (2.63)

where \tilde{Q}_t replaces the off-diagonal elements of Q_t with zeros but maintains its
principal diagonal; equivalently, \tilde{Q}_t is the Hadamard product of Q_t and the identity

matrix.
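One step of this recursion, including the rescaling that turns Q_t into a proper correlation matrix, can be sketched as follows; the parameter and input values in the test are illustrative only.

```python
import numpy as np

def dcc_step(Q_bar, Q_prev, eps_prev, alpha, beta):
    """One update of the DCC recursion of Eqs. (2.62)-(2.63).

    eps_prev is the vector of volatility-standardised returns from the
    univariate GARCH stage; returns the pair (Q_t, R_t).
    """
    Q = (1.0 - alpha - beta) * Q_bar \
        + alpha * np.outer(eps_prev, eps_prev) + beta * Q_prev
    d = 1.0 / np.sqrt(np.diag(Q))       # rescaling so R_t has a unit diagonal
    R = Q * np.outer(d, d)
    return Q, R
```

Note that correlation targeting appears through Q_bar: when α + β < 1 the recursion mean-reverts towards the unconditional correlation matrix.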

The use of \bar{Q} in the DCC-GARCH specification results in correlation target-
ing, or mean-reversion of conditional correlations to their unconditional levels.
This results in the estimation of the DCC-GARCH requiring only two
correlation-specific parameters; there are of course still (p + q) × N parameters for

each of the N univariate GARCH(p, q) models. This specification allows for

a significant degree of flexibility in modelling conditional correlations, each of

the cross-products of volatility standardised returns are included as are all the

lagged conditional correlations, yet only two parameters need be required for

the correlation specification. Such flexibility while still being parsimonious has

driven a large degree of research into DCC-GARCH and potential extensions.

Given the similarity in the dynamics of Qt to the univariate GARCH model,

it is perhaps unsurprising that some of the modifications already discussed in

Section 2.6.1 arise again in the multivariate context. As the leverage effect is a

well known property of equity markets, it may be useful to include a dummy

or threshold parameter into conditional correlations. Aside from the univari-

ate similarity, this asymmetric effect may be motivated by remembering that

crashes generally affect the majority of stocks in a similar fashion, leading to a


potential spike in correlation, whereas the relationship may not be as strong in

bull markets.

Hence, Cappiello, Engle and Sheppard (2006) introduce their Asymmetric Gen-

eralised DCC GARCH (AGDCC-GARCH) model to incorporate such leverage

effects, although the number of parameters to be estimated in this model is in-

feasible for large portfolio problems unless restrictions such as diagonal, scalar,

or symmetric parameter matrices are imposed (Silvennoinen and Terasvirta,

2009). Alternatively, Silvennoinen and Terasvirta (2005) propose the Smooth

Transition Conditional Correlation GARCH (STCC-GARCH) model and its

extension the Double Smooth Transition model (DSTCC-GARCH), which are

distinct from the DCC-GARCH type of models. In the STCC-GARCH, a func-

tion of the transition variable determines where between two extreme states of

correlation the conditional correlation matrix lies. Silvennoinen and Terasvirta

(2005) propose a logistic function but leave the transition variable open to be

defined for the problem at hand; the speed of transition and the location of the

transition are parameters to be estimated rather than imposed by the econo-

metrician. The STCC-GARCH model may be extended to allow the extreme

states of correlation to vary through time; this added flexibility is available

through the DSTCC-GARCH model. Finally, a model that is more flexible

than the STCC-GARCH model, yet not as flexible as the DSTCC-GARCH is

the Regime Switching Dynamic Correlation GARCH (RSDC-GARCH) model.

Introduced by Pelletier (2006), this model does not have the states of correlation
vary continuously through time; rather, a finite number of states exists, given
by the number of regimes.

Unfortunately in the context of this thesis, there is a limited amount of prior

research that examines the relative forecasting performance of the proposed

multivariate GARCH models just discussed. The only major study that this


author is aware of is a working paper by Laurent, Rombouts, and Violante

(2010) who examine the out-of-sample forecasting performance of 125 multi-

variate GARCH models in the 10 asset case for one-step-ahead forecasts. As

is done in this dissertation, they utilise the Model Confidence Set33 approach

in separating out candidate models; some interesting results emerge from their

paper. Firstly, in relatively calm markets, the hypothesis that the conditional

correlation is constant cannot be rejected, neither can the hypothesis that re-

sponses to shocks are symmetric. Secondly, the DCC-GARCH model, when

coupled with an asymmetric process for the conditional univariate volatilities

cannot be rejected as the best forecasting model overall. Thirdly, in relatively

volatile markets, the assumption that each of the conditional correlation pairs

are equal cannot be rejected; that is, each of the off-diagonal elements of the

conditional covariance matrix are identical, this model is represented by the Dy-

namic Equicorrelation model of Engle and Kelly (2008, 2009) and is discussed

in Chapter 5.

In their empirical study, Laurent, Rombouts, and Violante (2010) find that

the theoretical extensions to the original DCC-GARCH model that they exam-

ine, such as including leverage effects, do not result in superior out-of-sample

forecasting of the conditional covariance matrix. For this reason, only the orig-

inal DCC-GARCH class of models without these theoretical extensions will be

considered in the remainder of this thesis.

2.7.2 Multivariate Stochastic Volatility Models

Similar to the univariate case, a competing approach to modelling the condi-

tional covariance matrix is the use of multivariate Stochastic Volatility (MSV)

models. Unfortunately, unlike the case of multivariate GARCH models where

Laurent, Rombouts, Violante (2010) have conducted a significant empirical

33 Detail is given below in Section 2.9.

study, this author is not aware of any large-scale empirical work that examines

the relative forecasting performance of several potential MSV candidates nor

considers the case of practical implementation for a medium-sized number of
assets; the largest portfolio size that this author has come across is 8. As there

are a large number of candidate MSV models (a review is given in Chib, Omori,

and Asai (2009)) it is believed that it is beyond the scope of this thesis to em-

pirically examine the competing MSV models, and then compare them to the

candidate multivariate GARCH models and the multivariate implied volatility

models without more empirical guidance on the best choice of MSV model.

Hence, no MSV models are utilised in Chapter 5 of this thesis, which exam-

ines the out-of-sample forecasting performance of several multivariate volatility

models. Anecdotally, it should be said that MSV models are in general not

used in the academic literature to anywhere near the same level of prevalence

as multivariate GARCH models.

2.7.3 Multivariate Realised Volatility Models

A considerable amount of detail is given in Section 2.2.1 developing and dis-

cussing the properties of Realised Volatility (RV). This measure of volatility

has been shown to be a superior proxy variable for latent volatility relative to

the alternatives, such as daily squared returns. It is computationally easy to es-

timate and, if market microstructure effects are ignored, converges in the limit

to either the integrated variance if the process is continuous or the quadratic

variation in the presence of jumps. Recently, it has been shown that some of

the properties of RV that make it the preferred measure of latent univariate

volatility may be extended into the multivariate framework. While this thesis

does not make direct use of time-series models of multivariate volatility34, the

realised covariance (RCOV) is utilised in Chapter 5 in evaluating multivariate

34Some examples are discussed in ABCD (2006), and more recently the Multivariate Heterogeneous AutoRegressive (HAR) model has been proposed by Corsi and Audrino (2007).


forecasts of volatility and correlation; some discussion of this measure now pro-

ceeds.

Following ABCD (2006), let R_{t,∆} be the N × 1 vector of log-returns over one time-step of length ∆; R_{t,∆} ≡ P_t − P_{t−∆}. The RCOV may then be defined as

RCOV(t,∆) = Σ_{j=1}^{1/∆} R_{t−1+j·∆} R′_{t−1+j·∆}. (2.64)

If one assumes an N-dimensional Geometric Brownian Motion with drift as the

SDE that governs the price evolution, then as the sampling frequency increases

the RCOV converges in the limit to the Integrated Covariance (ICOV)

RCOV(t,∆) −→_{∆→0} ∫_{t−1}^{t} Σ_s Σ′_s ds, (2.65)

where Σ_t is the N × N dimensional instantaneous diffusion matrix that multiplies a multivariate standard Brownian Motion. Hence, the RCOV is the best proxy variable for the ICOV, the true level of interrelationship of asset returns.
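As a concrete sketch, the summation in Equation 2.64 reduces to a single matrix product of a day's intraday return matrix with itself. The 5-minute returns below are simulated purely for illustration; the function itself is a direct transcription of the estimator.

```python
import numpy as np

def realised_covariance(returns):
    """Realised covariance for one trading day from a (1/Delta) x N matrix
    of intraday log-returns: the sum of outer products R_j R_j' (Eq. 2.64),
    which equals R'R as a single matrix product."""
    returns = np.asarray(returns)
    return returns.T @ returns

# Simulated 5-minute returns for N = 3 assets
# (78 five-minute intervals in a 6.5-hour session).
rng = np.random.default_rng(0)
intraday = rng.normal(scale=0.001, size=(78, 3))
rcov = realised_covariance(intraday)

assert rcov.shape == (3, 3)
assert np.allclose(rcov, rcov.T)                    # symmetric by construction
assert np.all(np.linalg.eigvalsh(rcov) > -1e-12)    # positive semi-definite
```

By construction the estimate is symmetric and positive semi-definite whenever the number of intraday observations is at least N, a property not shared by some asynchronicity-robust alternatives such as the Hayashi-Yoshida estimator discussed below.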

Similar to the univariate case, Barndorff-Nielsen and Shephard (2004a) have demonstrated that the RCOV measure is robust to the presence of jumps, in which case it asymptotically approximates the quadratic covariation of the assets under consideration.

As in the univariate case, there are practical considerations that contradict

the theoretical results that one should sample as frequently as possible to best

estimate the realised volatility or covariance. To begin with, not all assets trade at exactly the same point in time; this asynchronous trading in the market gives rise to what is known as the Epps effect (Epps, 1979). It may even be that between two

points on a regularly spaced time grid, one asset does not trade at all, or that the trade occurs halfway between the two time points; this raises two problems. Firstly, if no trade occurs within an interval of time, then the return of that asset is zero; when this zero is multiplied through by the other returns, it appears as if no relationship exists between that asset and the remainder of the portfolio. By sampling too frequently, the realised covariances are biased downwards towards

zero. One must then question whether to use calendar time, which requires

sampling at regular intervals, event time, which requires sampling only after

a certain amount of information flow, or an estimator that is able to handle

asynchronous trading.

Depending on the liquidity of the stocks under consideration, one may cre-

ate an equally spaced time grid that samples frequently enough to improve the

accuracy of the RCOV measurement over daily returns, but not so frequently that one encounters the Epps effect. In an empirical setting, Sheppard (2006)

finds that sampling no more frequently than every 10 minutes leads to a flatten-

ing of the realised correlation plot, an analogue to the volatility signature plot

discussed earlier, for the Dow Jones Industrial Average stocks. That is, Shep-

pard’s (2006) results imply that sampling less frequently than every 10 minutes

should produce an unbiased estimate of the realised correlation. An alternative

is to use what is known as refresh time: waiting until all assets have traded at least once so that prices are synchronised by the (irregularly spaced) time grid; more detail is given, for example, in Barndorff-Nielsen, Hansen, Lunde and Shephard (2010) and Harris, McInish, Shoesmith, and Wood (1995).

A downside of this approach is that if just one of the assets is illiquid, then many returns observations on the more actively traded stocks are discarded. Finally, Hayashi and Yoshida (2005) introduce a covariance estimator that is able

to accommodate asynchronous trading directly. As noted by Patton (2011), this

estimation procedure does not guarantee positive definiteness and is therefore


not used in this thesis.
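The refresh-time scheme described above can be sketched as follows. The trade times are fabricated for illustration; the example pairs one liquid asset with one illiquid asset to show how the grid is only as fine as the slowest asset, which is precisely the downside noted above.

```python
import numpy as np

def refresh_times(trade_times):
    """Refresh-time grid: each grid point is the first instant by which
    every asset has traded at least once since the previous grid point.
    trade_times: list of sorted 1-D arrays of trade timestamps."""
    idx = [0] * len(trade_times)
    grid = []
    while all(i < len(t) for i, t in zip(idx, trade_times)):
        # earliest time by which every asset's next trade has occurred
        tau = max(t[i] for i, t in zip(idx, trade_times))
        grid.append(tau)
        # advance each asset's pointer past tau
        idx = [int(np.searchsorted(t, tau, side="right")) for t in trade_times]
    return grid

liquid = np.arange(0, 60, 1.0)          # trades every second
illiquid = np.array([5.0, 20.0, 47.0])  # trades three times in the minute
grid = refresh_times([liquid, illiquid])
assert grid == [5.0, 20.0, 47.0]  # grid paced entirely by the illiquid asset
```

Here sixty observations on the liquid asset are collapsed to three grid points, illustrating why a single illiquid constituent forces many returns observations on the actively traded stocks to be discarded.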

Secondly, when the trade occurs between the two time points, does one in-

terpolate to find the price of the asset at the specific time of interest, or just

use the last observed price? As interpolation requires the use of future infor-

mation, the literature has settled on using the last observed price (Hansen and

Lunde, 2006).
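The last-observed-price (previous-tick) convention can be sketched directly; the trade times and prices below are fabricated for illustration. Unlike interpolation, each sampled price uses only information available at the grid point.

```python
import numpy as np

def previous_tick(trade_times, prices, grid):
    """Sample the last observed price at or before each grid point,
    avoiding the look-ahead that interpolation would introduce."""
    trade_times = np.asarray(trade_times)
    prices = np.asarray(prices)
    idx = np.searchsorted(trade_times, grid, side="right") - 1
    return prices[idx]

times = np.array([0.0, 1.2, 3.7, 4.1])
px = np.array([100.0, 100.5, 100.2, 100.4])
# On a regular one-unit grid, at t = 2 and t = 3 the last trade was at 1.2.
sampled = previous_tick(times, px, [1.0, 2.0, 3.0, 4.0])
assert np.allclose(sampled, [100.0, 100.5, 100.5, 100.2])
```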

It must also be noted that more advanced measures of the interrelationships of

assets are available than the RCOV. The estimator discussed above falters as an

unbiased estimator of the true covariance matrix when the intraday returns are

correlated; an alternative estimator for the true covariance matrix which is able

to handle correlated intraday returns is the multivariate realised kernel approach

of Barndorff-Nielsen, Hansen, Lunde and Shephard (2010). However, this more

advanced methodology for estimating the covariance matrix is not implemented

here due to the empirical findings of Laurent, Rombouts, and Violante (2010). In

their broad-ranging examination of the forecasting ability of 125 multivariate

GARCH models, they utilise the RCOV estimator described above, sampling at regular 5-minute intervals. They examine their results for robustness by

also using the RCOV estimator sampling at 1- and 15-minute intervals, as well

as a realised kernel estimator using 1-, 5-, and 15-minute sampling windows;

they find that their results are robust to each of these alternative measures of

the realised interrelationships of the assets under consideration. Therefore, the RCOV estimator described above, using refresh time sampling with a minimum window of 10 minutes to avoid the Epps effect35, is believed to be a robust estimator for the purposes of evaluating multivariate covariance forecasts as used in this thesis.

35The choice of 10 minutes is based on Sheppard (2006) for the DJIA stocks.


2.8 Implied Correlation

The above discussion highlights that the GARCH, SV, and RV based univariate volatility models have natural extensions in the multivariate literature; this is not the case for implied measures. The reason for this is simple: no derivatives market has developed to the extent that there are exchange listed options on the co-movement between assets such as Microsoft

and Boeing, or even between separate asset classes such as gold and Treasury

bond rates. Therefore, one is not able to use market prices of such options to

imply a risk-neutral forecast of the co-movement between assets; there is no

multivariate equivalent to the VIX. While it is possible to find the implied

volatilities of the individual stocks to populate the diagonal elements of a con-

ditional covariance matrix, there is currently no way that each of the individual

off-diagonal elements may be separately calculated from exchange traded op-

tions on the co-movement of that particular pair of assets. Hence, there is a

scarcity of research into the multivariate volatility forecasting ability of option

implied measures. There is, however, one special case in which option implied measures may readily be used in forecasting multivariate volatility.

It is possible to explicitly use option implied measures in multivariate volatil-

ity forecasting if one makes the assumption that each of the off-diagonal ele-

ments of the conditional correlation matrix is the same; that is, all assets are

equally correlated with each other. This assumption has an analogue in the

multivariate GARCH literature through the Constant Conditional Correlation

GARCH model of Bollerslev (1990)36. Further, recalling the result of Laurent, Rombouts, and Violante (2010), where the assumption of constant conditional correlation could not be rejected in calmer markets, the same assumption in the options market may not easily be dismissed.

36Note that the CCC-GARCH model does not assume that the correlations are the same across all pairs at a point in time, but that the correlation is constant over time for each asset pair. The comparison is made purely to highlight that an obviously incorrect assumption regarding correlation has previously been implemented in the literature in an attempt to simplify estimation.

To illustrate how the implied equicorrelation may be extracted, consider a market index on which options trade, for example the S&P 500 Index; it has been discussed that a model-free estimate of the implied volatility of the index as a whole, the VIX, can be constructed. For each of the constituent stocks of that index on which options trade, one can similarly determine a model-free implied volatility. These measures may be combined through the portfolio variance identity

σ² = Σ_{j=1}^{n} w_j² s_j² + 2 Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} w_i w_j s_i s_j ρ_{i,j}, (2.66)

where σ is the standard deviation of the portfolio, w_i is the portfolio weight given to asset i, s_i is the standard deviation of asset i, ρ_{i,j} is the correlation between assets i and j, and there are n assets in the portfolio. Making the assumption of equicorrelation, one can re-arrange the portfolio variance identity to yield

IC = (σ² − Σ_{j=1}^{n} w_j² s_j²) / (2 Σ_{i=1}^{n−1} Σ_{j=i+1}^{n} w_i w_j s_i s_j), (2.67)

where the relevant variables have changed such that σ is the annualised implied

22-day-ahead standard deviation of the index, wi is the portfolio weight given

to asset i, and si is the annualised implied 22-day-ahead standard deviation

of asset i. Such a process generates a scalar conditional correlation that may

be used in the same covariance decomposition as Bollerslev (1990) and Engle

(2002)

H_t = D_t R_t D_t, (2.68)

where Rt now has ones along the diagonal and IC on every off-diagonal ele-

ment. Hence, the assumption of equicorrelation allows the information implied

by the options market to directly be used in generating multivariate volatility


forecasts. The obvious restriction on such a process is that both the index and

all of its constituents must have options actively traded upon their prices; otherwise there is some form of mismeasurement of the implied equicorrelation.
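The calculation in Equations 2.66 to 2.68 can be sketched as follows. The weights and annualised implied volatilities are hypothetical numbers chosen for illustration; the example builds an index variance with a known correlation of 0.4 and confirms that Equation 2.67 recovers it.

```python
import numpy as np

def implied_equicorrelation(sigma_index, s, w):
    """Back out the single implied correlation from the index implied
    volatility and constituent implied volatilities (Eq. 2.67)."""
    s, w = np.asarray(s), np.asarray(w)
    own = np.sum(w**2 * s**2)
    # 2 * sum_{i<j} w_i w_j s_i s_j = (sum_i w_i s_i)^2 - sum_i w_i^2 s_i^2
    cross = np.sum(np.outer(w * s, w * s)) - own
    return (sigma_index**2 - own) / cross

w = np.array([0.5, 0.3, 0.2])       # hypothetical index weights
s = np.array([0.20, 0.30, 0.25])    # hypothetical constituent implied vols
rho = 0.4
sigma2 = np.sum(w**2 * s**2) + rho * (np.sum(np.outer(w*s, w*s)) - np.sum(w**2*s**2))
ic = implied_equicorrelation(np.sqrt(sigma2), s, w)
assert np.isclose(ic, 0.4)          # Eq. 2.67 recovers the correlation

# Populate H_t = D_t R_t D_t with IC on every off-diagonal element (Eq. 2.68)
R = np.full((3, 3), ic)
np.fill_diagonal(R, 1.0)
D = np.diag(s)
H = D @ R @ D
assert np.isclose(H[0, 1], s[0] * s[1] * ic)
```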

As has been noted, there is a limited amount of research into the use of option implied measures in multivariate volatility forecasting; at the time of writing, this author is aware of only two pieces of research that use implied equicorrelation. In their paper introducing the DCC-DECO model37, Engle

and Kelly (2008) calculate the Dow Jones Industrial Average implied equicor-

relation and show that it closely matches the fitted equicorrelation from the

DCC-DECO models, although they do not use the implied equicorrelation directly in model estimation or forecasting. Implied equicorrelation has previously been used for modelling purposes by Castren and Mazzotta (2005) in the bivariate setting of exchange rates; they find that a combination forecast of implied equicorrelation and a multivariate GARCH model is preferred, although no out-of-sample forecasting exercise is conducted and only in-sample adjusted R² values are reported.

Encouraged by these results and by the publication of implied correlation measures by the CBOE, this thesis adapts the methodology employed in the univariate context

of incorporating option implied measures into time-series models of volatility

to the multivariate volatility analogue.

37Discussed fully in Chapter 5.


2.9 Comparing Forecast Performance

Introduction

Each of the empirical Chapters that follow in this dissertation examines the statistical performance of various means of forecasting; therefore, a robust method of discerning the collection of superior models is required. Being central to the empirical component of this thesis, a considerable amount of detail is given regarding alternative forecast evaluation procedures. This Section discusses how forecast performance may be measured, and how these measurements are then used to discriminate between candidate models. The focus is purely on statistical loss functions; economic loss functions, such as portfolio variance, portfolio utility and Value-at-Risk, are not discussed.

2.9.1 Regression based measures

As has been discussed in Section 2.5, some early tests of the performance of

implied volatility relied on comparing the R2 values of linear regressions. Pre-

dominantly, the specification used was the Mincer-Zarnowitz (1969) regression,

defined by

φ_t = α_i + β_i f_t^i + ε_{i,t}, (2.69)

where φ_t is a measure of the target for the time period of interest, t, and f_t^i is the forecast of that target from model i. If model i produces unbiased forecasts, then E[α_i] = 0 and E[β_i] = 1. However, sampling error will generally result in this condition being violated and a joint hypothesis of α_i = 0 and β_i = 1 is tested instead.
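A minimal sketch of the Mincer-Zarnowitz regression and the joint Wald test of α = 0 and β = 1 follows. The forecasts and proxy values are simulated, with α > 0 and β < 1 deliberately built into the data-generating process so that the test rejects unbiasedness.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 500
f = rng.uniform(0.01, 0.05, T)                     # hypothetical volatility forecasts
phi = 0.002 + 0.9 * f + rng.normal(0, 0.003, T)    # simulated proxy: alpha > 0, beta < 1

# OLS of the proxy on the forecast: phi_t = alpha + beta * f_t + e_t
X = np.column_stack([np.ones(T), f])
alpha_hat, beta_hat = np.linalg.lstsq(X, phi, rcond=None)[0]

# Joint Wald test of H0: alpha = 0, beta = 1
resid = phi - X @ np.array([alpha_hat, beta_hat])
s2 = resid @ resid / (T - 2)
V = s2 * np.linalg.inv(X.T @ X)                    # covariance of (alpha, beta)
r = np.array([alpha_hat - 0.0, beta_hat - 1.0])    # deviations from H0
wald = r @ np.linalg.inv(V) @ r                    # ~ chi-squared(2) under H0

assert abs(beta_hat - 0.9) < 0.1
assert wald > 5.99   # exceeds the 5% chi-squared(2) critical value for this draw
```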

The review article of Poon and Granger (2003) demonstrates that this sim-

ple forecast evaluation tool is quite popular in the empirical literature, nearly

a third of the articles they survey utilise the Mincer-Zarnowitz regression. The


consensus of these papers is that α_i > 0 and β_i < 1, implying that volatility

forecasts are typically too high in less volatile periods and too low in more

volatile periods.

The Mincer-Zarnowitz regression framework is not utilised here for two rea-

sons. Firstly, the target measures in this thesis are generally variations of a

volatility proxy and Patton (2011) shows that rankings from these regressions

are sensitive to the assumed distribution of the volatility proxy: changes in the proxy can lead to changes in forecast rankings. For robustness, several volatility

proxies are used in each of the Chapters and in light of the result of Patton

(2011), some doubt may remain regarding the consistency of results given the

different proxies. Secondly, the Mincer-Zarnowitz regression is designed as a

measure of forecast bias and not of forecast accuracy. An unbiased but noisy

volatility model may be less useful than a biased but generally accurate model.

Hence, the measure of bias from the Mincer-Zarnowitz regression alone will not

fulfill the requirement of consistent ranking of models. Although these regres-

sions also provide R2s that may be ranked, more advanced measures of accuracy

exist which are specifically designed for the purposes of forecast ranking and

they are now surveyed.

2.9.2 Statistical loss functions

Statistical loss functions are measures of the distance between a forecast and a particular target; different loss functions weight losses in different ways, and the choice of loss function may be determined by the practical problem at hand. While many choices exist, and these shall be discussed shortly, loss functions may be described generally as

L(φ_t, f_t^i), (2.70)


where φ_t is the target of interest, f_t^i is the forecast of the target generated by model i at time t, and L(·, ·) is a particular loss function. In an evaluation of the merits of various functional forms of loss functions, Patton (2011) details the following specifications of Mean-Square Error (MSE), Mean-Absolute Error (MAE), and one likelihood based measure:

MSE_{i,t} = (φ_t − f_t^i)², (2.71)

MSE-LOG_{i,t} = (log φ_t − log f_t^i)², (2.72)

MSE-SD_{i,t} = (√φ_t − √f_t^i)², (2.73)

MSE-prop_{i,t} = (φ_t / f_t^i − 1)², (2.74)

MAE_{i,t} = |φ_t − f_t^i|, (2.75)

MAE-LOG_{i,t} = |log φ_t − log f_t^i|, (2.76)

MAE-SD_{i,t} = |√φ_t − √f_t^i|, (2.77)

MAE-prop_{i,t} = |φ_t / f_t^i − 1|, (2.78)

QLIKE_{i,t} = log f_t^i + φ_t / f_t^i. (2.79)

It may be observed that the above specifications all achieve a minimum when the forecast is equal to the target. As given in Table A.1, the first derivatives of these loss functions all return the scalar value 0 when f_t^i = φ_t. Further, all of the second derivatives evaluated at f_t^i = φ_t are non-negative, indicating that a minimum has been achieved38. Also, it may be observed that each of the functions is monotonically increasing in the magnitude of the error: a larger error will lead to a higher loss function value. Alternatively, a model that produces a smaller loss function value may be thought of as producing a superior forecast.

38The second derivatives for the MAE measures are all zero as the function is discontinuous at f_t^i = φ_t, which indicates a possible inflection point. However, closer inspection of the derivatives reveals that ∂²/∂(f_t^i)² |_{φ_t < f_t^i} < 0, and ∂²/∂(f_t^i)² |_{φ_t > f_t^i} > 0, so it is indeed a minimum that is reached.


While each of the detailed loss functions has the desirable properties of achieving a minimum when the forecast is equal to the target, and of monotonically increasing with the magnitude of the error, prior studies have demonstrated that some loss functions should be preferred. Patton (2011) shows that MSE and QLIKE are robust to noise in the volatility proxy39 and give consistent rankings of models regardless of the chosen volatility proxy. In the context of this dissertation, in the first empirical Chapter, the proxy is the RV over the subsequent 22 trading days; the second Chapter uses realised covariance, which is based on RV; and the final empirical Chapter utilises squared 5-minute returns. As each of these statistics is a noisy measure of volatility, or is constructed from a noisy measure of volatility, the use of the MSE and QLIKE measures is believed to be justified given the results of Patton (2011). It should be noted that MSE weights errors symmetrically whereas QLIKE penalises under-prediction more heavily than over-prediction.
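The two preferred loss functions, and the asymmetry of QLIKE, can be illustrated with hypothetical numbers: two forecasts equally far from the proxy receive identical MSE losses, but QLIKE punishes the under-prediction more severely.

```python
import numpy as np

def mse(proxy, forecast):
    """Mean-Square Error contribution for one period (Eq. 2.71)."""
    return (proxy - forecast) ** 2

def qlike(proxy, forecast):
    """QLIKE loss for one period (Eq. 2.79); minimised at forecast = proxy."""
    return np.log(forecast) + proxy / forecast

proxy = 0.04              # e.g. a realised-variance measurement
under, over = 0.02, 0.06  # forecasts equally far below and above the proxy

assert np.isclose(mse(proxy, under), mse(proxy, over))   # MSE is symmetric
assert qlike(proxy, proxy) < qlike(proxy, under)         # minimum at the target
# QLIKE penalises the under-prediction more than the over-prediction:
assert (qlike(proxy, under) - qlike(proxy, proxy)
        > qlike(proxy, over) - qlike(proxy, proxy))
```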

Both the MSE and QLIKE loss functions, as well as the other loss functions

described, return a vector of scalar values as the measurement of forecast er-

ror for each model considered. These vectors of losses may then be averaged

to evaluate relative forecast performance, the lowest mean loss function value

belongs to the generally more accurate forecasting model. While these loss

functions allow forecasts to be ranked by their scalar values, they give no con-

crete indication of whether the performance of one forecasting methodology is

statistically separable from its competitors. Similar to the comparison of say,

the height of two sample groups, more information is required to decide if one

group is statistically separable from the other. Some possible methodologies for

a more rigorous examination of the loss function values are now discussed.

39Recalling the discussion in Section 2.2 of how the true level of volatility is latent, a volatility proxy is a particular choice of measure for this unobservable variable. Popular choices are squared daily returns or the realised volatility of the returns, with the noise present in the RV being smaller than r²_t for reasons outlined previously.


2.9.3 Distinguishing relative forecast performance

It has been discussed that a more robust method of evaluating model forecasts

than a simple ranking of mean loss function values is required. As an example,

the univariate SV model of volatility is significantly more time-consuming and

computationally intensive to estimate than a GARCH(1,1) model, and both are

more difficult to estimate than the at-the-money Black-Scholes-Merton implied

volatility. It could be that the SV model is invariably the superior model for

forecasting volatility, but that the difference between the models is minimal and

not statistically significant. What need is there, then, to estimate the more complicated SV model just because it is ranked highest, when one would do just as well, statistically speaking, by using either of the less complicated methods? Further,

how does one discern between the differences in ranks? Is the difference between

the first and second model the same as the fourth and fifth model? It is clear

that there is ambiguity in simply ranking models on mean loss function values.

Two papers by Diebold and Mariano (1995) and West (1996) provide guidance

on more robust methodologies for the statistical separation of candidate models.

To begin, Diebold and Mariano (1995) find that several earlier methods for separating models, such as the F-test, the Morgan-Granger-Newbold test (Morgan (1939-1940); Granger and Newbold (1977)), and the Meese-Rogoff (1988) test, were not robust

to non-Gaussian error distributions, serial and contemporaneous correlation

of forecast errors, small samples, and combinations of these. To circumvent

these problems, independent research by Diebold and Mariano (1995) and West

(1996) introduced tests of Equal Predictive Accuracy (EPA). The tests are dis-

tinct in that West (1996) allows for errors in parameter estimation in his test

of EPA, while Diebold and Mariano (1995) take the forecasts as given. If the

forecasts are taken as given, as is the case in this dissertation, then the tests

are identical (Hansen, 2005); the description that follows is that of the Diebold


and Mariano (1995) procedure.

It has been discussed that one way to evaluate relative performance is to com-

pute the mean values from a particular loss function for each candidate model,

the model that produces the lowest mean is on average the more accurate fore-

casting tool. Alternatively, one may compute the difference in loss function

values between models i and j through a loss differential, dij,t, as given by the

following

d_{ij,t} = L(φ_t, f_t^i) − L(φ_t, f_t^j), t = 1, ..., T, i ≠ j, (2.80)

where L(·, ·) is chosen to be one of the loss functions described above. Given that

the loss functions considered here are monotonically increasing in the magnitude

of the error, the loss differential is positive (negative) if model j (i) is more

accurate. The test of EPA is premised on the null hypothesis that the mean

loss differential is zero,

H0 : E(d_{ij,t}) = 0, ∀ i, j, i ≠ j. (2.81)

This hypothesis may be tested using the Diebold-Mariano t-statistic,

t_{ij} = d̄_{ij} / √(var(d̄_{ij})), (2.82)

where d̄_{ij} = (1/T) Σ_{t=1}^{T} d_{ij,t}, and var(d̄_{ij}) is an estimator of the asymptotic variance of d̄_{ij}. The statistic t_{ij} provides scaled information on the mean difference in the

forecast quality of models i and j. The Diebold-Mariano testing framework40

can accommodate a wide range of loss functions, including asymmetric and

discontinuous functional forms. However, the procedure does not easily accom-

modate the comparison of a large number of candidate models for the following

40More detail on the Diebold-Mariano procedure, which is beyond the scope of this thesis as the exposition is purely to show the development towards the MCS approach, is given in their original paper.


two points: firstly, a data snooping problem exists when a large number of mod-

els are considered (White, 2000); secondly, the testing procedure requires the

estimation of a covariance matrix with the same dimensionality as the number

of candidate models considered (Hansen, Lunde, and Nason, 2003). Given the

wide body of available forecasting methodologies for univariate and multivari-

ate volatility discussed above, a more flexible approach that can manage the

inclusion of a large number of candidate models simultaneously is required here.
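The Diebold-Mariano statistic of Equation 2.82 can be sketched as follows. For brevity the sketch uses the simple iid sample variance of the loss differential rather than the long-run (HAC) variance estimator of the full procedure, and the proxy and forecasts are simulated: one forecaster is accurate, the other carries a deliberate upward bias.

```python
import numpy as np

def diebold_mariano(loss_i, loss_j):
    """t-statistic for H0: E(d_t) = 0, with d_t = L_{i,t} - L_{j,t}.
    Uses the iid variance of the mean differential (no HAC correction)."""
    d = np.asarray(loss_i) - np.asarray(loss_j)
    T = d.size
    return d.mean() / np.sqrt(d.var(ddof=1) / T)

rng = np.random.default_rng(2)
T = 1000
phi = rng.uniform(0.01, 0.05, T)            # simulated volatility proxy
f_good = phi + rng.normal(0, 0.002, T)      # accurate forecaster
f_bad = phi + rng.normal(0.01, 0.002, T)    # biased forecaster

t_stat = diebold_mariano((phi - f_good) ** 2, (phi - f_bad) ** 2)
assert t_stat < -2.0  # negative: model i (the accurate one) has lower loss
```

A large negative statistic rejects EPA in favour of model i; in practice the variance in the denominator would be replaced by a HAC estimator to accommodate serial correlation in the loss differentials.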

An alternative to testing Equal Predictive Accuracy is the ‘reality check’ ap-

proach of White (2000). In place of pairwise tests of EPA, the reality check

method simultaneously tests all models against a benchmark model. Denoting

the competing models i = 1, ...,m, where m is the number of candidate models

and the benchmark model as i = 0, the loss differentials are given by

d_{i,t} = L(φ_t, f_t^i) − L(φ_t, f_t^0), t = 1, ..., T, ∀ i = 1, ..., m. (2.83)

The assumption underlying this test is that no model outperforms the bench-

mark, or

H0 : min_{i=1,...,m} E(d_{i,t}) ≥ 0. (2.84)

Again, the detail of how this hypothesis may be tested is beyond the scope of this thesis and is available in the original paper; suffice to say here that the process is slightly more complicated given that it incorporates a bootstrap procedure for the loss differentials. There are two problems with the implementation of the reality check method, one technical

problems with the implementation of the reality check method, one technical

and one practical. The technical problem, as found by Hansen (2005), is that as

the number of models under consideration increases, the p-values for rejection

of the null hypothesis increase. That is, including a number of poor performing

models diminishes the ability of the test to discern between candidate models.

The practical problem is that the reality check method requires the definition


of a benchmark model, known as multiple comparisons with control; in many

practical circumstances, there may not be a clear choice of benchmark. While

the test of Superior Predictive Accuracy (SPA) introduced by Hansen (2005)

offers improvements over the reality check method by being more robust to the

inclusion of poor performing models, Hansen, Lunde and Nason (2003) are crit-

ical of the fact that the SPA method still requires the definition of a benchmark

and the SPA procedure only offers information about that potentially arbitrar-

ily chosen benchmark. A procedure outlined by Romano and Wolf (2005) also

improves on the reality check method by producing a set of models that sig-

nificantly outperform the chosen benchmark, but again a benchmark must be

defined. The Model Confidence Set requires no such definition of a benchmark,

however if a natural benchmark exists then the MCS may still address the same

objective as the SPA test (Hansen, Lunde and Nason, 2010).

A recent advance in the statistical separation of forecast performance is the

Model Confidence Set (MCS) of Hansen, Lunde and Nason (2003, 2010). The

MCS may be thought of as the set of models that provides the best forecasts

with a given level of confidence; this may include several models which are statistically indistinguishable, that is, of equal predictive accuracy. The construction of an MCS is an it-

erative procedure and requires a sequence of tests for EPA, the set of candidate

models is reduced by deleting the worst performing model at each iteration until

the null hypothesis of EPA cannot be rejected41; this procedure is now outlined.

The total number of candidate forecasting methods is now denoted by m0,

therefore the competing forecasts are given by f_t^i, i = 1, 2, ..., m0. The proce-

dure starts with a full set of candidate models M0 = {1, ...,m0}. The MCS is

determined by sequentially trimming models from M0, reducing the number of

models to m < m0. Prior to starting the sequential elimination procedure, all

41As the MCS methodology involves sequential tests for EPA, Hansen, Lunde and Nason (2003, 2010) utilised the testing principle of Pantula (1989) to avoid size distortions.


loss differentials between models i and j are computed, exactly as in Equation 2.80; these do not need to be re-calculated at each iteration. Then, the null hypothesis of EPA given in Equation 2.81 is tested; if H0 is rejected at the significance level α, the worst performing model is removed and the process is continued until non-rejection occurs; the set of surviving models is the MCS, M*_α. If a fixed significance level α is used at each step, M*_α contains the best models from M0 with (1 − α) confidence42. It is important to note that M*_α

is dependent on the quality of the measurement variable, i.e. a more accurate

measure will produce a smaller MCS than a noisy measure, and that the final

MCS is dependent upon the loss function chosen. More technical detail on the

process is now given, and Hansen et al. (2010) note that it is similar to the

trace-test procedure used for selecting the rank of a matrix used in choosing

the number of cointegration relations within a vector autoregressive model.

Similar to the Diebold-Mariano framework, the test of EPA at each iteration of

the MCS framework is again based around the t-statistic defined in Equation

2.82, but now the asymptotic variance estimate is obtained from a bootstrap

procedure which is described below. The difficulty in evaluating the null hy-

pothesis under the MCS framework is that at each iteration the information

from (m− 1)m/2 unique t-statistics needs to be distilled into one test statistic.

This problem arises as the MCS evaluates an entire set of models simultane-

ously, rather than the pairwise evaluation in the Diebold-Mariano setting. To

overcome this problem, Hansen et al. (2003, 2010) propose the range statistic,

T_R = max_{i,j∈M} |t_{ij}|, (2.85)

42Despite the testing procedure involving multiple hypothesis tests, this interpretation is a statistically correct one. See Hansen et al. (2010) for a detailed discussion of these aspects.


and the semi-quadratic statistic,

T_SQ = Σ_{i,j∈M, i<j} t_ij² ,   (2.86)

as test statistics to determine EPA. Values of either statistic that are signifi-

cantly larger than zero indicate a rejection of the EPA hypothesis. In the case

of rejection, the model removed is that with the largest mean loss relative to

the competing models after standardising for the variance of the loss; the model

is defined as M_i:

i = arg max_{i∈M} d̄_i / √(var(d̄_i)) ,   (2.87)

and d̄_i = (1/(m−1)) Σ_{j∈M} d̄_ij. As the kth model is eliminated from M, save the

bootstrapped p-value of the EPA test found in Equation 2.88 as p(k); details

of the p-value calculation are given below. For instance, if model M_i was eliminated in the third iteration, i.e. k = 3, the p-value for this ith model is then p_i = max_{k≤3} p(k). This ensures that the model eliminated first is associated with the smallest p-value, indicating that it is the least likely to belong in the MCS43.
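The elimination loop just described can be sketched in code. The following is an illustrative numpy implementation, not the authors' code: for brevity an i.i.d. bootstrap stands in for the circular block bootstrap described below, and only the range statistic T_R is used.

```python
import numpy as np

def mcs_range(losses, alpha=0.10, n_boot=999, seed=0):
    """Illustrative MCS elimination loop using the range statistic T_R.

    losses: (T, m) array of per-period losses for m candidate models.
    Returns the indices of the surviving models (the estimated MCS)."""
    rng = np.random.default_rng(seed)
    T = losses.shape[0]
    alive = list(range(losses.shape[1]))
    while len(alive) > 1:
        k = len(alive)
        d = losses[:, alive][:, :, None] - losses[:, alive][:, None, :]  # d_ij,t
        dbar = d.mean(axis=0)                                            # mean loss differentials
        idx = rng.integers(0, T, size=(n_boot, T))                       # i.i.d. resampling (stand-in)
        dbar_b = d[idx].mean(axis=1)                                     # bootstrap means, (B, k, k)
        var_ij = ((dbar_b - dbar) ** 2).mean(axis=0)                     # bootstrap variance of d-bar_ij
        np.fill_diagonal(var_ij, np.inf)                                 # ignore i == j pairs
        T_R = (np.abs(dbar) / np.sqrt(var_ij)).max()                     # Equation 2.85
        T_R_b = (np.abs(dbar_b - dbar) / np.sqrt(var_ij)).max(axis=(1, 2))
        p = (T_R_b > T_R).mean()                                         # bootstrap p-value, Equation 2.88
        if p >= alpha:
            break                                                        # EPA not rejected: stop
        d_i = dbar.sum(axis=1) / (k - 1)                                 # d-bar_i
        var_i = ((dbar_b.sum(axis=2) / (k - 1) - d_i) ** 2).mean(axis=0)
        alive.pop(int(np.argmax(d_i / np.sqrt(var_i))))                  # elimination rule, Equation 2.87
    return alive
```

On simulated losses in which one model is clearly inferior, the loop eliminates that model and returns the remainder as the estimated MCS.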

While the two proposed test statistics are easily calculated, the distribution

of either test statistic is not trivial. The distributions of both depend on the covariance structure between the forecasts from the candidate models, and these change at each iteration as models are removed. Hence, p-values for the rejection of a model must be found through a bootstrap procedure for that distribution. The bootstrap procedure relies on the generation of bootstrap replications of d_ij,t and must take the temporal dependence in d_ij,t into consideration; following Hansen et al. (2003, 2010), and Becker and Clements (2008)

43 See Hansen et al. (2010) for a detailed interpretation of the MCS p-values.


this is achieved by the circular block bootstrap with constant block size44; the

basic steps of the bootstrap procedure are now described.

Recall that {d_ij,t} is the sequence of T observed loss differentials between model i and model j. B block bootstrap counterparts, {d(b)_ij,t}, b = 1, ..., B, i ≠ j, are generated for all combinations of i and j. The estimators for the asymptotic variances which are required for the test statistics in Equations 2.85, 2.86 and 2.87 are given by

var(d̄_ij) = B⁻¹ Σ_{b=1}^{B} (d̄(b)_ij − d̄_ij)² ,

var(d̄_i) = B⁻¹ Σ_{b=1}^{B} (d̄(b)_i − d̄_i)² ,

∀ i, j ∈ M, i ≠ j. The p-value for the rejection of a candidate model is calculated

by comparing either the proposed range or semi-quadratic statistic with their

bootstrap realisations,

p_τ = B⁻¹ Σ_{b=1}^{B} I(T(b)_τ > T_τ)   for τ = R, SQ,   (2.88)

where I(·) is the indicator function.

In calculating the above p-values, the B bootstrap versions of the test statistics T_R or T_SQ are calculated by replacing |d̄_ij| and (d̄_ij)² in Equations 2.85 and 2.86 with |d̄(b)_ij − d̄_ij| and (d̄(b)_ij − d̄_ij)² respectively. The denominator in the test statistics remains the bootstrap estimate discussed above. This completes

the description of the Model Confidence Set procedure.
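The index generation underlying the circular block bootstrap can be sketched as follows; this is a minimal illustration with a constant block length, assuming the block length has already been chosen (e.g. by the AIC-based rule noted in the footnote).

```python
import numpy as np

def circular_block_indices(T, block_len, rng):
    """One circular block bootstrap resample of the time indices 0..T-1.

    Whole blocks of constant length are drawn with wrap-around past the
    end of the sample, preserving local temporal dependence in d_ij,t."""
    n_blocks = -(-T // block_len)                       # ceil(T / block_len)
    starts = rng.integers(0, T, size=n_blocks)          # random block start points
    idx = (starts[:, None] + np.arange(block_len)) % T  # consecutive indices, wrapped
    return idx.ravel()[:T]                              # trim to original length

# A resampled series d_b = d[circular_block_indices(len(d), L, rng)] then feeds
# the bootstrap variance estimators and test statistics described above.
```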

44 The reason for this choice of bootstrapping procedure is beyond the scope of this thesis. In implementing this approach, the block length is determined by selecting the largest lag length from the autoregressive process for each d_ij,t, where the Akaike information criterion is used to determine lag length. More detail is available in White (2000).


Chapter 3

Implied Volatility and the

Volatility Risk Premium

3.1 Introduction

The preceding Chapter has highlighted that, broadly speaking, forecasts of

univariate volatility may be generated either through time series models which

utilise purely historical data or by finding the volatility implicit in option prices.

Time-series models generally produce forecasts by placing weights on the vector of historical volatility; the weighting scheme typically depends on the distance in time from the current period. Options market based forecasts are produced by

using the market prices of forward looking options to find the market equilib-

rium level of volatility, or the implied volatility. Therefore, implied volatility

(IV) should represent the market’s best prediction of the future volatility of the

underlying asset under the assumption of risk-neutrality (see, amongst others,

Jorion, 1995, Poon and Granger, 2003, 2005). The results of previous studies

investigating forecast performance have been mixed and appear to depend on

the context of the problem; while already covered previously in Chapter 2, some

of the main results are again discussed here for their relevance to the current


research question being addressed.

It should be stated that the focus of this Chapter is on generating forecasts of the daily level of equity index volatility; the reason for this focus is now outlined. The prior Chapter discussed that there are several methods for inferring

the level of IV. Many of the early approaches relied on inverting the market

prices of options under the assumption of a particular pricing model; differing levels of IV would be found for differing pricing models and sometimes for differing strike prices. More recently, a model-free measure of IV exists and is

premised on a quite general SDE that allows for the presence of jumps; this

model-free measure has been shown to outperform model specific IV in fore-

casting exercises. The preference is then to utilise a model-free measure and

the most well known of these is the Volatility Index (VIX), the model-free level

of IV for the S&P 500 Index over the following trading month. The VIX is

publicly available, with the Chicago Board Options Exchange now publishing its level

at 15 second intervals. While it is possible to construct the model-free IV for

individual stocks or for other indexes, there are data limitations that prevent

the construction of a time-series as accurate and as long-dated as the VIX.

For this reason, the focus of this Chapter will be on forecasts of equity index

volatility, specifically the S&P 500 Index.

With regard to prior research in equity index volatility, Day and Lewis (1993),

Canina and Figlewski (1993), Ederington and Guan (2002), and Koopman,

Jungbacker and Hol (2005) find in favour of time-series models that utilise his-

torical information alone, or model based forecasts (MBF). On the other hand,

Fleming, Ostdiek and Whaley (1995), Christensen and Prabhala (1998), Flem-

ing (1998) and Blair, Poon and Taylor (2001), hereafter BPT, all find that

equity index IV dominate MBF. While the results of individual studies are

mixed, the survey of 93 articles compiled by Poon and Granger (2003, 2005) re-


ports that IV often provides more accurate volatility forecasts than competing

MBF. However, these studies typically undertake pairwise comparisons rather

than simultaneously evaluating a broad range of competing forecasts.

The general result that IV estimates often provide more accurate volatility

forecasts than competing MBF may be rationalised on the basis that IV should

be based on a larger and timelier information set. IV is derived from the equilibrium market expectation of a future-dated payoff, rather than from the purely historical data on which MBF are estimated. Hence, it is argued that IV contains

all prior information garnered from historical data while also incorporating the

additional information of the beliefs of market participants regarding future

volatility; in an efficient options market, this additional information should

yield superior forecasts.

As this Chapter specifically focuses on the VIX, some recent relevant results are found in Becker, Clements and White (2006). They examined whether the VIX contains any information relevant to future volatility beyond that reflected in MBF. As they conclude that the VIX does not contain any such information, this result, prima facie, appears to contradict the previous findings summarised in Poon and Granger (2003). However, no forecast comparison is undertaken and they merely conjecture that the VIX may be viewed as a combination of MBF. Subsequently, Becker and Clements (2008) show that the VIX produces forecasts which are statistically inferior to a number of competing MBF. Further, a combination of the best MBF is found to be superior to both the individual model based and VIX forecasts. They conclude that while it is plausible that the VIX combines information reflected in a range of MBF, it may not be the best possible combination of such information. This research provided an important contribution to the literature by allowing for more robust conclusions to be drawn regarding comparative forecast performance, relative


to the contradictory results of prior research. This was achieved by simulta-

neously examining a wide class of MBF and an IV forecast, rather than the

typical pairwise comparisons of prior work, using up-to-date forecast evalua-

tion technology which is also employed here.

However, these earlier results cannot be viewed as definitive as it may be argued

that IV forecasts are inherently biased. The Literature Review demonstrated

that hedging arguments may be used to show that the pricing of financial deriva-

tives occurs in the risk-neutral environment. In contrast, MBF are estimated

under the physical measure. That is, MBF are generated by using historical

measures of the same variable that they are trying to forecast. Hence, prior

tests of predictive accuracy have compared forecasts generated under different

probability measures: risk-neutral versus real-world forecasts. As the object of

interest is the equity index volatility under the physical measure, IV will likely

be disadvantaged.

In a perfectly efficient options market, the difference between the IV and ac-

tual volatility should be a function of the volatility risk-premium (VRP) alone

(Bollerslev, Gibson, and Zhou, 2011). This fact gives rise to the first research

question addressed in this dissertation: Does taking into account the VRP im-

prove the forecast performance of implied volatility? This Chapter presents the

first piece of empirical research to directly adjust the VIX for the VRP and compare its relative forecast performance with that of the risk-neutral VIX

and competing MBF. Hence, for the first time, the relative performance of the

risk-adjusted implied volatility may be compared with the risk-adjusted fore-

casts of MBF.

By matching the moments of model-free realized volatility (RV) and the model-

free VIX, Bollerslev, Gibson, and Zhou (2011) (hereafter BGZ) obtain an esti-


mate of the VRP. It is possible to utilise this estimate to convert the risk-neutral

IV into a forecast under the physical measure, which can then be compared to

the original IV and the MBF. Following the discussion in Section 2.9, this com-

parison is conducted using the Model Confidence Set (MCS) methodology of

Hansen, Lunde and Nason (2003, 2010). The MCS analysis finds that the risk-

adjusted VIX is undoubtedly the best performing model in calmer periods of

market volatility. However, in high-volatility periods the conditional variance

is generally more difficult to forecast and there is less separation between com-

peting models.

The remainder of this Chapter proceeds as follows. Section 3.2 will outline

the data relevant to this study. Section 3.3 discusses the econometric models

used to generate the various forecasts, along with the methods used to discrim-

inate between forecast performance. Sections 3.4 and 3.5 present the empirical

results and concluding remarks respectively.

3.2 Data

The analysis is conducted on the volatility of the S&P 500 Index, utilising

data from 2 January 1990 to 31 December 2008 (4791 observations). As has

been mentioned, the VIX constructed by the Chicago Board Options Exchange (CBOE) over the same time frame is the model-free measure of IV used here. The VIX is calculated to be a model-free measure of the options market's risk-neutral estimate of the S&P 500 Index's mean daily volatility over the subsequent 22 trading days (BPT, 2001, Christensen and Prabhala, 1998 and CBOE, 2003). The 22-day length of the VIX forecast shall be denoted by ∆

hereafter.

As has been covered extensively in Section 2.2.1, the accurate measurement of


volatility has been the subject of much research in the finance and statistics lit-

erature. The exposition regarding the measurement of volatility demonstrated

that the RV of an asset is a less noisy estimate of the true level of underlying

volatility relative to alternative measures such as squared daily returns. Hence,

estimates of the latent volatility analysed in this Chapter were obtained us-

ing the RV methodology outlined in Andersen, Bollerslev, Diebold, and Labys

(2001, 2003). RV estimates volatility by means of aggregating intra-day squared

returns1; it is important to note, as shall be discussed in more detail in the next

section, that this measure is a model-free estimate of latent volatility.

Figure 3.1: Daily VIX (top panel) from 2/01/1990 to 1/12/2008 and 22-day mean daily S&P 500 Index RV estimate (bottom panel) from 2/01/1990 to 31/12/2008. The solid line signals the beginning of what is defined as a high volatility period, the dashed line signals the end of that period.

[Figure: two panels of average monthly volatility over 2/01/1990 to 1/12/2008, VIX (top) and RV̄_{t+∆} (bottom), y-axis 0 to 25.]

Figure 3.1 shows the daily VIX and the 22-day-mean daily S&P 500 Index RV, RV̄_{t+∆} = (1/∆) Σ_{i=1}^{∆} RV_{t+i}, for the sample period considered. While the RV

1 Intraday S&P 500 Index data were purchased from Tick Data, Inc.


estimates exhibit a similar overall pattern to the VIX, it is typically smaller in magnitude: (1/T) Σ_{t=1}^{T} [RV̄_{t+∆} − VIX_t] = −0.6854. As noted previously, in

a perfectly efficient options market this difference between the IV and ac-

tual volatility should be a function of the volatility risk-premium (VRP) alone

(Bollerslev, Gibson, and Zhou, 2011). The aim of this chapter is to empirically

test whether accounting for the VRP removes this systematic bias and results

in superior forecasting performance, thereby providing supporting evidence for

options market efficiency.

3.3 Methodology

In this section the econometric models upon which forecasts are based will be

outlined, followed by how the risk-neutral forecast provided by the VIX can be

transformed into a forecast under the physical measure. This section concludes

with a discussion of the technique utilised to discriminate between the volatility

forecasts.

3.3.1 Model based forecasts

A range of models are chosen so that they span the space of available classes;

specifically those from the GARCH, stochastic volatility (SV), and RV families.

For brevity’s sake, those models previously defined in Section 2.6 are not again

covered here.

The GARCH style models employed in this study are chosen to be similar to

those studied by BPT (2001) in their investigation of the incremental value of

the VIX. This group of models captures some of the stylised facts of volatility

discussed in Section 2.2 such as volatility persistence and volatility asymme-

try; specifically, the original GARCH specification of Bollerslev (1986), and the


asymmetric GJR model (see Glosten, Jagannathan and Runkle, 1993, Engle

and Ng, 1993) are utilised. Parameter estimates for the GARCH and GJR

models are similar to those commonly observed for GARCH models based on

various financial time series and are qualitatively similar to those reported in

BPT (2001)2. In addition to the models examined in BPT, it is proposed here

that an SV model may be used to generate competing forecasts. SV models

differ from GARCH models in that conditional volatility is treated as an un-

observed variable and not as a deterministic function of lagged returns, more

detail is given in Section 2.6.

Forecasts may also be generated by directly applying traditional time series

models to daily measures of RV, RVt. Following Andersen et al. (2003) and

Koopman et al. (2005), an ARMA(2,1) process is utilised where parameter

estimates reflect the common feature of volatility persistence. The ARMA(p,q)

process may be represented as

A(L)(x_t − µ_x) = B(L) ε_t ,   (3.1)

where A(L) and B(L) are coefficient polynomials of order p and q. In the con-

text of this work, the ARMA(2,1) model was estimated with x_t = ln(√RV_t).

This variable transform is applied to reduce the skewness and kurtosis of the

observed volatility data (Andersen et al., 2003).
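To illustrate the variable transform and forecasting on this scale, the sketch below fits a pure AR(2) by OLS to x_t = ln(√RV_t); this is a dependency-free stand-in for the ARMA(2,1) actually estimated, not the thesis code.

```python
import numpy as np

def fit_ar2_log_vol(rv):
    """Fit x_t = c + phi1*x_{t-1} + phi2*x_{t-2} + e_t by OLS,
    where x_t = ln(sqrt(RV_t)) as in the text."""
    x = np.log(np.sqrt(np.asarray(rv, dtype=float)))
    X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])  # lag 1, lag 2
    coef, *_ = np.linalg.lstsq(X, x[2:], rcond=None)
    return x, coef  # coef = (c, phi1, phi2)

def forecast_rv(x, coef, steps=22):
    """Iterate the fitted recursion forward and invert the transform."""
    c, p1, p2 = coef
    x1, x2 = x[-1], x[-2]
    out = []
    for _ in range(steps):
        x_next = c + p1 * x1 + p2 * x2
        out.append(np.exp(2 * x_next))  # RV = exp(2x) inverts ln(sqrt(RV))
        x2, x1 = x1, x_next
    return np.array(out)
```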

As well as the traditional time-series models of volatility (GARCH, SV, and

ARMA models), more recent advances in the univariate volatility forecasting

literature are employed. A MIxed Data Sampling (MIDAS) forecasting scheme

advocated by Ghysels, Santa-Clara and Valkanov (2006) is also directly applied

2 As the models discussed are re-estimated 3770 times in order to recursively generate volatility forecasts for the 22 days following the respective estimation period, reporting parameter estimates is of little value. Parameter estimates for the rolling windows and the full sample are available on request.


to RVt. Finally, the recent innovation in MBF that is the Realised GARCH

(RGARCH) model of Hansen, Huang, and Shek (2010) is also included; more

detail on the specifications of all of these MBF may be found in Section 2.6.

In order to generate MBF which capture the maximum amount of informa-

tion available at time t efficiently, the volatility models were re-estimated via

an expanding window from 2nd January 1990 up to time t, after allowing for

an initial estimation window of 1000 observations, t = 1000, ..., 4770. The re-

sulting parameter values were then used to generate volatility forecasts for the

subsequent ∆ business days (t+1 → t+∆), corresponding to the period covered by the VIX forecast generated on day t. The first forecast covers the

trading period from 13 December 1993 to 12 January 1994. The last forecast

period covers 1 December 2008 to 31 December 2008, leaving 3,770 forecasts.

For the shorter forecast horizons of 5- and 1-day ahead forecasts, the sample is

shortened to also contain 3,770 forecasts.
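The recursive scheme just described can be sketched generically; `fit` and `predict` below are hypothetical callables standing in for any of the MBF above, not functions from the thesis.

```python
import numpy as np

def expanding_window_forecasts(rv, fit, predict, first_t=1000, delta=22):
    """Illustrative expanding-window scheme: at each day t, re-estimate on
    all data up to t and forecast mean daily volatility over the next
    delta business days (t+1, ..., t+delta)."""
    forecasts = {}
    for t in range(first_t, len(rv) - delta + 1):
        params = fit(rv[:t])                  # expanding estimation window
        forecasts[t] = predict(params, delta) # forecast for (t+1 .. t+delta)
    return forecasts
```

With a trivial "historical mean" model this produces one forecast per day, each conditioned only on data available at the forecast origin.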

3.3.2 A risk-adjusted VIX forecast

As discussed in Section 3.1, prior tests of the relative predictive accuracy of

MBF and IV forecasts may be inherently biased as the IV forecasts are gener-

ated in a risk-neutral environment even though the target is the volatility under

the physical measure. In an efficient options market, the bias between the risk-

neutral forecast and the subsequent volatility should be a function of the VRP

alone. Recently, BGZ propose an approach for estimating the VRP which may

then be used to generate a risk-adjusted VIX forecast, the broad details of their

approach are now given; it is important to note that BGZ demonstrate that

this methodology is robust in the presence of jumps if the jump risk premium

is assumed to be fixed.


BGZ make use of the fact that there exists a model-free estimate of latent

volatility, RV, and a model-free forecast from IV, the VIX. For the purposes

of robustness, it should be noted that BGZ recovered similar results in their

estimation of the VRP when using the Black-Scholes-Merton IV in place of the

model-free measure. Some of the properties of RV and the VIX are now briefly

outlined before then describing how the VRP is estimated within a Generalized

Method of Moments (GMM) framework.

Let V^n_{t,t+∆} be the RV computed by aggregating intraday returns over the interval [t, t+∆]:

V^n_{t,t+∆} ≡ Σ_{i=1}^{n} [ p_{t+(i/n)∆} − p_{t+((i−1)/n)∆} ]² ,   (3.2)

where pt is the logarithm of the price at time t, and n is the number of periods

within the interval.
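Equation 3.2 translates directly into code; the following is a minimal sketch for one interval's observed price path, ignoring microstructure noise and sampling-frequency choices.

```python
import numpy as np

def realised_variance(prices):
    """Sum of squared intraday log-price increments over one interval,
    as in Equation 3.2; `prices` is the intraday price path."""
    p = np.log(np.asarray(prices, dtype=float))  # p_t: log prices
    return float(np.sum(np.diff(p) ** 2))
```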

Ignoring market microstructure effects, as n increases asymptotically, V^n_{t,t+∆}

becomes an increasingly accurate measure of the latent, underlying, volatil-

ity by the theory of quadratic variation (see Barndorff-Nielsen and Shephard

(2004b) for asymptotic distributional results when allowing for leverage effects).

The first conditional moment of RV under the physical measure is given by (see

Bollerslev and Zhou, 2002, Meddahi, 2002, and Andersen, Bollerslev, and Meddahi, 2004)

E(V_{t+∆,t+2∆}|F_t) = α_∆ E(V_{t,t+∆}|F_t) + β_∆ ,   (3.3)

where F_t is the information set up to time t. The coefficients α_∆ = e^{−κ∆} and β_∆ = θ(1 − e^{−κ∆}) are functions of the underlying parameters of the general

continuous-time stochastic volatility model for the logarithm of the stock price


of Heston (1993); specifically, κ is the speed of mean reversion to the long-term mean of volatility, θ. The expectation of future volatility, E(V_{t,t+∆}|F_t), may be

found from calculating the risk-adjusted IV as shown below.

The model-free, risk-neutral forecast of volatility shall be denoted by

E∗(V_{t,t+∆}|F_t) = IV∗_{t,t+∆} ,   (3.4)

with E∗(·) the expectation under the risk-neutral measure. To transform this

risk-neutral expectation into its equivalent under the physical measure, invoke

the result of Bollerslev and Zhou (2002),

E(V_{t,t+∆}|F_t) = A_∆ IV∗_{t,t+∆} + B_∆ ,

A_∆ = [(1 − e^{−κ∆})/κ] / [(1 − e^{−κ∗∆})/κ∗] ,

B_∆ = θ[∆ − (1 − e^{−κ∆})/κ] − A_∆ θ∗[∆ − (1 − e^{−κ∗∆})/κ∗] ,   (3.5)

where A∆ and B∆ depend on the underlying parameters, κ, θ, and λ, of the

aforementioned stochastic volatility model; specifically, κ∗ = κ + λ and θ∗ =

κθ/(κ+λ). This risk-adjusted VIX forecast will be denoted below as the BGZ

forecast.
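Equation 3.5 is a simple linear map from the risk-neutral IV to the physical-measure forecast, and can be sketched directly; this is an illustration, with ∆ expressed in the time units of the continuous-time model (the default value below is a hypothetical choice, not taken from the thesis).

```python
import numpy as np

def risk_adjust_iv(iv_star, kappa, theta, lam, delta=22/252):
    """Map risk-neutral IV*_{t,t+delta} into E(V_{t,t+delta}|F_t) via
    Equation 3.5, with kappa* = kappa + lambda and
    theta* = kappa*theta/(kappa + lambda)."""
    ks = kappa + lam
    ts = kappa * theta / (kappa + lam)
    A = ((1 - np.exp(-kappa * delta)) / kappa) / ((1 - np.exp(-ks * delta)) / ks)
    B = (theta * (delta - (1 - np.exp(-kappa * delta)) / kappa)
         - A * ts * (delta - (1 - np.exp(-ks * delta)) / ks))
    return A * iv_star + B  # the BGZ risk-adjusted forecast
```

Note that with λ = 0 the map collapses to the identity: a zero volatility risk-premium leaves the risk-neutral forecast unchanged.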

3.3.3 Estimation of the Volatility Risk-Premium

The unconditional VRP, λ, is estimated in a GMM framework utilising the mo-

ment conditions given in Equations 3.3 and 3.5, as well as a lagged instrument

of IV3 to accommodate over-identifying restrictions, leading to the system of

3 While BGZ use lagged RV as their instrument, the use of IV is found here to dramatically improve forecast performance, with details available upon request.


equations:

f_t(ξ) = [ V_{t+∆,t+2∆} − α_∆ V_{t,t+∆} − β_∆ ,
           (V_{t+∆,t+2∆} − α_∆ V_{t,t+∆} − β_∆) IV∗_{t−∆,t} ,
           V_{t,t+∆} − A_∆ IV∗_{t,t+∆} − B_∆ ,
           (V_{t,t+∆} − A_∆ IV∗_{t,t+∆} − B_∆) IV∗_{t−∆,t} ]′ ;   (3.6)

where ξ is the parameter vector (κ, θ, λ)′. The vector ξ is estimated via stan-

dard GMM arguments such that ξ̂ = arg min_ξ g_t(ξ)′ W g_t(ξ), where g_t(ξ) are

the sample means of the moment conditions, and W is the asymptotic co-

variance matrix of gt(ξ). Following BGZ, the matrix W is autocorrelation and

heteroscedasticity robust, as per Newey and West (1987). The optimisation

process is constrained such that κ and θ are positive, to ensure stationarity and

positive variance respectively. A Monte Carlo experiment conducted by BGZ

confirms that the approach just outlined leads to an estimate of the VRP compa-

rable to using actual (unobserved and infeasible) risk-neutral implied volatility

and continuous-time integrated volatility. Once the elements of ξ have been

determined, substitution into Equation 3.5 yields a risk-adjusted forecast of

volatility, derived from a risk-neutral IV.
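The moment conditions of Equation 3.6 and the GMM objective can be sketched as follows. This is an illustrative numpy-only version that, for simplicity, works on non-overlapping ∆-length blocks (indexed k) and a fixed weighting matrix, rather than the recursively re-estimated, Newey-West-weighted system of the text.

```python
import numpy as np

def gmm_moments(xi, V, IV, delta=22/252):
    """Sample means of the four moment conditions in Equation 3.6.
    V[k]: realised variance over block k; IV[k]: implied variance for the
    same block; xi = (kappa, theta, lam)."""
    kappa, theta, lam = xi
    alpha = np.exp(-kappa * delta)                    # alpha_delta
    beta = theta * (1 - alpha)                        # beta_delta
    ks = kappa + lam
    ts = kappa * theta / (kappa + lam)
    A = ((1 - np.exp(-kappa * delta)) / kappa) / ((1 - np.exp(-ks * delta)) / ks)
    B = (theta * (delta - (1 - np.exp(-kappa * delta)) / kappa)
         - A * ts * (delta - (1 - np.exp(-ks * delta)) / ks))
    e1 = V[2:] - alpha * V[1:-1] - beta               # Equation 3.3 error
    e3 = V[1:-1] - A * IV[1:-1] - B                   # Equation 3.5 error
    inst = IV[:-2]                                    # lagged-IV instrument
    return np.array([e1.mean(), (e1 * inst).mean(),
                     e3.mean(), (e3 * inst).mean()])

def gmm_objective(xi, V, IV, W=None):
    """g' W g, minimised over xi in the estimation."""
    g = gmm_moments(xi, V, IV)
    W = np.eye(4) if W is None else W
    return float(g @ W @ g)
```

At a constant variance path equal to the long-run mean θ and with λ = 0, every moment condition holds and the objective is zero, which is a convenient sanity check for the implementation.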

The parameter vector, ξ, is recursively estimated with an initial estimation

period of 1000 observations to align with the estimation process of the MBF;

this recursive estimation is updated daily. However, it must be noted that one

cannot utilise all data points in the estimation period due to the 22-day win-

dow for calculating V_{t,t+∆}. That is, with 1000 days of data there are 45 whole

periods of 22 days, so the first available data point is day 10; daily updating

results in differing start dates, i.e. with 1001 there are still 45 whole periods of

22 days, so the first available data point is day 11.
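The alignment of whole 22-day windows with the expanding sample reduces to simple remainder arithmetic; a small sketch of the rule just described.

```python
def first_usable_day(T, delta=22):
    """With T days available, only whole delta-day windows of V_{t,t+delta}
    fit into the sample; estimation starts at the remainder offset."""
    return T % delta  # days left over after fitting whole delta-day windows

# 1000 days -> 45 whole 22-day periods, so the first usable day is day 10;
# 1001 days -> still 45 whole periods, so the first usable day is day 11.
```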


3.3.4 Evaluating forecasts

The previous Chapter expounded the evolution of some of the more well known

tests for evaluating the relative statistical performance of competing forecast

methodologies. The MCS approach of Hansen, Lunde and Nason (2003, 2010)

has been shown to be a robust evaluation tool that is able to accommodate a

wide range of statistical loss functions as well as simultaneously evaluate numer-

ous models while still retaining high power4. At the heart of the methodology

as it is applied here, is a forecast loss measure. Such measures have frequently

been used to rank different forecasts and the two loss functions utilised here are

the MSE and QLIKE,

MSE_i = (RV̄_{t+∆} − f^i_t)² ,   (3.7)

QLIKE_i = log(f^i_t) + RV̄_{t+∆}/f^i_t ,   (3.8)

where f^i_t are individual forecasts (formed at time t) obtained from the individual models, i, and both the risk-neutral and adjusted VIX forecasts. While

there are many alternative loss functions, Patton (2011) shows that MSE and

QLIKE belong to a family of loss functions that are robust to noise in the

volatility proxy, RV̄_{t+∆} in this case, and would give consistent rankings of

models irrespective of the volatility proxy used. Each loss function has some-

what different properties, with MSE weighting errors symmetrically whereas

QLIKE penalizes under-prediction more heavily than over-prediction. Patton

and Sheppard (2007) show that QLIKE exhibits more power than MSE in dis-

tinguishing between forecasts.
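The two loss functions are one-liners; a minimal sketch (illustrative, on variance-scale inputs) that also makes the asymmetry of QLIKE concrete.

```python
import numpy as np

def mse_loss(rv, f):
    """Equation 3.7: symmetric squared forecast error."""
    return (rv - f) ** 2

def qlike_loss(rv, f):
    """Equation 3.8: minimised at f = rv, but penalising
    under-prediction (f < rv) more heavily than over-prediction."""
    return np.log(f) + rv / f
```

For rv = 2, under-predicting with f = 1 costs more under QLIKE than over-predicting with f = 3, even though the MSE of the two errors is identical.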

Recalling the discussion of the Model Confidence Set methodology of Hansen,

Lunde and Nason (2003, 2010) given earlier in Section 2.9, the MCS contains

the best forecasting methodologies with a given level of confidence; although

4 Further, the use of the circular block bootstrap ensures stationarity of the bootstrap realisations. This enables a robust evaluation of volatility forecasts even for overlapping periods. See White (2000) for additional detail.


the MCS may contain a number of models, which indicates that they are of equal

predictive accuracy. The final surviving models in the MCS are optimal with a

given level of confidence and are statistically indistinguishable in terms of their

forecast performance.

3.4 Empirical results

The discussion of the empirical results from this Chapter begins with a comparison of the forecasts generated by both the unadjusted and risk-adjusted VIX. Firstly, the raw VIX is plotted in Figure 3.2 along with the target level of 22-day-ahead mean daily realised volatility for the out-of-sample period; this target will also be used in one of the Model Confidence Set exercises. From Figure 3.2 it may be observed that while the trends in the level of both series are quite similar, it is clear that the VIX typically over-predicts the level of the target RV, reflective of the VRP. That is, while changes in the target RV may be mirrored by changes in the VIX, there is an inherent bias in the forecasts generated by the VIX.

In an efficient options market, the difference between the level of the VIX and the RV should be a function of the VRP. It has been shown that this risk premium leads to an inherent bias in the VIX as it typically over-predicts the level of RV. By accounting for the VRP as described above, the VIX may be transformed into a risk-adjusted forecast of future volatility. As may be observed in Figure 3.3, this adjusted VIX much more closely matches the target level of subsequent RV. As the adjustment process is linear in functional form, the risk-adjusted VIX is similarly able to match the changing trends in volatility while the scaling effect allows for the inherent bias due to the VRP to be removed. Figure 3.3 provides an encouraging result and anecdotal support for suggesting that the risk-adjusted forecast will provide superior forecasts of


Figure 3.2: The level of the VIX for the out-of-sample forecast period is plotted in red, while the 22-day-ahead mean of Realised Volatility is plotted in blue.

[Figure: plot titled "Unadjusted VIX and Target 22-day-mean Realised Volatility", 13/12/1993 to 11/28/2008; y-axis: level of volatility, 0 to 25.]

the subsequent RV relative to the raw VIX; this conjecture will now be examined

statistically.

The statistical analysis covers the full sample of forecasts as well as their rela-

tive performance in two sub-periods: low (to moderate) volatility (2643 obser-

vations) and high volatility (1127 observations). These periods are represented

in Figure 3.1 with solid vertical lines signaling the beginning of a period con-

sidered to be of high volatility, and the dashed line signaling the end of such a

period. Decomposing the full sample into these sub-periods acts as a robustness

check and allows for determining whether the unconditional results hold in gen-

eral or vary depending on the time period considered. Further, while the VIX

is constructed to be a 22-day-ahead forecast, it may still contain information

relevant to shorter forecast horizons. To examine this issue, 1- and 5-day ahead


Figure 3.3: The level of the risk-adjusted VIX for the out-of-sample forecast period is plotted in red, while the 22-day-ahead mean of Realised Volatility is plotted in blue.

[Figure: plot titled "Risk-Adjusted VIX and Target 22-day-mean Realised Volatility", 13/12/1993 to 11/28/2008; y-axis: level of volatility, 0 to 25.]

forecast performance is considered in addition to the 22-day horizon.

A consistent result emerges from the empirical analysis: the risk-adjusted V IX

forecast is of superior predictive accuracy for 1-, 5-, and 22-day-ahead volatil-

ity in periods of low to moderate volatility. During more turbulent times, the

results are inconsistent and depend on the forecast horizon of interest with no

one model setting itself apart from its competitors. However, an interesting

result is that the raw, unadjusted, V IX generally outperforms its risk-adjusted

counterpart in these turbulent times. This suggests that the use of an uncondi-

tional risk-premium is beneficial in periods of calm, and a hindrance in periods

of turmoil.


3.4.1 22-day-ahead forecasts

When considering the full sample results, there is a large group of models that are statistically indistinguishable. In fact, as Table 3.1 shows, only two models of the 13 are not included in the MCS under the QLIKE loss function, and none under MSE, for reasonable significance levels5. The LRGARCH

model of Hansen, Huang, and Shek (2010) is included in the MCS under both

loss functions, and is the best performing model under QLIKE. In the con-

text of this Chapter, an important result is that the risk-neutral V IX has a

very small p-value of inclusion in the MCS under the QLIKE loss function,

confirming the result of Becker and Clements (2008), while BGZ is indistin-

guishable from the majority of competing MBF with a p-value of 0.458. That

is, while the raw V IX is conclusively rejected from the set of superior models,

the risk-adjusted V IX is not. This initial result addresses the first key research

question of this thesis on the value of taking account of the VRP for implied

volatility; taking the VRP into account results in a relatively poor statistical

performance being transformed into being statistically indistinguishable from

the top performing MBF. When properly accounting for the VRP, the options

market is able to generate forecasts that are statistically indistinguishable from

sophisticated time-series econometric models.
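The two loss functions underpinning these comparisons can be sketched directly, with rv denoting the realised-variance proxy and h a forecast (the QLIKE form follows Patton (2011)):

```python
import numpy as np

def mse_loss(rv, h):
    """Symmetric squared-error loss."""
    return (np.asarray(rv) - np.asarray(h)) ** 2

def qlike_loss(rv, h):
    """QLIKE loss, rv/h - log(rv/h) - 1: zero at a perfect forecast,
    and asymmetric in that under-prediction (h < rv) is penalised more
    heavily than over-prediction of the same proportion."""
    ratio = np.asarray(rv) / np.asarray(h)
    return ratio - np.log(ratio) - 1.0
```

Averaging either loss over the evaluation period gives the per-model figures that the MCS procedure then compares.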

For periods of relative calm, the risk-adjusted V IX dominates all other models under QLIKE, but is joined in the MCS by both the LRGARCH and SV models under MSE for p-values above 5%. This is in contrast to the risk-neutral

V IX which is excluded from the MCS almost surely under both loss functions.

Again, this demonstrates that properly taking account of the VRP within the

V IX results in superior forecasting performance.

5As has been discussed, Patton and Sheppard (2007) have demonstrated that the QLIKE loss function has superior power to the MSE function; hence, the QLIKE loss function results are generally deferred to here.


With regard to more volatile periods, no one model separates itself from its

competitors, with all models being included in the MCS under the MSE loss

function for all reasonable p-values. Under QLIKE, just three models are ex-

cluded at the 5% significance level. Interestingly, the V IX moves from being the

worst performing model in smoother periods to the best performing in volatile

periods, while the risk-adjusted V IX forecast goes from being the dominant

model to being excluded from the MCS. Thus, it appears that volatility is generally very difficult to forecast during turbulent times and hence it is difficult

to distinguish between many of the competing forecasts.
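The mechanics behind these inclusion p-values can be conveyed with a deliberately simplified sketch of the MCS elimination loop using the range statistic. It is illustrative only: an iid bootstrap replaces the block bootstrap of Hansen et al., and the studentisation is simplified.

```python
import numpy as np

def mcs_range(losses, alpha=0.10, n_boot=500, seed=0):
    """Illustrative Model Confidence Set via the range statistic.
    losses: (T, m) array of per-period losses for m forecasting models.
    Repeatedly tests equal predictive ability; if rejected, the model
    with the largest mean loss is eliminated. Returns the indices of
    the surviving models. A sketch of the mechanics only."""
    rng = np.random.default_rng(seed)
    T = losses.shape[0]
    survivors = list(range(losses.shape[1]))
    while len(survivors) > 1:
        L = losses[:, survivors]
        dbar = L.mean(axis=0)                        # mean loss per model
        idx = rng.integers(0, T, size=(n_boot, T))   # iid bootstrap draws
        bmeans = L[idx].mean(axis=1)                 # (n_boot, k)
        se = bmeans.std(axis=0)
        pse = np.sqrt(se[:, None] ** 2 + se[None, :] ** 2)
        np.fill_diagonal(pse, np.inf)
        t_obs = (np.abs(dbar[:, None] - dbar[None, :]) / pse).max()
        centred = bmeans - dbar                      # impose the null
        t_boot = (np.abs(centred[:, :, None] - centred[:, None, :]) / pse).max(axis=(1, 2))
        if (t_boot >= t_obs).mean() >= alpha:        # cannot reject EPA
            break
        survivors.pop(int(np.argmax(dbar)))          # drop the worst model
    return survivors

# Toy check: model 0 carries a higher loss; models 1 and 2 are identical.
rng = np.random.default_rng(1)
base = rng.chisquare(2, size=(1000, 3))
base[:, 2] = base[:, 1]
base[:, 0] += 1.0
print(mcs_range(base))   # model 0 is eliminated; models 1 and 2 survive
```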


Table 3.1: Model Confidence Set p-values for 22-day-ahead forecasts for the Full Sample, Low-Moderate Volatility, and High Volatility periods under the MSE and QLIKE loss functions using the range statistic.

MSE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
V IX       0.105      V IX       0.000      ARMA       0.299
MIDAS      0.356      G          0.008      MIDAS      0.323
ARMA       0.382      GJR        0.008      LRGARCH    0.334
LRGARCH    0.576      MIDAS      0.008      V IX       0.334
BGZ        0.576      ARMA       0.008      BGZ        0.464
G          0.576      LRGARCH    0.078      G          0.485
GJR        0.854      SV         0.078      GJR        0.988
SV         1.000      BGZ        1.000      SV         1.000

QLIKE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
V IX       0.000      V IX       0.000      ARMA       0.007
ARMA       0.020      ARMA       0.000      BGZ        0.021
BGZ        0.458      GJR        0.000      SV         0.021
MIDAS      0.458      G          0.000      MIDAS      0.075
SV         0.458      MIDAS      0.000      LRGARCH    0.085
G          0.458      SV         0.000      G          0.085
GJR        0.458      LRGARCH    0.001      GJR        0.157
LRGARCH    1.000      BGZ        1.000      V IX       1.000


3.4.2 5-day-ahead forecasts

Relative to the 22-day case, there is a greater separation of the competing forecast methodologies, with results contained in Table 3.2. Unconditionally

under the QLIKE loss measure, the V IX generates relatively poor forecasts

while its risk-adjusted counterpart cannot be excluded from the MCS; the risk-

adjusted V IX also outperforms traditional univariate volatility models such as

the GARCH, GJR, and SV models. Under the MSE loss function, no models

may be excluded from the MCS under standard significance levels.

For periods of relatively low volatility, the risk-adjusted V IX remains the su-

perior forecast under both loss functions and is the only model included in the

MCS; this result reinforces a similar finding for the 22-day-ahead forecasts.

The raw V IX is shown once again to be a poor forecast in these calmer times,

being excluded from the MCS under both loss functions almost surely.

In more turbulent markets, there is no statistically significant difference between

the models under the MSE loss function. Under the QLIKE loss function, sim-

ilar to the 22-day-ahead case, the V IX is found to outperform its risk-adjusted

counterpart. Interestingly, the LRGARCH model moves from the worst performing model under MSE, though still included in the MCS, to the best performing model under QLIKE. As QLIKE penalises under-prediction more heavily than the symmetric MSE loss function, it may be that the LRGARCH model typically over-predicts the level of volatility, but this feature performs well in turbulent times when volatility peaks.
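This conjecture is easy to verify numerically: for two forecasters that err by the same proportional factor, MSE punishes the over-predictor more (its absolute errors are larger), while QLIKE prefers it. A small sketch:

```python
import numpy as np

rv = np.array([4.0, 9.0, 16.0])   # hypothetical realised variances
over = rv * 1.5                   # over-predicts by 50%
under = rv / 1.5                  # under-predicts by the same factor

mse = lambda rv, h: np.mean((rv - h) ** 2)
qlike = lambda rv, h: np.mean(rv / h - np.log(rv / h) - 1.0)

print(mse(rv, over) > mse(rv, under))      # True: MSE punishes the larger errors
print(qlike(rv, over) < qlike(rv, under))  # True: QLIKE prefers the over-predictor
```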


Table 3.2: Model Confidence Set p-values for 5-day-ahead forecasts for the Full Sample, Low-Moderate Volatility, and High Volatility periods under the MSE and QLIKE loss functions using the range statistic.

MSE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
LRGARCH    0.119      V IX       0.000      LRGARCH    0.318
BGZ        0.119      G          0.025      BGZ        0.383
V IX       0.119      GJR        0.025      ARMA       0.430
ARMA       0.361      LRGARCH    0.025      MIDAS      0.545
MIDAS      0.440      ARMA       0.032      G          0.545
G          0.440      MIDAS      0.032      V IX       0.629
SV         0.694      SV         0.032      SV         0.629
GJR        1.000      BGZ        1.000      GJR        1.000

QLIKE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
V IX       0.000      V IX       0.000      ARMA       0.011
ARMA       0.003      GJR        0.000      BGZ        0.012
G          0.005      G          0.000      G          0.012
GJR        0.044      ARMA       0.000      SV         0.012
SV         0.064      SV         0.010      V IX       0.104
BGZ        0.350      LRGARCH    0.010      MIDAS      0.104
MIDAS      0.432      MIDAS      0.010      GJR        0.104
LRGARCH    1.000      BGZ        1.000      LRGARCH    1.000


3.4.3 1-day-ahead forecasts

As shown in Table 3.3, the MSE loss function is unable to statistically separate any of the candidate models except for times of relative calm, where the risk-adjusted V IX forecast again dominates the V IX and all MBF.

Under the QLIKE measure, three models are included in the MCS for the full sample, comprising the risk-adjusted V IX, LRGARCH, and MIDAS models. Unconditionally, taking account of the VRP again yields superior forecasts

relative to the raw V IX.

During periods of relatively low volatility, the risk-adjusted V IX is again dom-

inant under QLIKE, with all other forecasts being excluded from the MCS for

reasonable significance levels. For more turbulent times, the unadjusted V IX

again outperforms its risk-adjusted counterpart.


Table 3.3: Model Confidence Set p-values for 1-day-ahead forecasts for the Full Sample, Low-Moderate Volatility, and High Volatility periods under the MSE and QLIKE loss functions using the range statistic.

MSE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
LRGARCH    0.408      V IX       0.000      LRGARCH    0.262
BGZ        0.445      G          0.010      BGZ        0.326
G          0.530      GJR        0.010      G          0.383
ARMA       0.732      LRGARCH    0.010      ARMA       0.463
GJR        0.952      MIDAS      0.010      SV         0.656
V IX       0.976      SV         0.010      GJR        0.656
SV         0.976      ARMA       0.010      MIDAS      0.656
MIDAS      1.000      BGZ        1.000      V IX       1.000

QLIKE
Full-Sample           Low volatility        High volatility
Model      p_i        Model      p_i        Model      p_i
V IX       0.000      V IX       0.000      G          0.002
G          0.000      G          0.000      BGZ        0.009
ARMA       0.010      GJR        0.000      V IX       0.009
SV         0.023      ARMA       0.000      SV         0.015
GJR        0.070      SV         0.000      ARMA       0.185
BGZ        0.795      LRGARCH    0.000      GJR        0.668
LRGARCH    0.795      MIDAS      0.000      MIDAS      0.668
MIDAS      1.000      BGZ        1.000      LRGARCH    1.000


3.5 Conclusion

Issues relating to forecasting volatility have attracted a great deal of attention in recent years, with such interest undoubtedly piqued by the extreme variations observed in late 2008. As a result, many studies into the relative merits of implied and model based volatility forecasts have been conducted; although it has often been found that implied volatility offers superior performance, many studies disagree and none has accurately taken account of the VRP. Recently,

Becker and Clements (2008) showed that the V IX was statistically inferior to

a combination of model based forecasts, implying that the V IX does not represent an optimal combination of such forecasts. However, it was argued in

this Chapter that these comparisons may not have been balanced given that

they involved comparisons of risk-neutral implied volatility forecasts with model

based forecasts generated under the physical measure. Using the methodology

of Bollerslev, Gibson and Zhou (2011), a transformed V IX forecast that incor-

porated the volatility risk-premium was generated and its forecast performance

compared with model based forecasts of S&P 500 Index volatility via the Model

Confidence Set technology of Hansen et al. (2003, 2010).
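The flavour of this adjustment can be conveyed with a stylised stand-in (it is not the BGZ GMM estimator): regressing subsequently realised variance on squared implied volatility maps the risk-neutral forecast onto the physical measure and removes the average premium.

```python
import numpy as np

def risk_adjust(iv2, rv_future):
    """Stylised risk-premium correction (NOT the BGZ GMM estimator):
    regress realised variance on squared implied volatility, so the
    fitted values rescale the risk-neutral forecast onto the physical
    measure, stripping out the average volatility risk premium."""
    X = np.column_stack([np.ones_like(iv2), iv2])
    beta, *_ = np.linalg.lstsq(X, rv_future, rcond=None)
    return X @ beta

# Toy data: implied variance sits above subsequent realised variance.
rng = np.random.default_rng(0)
rv = rng.chisquare(5, 500)
iv2 = 2.0 + 1.1 * rv + rng.normal(0, 0.5, 500)   # upward-biased forecast
adj = risk_adjust(iv2, rv)
print(np.mean(iv2 - rv))    # raw forecast bias (about 2.5 here)
print(np.mean(adj - rv))    # adjusted bias (essentially zero)
```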

Unconditionally, the risk-adjusted V IX forecast is statistically inseparable from

competing MBF. This is in contrast to the findings of Becker and Clements

(2008), and highlights that properly accounting for the VRP leads to option

market based forecasts performing equally as well as time-series econometric

models.

In periods of relatively low volatility, the transformed V IX was statistically

superior to its unadjusted counterpart across all time-periods considered un-

der both the MSE and QLIKE loss functions. There is no question that the

methodology of BGZ generates the best volatility forecasts in calmer markets.


The story was less consistent for more turbulent periods, with the raw V IX

outperforming the risk-adjusted V IX. However, it is also true that in these high

volatility periods the majority of the models considered are statistically indis-

tinguishable, reflecting the generally more difficult task of forecasting volatility

in such times. It is worth noting here that although MSE may be affected

by the large observations that occur in this turbulent period, Patton (2011)

states that QLIKE will be less affected by such extreme observations. Hence,

it is believed that the inability to separate out performance in this period is

attributable to the difficulty in forecasting in such periods rather than noise in

the performance measure.

Overall, this Chapter shows that if one correctly accounts for the volatility

risk-premium, the market determined forecast of volatility over a 22-day hori-

zon is in fact of superior predictive accuracy to a small number of model based

forecasts and the risk-neutral V IX in periods of relative calm. However, the

picture is less clear for turbulent times and depends on the forecast horizon of

interest.


Chapter 4

Forecasting Intraday Volatility: The Role of VIX Futures

4.1 Introduction

Prior Chapters have documented that the level of volatility of financial assets

is heteroscedastic for daily horizons or longer. This feature and other stylised

facts of volatility were discussed in Section 2.2, with a broad range of im-

plied volatility and time-series models designed to capture the salient features

of volatility being canvassed in Sections 2.5 and 2.6 respectively. In addition

to the more commonly studied daily level of volatility, heteroscedasticity is

also present at higher frequencies within a trading day; as far back as Wood,

McInish, and Ord (1985), researchers have found a pronounced U-shape for the

intraday periodicity of volatility in stock markets. However, while there exist

numerous alternative approaches for modelling heteroscedasticity at the daily

level, far fewer models have been designed to capture the properties of volatil-

ity at higher frequencies. There is significant economic motivation for studying


the patterns of volatility at this smaller time interval and this Chapter aims to

inform this literature; some of this motivation is now given.

To begin, Sokalska, Chanda, and Engle (2005) note that an economic moti-

vation for modelling volatility at smaller intervals such as 5-minutes is that

derivative traders or hedge funds may require high frequency measures of risk

for their time-varying hedge ratios. Further, Engle (2005) argues that intraday

estimates of volatility may be used to evaluate the risk of slow trading or as an

input to measures of time varying liquidity. However, Sokalska, Chanda, and

Engle (2005) argue that the most important economic use of intraday volatility

forecasts may be for devising optimal strategies in placing limit orders or in

scheduling trades, and link intraday volatility to the recent increase in algorith-

mic trading1. Given the economic importance of intraday volatility forecasts

just noted, this Chapter conducts an examination of both the in-sample fit and

out-of-sample forecast performance of various methodologies for modelling intraday return volatility; this examination is executed on the S&P 500 Index for the period 1 January 2010 through to 7 December 2010.

Specifications that have previously been proposed for modelling the intraday

periodicity of volatility, such as the Functional Fourier Form of Andersen and

Bollerslev (1997, 1998) or the spline approach of Taylor (2004a, 2004b), typi-

cally do not update their forecasts within the trading day. That is, unexpected

market conditions that occur at the beginning of a trading day are not reflected

in the forecasts of volatility for the remainder of the day. It is argued here that

an accurate forecasting method for high frequency volatility should be able to

update its forecasts to include more recent information from within the same

trading day. Recently, Sokalska, Chanda, and Engle (2005) propose a GARCH

1They motivate this claim by citing order choice literature such as Ellul, Holden, Jain and Jennings (2003), Griffiths, Smith, Turnbull, and White (2000), Lo, MacKinlay, and Zhang (2002), and Hasbrouck and Saar (2002).


style framework that may be used in forecasting the intraday level of volatility

and incorporates information from within the same trading day. This Chapter

investigates the usefulness of this model, as well as extending the analysis along

two lines. An alternative, semi-parametric framework that has had success at

forecasting daily levels of volatility is also examined. Further, this Chapter

proposes the use of several derivatives market based information measures that

may be of use in high frequency volatility forecasting. Each of these proposals

is now introduced in turn.

Firstly, an alternative and novel semi-parametric estimation framework is utilised

to generate competing forecasts to the Sokalska, Chanda, and Engle (2005)

specification. This model, proposed by Becker, Clements, and Hurn (2011),

has been found to outperform popular time-series models of daily volatility

in an application of out-of-sample forecasting. Related to the Heterogeneous

Market Hypothesis of Müller, Dacorogna, Davé, Pictet, Olsen, and Ward (1993)

where the heterogeneity refers to distinct groups of traders, the framework of

Becker, Clements, and Hurn (2011) is able to capture the complex dynamics of

volatility that results from heterogeneous agents reacting to and causing differ-

ent volatility components. As the forecasting of intraday volatility is partially

motivated by the trading activity of market participants, e.g. algorithmic trad-

ing, it is believed that this newly proposed specification may be naturally suited

to the problem at hand.

Secondly, it is investigated whether performance of both of the proposed frame-

works may be enhanced by the addition of information from derivatives mar-

kets, rather than utilising historical information alone. Specifically, whether

measures of the level of volatility implied by the options market, and the trad-

ing activity of futures on this level of implied volatility, are of incremental value

to both the original specification of Sokalska, Chanda, and Engle (2005)


and the semi-parametric framework of Becker, Clements, and Hurn (2011) is

examined. The utilisation of two distinct methodologies adds robustness to the results: if the derivatives market information is of use under both frameworks

then it would add credence to any claims of incremental value for the purposes

of intraday volatility forecasting; the motivation for including the derivatives

market measures comes from two distinct lines of research which are now dis-

cussed.

As has been covered extensively in Chapter 2, there has been a large body

of research that demonstrated empirically that the inclusion of implied volatil-

ity (IV) is often of use in forecasting future levels of daily volatility, see Section

2.5 and the references therein. Further, Chapter 3 demonstrates that a risk-

adjusted forecast of volatility from the options market cannot be separated sta-

tistically from a range of time-series models unconditionally, and dominates in

low volatility periods. Hence, a strong link has already been established between

the options market and volatility forecasting at the daily horizon. Additionally,

there is an extensive literature, both theoretical and empirical, examining the

linkages between the options and spot markets at higher frequencies.

Beginning with the theoretical motivation, as far back as Black (1975) it was

argued that informed investors may first execute transactions in the options

market due to lower transaction costs, capital requirements, trading restric-

tions, but providing higher leverage. Since then, theoretical models of informa-

tion flow that link the two markets have been presented. Easley, O’Hara, and

Srinivas (1998) develop a model that results in a pooling equilibrium where

some informed traders may choose to trade in the options market “resulting

in particular option trades being informative for the future movement of stock

prices”. This pooling equilibrium will occur when the option market provides

high leverage, the liquidity in the spot market is low, or when there are more


informed investors. Recently, Rourke and Maraachlian (2011) present an ex-

tended theoretical model of options market trading that decomposes trading

activity into sign- and volatility-motivated trades. Importantly in the current

context, their empirical analysis shows that volatility-motivated trading domi-

nates sign-motivated trading in options markets. That is, their results demon-

strate that informed trading in the options market is directly related to future

volatility, potentially implying that it may be useful in a forecasting context.

In addition to the aforementioned theoretical results, there is also a large vol-

ume of prior empirical research that examines the lead-lag relationship of the

options and spot markets, generally taking the form of Vector Autoregressive

(VAR) models or Granger causality tests. Early work on daily data conducted

by, for example, Manaster and Rendleman (1982) and Anthony (1988) found

that the options market lead the spot market both in price movements and

trading activity. More recently, Chakravarty, Gulen, and Mayhew (2004), Cao,

Chen, and Griffin (2005), and Pan and Poteshman (2006) all find that informed

traders first execute their transactions in the options market and that these

trades contain information relevant to the subsequent returns for the under-

lying. Even though these results are not directly related to volatility, they

nonetheless provide indirect evidence that options market activity may lead

the spot market.

Conversely, evidence exists that informed traders transact initially in the spot

market. For example, Piqueira and Hao (2011) examine whether bad news is

first traded upon through put options or short sales and find that short sales

generally lead net put volume and can predict subsequent stock returns; this

result may suggest that put option trading activity may not be useful in fore-

casting future volatility over and above that contained in the physical market.


Overall, there is no clear result from the prior empirical literature regarding

information flow from the options to spot markets. As noted by Piqueira and

Hao (2011), “the literature finds strong evidence supporting the presence of

informed trading in the options market but fails to form a consensus on which

market contains more information”. However, it is argued here that enough

prior evidence exists to warrant an examination of whether an options market

based measure of the level of future volatility is informative in forecasting fu-

ture spot market volatility at an intraday horizon. That is, given the success

of the inclusion of options based measures in forecasting volatility at the daily

horizon, as well as theoretical and empirical evidence of options markets lead-

ing the spot market at higher frequencies, this Chapter investigates whether

the inclusion of an options based measure of volatility is of incremental value

to purely historical measures.

As the object of interest here is the level of S&P 500 Index volatility, an obvious

choice for an options market based forecast of volatility is the V IX. That is, some prior studies have found that options price movements lead spot market movements; as the V IX is directly constructed from options market prices, it is logical that it too will lead the spot market. Rather than using the raw level

of the V IX, a standardised measure of the V IX is used to act as a proxy for

changes in the level of expected volatility from the options market.
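As an illustration of the kind of standardisation involved (the precise construction is defined in Section 4.2; the rolling z-score below is only an assumed stand-in):

```python
import numpy as np

def rolling_zscore(x, window):
    """Rolling z-score of a series: an illustrative stand-in for a
    'standardised measure' of the VIX (the thesis defines its own
    construction in Section 4.2)."""
    x = np.asarray(x, dtype=float)
    out = np.full_like(x, np.nan)
    for t in range(window, len(x)):
        hist = x[t - window : t]
        out[t] = (x[t] - hist.mean()) / hist.std()
    return out

vix = 20 + np.sin(np.linspace(0, 6, 400))   # toy intraday VIX path
z = rolling_zscore(vix, window=78)          # 78 five-minute bins in one day
```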

Further, it is examined whether trading activity in futures on the V IX is also

of incremental value. Typically, investors will buy (sell) V IX futures when they

anticipate volatility will increase (decrease). It is then plausible that movements

in V IX futures trading activity are a reflection of changes in options market

participants’ beliefs of volatility and may in fact lead spot market volatility. A

standardised measure of V IX futures trading activity may then be of use in a

forecasting exercise. Aside from the motivation just outlined, there also exists


prior literature suggesting that futures market trading activity will be of use in

intraday volatility forecasting.

Theoretically, it has been argued that futures prices should lead cash index

prices as the futures market is less costly for traders to operate in; the futures

market should then reveal economy-wide information prior to the cash market

(Chan, 1992). Further, it is plausible that this lead from futures markets may be

stronger in periods of bad news. If short-sales are restricted in the spot market

and there are no such restrictions in the futures market, traders may first react

to bad news there resulting in an earlier reflection of information (Chan, 1992).

While the present context is different in that there does not exist a futures

and cash market on the V IX, this result does imply a potentially asymmetric

relationship between the futures and spot markets dependent on the state of

the economy. Another result suggesting that the futures market may lead the

spot market is that Chan (1990) shows that an informed trader with market-

wide information will earn a higher profit by transacting in the futures market

rather than in the individual securities, even in the absence of transaction costs.

Early empirical studies that examined the lead-lag relationship between the

futures and cash index returns found that futures returns significantly lead

cash index returns. As examples, Kawaller, Koch, and Koch (1987) find that

futures prices lead cash prices by up to 45-minutes, while Stoll and Whaley

(1990) demonstrate that for the S&P 500 and Major Market Index, futures returns lead stock index returns on average by 5 minutes; this is an interesting

result as this is the time horizon and index of interest here. In an important pa-

per within the literature, Chan (1992) finds that futures prices lead cash index

prices and that these results are robust to the effect of nonsynchronous trad-

ing2. In a study that examined the lead-lag relationship between the futures,

2It had been argued prior to Chan (1992) that the lead-lag relation may be due to many constituents of the S&P 500 Index not trading frequently enough for their prices to update information quickly.


options, and stock markets at high-frequencies, de Jong and Donders (1998)

report that not only does the futures market lead the cash market but also the

options market, by approximately 10 minutes in both cases. More recently, Tse

and Chan (2010) find in an examination of the S&P 500 Index markets that

the futures market again leads the spot market, a result that is particularly

relevant here as this is also the index examined in this Chapter. They also find

that short-selling restrictions do indeed play a part in the lead-lag relationship,

with their results implying that the futures market may lead the spot market

to a greater degree in periods of bad news.

It is worth highlighting that the above results regarding the lead-lag relation-

ship between the futures and spot markets are analyses of price movements and

not of volatility. However, in the current context of V IX futures, the futures

price is in fact a measure of volatility. The price of V IX futures is a mea-

sure of the anticipated future level of the implied volatility from the options

market, and thus is directly related to volatility. If futures prices move before

spot market prices, and the futures price now represents a measure of volatility,

then it is plausible that V IX futures movements may then predict future spot

market volatility. An avenue for investigating the lead of futures markets for

spot market volatility, and is related to the motivation for studying intraday

volatility, is through the level of trading activity.

It has been proposed previously that volatility and trading volume are jointly

directed by the process of information arrival; this is known as the mixture

of distributions hypothesis (see Clark (1973), Tauchen and Pitts (1983), and

Andersen (1996)). Further, prior literature has argued that the order flow

of markets reveals to traders information that is either private or just widely

dispersed among economic agents (Berger, Chaboud, and Hjalmarsson, 2009).


There is, therefore, theoretical justification for examining the link between trading activity and volatility.

Empirically, a regression analysis by Bessembinder and Seguin (1992) found that unexpected futures trading volume is positively related to spot market volatility. In causality analysis studies, Darrat and Rahman (1995)

find no evidence that S&P 500 futures trading activity leads cash price volatil-

ity. However, Chatrath, Ramchander, and Song (1996) find in the currency

market (and Yang, Balyeat, and Leatham (2005) in the agricultural commodi-

ties market) that there exists a leading relationship from the futures market.

Specifically, Yang, Balyeat, and Leatham (2005) report that unexpected futures

trading volume Granger causes an increase in the cash price volatility.
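The "unexpected" component of trading volume in these studies is typically the residual from a fitted model of volume; a minimal AR(1) sketch follows (the cited papers use richer specifications):

```python
import numpy as np

def unexpected_volume(vol):
    """Residual ('surprise') component of trading volume from an AR(1)
    fit: a minimal version of the expected/unexpected decomposition
    used in this literature."""
    vol = np.asarray(vol, dtype=float)
    y, x = vol[1:], vol[:-1]
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta          # the surprise in trading activity

# Toy persistent volume series.
rng = np.random.default_rng(2)
v = np.empty(300)
v[0] = 100.0
for t in range(1, 300):
    v[t] = 20 + 0.8 * v[t - 1] + rng.normal(0, 5)
u = unexpected_volume(v)
print(u.mean())   # ~0: residuals are mean-zero by construction
```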

The above discussion highlights that a significant amount of prior research has

found that both futures price movements and futures trading activity, particu-

larly unexpected trading activity, lead the spot market both in price movements

and in volatility. Therefore, an a priori expectation exists that trading activity

in VIX futures may lead S&P 500 Index spot volatility; these measures may

be included as additional exogenous regressors in both the Sokalska, Chanda,

and Engle (2005) and Becker, Clements, and Hurn (2011) frameworks that have

been mentioned previously. Whether the inclusion of these variables improves

the in-sample fit or out-of-sample forecast performance of intraday volatility

shall now be examined.

The remainder of the Chapter proceeds as follows. Section 4.2 outlines the

two proposed estimation frameworks and the construction of the derivatives

market information measures, Section 4.3 describes the data used, Section 4.4

presents the empirical findings and Section 4.5 concludes.


4.2 Methodology

Given the pronounced nature of the intraday periodicity that has been docu-

mented by prior studies such as Wood, McInish, and Ord (1985), Andersen and

Bollerslev (1997, 1998), and further detailed in Section 4.3 below, any accurate

forecasting model of intraday volatility must be able to account for and match

the typical U-shaped pattern of intraday volatility for equity markets. While

not as numerous as the available models in the daily volatility literature, a few

candidate specifications exist that may be of use here.

The observed intraday periodicity may be matched by the Periodic GARCH

(PGARCH) model introduced by Bollerslev and Ghysels (1996); in its simplest specification, a dummy variable exists for each of the periods in which a repetitive pattern in the second-order moment occurs. Alternatively, a

trigonometric approach is advocated by Andersen and Bollerslev (1997, 1998)

via their Fourier Functional Form. This method is criticised by Taylor (2004a,

2004b) for requiring that the volatility at the beginning and end of a cycle be

the same. Typically, the volatility of the opening period is significantly higher

than at the close of a trading day, which casts some doubt on how the Fourier

Functional Form may perform empirically. Another option for modelling intraday volatility, utilised by Taylor (2004a, 2004b), is to fit a

cubic spline between knots within a trading day. Evans and Speight (2010) find

that this approach allows for sharp peaks and troughs in intraday volatility as

well as outperforming the Fourier Functional Form specification.

It is arguable that these models may not be ideally suited to forecasting volatil-

ity at high frequencies as they do not accommodate updating their forecasts

within the trading day. That is, the forecast of a given interval's level of volatility is based on information only up to the close of the previous trading day; it is fixed regardless of whether the actual volatility on that day is lower or higher

than expected. It would be preferable if a model were able to update its fore-

cast based upon information that is more recent, i.e. the forecast of volatility

at 3:00PM now uses information up to 2:55PM on the same day, rather than

only up to 4:00PM the prior trading day; the model of Sokalska, Chanda, and

Engle (2005) is able to accommodate this feature and is now described.

4.2.1 An intraday volatility framework

Following the framework of Sokalska, Chanda, and Engle (2005), define the

log-return as

r_{t,i} = p_{t,i} − p_{t,i−1},   i = 1, ..., M,   t = 1, ..., T,   (4.1)

where p_{t,i} = ln P_{t,i} is the log-price at the close of period i on day t; there are M periods within a day and T days. The overnight return is ignored, leading to a total number of return observations equal to T × M.

Sokalska, Chanda, and Engle (2005) detail a GARCH style model for high

frequency financial returns where the conditional variance of an intraday pe-

riod is a multiplicative product of the daily, diurnal and stochastic intraday

volatility. Intraday equity returns are described by the following process:

r_{t,i} = √(h_t s_{t,i} q_{t,i}) ε_{t,i},   ε_{t,i} ~ N(0, 1),   (4.2)

where:

h_t is the daily variance component,

s_{t,i} is the diurnal variance pattern for period i,

q_{t,i} is the intraday variance component with E[q_{t,i}] = 1, and

ε_{t,i} is a Gaussian error term with E[ε_{t,i}] = 0 and E[ε²_{t,i}] = 1.


The daily variance component, ht, that is used in Sokalska, Chanda, and Engle

(2005) is a commercially available volatility forecast that is produced daily for

each of the companies within their sample. Rather than adopting their ap-

proach, an econometric procedure is employed here that attempts to measure

the latent volatility of a given day by using high frequency intraday data to

calculate the RV; the choice of RV over alternative measures of volatility is mo-

tivated by the discussion given in Section 2.2.1. This approach facilitates the

forecasting objective in that existing time-series models of daily volatility are

easily applied to the RV measure. In particular, the MIDAS model of Ghysels,

Santa-Clara and Valkanov (2006) is chosen here due in part to the result of the

previous Chapter where it was shown to provide the best out-of-sample fore-

casts for the 1-day horizon.³ It is important to note that in this Chapter the

estimates of ht come from the fitted values of the recursively updated estimates

from the MIDAS model; it is infeasible to use the RV of day t in Equation 4.2

as this would utilise future information from that day. The forecast ht values

are given by

h_t = ∑_{k=1}^{k_max} b(k, θ) RV_{t−k}.   (4.3)

As has been discussed in Chapter 2, the maximum lag length k_max can be chosen rather liberally as the weight parameters b(k, θ) are tightly parameterised; this Chapter utilises k_max = 50. The weights are determined by means of a beta density function and normalised such that ∑_{k=1}^{k_max} b(k, θ) = 1. A beta distribution function is fully specified by the 2 × 1 parameter vector θ. To provide

³ While other choices exist in modelling and forecasting the daily volatility component, it is not believed that the choice is critical in this context. This is because all of the competing specifications for intraday volatility will have the same daily component, which will therefore not play a role in separating out forecast performance.


more technical detail,

b(k, θ) = f(k/k_max, θ₁; θ₂) / ∑_{j=1}^{k_max} f(j/k_max, θ₁; θ₂),

f(z, a, b) = z^{a−1} (1 − z)^{b−1} / β(a, b),

β(a, b) = Γ(a)Γ(b) / Γ(a + b).   (4.4)

The diurnal component, s_{t,i}, is the intraday periodicity that has been referred to previously and is typically U-shaped for equity markets; the full sample values of this variable are plotted in Figures 4.3 and 4.4 in Section 4.3 below.

Rather than employing a trigonometric or spline approach, Sokalska, Chanda, and Engle (2005) calculate the diurnal component as the mean squared return of each period after standardising by the level of the daily volatility; this is similar to including a dummy variable for each of the intraday periods. Their approach is given by re-writing Equation 4.2 and then taking expectations,

r²_{t,i} / h_t = s_{t,i} q_{t,i} ε²_{t,i},

E(r²_{t,i} / h_t) = s_{t,i} E(q_{t,i}) E(ε²_{t,i}) = s_{t,i}.   (4.5)

The intraday variance component, q{t,i}, is the variable that is explicitly mod-

elled by Sokalska, Chanda, and Engle (2005), and they propose a GARCH

style specification for its evolution. It is this part of their model that may offer

advantages over the trigonometric or spline specifications in that it allows infor-

mation from within the current trading day to be used in generating forecasts.

If the volatility on a given trading day is higher than anticipated (the actual

daily component of volatility is higher than the forecast ht), then this will flow


to updated values for the intraday variance component also being higher than

anticipated; the trigonometric or spline specifications do not update within the

trading day.

Sokalska, Chanda, and Engle (2005) propose that estimation of their model

may proceed in a multi-stage process; detailing the process slightly differently

to their work, begin by standardising intraday returns by the daily volatility

estimate, h_t,

y_{t,i} = r_{t,i} / √h_t = √(s_{t,i} q_{t,i}) ε_{t,i}.   (4.6)

These intraday returns, standardised by the estimated level of daily volatility,

may then be used to generate estimates of the typical diurnal pattern of volatil-

ity. This is done by taking the arithmetic mean of the squared standardised

returns in Equation 4.6 for each of the intraday periods within a trading day,

s_{t,i} = (1 / (t − 1)) ∑_{τ=1}^{t−1} y²_{τ,i},   ∀ i = 1, ..., M,   t = 1, ..., T.   (4.7)

That is, the estimate of the diurnal component on day t only uses information from day 1 to day t − 1. It is important to note that the above definition of s_{t,i} is slightly different to that of Sokalska, Chanda, and Engle (2005). Equation 4.7 is defined to highlight that the estimate of the diurnal component is updated each day as new information comes to light without using future information, whereas their specification sums over all T days, which would imply using future information in forecasting, which is clearly infeasible.
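The expanding-window estimator of Equation 4.7 may be sketched as follows (illustrative only; the function and variable names are my own):

```python
def diurnal_estimates(y_squared):
    """Expanding-window diurnal estimates s_{t,i} per Equation 4.7.

    y_squared: a list of T days, each a list of M squared standardised
    returns y^2_{t,i}.  The estimate for day t averages days 1 .. t-1 only,
    so no future information enters the forecast.  Estimates are returned
    for days 2 .. T, as day 1 has no history to average.
    """
    M = len(y_squared[0])
    estimates = []
    running_sum = [0.0] * M
    for t, day in enumerate(y_squared, start=1):
        if t > 1:
            estimates.append([running_sum[i] / (t - 1) for i in range(M)])
        for i in range(M):
            running_sum[i] += day[i]
    return estimates
```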

Combining the daily and diurnal volatility components, returns may be stan-

dardised as


z_{t,i} = r_{t,i} / √(h_t s_{t,i}) = y_{t,i} / √s_{t,i},   z_{t,i} | F_{t,i−1} ~ N(0, q_{t,i}),   (4.8)

where F_{t,i−1} is the information set available at period i − 1 on day

t. It is important to note for estimation purposes that the estimated daily

volatility in Equation 4.8, ht, must be the forecast daily variance component;

to use the RV of day t would require the use of future information, which is in-

feasible. The above decomposition shows that the q{t,i} variable is the variance

of the intraday return after standardising by the daily and diurnal volatility

components. A value larger (smaller) than unity will imply that this variance

is larger (smaller) than would otherwise be anticipated given the forecast daily

and diurnal volatility components alone. Hence, any additional information

that is incorporated into the Sokalska, Chanda, and Engle (2005) model of the

intraday variance component, q{t,i}, should be a relative measure that reflects

changes in expectations.

Having removed the daily and diurnal components of volatility through the

above returns standardisation process, the standardised returns, z{t,i}, are then

used in modelling the conditional intraday variance component

q_{t,i} = ω + α z²_{t,i−1} + β q_{t,i−1}.   (4.9)
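The recursion in Equation 4.9 may be sketched as a simple filter; `filter_q` is a hypothetical helper, and initialising at q = 1 reflects E[q_{t,i}] = 1 rather than any initialisation stated in the text:

```python
def filter_q(z_sq, omega, alpha, beta, q0=1.0):
    """Filter the intraday variance component of Equation 4.9.

    z_sq: squared standardised returns z^2_{t,i} in time order.  The first
    interval's component is set to q0; each subsequent interval uses the
    previous interval's squared return and q value:
        q_{t,i} = omega + alpha * z^2_{t,i-1} + beta * q_{t,i-1}.
    """
    q = [q0]
    for z2 in z_sq[:-1]:
        q.append(omega + alpha * z2 + beta * q[-1])
    return q
```

An exogenous term such as γ VIX*_{t,i−1} in Equation 4.10 below would simply be added inside the loop.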

To summarise, Sokalska, Chanda, and Engle (2005) decompose total volatility

into its daily, diurnal, and intraday stochastic components. The daily compo-

nent may be modelled by a traditional time-series model of volatility, and a

MIDAS model is chosen here for generating the conditional forecasts of daily

volatility. The diurnal component is the intraday periodicity of the asset being

studied; rather than a trigonometric or spline approach, an arithmetic mean of

the squared daily volatility-standardised returns of each period is taken; this is


similar to a dummy variable for each period. Finally, the intraday stochastic

component is modelled by a GARCH style process of the volatility-standardised

returns; doing so allows updates of forecasts that utilise information from within

the same trading day, which does not occur using the trigonometric or spline

approaches.

Given the theoretical and empirical results discussed earlier, this Chapter

proposes adapting the Sokalska, Chanda, and Engle (2005) model to include

information from derivatives markets. Specifically, the information impounded

in measures of the volatility implied by the options market, and trading activity

on futures written on this level of implied volatility, is examined for its ability

to improve the in-sample fit and forecast performance relative to a model that

utilises historical information alone.

It is important to remember in adding this information that the variable being

modelled here, q{t,i}, has an expectation of unity and captures changes in the

expected level of volatility, not the level itself. The q{t,i} term exists to cap-

ture information over and above that contained in the forecast daily variance

and typical diurnal pattern, so only deviations from the predictable intraday

periodicity should be included in any additional regressors used in modelling

its evolution. Hence, all of the proposed additional exogenous regressors for

the Sokalska, Chanda, and Engle (2005) model have been designed to have an

expectation of zero. These variables may be added into the Sokalska, Chanda,

and Engle (2005) specification in a similar fashion to the addition of IV to the

GARCH model, as executed by Blair, Poon, and Taylor (2001); the first of

these variables is a standardised measure of the VIX,

q_{t,i} = ω + α z²_{t,i−1} + β q_{t,i−1} + γ VIX*_{t,i−1},   (4.10)


where VIX*_{t,i−1} is the scaled VIX given by

VIX*_{t,i} = (VIX_{t,i} − V̄IX_{t−1,i}) / Std(VIX_{t−1,i}),   (4.11)

where V̄IX_{t−1,i} and Std(VIX_{t−1,i}) are the mean and standard deviation, respectively, of the VIX for period i based on the information from days 1 to t − 1. This standardisation process is similar to that executed for the s_{t,i} terms

defined in Equation 4.7 and removes any intraday periodicity within the mea-

sure. Given that the VIX is a forecast of volatility from the options market, an increase in VIX*_{t,i} would imply an increase in the options market expectation

of spot market volatility; hence, the co-efficient, γ, in the above specification is

anticipated to be positive.
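The expanding-window standardisation of Equation 4.11 may be sketched for a single intraday period as follows. The text does not specify whether a population or sample standard deviation is used, so the population version here is an assumption:

```python
import statistics

def standardise_period(levels):
    """Standardise one intraday period's series per Equation 4.11.

    levels: the VIX (or volume, or order flow) for period i on days 1 .. T.
    The value on day t is centred and scaled by the mean and standard
    deviation of days 1 .. t-1.  Values are returned from day 3 onward,
    as at least two prior days are needed for a standard deviation.
    """
    out = []
    for t in range(2, len(levels)):
        history = levels[:t]
        mu = statistics.mean(history)
        sd = statistics.pstdev(history)  # population sd; a sample sd is equally plausible
        out.append((levels[t] - mu) / sd)
    return out
```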

Standardised measures of trading activity are also proposed, and whether shocks to trading volume in the futures market flow on to the volatility in the

spot market is investigated. The inclusion of such variables is motivated by prior

research that shows that unexpected futures market trading activity leads to

increases in spot market volatility. Unexpected trading activity may be mea-

sured through a scaled distance from the mean level of trading activity for that

intraday period. These specifications are the same as in Equation 4.10 but re-

place VIX* with the relevant measure of trading activity, the first of which is total volume in VIX futures contracts,

V*_{t,i} = (V_{t,i} − V̄_{t−1,i}) / Std(V_{t−1,i}),   (4.12)

where V_{t,i} is the total volume of VIX futures contracts traded in period i on day t. Even though futures market traders will typically buy (sell) VIX

futures if they believe that future volatility will increase (decrease), previous

results imply that an unexpected surge in volume, even if the trade is initiated


by a seller, will increase future spot market volatility. Therefore, prior research

leads to an a priori expectation of a positive co-efficient for γ in this specifica-

tion.

Aside from total volume, it is possible through the work of Lee and Ready (1991) to extract the order flow of transaction activity. This methodology

utilises the tick test, which is a technique to infer the direction of a trade

(whether it was buyer or seller initiated) by comparing its price to the price

of preceding trades, with four possible categorisations. A trade is an uptick

(downtick) if the price is higher (lower) than the price of the previous trade.

When consecutive trades take place at the same price (a zero tick), if the last

price change was an uptick, then the trade is a zero-uptick. Similarly, if the

last price change was a downtick, then the trade is a zero-downtick. A trade

is classified as a buy if it occurs on an uptick or a zero-uptick; otherwise it is

classified as a sell (Lee and Ready, 1991). The order flow is then found by the

sum of the buys and sells over a given interval. That is, rather than relying on

the total volume in the futures market, the direction of trading in VIX futures may be discerned. As VIX futures traders would typically buy (sell) such futures in the expectation of volatility increases (decreases), a positive co-efficient

on the order flow variable is anticipated; this variable is given by

OF*_{t,i} = (OF_{t,i} − ŌF_{t−1,i}) / Std(OF_{t−1,i}).   (4.13)

That is, if the net buying pressure in the VIX futures market is positive, then

one would anticipate an increase in the intraday variance component as it may

be a signal from the futures market participants of a belief that spot market

volatility will increase. Conversely, if the order flow is negative then the multi-

plication of the negative order flow by a positive co-efficient results in a decrease

in the anticipated value of q{t,i}.
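The tick-test classification and the resulting signed order flow may be sketched as follows. Skipping trades before the first observable price change is a simplifying assumption, as is signing by traded volume:

```python
def order_flow(prices, volumes):
    """Signed order flow over an interval via the tick test of Lee and Ready (1991).

    A trade on an uptick or zero-uptick is classified as a buy (+volume);
    a trade on a downtick or zero-downtick as a sell (-volume).  On a zero
    tick the direction of the last non-zero price change is carried forward.
    """
    flow = 0.0
    last_sign = 0  # direction of the last non-zero price change
    for k in range(1, len(prices)):
        change = prices[k] - prices[k - 1]
        if change > 0:
            last_sign = 1
        elif change < 0:
            last_sign = -1
        # change == 0: a zero tick, so last_sign is carried forward
        if last_sign != 0:
            flow += last_sign * volumes[k]
    return flow
```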


Given the stylised facts of volatility outlined in Section 2.2, it is known that

volatility tends to spike upon the release of bad news, more so than a similarly

sized release of good news. Further, the purchase of VIX futures typically reflects a view that volatility will increase. These facts may be combined to argue that an increase in net buys of VIX futures, a positive shock to the variable

OF*_{t,i}, may reflect the release of bad news to the market. This argument is

related to prior research that has found that the lead of the futures market to

the spot market strengthens in times of bad news. Hence, it is plausible that

an asymmetric relationship may exist between shocks to order flow and future

spot market volatility. It is anticipated that the reaction to a positive shock to

OF*_{t,i} will have a larger impact on q_{t,i} than a similarly sized negative shock.

For these reasons, a final adaptation to the Sokalska, Chanda, and Engle (2005)

model is proposed to include an asymmetric response to shocks to standardised

order flow. All of these models may be summarised in the following specifica-

tions

Standard : q{t,i} = ω + αz2{t,i−1} + βq{t,i−1},

V IX∗ : q{t,i} = ...+ γV IX∗{t,i−1},

V ∗ : q{t,i} = ...+ γV ∗{t,i−1},

OF ∗ : q{t,i} = ...+ γOF ∗{t,i−1},

Asym : q{t,i} = ...+ δ1OF∗{t,i−1}I{OF{t,i−1}∗>0}

+δ2OF∗{t,i−1}I{OF{t,i−1}∗<0}.

In the final specification, Asym, a positive sign is anticipated for both the co-efficient acting on positive shocks to order flow, δ₁, and that acting on negative shocks to order flow, δ₂; however, the magnitude of δ₁ is expected to be larger than that of δ₂.

Finally, it should be noted that the extensions to the original model of Sokalska,

Chanda, and Engle (2005) just presented theoretically allow for negative values of q_{t,i}, which would then flow through to a negative forecast of total volatility,

which is clearly infeasible. Although theoretically possible, no such instance is

observed in either the estimation or forecasting exercises.

4.2.2 A semi-parametric framework

An alternative to the GARCH style specification for intraday volatility of Sokalska,

Chanda, and Engle (2005) is the recently proposed semi-parametric model of

Becker, Clements, and Hurn (2011). They note that the majority of time-series

models, including the GARCH specification on which the Sokalska, Chanda, and

Engle (2005) model is based, weight historical observations by their distance

in time from the current period; the GARCH model is a short-memory process

with the weight on prior observations declining exponentially with time. In

place of this time-dependent weighting structure, Becker, Clements, and Hurn

(2011) put forth a state-dependent weighting structure that, generally speak-

ing, weights historical observations by their similarity to the current period.

Becker, Clements, and Hurn (2011) note that the intuition behind their model

is similar to the Heterogeneous Autoregressive (HAR) model of Corsi (2009)

where the forecast of volatility is a function of volatility over the prior day,

week, and month. The HAR model is motivated by the Heterogeneous Market

Hypothesis of Müller, Dacorogna, Davé, Pictet, Olsen, and Ward (1993) where

there are heterogeneous groups of traders; such heterogeneity leads to agents

reacting to and causing different volatility components (Becker, Clements, and

Hurn, 2011) and the newly proposed framework also captures this trading be-


haviour. As the forecasting of intraday volatility is partially motivated by the

trading activity of market participants and this Chapter examines the incremen-

tal value of trading activity in the V IX futures market, the flexibility of this

alternative framework that can reveal the complex dynamics between volatil-

ity and futures trading makes it naturally suited to the problem at hand; the

model is now briefly described with more detail available in Becker, Clements,

and Hurn (2011).

The proposed model is semi-parametric in that it combines a linear functional

form with a non-parametric weighting scheme of past observations. The linear

functional form is given by

E[Y_{T+1}] = κ + ∑_{k=0}^{k_max} w_{T−k} Y_{T−k},   (4.14)

where κ is a deterministic drift term, w_{T−k} is the weight placed on the lagged value Y_{T−k}, and k_max is the maximum lag length considered; how these weights

may be determined is now discussed.

Becker, Clements, and Hurn (2011) describe a framework where the weight

applied to each of the lagged observations of Y_t varies smoothly with the degree of proximity of Y_{T−k} to Y_T, where the degree of proximity is determined by the similarity of the states, not time.⁴ This degree of similarity is measured

through the use of a kernel function such that the weights are given by

w_{T−k} = K((Y_T − Y_{T−k}) / h),   (4.15)

where K(·) is the standard normal kernel function and h is the bandwidth.

Here, Becker, Clements, and Hurn (2011) is followed and h is chosen to be the

⁴ This approach is related to the nearest neighbour regression framework of Cleveland (1979) and Mizrach (1992).


rule-of-thumb bandwidth derived by Silverman (1986)

h = 0.9 σ_Y T^{−1/5},   (4.16)

where σ_Y is the standard deviation of the observed sample realisations. Hence, the smaller the absolute size of the scaled difference between the current value Y_T and the lagged realisation Y_{T−k}, the greater the weight placed on that lagged realisation. Once the weighting vector, w, has been calculated through the above approach, it may be rescaled to ensure that the sum of the weights is equal to one, w = w / (w′1).
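Equations 4.14 to 4.16 may be sketched as follows for the univariate case; the function names are hypothetical and the drift κ is taken as given:

```python
import math

def kernel_weights(y):
    """State-dependent weights on the lagged observations of y (Equations 4.15-4.16).

    The weight on Y_{T-k} is a standard normal kernel of the scaled distance
    between Y_T and Y_{T-k}, with Silverman's rule-of-thumb bandwidth
    h = 0.9 * sigma_Y * T^(-1/5), and the weights are rescaled to sum to one.
    """
    T = len(y)
    mean = sum(y) / T
    sigma = math.sqrt(sum((v - mean) ** 2 for v in y) / T)
    h = 0.9 * sigma * T ** (-0.2)
    current = y[-1]
    raw = [math.exp(-0.5 * ((current - v) / h) ** 2) / math.sqrt(2 * math.pi)
           for v in y[:-1]]
    total = sum(raw)
    return [w / total for w in raw]

def kernel_forecast(y, kappa=0.0):
    """E[Y_{T+1}] = kappa + sum_k w_{T-k} Y_{T-k} (Equation 4.14)."""
    weights = kernel_weights(y)
    return kappa + sum(w * v for w, v in zip(weights, y[:-1]))
```

Lags whose values lie close to the current state receive most of the weight, regardless of how distant they are in time.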

An advantage of the approach advocated by Becker, Clements, and Hurn (2011)

is that it may be extended to accommodate a multivariate weighting scheme;

it is able to handle the inclusion of information beyond just the lagged realisa-

tions of the process of interest. Following their discussion, let Φ_T be a T × N

matrix of observations of the variables thought relevant for the determination

of weights; the first column of this matrix will be the variable YT with the

remainder composed of the additional relevant variables. The last row of this

matrix contains the last available observations for each of the N variables and

is a measure of the current state of each of the processes. The weight placed on

the lagged values of these observations is now given by the multivariate product

kernel

w_{T−k} = ∏_{n=1}^{N} K((Φ_{T,n} − Φ_{T−k,n}) / h_n),   (4.17)

where K is once again the standard normal kernel and Φ_{t,n} is the row t, column n element of Φ_T. The bandwidth for dimension n, h_n, is given by the normal


reference rule for multivariate density estimation⁵,

h_n = σ_n T^{−1/(4+N)}.   (4.18)
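The multivariate product kernel of Equations 4.17 and 4.18 may be sketched as follows (again a sketch with hypothetical names; the last row of the state matrix holds the current state):

```python
import math

def product_kernel_weights(Phi):
    """Multivariate product-kernel weights per Equations 4.17 and 4.18.

    Phi: a T x N list of rows, the last row being the current state.  The
    weight on lag k is the product, over the N variables, of standard normal
    kernels of the scaled distance to the current state, with per-dimension
    bandwidth h_n = sigma_n * T^(-1/(4+N)).
    """
    T, N = len(Phi), len(Phi[0])
    h = []
    for n in range(N):
        col = [row[n] for row in Phi]
        mu = sum(col) / T
        sigma = math.sqrt(sum((v - mu) ** 2 for v in col) / T)
        h.append(sigma * T ** (-1.0 / (4 + N)))
    current = Phi[-1]
    raw = []
    for row in Phi[:-1]:
        w = 1.0
        for n in range(N):
            u = (current[n] - row[n]) / h[n]
            w *= math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
        raw.append(w)
    total = sum(raw)
    return [w / total for w in raw]
```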

Having outlined the method by which weights are constructed, the construction

of the variables of interest in the current context is now addressed; hereafter, the variable Y_T is replaced by z²_{t,i} from Equation 4.8, the squared volatility-standardised returns.⁶ Following the analysis of daily realised volatility in Becker,

Clements, and Hurn (2011), the multivariate matrix Φ_t will be composed of the series of interest, z²_{t,i}, and four vectors of moving averages of differing lag structure,

Φ_t = [z²_{t,i,λ₁}, . . . , z²_{t,i,λ_N}]′,   (4.19)

where z²_{t,i,λ_i} is a λ_i-period moving average ending at time t. Values for λ_i are

selected to be 1, 6, 12, 78 and 158; these values are chosen to coincide with the

5-, 30-, and 60-minute intervals, as well as the 1- and 2-day intervals and are

based on the autocorrelation plot in Figure 4.2. Consequently, the first entry in Φ_t is simply the historical z²_{t,i}, capturing the squared volatility-standardised returns. Beyond this, short-term moving averages of z²_{t,i} are also included to

distinguish whether this variable was rising, falling or relatively stable at the

time at which the forecast was made.
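Building one row of the state matrix in Equation 4.19 might look as below; this is a sketch, and silently skipping windows longer than the available history is an assumption of mine:

```python
def state_row(series, lags=(1, 6, 12, 78, 158)):
    """One row of the state matrix Phi_t (Equation 4.19).

    Returns moving averages of the squared standardised returns over
    windows ending at the most recent observation; the default windows
    correspond to the 5-, 30-, 60-minute and 1- and 2-day spans used in
    the text.  Windows longer than the available history are skipped.
    """
    return [sum(series[-l:]) / l for l in lags if len(series) >= l]
```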

The VIX and futures trading activity variables described previously for inclusion in the Sokalska, Chanda, and Engle (2005) framework will also be included

⁵ Becker, Clements, and Hurn (2011) utilise this rule based on the work of Scott (1992); it follows from the fact that for N = 2 the scaling factor is equal to one. In the N > 2 case, Scott (1992) suggests using a scaling factor of one as a simple rule of thumb.

⁶ Even though the specification of Sokalska, Chanda, and Engle (2005) is a model for the values q_{t,i}, the kernel estimation must be based on the z²_{t,i} series, as the q_{t,i} series is never actually observed; rather, the q_{t,i} series is estimated by maximum likelihood and is model specific. Modelling the kernel approach on the z²_{t,i} vector is analogous to the estimation of a MIDAS model on daily realised variances; the MIDAS model is not run on estimates of h_t, the daily variance estimated from a GARCH style model. Additionally, it may be noted that E(z²_{t,i}) = E(q_{t,i}).


in the kernel approach of Becker, Clements, and Hurn (2011), including the

moving average terms for each of the variables. It has already been demon-

strated that additional variables are easily accommodated in the multivariate kernel framework; the additional derivatives market variables may be included in the matrix Φ_t in a similar fashion to the inclusion of the moving average terms.

It is possible that the information content of the trading activity measures

may be more accurately captured by the state-specific nature of the kernel ap-

proach, relative to the GARCH style specification that weights observations by

their distance in time. Consider the trading activity of market participants in

the lead up to the release of a regularly scheduled announcement, such as the

quarterly release of Gross Domestic Product figures. Plausibly, prior trading

activity in the lead up to previous such announcements will be more relevant

to future volatility after the announcement than the trading activity over, say,

the last week. That is, the state-specific approach may capture important dy-

namics that a distance in time based weighting scheme may miss. The kernel

estimation based models may be summarised by the variables included in the

matrix Φ_t as follows:

Kernel : Φ_t = [z²_{t,i,λ₁}, . . . , z²_{t,i,λ_N}]′,

Kernel VIX* : Φ_t = [z²_{t,i,λ₁}, . . . , z²_{t,i,λ_N}, VIX*_{t,i,λ₁}, . . . , VIX*_{t,i,λ_N}]′,

Kernel V* : Φ_t = [z²_{t,i,λ₁}, . . . , z²_{t,i,λ_N}, V*_{t,i,λ₁}, . . . , V*_{t,i,λ_N}]′,

Kernel OF* : Φ_t = [z²_{t,i,λ₁}, . . . , z²_{t,i,λ_N}, OF*_{t,i,λ₁}, . . . , OF*_{t,i,λ_N}]′.


To conclude the methodology Section, how the competing forecasts are evalu-

ated is now discussed. As each of the competing Sokalska, Chanda, and Engle (2005) models is estimated via quasi-maximum likelihood over a log-likelihood

function, traditional forms of analysis may be conducted on their in-sample re-

sults. That is, the significance of individual variables may be tested through the

use of t-statistics on their estimated co-efficients, as well as whether their inclu-

sion significantly improves model fit by executing likelihood ratio tests. No such in-sample comparisons are conducted for the Becker, Clements, and Hurn (2011) set of models; their relevance will only be determined by their out-

of-sample forecast performance relative to the competing GARCH style models.

The relative out-of-sample forecast performance will be evaluated using the

Model Confidence Set (MCS) methodology outlined in Section 2.9. The Mean-

Square-Error (MSE) and QLIKE loss functions are used to evaluate the out-of-sample forecast performance, with both the range and semi-quadratic statistics employed. The candidate models here are the original specification of Sokalska,

Chanda, and Engle (2005), the kernel based model of Becker, Clements, and

Hurn (2011), and both of these specifications augmented by the inclusion of

the standardised measures of the VIX, VIX futures trading volume, and VIX

futures order flow.
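For concreteness, one common parameterisation of the two loss functions is sketched below; the thesis defines them in Section 2.9, and the exact constants used there may differ:

```python
import math

def mse_loss(proxy, forecast):
    """Mean-square-error loss for a single volatility forecast."""
    return (proxy - forecast) ** 2

def qlike_loss(proxy, forecast):
    """QLIKE loss, log(forecast) + proxy / forecast, up to additive constants.

    Unlike MSE it penalises under-prediction of variance more heavily,
    which is often desirable in risk applications.
    """
    return math.log(forecast) + proxy / forecast
```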

In estimating the candidate models, the first step is estimating the daily levels of volatility for the process of standardising the returns. The estimates of daily volatility from the MIDAS model are found after an initial estimation window of 736 trading days; this window covers the 1st of January 2007

through to the 31st of December 2009. The out-of-sample forecasting exercise

is then conducted from the 1st of January 2010 through to the 7th of Decem-

ber 2010. The intraday volatility models are estimated on five-minute return

intervals, of which there are 78 in a full trading day. Allowing for shortened


trading days, 18,252 one-step-ahead forecasts of 5-minute interval volatility are generated.

4.3 Data

The data set consists of intraday levels of the S&P 500 Index, intraday levels of the VIX, and tick-data on transactions in VIX futures; all of these data come from the Thomson Reuters Tick History database. The period under consideration covers the 1st of January 2007 through to the 7th of December 2010. The starting date of the sample is chosen to allow sufficient liquidity to develop in the VIX futures market, which began trading in March 2004. The shortest interval at which the CBOE publishes the levels of both the VIX and the S&P 500 Index is 15 seconds; these values are used to construct a regularly spaced grid with 5-minute intervals.
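Constructing the 5-minute grid amounts to taking the last published level within each interval and then differencing the logs. A minimal pandas sketch on hypothetical 15-second data; the simulated tick series and the alignment conventions here are assumptions, not the actual Thomson Reuters feed.

```python
import numpy as np
import pandas as pd

# Hypothetical 15-second index levels for one trading day (09:30-16:00);
# the actual data are the CBOE-published S&P 500 and VIX levels.
ticks = pd.date_range("2010-01-04 09:30", "2010-01-04 16:00", freq="15s")
rng = np.random.default_rng(0)
levels = pd.Series(
    1100 * np.exp(np.cumsum(rng.normal(0.0, 1e-4, len(ticks)))), index=ticks
)

# Last observed level in each 5-minute interval, then log-returns.
grid = levels.resample("5min", label="right", closed="right").last()
log_returns = np.log(grid).diff().dropna()
print(len(log_returns))  # 78 five-minute returns in a full trading day
```

The right-closed, right-labelled bins ensure each grid point is the last quote at or before its timestamp, yielding the 78 returns per full trading day used throughout this Chapter.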

To begin, some of the statistical properties of the S&P 500 Index log-returns are given. Over the full sample period the lowest 5-minute log-return was −5.90% while the highest was 3.90%. The sample moments of the log-returns are a mean indistinguishable from zero, a standard deviation of 0.1421%, skewness of -0.6772, and kurtosis of 101.3727; the Jarque-Bera statistic of 5.4987 × 10^7 implies clear rejection of the null hypothesis of a Gaussian distribution for the 5-minute log-returns.
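These summary statistics follow a standard recipe; a sketch with scipy on simulated Gaussian data, since the actual return series is not reproduced here.

```python
import numpy as np
from scipy import stats

def summarise(returns):
    """Sample moments and the Jarque-Bera normality test, in the style of
    the statistics reported for the 5-minute S&P 500 log-returns."""
    jb = stats.jarque_bera(returns)
    return {
        "mean": float(np.mean(returns)),
        "std": float(np.std(returns, ddof=1)),
        "skewness": float(stats.skew(returns)),
        "kurtosis": float(stats.kurtosis(returns, fisher=False)),  # 3 for a Gaussian
        "JB": float(jb.statistic),
        "JB p-value": float(jb.pvalue),
    }

# On simulated Gaussian data the JB statistic is small; on the actual
# returns it is enormous (5.4987 x 10^7), rejecting normality.
rng = np.random.default_rng(1)
out = summarise(rng.normal(0.0, 0.0014, 10_000))
print(out)
```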

At the 5-minute interval, there is no significant evidence of serial dependence in the log-returns of the S&P 500 Index. The plot of the sample autocorrelation function in Figure 4.1 shows little short-term persistence in the series, although there is some statistically significant serial dependence at longer horizons coinciding with one- and two-day lags. For the most part, however, the plot of log-return autocorrelations appears fairly noisy, suggesting


there is little structure in the returns process.

Figure 4.1: Plot of the autocorrelation function of the 5-minute log-returns of the S&P 500 Index. The horizontal lines represent the 95% confidence level for significant autocorrelation.

In contrast, the plot of the sample autocorrelation function of squared log-returns for the same 5-minute intervals, given in Figure 4.2, highlights a high degree of persistence; in particular, there is pronounced periodicity at lag lengths that are multiples of 78, the number of return periods in a full trading day. This intraday periodicity of squared returns is well known in the literature and dates back to Wood, McInish, and Ord (1985). To demonstrate the documented periodicity more clearly, Figure 4.3 plots the mean squared log-return of each of the 5-minute intervals of a trading day over the entire sample period; this is a proxy for the average volatility within each interval of a trading day and also represents the full-sample estimates of s_{t,i} given in Equation 4.7. Typically, there is a large spike in volatility at the opening of a market, followed


by a slow decline before volatility rises again in the afternoon, peaking towards the close of the market.

It may be argued that, for the entire trading day, the pattern is more L-shaped than U-shaped. To highlight the U-shaped pattern, the mean squared log-returns for each of the intervals excluding the opening 5 minutes are plotted in Figure 4.4; it should now be clear that a U-shaped pattern does indeed exist within a trading day subsequent to the initial 5-minute interval.
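The full-sample estimate of the diurnal component is simply the mean squared log-return per intraday interval. A sketch, assuming a pandas Series of 5-minute returns; the U-shaped toy data here are hypothetical.

```python
import numpy as np
import pandas as pd

def diurnal_pattern(log_returns):
    """Full-sample estimate of the periodic component: the mean squared
    log-return for each 5-minute interval of the trading day."""
    return (log_returns ** 2).groupby(log_returns.index.time).mean()

# Hypothetical two days of 78 five-minute returns with a U-shaped
# volatility profile: high at the open, low at midday, rising at the close.
idx = pd.date_range("2010-01-04 09:35", "2010-01-04 16:00", freq="5min").append(
    pd.date_range("2010-01-05 09:35", "2010-01-05 16:00", freq="5min")
)
u_shape = np.concatenate([np.linspace(3.0, 1.0, 39), np.linspace(1.0, 2.0, 39)] * 2)
rng = np.random.default_rng(2)
returns = pd.Series(rng.standard_normal(len(idx)) * np.sqrt(u_shape) * 1e-3, index=idx)

s_i = diurnal_pattern(returns)
print(len(s_i))  # one estimate per 5-minute interval of the day
```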

Figure 4.2: Plot of the autocorrelation function of the 5-minute squared log-returns of the S&P 500 Index. The horizontal lines represent the 95% confidence level for significant autocorrelation.

Having detailed some of the sample properties of intraday volatility, some characteristics of the derivatives-market-based exogenous regressors are now given. Firstly, it is examined whether an intraday periodic pattern also exists for the level of the VIX, with the mean level of the VIX for each of the 5-minute


Figure 4.3: Plot of the in-sample mean squared log-return for each of the 5-minute intervals, denoted by period i, within a trading day, a proxy for the intraday levels of volatility. The shape is typically observed to be that of a U.

intervals within the sample period plotted in Figure 4.5. While the pattern is not as pronounced and is definitely more irregular, there again appears to be a degree of intraday periodicity in the level of the VIX. As in the case of intraday spot market returns, there is a large increase in the level of volatility at the beginning of the trading day, a slow decline until approximately lunch time, and an increase again in the afternoon; however, there is now a second drop-off in volatility at the end of the trading day.

The less pronounced intraday periodicity is perhaps unsurprising, as the VIX is the options market's risk-neutral forecast of the mean level of volatility over the next 22 trading days, rather than the incorporation of new information that is reflected by changes in volatility in the physical market. To illustrate, the spike in volatility that begins a trading day in the spot market probably reflects


Figure 4.4: Plot of the in-sample mean squared log-return for all but the first of the 5-minute intervals, denoted by period i, within a trading day, a proxy for the intraday levels of volatility. The plot more clearly demonstrates the U-shaped pattern that is typically observed.

changes in information that occur in the overnight period and are quickly incorporated into the asset price. A similar spike in the VIX would only occur if this information also changed the expected mean volatility for the next 22 trading days. It is plausible that as the asset price changes quickly to incorporate this new information, there is no concomitant increase in expected volatility, as the new information has already been impounded in the price; only if this information also increases uncertainty about future price changes will the expected volatility spike as well. This argument tends to imply that the actual level of the VIX may not be a priori useful in forecasting intraday volatility, but that changes in the level may be.

With regard to the VIX futures7 trading activity related variables, the mean

7The VIX futures data are from the Thomson Reuters Tick History database and are the nearest-to-maturity contracts, with the roll-over occurring on expiry.

Figure 4.5: Plot of mean level of the VIX for each intraday period.

trading volume in a 5-minute period is 42.15, while the mean trading volume conditional on at least one trade occurring is 48.95; approximately 13.9% of 5-minute intervals have no trading activity. The standard deviation of trading volume is 136.41. Figure 4.6 plots the total trading volume over each of the 5-minute intervals for the whole sample period. An interesting feature of this plot is that volume appears to increase markedly from the middle of 2009 onwards, with a possible explanation being increased use of this instrument by portfolio managers.
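The volume summary statistics above follow a simple recipe: an unconditional mean, a mean conditional on trading, and the share of empty intervals. A small sketch on toy numbers, illustrative only and not the actual VIX futures data.

```python
import numpy as np

def volume_summary(volume):
    """Summary statistics in the style reported for VIX futures trading
    volume per 5-minute interval."""
    volume = np.asarray(volume, dtype=float)
    traded = volume[volume > 0]
    return {
        "mean": volume.mean(),
        "mean given trading": traded.mean(),
        "share with no trades": float(np.mean(volume == 0)),
        "std": volume.std(ddof=1),
    }

# Toy numbers: intervals with zero trades pull the unconditional mean
# below the mean conditional on at least one trade occurring.
summary = volume_summary([0, 10, 50, 0, 100, 30])
print(summary)
```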

It has been noted by some, for example Moran and Dash (2007), Sloyer and Tolkin (2008), and Szado (2009), that the use of VIX futures is beneficial in a portfolio management sense due to the asymmetric response of volatility to similarly sized releases of good and bad news. In a simplified explanation, as market-wide negative information is released to the market, the value of assets will decline and volatility will spike. A long position in VIX futures will then

act as a natural hedge against the decrease in asset value that comes from the release of bad news. It is plausible that, given the increased volatility towards the end of the sample period, greater interest in VIX futures by portfolio managers for volatility risk management purposes led to an increase in trading activity in this instrument.

Figure 4.6: Plot of total level of trading volume in VIX futures for the entire sample period.

As has been mentioned, the total volume of VIX futures may be decomposed into its order flow, a measure of the net buying pressure in that market. The order flow may be determined through the use of the Lee and Ready (1991) algorithm, which utilises the tick test. The order flow for each of the 5-minute intervals is plotted in Figure 4.7. The mean level of the order flow is 0.9098,


with the mean conditional on the order flow not being equal to zero being 1.0952. Hence, the net buying pressure appears to be slightly positive; this may reflect traders on average taking a position that volatility will increase. Given the levels of volatility witnessed in the latter part of the sample, the net buying of VIX futures may be due to portfolio managers protecting themselves from future surges in volatility generated by additional negative shocks to the returns process. The standard deviation of the order flow variable is 103.68. Because the variable is a measure of net buying pressure, the mean of the order flow in either direction is also provided: when the order flow is positive, the mean is 26.4255; when the order flow is negative, the mean is -24.9567.
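The tick-test component of the Lee and Ready (1991) classification can be sketched directly: trades on an up-tick are labelled buys, trades on a down-tick sells, and zero-ticks inherit the previous label. The implementation below is a simplified illustration; the full Lee and Ready algorithm also uses quote midpoints, which are omitted here.

```python
import numpy as np

def tick_test_order_flow(prices, volumes):
    """Sign trades with the tick test: a trade at a higher price than the
    previous trade is a buy (+1), at a lower price a sell (-1), and a
    zero-tick inherits the previous sign. Order flow is the signed-volume
    sum, i.e. buyer-initiated minus seller-initiated volume."""
    signs = np.zeros(len(prices))
    last_sign = 0
    for i in range(1, len(prices)):
        if prices[i] > prices[i - 1]:
            last_sign = 1
        elif prices[i] < prices[i - 1]:
            last_sign = -1
        signs[i] = last_sign  # unchanged price: keep previous classification
    return float(np.sum(signs * np.asarray(volumes, dtype=float)))

# Two up-ticks (buys of 5 and 3), a down-tick (sell of 2), and a zero-tick
# inheriting the sell sign (volume 4): order flow = 5 + 3 - 2 - 4 = 2.
print(tick_test_order_flow([20.0, 20.1, 20.2, 20.1, 20.1], [1, 5, 3, 2, 4]))
```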

Figure 4.7: Plot of order flow for trading volume in VIX futures for the entire sample period.


4.4 Results

The empirical results are composed of two sub-sections: the results for the in-sample estimation of the Sokalska, Chanda, and Engle (2005) models, and then the relative out-of-sample forecast performance across all of the candidate models. The analysis of the in-sample results is designed to evaluate any statistical improvement in fit offered by the derivatives-market-based measures; this is achieved through standard tests of significance such as t-tests of estimated co-efficients and likelihood ratio tests of improved overall model fit. It should again be noted that no in-sample analysis is conducted for the semi-parametric approach. The relevance of this alternative framework will be determined only by its forecast performance relative to the competing GARCH-style models, which shall be evaluated using the Model Confidence Set methodology of Hansen, Lunde, and Nason (2003, 2010).
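The evaluation logic can be illustrated with a stylised sketch of the MCS elimination loop. This is not the Hansen, Lunde, and Nason procedure itself, which relies on bootstrapped distributions of the range and semi-quadratic statistics; the sketch below only mimics the iterative remove-the-worst-model logic on a matrix of losses.

```python
import numpy as np

def mcs_elimination_order(losses):
    """Stylised sketch of the Model Confidence Set elimination loop:
    repeatedly drop the model whose mean loss differential against the
    surviving set is largest. The actual procedure of Hansen, Lunde, and
    Nason tests equal predictive ability with bootstrapped range and
    semi-quadratic statistics; only the ranking logic is mimicked here.
    `losses` has shape (T, n_models)."""
    remaining = list(range(losses.shape[1]))
    order = []
    while len(remaining) > 1:
        sub = losses[:, remaining]
        d = sub.mean(axis=0) - sub.mean()  # mean loss relative to the set average
        worst = int(np.argmax(d))
        order.append(remaining.pop(worst))
    order.append(remaining[0])
    return order  # first eliminated, ..., last survivor

rng = np.random.default_rng(3)
losses = rng.random((500, 3)) + np.array([0.5, 0.0, 0.25])  # model 1 is best
print(mcs_elimination_order(losses))  # [0, 2, 1]
```

In the real MCS, elimination stops once the null of equal predictive ability can no longer be rejected, so a whole set of statistically indistinguishable models survives rather than a single winner.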

4.4.1 In-Sample Results

The results of the in-sample estimation are presented in Table 4.1, and it is clear that the inclusion of the VIX-related measures generally offers improvement over the original specification of Sokalska, Chanda, and Engle (2005). While it is true that the proposed amendments nest the original model and that the restricted model will never outperform the more general form, the p-values of the likelihood ratio tests universally reject the null hypothesis of no statistically significant improvement offered by the inclusion of the VIX-related measures; all of the proposed derivatives-based information measures significantly improve the in-sample fit.

The co-efficient on the standardised VIX measure, VIX*, is positive but borderline significant, at 1.77 robust standard errors from zero. The sign of the co-efficient is as expected; when the VIX is higher than normal, the stochastic


intraday term, q_{t,i}, will increase and the following intraday period's variance is expected to be higher than a simple product of the daily and diurnal components. Although the co-efficient itself is only borderline significant, the likelihood ratio test has a p-value of 0.002, implying clear statistical significance of the measure.
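The likelihood ratio tests used throughout this sub-section can be reproduced directly from the reported log-likelihood values. A sketch in Python with scipy, using the figures reported in Table 4.1 for the standard and VIX*-augmented models, which differ by one restriction:

```python
from scipy.stats import chi2

def likelihood_ratio_test(llf_restricted, llf_unrestricted, df):
    """LR statistic 2*(L_u - L_r), asymptotically chi-squared with df equal
    to the number of restrictions under the null."""
    lr = 2.0 * (llf_unrestricted - llf_restricted)
    return lr, chi2.sf(lr, df)

# Log-likelihoods reported in Table 4.1 for the standard model and the
# VIX*-augmented model (one extra parameter).
lr, p = likelihood_ratio_test(-94869.86, -94865.10, df=1)
print(round(lr, 2), round(p, 3))  # 9.52 0.002, matching the reported p-value
```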

When examining the effect of including total volume, the results are again as anticipated. Earlier empirical research has shown that unexpected shocks to trading volume lead to an increase in spot market volatility, a result that is corroborated here. The positive co-efficient on V* signals that an increase in standardised VIX futures trading volume will lead to an increase in q_{t,i}, which flows through to a higher than anticipated level of spot market volatility. As the t-test and likelihood ratio test both reject the null hypothesis of no statistical significance for this variable, it is clear that the standardised measure of trading volume is informative for modelling intraday volatility, at least in-sample. In addition, the log-likelihood function is maximised across all models by the inclusion of the V* measure.

It was discussed previously that, through the use of the Lee and Ready (1991) algorithm, the total volume of VIX futures transactions may be decomposed into buyer- and seller-initiated trades, or the net buying pressure of VIX futures; this variable is given by the standardised measure OF*. As anticipated, the estimated co-efficient on this variable is positive, and the t-test implies statistical significance at more than 3 robust standard errors from zero. Further, the likelihood ratio test has a p-value of zero, suggesting that the inclusion of this measure results in a statistically significant improvement in the overall fit of the model. It must be noted that the improvement in the log-likelihood function value is not as great as for total trading volume, possibly signalling that the decomposition into net buying pressure is not as useful. However, it


must be remembered that prior research has found that the link between futures

and spot markets is stronger in poor economic times, which may be reflected by

an increase in net buying pressure. For this reason, an asymmetric specification

was also estimated.

After decomposing the standardised order flow variable into positive and negative shocks, the results are again consistent with prior findings. It is known from articles such as Darrat and Rachman (1995) that shocks to total trading volume, regardless of whether the trade is buyer or seller initiated, lead to an increase in S&P 500 Index volatility. This is again reflected here by the positive co-efficient for δ1 and a negative co-efficient for δ2; the negative co-efficient, when multiplied by the negative sign of the order flow, results in an overall net positive effect on the forecast q_{t,i}. However, it is clear that the co-efficient acting on positive shocks to order flow, δ1, is statistically significant at over 5 robust standard errors, while δ2 is not. That is, the net purchase of VIX futures, which typically reflects bad news, is important for modelling intraday volatility, while net sales, which reflect good news, are not. This corroborates earlier research, such as Tse and Chan (2010), demonstrating that the lead from futures markets to spot markets is stronger in poor economic times. In fact, these results suggest that the link between futures market trading activity and spot market volatility only exists in these poor economic times.

The above results are consistent with the majority of prior research in this field. When the standardised measures of the VIX or trading volume receive a positive shock, anticipated spot market volatility increases. When total trading volume is decomposed into its order flow, the improvement in model fit is significant but not as large as for total trading volume. However, when an asymmetric response to net buying pressure is allowed, the results reinforce prior research suggesting that the link between futures and


spot markets is stronger in poor economic times. Overall, the best in-sample fit

is provided by the original specification of Sokalska, Chanda, and Engle (2005)

augmented by the inclusion of the standardised measure of trading volume.


Table 4.1: In-sample parameter estimates and log-likelihood values for the candidate models; robust standard errors are given in parentheses, and the p-value of the likelihood-ratio test against the standard model is given in the final column.

In-sample parameter estimates

Model     ω         α         β         γ                δ1        δ2         LLF         LR p-value
Standard  0.0134    0.0602    0.9264    -                -         -          -94869.86   -
          (0.0014)  (0.0030)  (0.0038)
VIX*      0.0135    0.0602    0.9256    8.11 × 10^-4     -         -          -94865.10   0.002
          (0.0014)  (0.0029)  (0.0039)  (4.59 × 10^-4)
V*        0.0144    0.0617    0.9220    0.0060           -         -          -94811.87   0.000
          (0.0016)  (0.0030)  (0.0043)  (0.0012)
OF*       0.0133    0.0596    0.9268    0.0037           -         -          -94858.83   0.000
          (0.0014)  (0.0029)  (0.0038)  (0.0012)
Asym      0.0113    0.0608    0.9239    -                0.0146    -0.0005    -94813.87   0.000
          (0.0013)  (0.0029)  (0.0038)                   (0.0027)  (0.0020)


4.4.2 Out-of-Sample Results

The discussion of the out-of-sample forecasting results takes place in two stages. Firstly, the forecasting ability for the variable q_{t,i} is examined; this is the variable explicitly modelled by the competing candidate models. Secondly, the forecasting performance for the level of intraday volatility, r^2_{t,i}, is analysed8. The reason for this decomposition is that the object q_{t,i} has no economic value in and of itself; it is of use only in the context of forecasts of total intraday volatility. However, all of the competing models of total intraday volatility share the same conditional forecast of the daily level of volatility and also the intraday periodicity; recall r^2_{t,i} = h_t s_{t,i} q_{t,i}. Hence, it is possible to examine the benefit of generating accurate forecasts of q_{t,i} in forecasting the true object of interest, r^2_{t,i}. That is, does the ability to generate statistically significantly superior forecasts of q_{t,i} then lead on to statistically significantly superior forecasts of total volatility? In this analysis, whether any forecasting of q_{t,i} need be executed at all is also examined; this is achieved by including the model q_{t,i} = 1, ∀t, i.
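The decomposition described here can be made concrete: each candidate's forecast of r^2_{t,i} is the product of a shared daily component h_t, a shared diurnal component s_{t,i}, and the model-specific q_{t,i}. A sketch with hypothetical numbers:

```python
import numpy as np

def intraday_variance_forecast(h_t, s_i, q_ti):
    """Compose r^2_{t,i} = h_t * s_{t,i} * q_{t,i} from the daily component,
    the diurnal component, and the stochastic intraday component."""
    return h_t * np.asarray(s_i) * np.asarray(q_ti)

# Hypothetical values: a daily variance level, a diurnal pattern that is
# highest at the open, and model-specific intraday forecasts q_{t,i}.
h_t = 1.2e-6
s_i = np.array([4.0, 1.5, 1.0])
q_ti = np.array([1.3, 0.9, 1.0])

with_q = intraday_variance_forecast(h_t, s_i, q_ti)
benchmark = intraday_variance_forecast(h_t, s_i, np.ones(3))  # the q = 1 model
print(with_q / benchmark)  # the ratio recovers q_{t,i}
```

Because the first two factors are common to every candidate, any difference in total-volatility forecast accuracy must come from the q_{t,i} term alone, which is exactly what the comparison against the q = 1 benchmark isolates.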

To begin the discussion of the out-of-sample forecasting results, a plot of the first object of interest, q_{t,i}, is given in Figure 4.8; these values are measured by the realised z^2_{t,i}. It may be observed that the measure is subject to occasional significant spikes, which potentially reflects the noise present in examining volatility at such high frequencies. The plot is perhaps deceiving, however, as it may give the impression that the series often takes very large values; in fact, the mean of the target series is 0.9246 and only approximately 14% of observations exceed q_{t,i} = 2.

For the MCS results when the object of interest is the forecast level of q_{t,i},

8Even though it has been stated previously that squared returns are a noisy proxy for the daily level of volatility and that realised volatility should be preferred, such an option does not exist for intraday volatility over the 5-minute horizon without encountering significant market microstructure effects.


Figure 4.8: Plot of target out-of-sample levels of q_{t,i}.

two main findings may be observed in Table 4.2. Firstly, the kernel method of forecasting, including when the derivatives markets information is incorporated, is typically outranked by the original specification of Sokalska, Chanda, and Engle (2005). However, even though the original model may outrank the kernel method, the two cannot be separated statistically. In fact, under the MSE loss function, for both the range and semi-quadratic statistics, no model can be excluded from the MCS at regular levels of significance. That is, no model statistically dominates any of the other models in forecasting q_{t,i} under the MSE loss function.

Under the QLIKE loss function, which Patton (2011) demonstrates to have higher power, the standard model of Sokalska, Chanda, and Engle (2005) augmented by the inclusion of the standardised VIX futures total volume is the best performing model. All competing models are excluded from the MCS at


a significance level of 10% under both the range and semi-quadratic statistics; only the asymmetric specification may be included at lower significance levels. This result again reinforces earlier work finding that unexpected shocks to total trading volume lead to an increase in spot market volatility. However, earlier work also found that the lead from futures markets is stronger in times of bad news. These findings suggest that the current method of accounting for this asymmetry, by allowing for different responses in volatility to positive and negative shocks to order flow, does not lead to improved forecasts of the q_{t,i} variable over simply measuring shocks to overall volume.

Table 4.2: Model Confidence Set results for the forecasts of q_{t,i}; the true value of q_{t,i} is measured by the realised z^2_{t,i}.

MCS Results

MSE                              QLIKE
Model        TR     TSQ          Model        TR     TSQ
Kernel VOL*  0.196  0.165        Kernel OF*   0.000  0.000
Kernel VIX*  0.196  0.196        Kernel VOL*  0.000  0.000
Kernel OF*   0.196  0.196        Kernel VIX*  0.000  0.000
Kernel       0.196  0.196        VIX*         0.000  0.000
V*           0.196  0.196        Standard     0.000  0.000
Standard     0.196  0.196        Kernel       0.000  0.000
VIX*         0.326  0.265        OF*          0.000  0.000
OF*          0.326  0.265        Asym OF*     0.072  0.072
Asym OF*     1.000  1.000        V*           1.000  1.000

The results in Table 4.2 show that the model of Sokalska, Chanda, and Engle (2005) augmented by the inclusion of standardised VIX futures total trading volume is statistically superior to the competing candidate models under the QLIKE loss function in generating forecasts of q_{t,i}, while no model is excluded under the MSE loss function. However, the motivation for this Chapter lies not in generating forecasts of q_{t,i} but of r^2_{t,i}, the intraday volatility over the next trading period.


It is still unknown whether superior forecasting of the variable q_{t,i} leads on to superior forecasts of the more economically important variable r^2_{t,i}. As all of the candidate models share the same forecast of the daily level of volatility and also the intraday periodicity, the q_{t,i} forecast is all that separates their forecasts of the level of intraday volatility. It is now examined whether differing forecast ability for q_{t,i} leads to superior forecasts of r^2_{t,i}; in this examination, it is also queried whether modelling q_{t,i} is necessary at all, by including the forecasts of intraday volatility generated under the assumption that q_{t,i} = 1, ∀t, i.

Table 4.3 contains the MCS results when the object of interest is r^2_{t,i} and, as with the forecasts of q_{t,i}, no model may be excluded from the MCS at normal significance levels under the MSE loss function for either the range or semi-quadratic statistic. Interestingly, the highest ranked model under this loss function is that which assumes q_{t,i} = 1, ∀t, i, a result that implies there is no benefit in forecasting intraday volatility from explicitly modelling the q_{t,i} variable. However, when examining the results under the QLIKE loss function, which has more power in separating volatility forecasts, this result no longer holds.

There are three broad results under the QLIKE loss function. Firstly, the assumption of q_{t,i} = 1, ∀t, i leads to statistically inferior forecasts of intraday volatility relative to the majority of models under both the range and semi-quadratic statistics. This finding demonstrates that there is a benefit to updating forecasts of volatility within the trading day, rather than assuming that the daily and periodic components of volatility alone are enough to generate accurate forecasts of intraday volatility.

Secondly, all of the kernel based forecasting models are excluded from the MCS


under the range statistic, with p-values of less than 0.01. However, under the semi-quadratic statistic, the p-value for the kernel approach without the derivatives market measures is 0.123, which implies borderline inclusion in the MCS. Hence, it is clear that the kernel-based approach to intraday volatility forecasting is typically inferior to the GARCH-style specification under the QLIKE loss function.

Finally, the inclusion of the derivatives market information measures in the Sokalska, Chanda, and Engle (2005) framework leads to superior forecasting performance. All four of the proposed measures rank higher than the standard specification, and the standard model cannot be included in the MCS under the range statistic, with a p-value of 0.008. However, it may be included in the MCS under the semi-quadratic statistic, with a p-value of 0.160. While the order flow variable models were the two highest ranked, and the asymmetric specification designed to capture good and bad news releases was the highest ranked overall, the difference from the other derivatives market measures was not statistically significant.

Overall, the conclusions for forecasting r^2_{t,i} are relatively clear under the QLIKE loss function: there is a benefit to modelling the intraday stochastic component of volatility rather than assuming it is merely a product of daily volatility and a periodic pattern; the kernel method is typically statistically inferior to the Sokalska, Chanda, and Engle (2005) framework9; and the inclusion of the derivatives market measures in the Sokalska, Chanda, and Engle (2005) framework leads to statistically superior forecast performance.

9The poor performance of the kernel-based approach at the intraday level is at odds with its success at the daily horizon. This is thought to be due to the fact that there exist trends at the daily level (announcements of economic data on certain dates and times, etc.) that are not as apparent at an intraday level.

Table 4.3: Model Confidence Set results for the forecasts of r^2_{t,i}.

MCS Results

MSE                              QLIKE
Model        TR     TSQ          Model        TR     TSQ
Kernel VOL*  0.302  0.149        Kernel OF*   0.002  0.012
Kernel OF*   0.302  0.163        Kernel VOL*  0.007  0.015
Kernel VIX*  0.394  0.250        Kernel VIX*  0.007  0.015
Kernel       0.395  0.329        q = 1        0.007  0.015
OF*          0.395  0.465        Kernel       0.008  0.123
VIX*         0.395  0.476        Standard     0.008  0.160
Standard     0.395  0.476        V*           0.124  0.379
Asym OF*     0.395  0.476        VIX*         0.124  0.379
V*           0.408  0.476        OF*          0.925  0.925
q = 1        1.000  1.000        Asym OF*     1.000  1.000

The above empirical results generally reinforce prior research into the relationship between derivatives markets and spot market volatility, but also provide

new results in regard to forecasting volatility at a high frequency. Similar to prior studies, shocks to trading volume in the futures market lead to increases in spot market volatility. The inclusion of this variable as an exogenous regressor in the model of Sokalska, Chanda, and Engle (2005) resulted in the best in-sample fit and also the best out-of-sample forecasts of the intraday stochastic component of volatility, q_{t,i}. Further, the inclusion of the derivatives-market-based information measures led to statistically superior out-of-sample forecast performance for the level of intraday total volatility, the ostensibly more economically important variable. Each of the four proposed measures outranked the standard Sokalska, Chanda, and Engle (2005) specification, with the difference being statistically significant under the QLIKE loss function and range statistic. In addition, the asymmetric specification, designed to capture the stronger lead from futures markets in poor economic times demonstrated by prior research, was the best performing model, although it could not be statistically separated from the other derivatives market measures. Finally, the kernel-based methods were typically inferior to the Sokalska, Chanda, and Engle


(2005) specifications, with or without the derivatives market measures.

4.5 Conclusion

There is a large amount of literature examining the forecasting performance

of time-series volatility models at the daily or monthly horizon. Despite being

of use for derivative traders, hedge funds, and trading strategies, there are

far fewer specifications for modelling intraday volatility. This Chapter exam-

ines a framework that allows for the updating of intraday volatility forecasts

within the same trading day. This framework may accommodate the inclu-

sion of exogenous regressors that may be of use in forecasting volatility, and

four measures derived from both the options and futures markets are investigated

for their incremental value over historical information alone. An alternative

specification that utilises a semi-parametric approach, which may also accom-

modate the inclusion of the derivatives markets information, was also examined.

For the in-sample estimation results, it was found that the addition of the

derivatives markets information led to a statistically significant improvement

over the original Sokalska, Chanda, and Engle (2005) specification. All of the

proposed derivatives market based exogenous regressors had statistically

significant t-statistics for the individual coefficients and p-values from likelihood

ratio tests below standard significance levels. Further, the superior in-sample

fit translated to the out-of-sample forecasting results.

Under the Mean-Square-Error loss function, no models were excluded from the

Model Confidence Set results for forecasting the intraday stochastic component

q{t,i}. However, using the QLIKE loss function did result in only one standout

model being included in the Model Confidence Set for the forecasting of the

intraday stochastic component of volatility; this was the GARCH style

specification augmented by the inclusion of a measure of VIX futures total trading

volume. While the superior forecasting of the intraday stochastic component of

volatility is an encouraging result, the true object of interest is the statistical

performance of forecasting total volatility.

In evaluating forecasts of r2{t,i}, the MSE loss function was unable to sepa-

rate out any of the competing forecasts of volatility under either the range or

semi-quadratic statistic. However, the QLIKE loss function was able to sepa-

rate out the candidate models to lead to three conclusions. The rejection from

the MCS of the q{t,i} = 1,∀t, i specification implies that there exists a bene-

fit to modelling the intraday stochastic component of volatility in generating

forecasts of total intraday volatility. The majority of the kernel based models

were also rejected from the MCS and were typically inferior to the GARCH

style specification. Finally, all four of the proposed derivatives market

measures led to statistically significant improvements in out-of-sample forecasting

under the range statistic when included in the Sokalska, Chanda, and Engle

(2005) framework.


Chapter 5

Forecasting Equicorrelation

5.1 Introduction

Previous Chapters have documented a significant amount of research into the

merits of utilising implied volatility in univariate volatility forecasting. While

the relative forecasting performance of time-series and implied volatility based

models in the univariate setting is well studied, this is not true of the multivari-

ate case. That is, few studies have directly investigated whether options market

implied information may be as beneficial in a multivariate framework as it has

been proven to be in the univariate setting. Given the practical importance of

accurate multivariate volatility forecasts, such as for use in optimal portfolio

allocation decisions, this Chapter is motivated by an attempt to partially fill that

void. Some background on multivariate volatility models is now given before

detailing how the information implicit in options markets may be of use in these

models.

Recently in the multivariate volatility literature, there has been a developing

interest in the utilisation and modelling of the average correlation, or equicor-

relation, of assets; this equicorrelation is defined as the mean of the off-diagonal


elements of a correlation matrix. This is in contrast to the bulk of earlier stud-

ies into modelling conditional covariance matrices which attempt to forecast

each of the individual off-diagonal elements. There are several reasons why the

equicorrelation variable may be of importance in the financial economics liter-

ature.

To begin with, the concept of equicorrelation was developed nearly forty years

ago by Elton and Gruber (1973) who showed that assuming all of the off-

diagonal pairs of a correlation matrix were equal led to superior portfolio

allocation results and also reduced estimation error. More recently, Pollet and

Wilson (2010) develop a theoretical argument, and provide empirical evidence

supporting their theory, for why the equicorrelation of a stock market index is

strongly related to future market returns while stock market variance is not.

Relatedly, Driessen, Maenhout, and Vilkov (2009) conduct an empirical exercise

showing that the entire S&P 100 Index variance risk premium is attributable

to the correlation risk premium.

The concept of equicorrelation may also be of use for portfolio managers inter-

ested in assessing the level of diversification amongst their assets. The equicor-

relation of a portfolio is the only scalar measure the author is aware of that

summarises the degree of interdependence within the portfolio and hence di-

versification benefits. Forecasts of equicorrelation may then provide portfolio

managers with a simple guide to the interrelationships of their portfolio constituents

into the future that may be more readily interpretable than forecasting each of

the potentially numerous individual correlation pairs.

Equicorrelation, or at least the equicorrelation implied by options markets,

is also of relevance in derivatives markets. The return on a strategy known as

dispersion trading, where one goes long an option on a basket of assets and


shorts options on each of the constituents, is dependent only on correlations

after each of the individual options are delta hedged. It is common to make

the assumption that all of these correlations are equal, resulting in the value of

the position depending upon the evolution of the implied equicorrelation alone

(Engle and Kelly, 2008). Partially motivated by its use in dispersion trading,

the Chicago Board Options Exchange (CBOE) has published the Implied Correlation

Index, the mean correlation of the S&P 500 Index over the following 22 trading

days, since July 2009 (CBOE, 2009). Therefore, in addition to being used in

forming expectations of market returns, equicorrelation is also of direct use in

a popular derivatives trading strategy.

From an econometric perspective, the assumption of equicorrelation is beneficial

in that it imposes structure on problems that are otherwise intractable. The fol-

lowing discussion will demonstrate that for the majority of existing MGARCH

models it is a requirement that the length of the available time-series is sig-

nificantly larger than the number of assets in the portfolio if reliable model

estimates are to be attained; this is problematic for large portfolios such as the

S&P 500 Index, as the available time-span for estimation is typically limited

to that of the shortest-lived stock within the portfolio, which is conceivably

quite short; for example, even the very large firm Kraft Inc. has only been a

publicly traded firm since mid-2007.

The utilisation of equicorrelation can circumvent this restriction and allows for

solutions to problems that would otherwise be intractable.

Given the theoretical and empirical relevance of equicorrelation just described,

a brief overview of previous literature that informs the equicorrelation fore-

casting problem is now provided, including how options market information

may be incorporated. While the focus here is on forecasting equicorrelation,

there has been significant interest in correlation forecasting more generally and


a large range of competing candidate models exist that may be utilised here.

It is beyond the scope of this Chapter to provide a thorough review of all

of these models1; instead, guidance on filtering the list of candidate models

is found in the review of Multivariate Generalised Autoregressive Conditional

Heteroscedasticity (MGARCH) models by Silvennoinen and Terasvirta (2009).

They state that an ideal time-series model of conditional covariance or corre-

lation matrices faces competing requirements; while the specification must be

flexible enough to model the dynamic structure of variances and covariances, it

is also desirable to remain parsimonious for the purposes of estimation.

The Dynamic Conditional Correlation (DCC) model of Engle (2002), adapted

for consistent estimation by the cDCC model of Aielli (2009), allows for the

forecasting of conditional correlations with the optimisation of just two param-

eters while still retaining a reasonable degree of flexibility; it is this model and

variations thereof that are utilised here for generating equicorrelation forecasts.

In addition to meeting the criteria of flexibility and parsimony, the cDCC model

has also become a benchmark in the correlation forecasting literature and pro-

vides a natural starting point to which competing forecasts will be compared.

Motivated by some of the reasons already outlined, and to circumvent some

estimation issues that will be expounded on shortly in Section 5.2, Engle and

Kelly (2008) propose two models of equicorrelation; one of these is related to

the cDCC specification just mentioned, the Dynamic Equicorrelation (cDCC-

DECO) model, while the other may be considered independent, the Linear

Dynamic Equicorrelation (LDECO) model. Both of these models are similar in

functional form, but differences in the approach to measuring equicorrelation

1 In addition to the articles surveyed in Section 2.7, theoretical surveys already exist in Bauwens, Laurent, and Rombouts (2006) and Silvennoinen and Terasvirta (2009), while Laurent, Rombouts, and Violante (2010) conduct an extensive empirical comparison of the out-of-sample forecast performance of 125 conditional correlation models while also discussing some of the theoretical properties of the models they consider.


result in the LDECO model possessing additional flexibility by allowing the

constituents of the portfolio of interest to enter and exit freely, and also the

number of portfolio constituents to change. The functional form of these mod-

els also plays an important role in being able to investigate potential avenues

for improving equicorrelation forecasts.

In the following, it will be seen that the LDECO model is similar in func-

tional form to the previously discussed GARCH model for univariate volatility.

This fact allows for similar extensions in the multivariate context to those al-

ready analysed in the univariate setting. That is, the LDECO model may be

adapted to include exogenous regressors as additional explanatory variables for

the purposes of modelling equicorrelation. This process is analogous to the

inclusion of the VIX or the realised volatility in the GARCH(1,1) model of

univariate volatility, as executed by, amongst others, Blair, Poon, and Taylor

(2001). Given the focus of this dissertation, this Chapter proposes an investi-

gation of whether an option implied measure of equicorrelation is of marginal

benefit within the LDECO framework for the purposes of forecasting.

As has been covered in the previous Chapters regarding univariate volatility

forecasting, forecasts generated by the options market implied volatility (IV)

can contain information incremental to that in the market for the underlying.

The standard argument for the inclusion of such implied measures is that as

options are priced with reference to a future-dated payoff, an efficient options

market should incorporate historical information in addition to a forecast of

variables relevant to the pricing of the options. The use of IV has been re-

ported by Poon and Granger (2003) to outperform time-series based forecasts

in the majority of research that they reviewed. Silvennoinen and Terasvirta

(2009) also find that a particular measure of IV, the VIX, is important in

forecasting conditional covariance. While this result is not directly related to


equicorrelation, it does highlight a potential link between options markets and

future levels of correlation.

These findings motivate the current study of whether similar advantages may

be found in the multivariate setting of conditional equicorrelation forecasting.

It is shown below that if one makes the assumption of equicorrelation in the

options market, as is done in dispersion trading mentioned previously, then it

is possible to calculate the level of implied equicorrelation which may be used

as a competitor to measures of equicorrelation based on historical data alone.
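Under the equicorrelation assumption, the index variance can be written as the sum of the weighted own-variance terms plus a single correlation multiplying all cross terms, and this identity can be inverted for the implied equicorrelation. The sketch below illustrates that inversion with made-up weights and implied volatilities; the precise construction used in this Chapter is detailed below in the text.

```python
import numpy as np

def implied_equicorrelation(index_iv, constituent_ivs, weights):
    """Back out the implied equicorrelation from index and constituent
    implied volatilities, assuming all pairwise correlations are equal.

    Inverts: sigma_I^2 = sum_i w_i^2 s_i^2 + rho * sum_{i != j} w_i w_j s_i s_j
    """
    w = np.asarray(weights, dtype=float)
    s = np.asarray(constituent_ivs, dtype=float)
    own_var = np.sum(w**2 * s**2)                      # sum_i w_i^2 s_i^2
    cross = np.sum(np.outer(w * s, w * s)) - own_var   # sum_{i != j} w_i w_j s_i s_j
    return (index_iv**2 - own_var) / cross

# Illustrative numbers only: three assets, equal weights
rho = implied_equicorrelation(index_iv=0.20,
                              constituent_ivs=[0.25, 0.30, 0.28],
                              weights=[1/3, 1/3, 1/3])
```

Because index options typically trade at volatilities below the weighted constituent volatilities, the recovered ρ is strictly less than one in practice.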

Similar to including IV in univariate volatility models, the implied equicorrela-

tion may be added to the LDECO specification to test for the marginal benefit

in forecasting equicorrelation. This gives rise to the final key research question

addressed in this thesis: Does the use of implied measures lead to superior fore-

cast performance for the mean level of correlation?

Aside from the investigation into implied equicorrelation, the potential

benefit of realised measures of correlation is also studied.

In the univariate volatility forecasting literature, it is now well known that Re-

alised Volatility (RV) provides a superior measure of the latent volatility of an

asset relative to alternatives such as the square of daily returns; this fact was

covered in detail in Section 2.2.1. When added as an exogenous regressor to

GARCH models, Blair, Poon, and Taylor (2001) find that the coefficient acting

on RV has statistically significant explanatory power, suggesting that the

RV contains information relevant to forecasting univariate volatility above that

contained in daily returns alone.

The use of realised measures of latent variables has recently been extended into

the multivariate setting by, for example, Barndorff-Nielsen, Hansen, Lunde,

and Shephard (2010) and Corsi and Audrino (2007). In these papers, it is


shown that utilising high-frequency intraday data provides superior estimates

of the level of latent covariance between assets relative to estimates from daily

returns, although one must be wary of market microstructure effects. A time-

series model for correlation utilising intraday data has been put forth by Corsi

and Audrino (2007), who extend the univariate Heterogeneous Autoregressive

model for RV to its multivariate analogue and demonstrate promising results

in the bi-variate setting.

As shall be detailed, the existing measure of equicorrelation utilised in the

LDECO model of Engle and Kelly (2008) is based upon the daily closing

price returns of the portfolio constituents. The success of RV in the univariate

framework, and the promising results for multivariate realised measures just

described motivate an investigation of whether a realised measure of equicor-

relation may be utilised to improve the forecast performance of the LDECO

model. Three alternative measures of equicorrelation are proposed here that

are based upon intraday data and may be substituted into the LDECO model

in place of the daily returns based measure; this allows for an examination of

whether the use of high-frequency based measures offers an improvement in the

equicorrelation setting similar to their benefit in the univariate context.

Two of these measures are based upon the realised (co)variance technologies

while the third is a non-parametric estimate of equicorrelation, the mean level

of Spearman rank correlation.

Hence, motivated by prior results in the univariate volatility forecasting lit-

erature, this Chapter investigates two potential improvements to an existing

framework for modelling equicorrelations. The LDECO model will be adapted

firstly to utilise a forecast of equicorrelation from the options market, and sec-

ondly to utilise high-frequency intraday measures of realised equicorrelation.

The in-sample fit of these adaptations shall be evaluated using traditional tests


of variable significance, likelihood ratio tests for the nested specifications, and

the Vuong likelihood ratio test for non-nested models. Whether the proposed

modifications also generate an improvement in the out-of-sample forecast per-

formance shall also be examined.

In addressing the current research question, forecasts of equicorrelation are gen-

erated for up to a 22-trading-day ahead horizon using ten models that include

existing time-series specifications and the proposed amendments. To evaluate

the forecast performance of these models, the Model Confidence Set (MCS)

methodology of Hansen, Lunde, and Nason (2003, 2010) is again utilised. The

MCS has been utilised previously evaluating multivariate conditional correla-

tion by Laurent, Rombouts, and Violante (2010). An interesting result of the

their paper in the context of this Chapter is that in turbulent times the DECO

model, which is closely linked to the LDECO model employed here, dominates

among cDCC models. That is, the assumption of equicorrelation generates

forecasts that outperform less restricted models of conditional correlation, even

when the object of interest is not the equicorrelation.

The results of this Chapter are that the utilisation of a realised measure of

equicorrelation improves the in-sample fit of the LDECO model relative to the

original measure based on daily closing price returns. However, other in-sample

results show that the inclusion of the implied equicorrelation subsumes the

information content of all competing realised and daily measures of equicor-

relation. In the out-of-sample forecasting exercise, the superior performance

of the implied equicorrelation based models disappears relative to the realised

measures, although they still outperform the daily returns based measures of

equicorrelation. The best out-of-sample forecasts are universally generated by

the measure of equicorrelation based on the realised covariance technology.


The Chapter proceeds as follows. Section 5.2 provides an overview of the nest-

ing framework and the models considered in this Chapter, Section 5.3 describes

how forecast performance is statistically evaluated, Section 5.4 details the data

utilised, Section 5.5 presents and analyses the results and Section 5.6 concludes.

5.2 General Framework and Models Considered

This Section begins with a brief description of the general framework that nests

the problem considered and a brief review of some of the previously discussed

MGARCH models. As defined in Bollerslev (1990) and Engle and Kelly (2008),

the multivariate conditional distribution of asset returns, when assumed to be

Gaussian, can be written as

r_t \mid \mathcal{F}_{t-1} \sim N(0, H_t), \qquad H_t = D_t R_t D_t, \qquad (5.1)

where D_t is the diagonal matrix of conditional standard deviations and R_t

is a conditional correlation matrix. The multivariate Gaussian log-likelihood

function is given by

L = -\frac{1}{2} \sum_{t=1}^{T} \left( n \log(2\pi) + \log|H_t| + r_t' H_t^{-1} r_t \right)

  = -\frac{1}{2} \sum_{t=1}^{T} \left( n \log(2\pi) + 2 \log|D_t| + r_t' D_t^{-2} r_t - \bar{r}_t' \bar{r}_t \right) - \frac{1}{2} \sum_{t=1}^{T} \left( \log|R_t| + \bar{r}_t' R_t^{-1} \bar{r}_t \right),

L = L_{Vol}(\theta) + L_{Corr}(\theta, \Phi), \qquad (5.2)

where \bar{r}_t are the volatility-standardised returns given by the n × 1 vector

\bar{r}_t = D_t^{-1} r_t, and n is the number of assets under consideration.


The parameters of the above log-likelihood function may be estimated via a

quasi-maximum likelihood procedure, with the optimisation able to be decom-

posed into an optimisation over the volatility specific parameters, θ, and a

secondary optimisation over the correlation parameters, Φ, which depend on

the volatility specific parameters through the standardised returns.
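This two-step structure can be sketched in code. In the sketch below the GARCH(1,1) parameters are fixed at illustrative values rather than estimated, a crude grid search stands in for a proper numerical optimiser, and `corr_negloglik` is a placeholder for whichever correlation log-likelihood (cDCC, DECO, or LDECO) is being maximised; it is not the estimator used in this Chapter.

```python
import numpy as np

def garch11_filter(r, omega, alpha, beta):
    """First step: filter conditional variances for one asset (GARCH(1,1))."""
    h = np.empty_like(r)
    h[0] = np.var(r)
    for t in range(1, len(r)):
        h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
    return h

def two_step_qml(returns, corr_negloglik, phi_grid):
    """Sketch of the decomposition L = L_Vol(theta) + L_Corr(theta, Phi):
    volatilities are handled asset-by-asset, then the correlation
    parameters Phi are chosen on the standardised returns alone."""
    T, n = returns.shape
    std_resid = np.empty_like(returns)
    for i in range(n):
        # Illustrative fixed parameters; in practice theta is estimated by QML
        h = garch11_filter(returns[:, i], omega=0.05, alpha=0.05, beta=0.90)
        std_resid[:, i] = returns[:, i] / np.sqrt(h)
    # Second step: minimise the correlation negative log-likelihood over Phi
    losses = [corr_negloglik(phi, std_resid) for phi in phi_grid]
    return phi_grid[int(np.argmin(losses))], std_resid
```

The key point is that the second step only ever sees the standardised returns, which is what permits the separate treatment of θ and Φ.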

The focus here is on forecasting equicorrelation and not on producing the best

forecasts of the univariate volatility of each asset; that issue was addressed in

Chapter 3. Hence, while numerous choices exist for maximising the L_{Vol}

component of the above log-likelihood function, it is of secondary importance here.

Therefore, the lead of Engle and Kelly (2008) is followed and it is assumed

that conditional volatilities follow a GARCH(1,1) process. This allows the focus

to be placed solely on the maximisation of the L_{Corr} component of the

above log-likelihood function; this is essentially a question of the most appro-

priate model choice for the evolution of the conditional correlation matrix, Rt.

Further, the in-sample log-likelihood results that are presented in Section 5.5

consist of comparisons of the L_{Corr}(θ, Φ) component of Equation 5.2 alone. A

discussion of the possible choices for modelling the evolution of Rt is now given.

While numerous time-series models have been proposed for the forecasting of

Rt, the first candidate model discussed here is the consistent Dynamic Con-

ditional Correlation (DCC) model of Aielli (2009). As has been discussed in

Section 5.1, this choice is based on the criteria that an MGARCH model must be

flexible enough to model the dynamic structure of variances and covariances,

yet remain parsimonious for the purposes of estimation. The cDCC model

allows for time-varying pairwise correlations to be optimised across only two

parameters; that is, Φ is a 2 × 1 vector, although the model still requires the

estimation of three parameters for each of the n univariate GARCH(1,1) models. Under the


cDCC model, the conditional correlation matrix is given by

R_t^{cDCC} = Q_t^{*\,-1/2} Q_t Q_t^{*\,-1/2}, \qquad (5.3)

where Q_t has the following dynamics

Q_t = \bar{Q}(1 - \alpha - \beta) + \alpha Q_{t-1}^{*\,1/2} \bar{r}_{t-1} \bar{r}_{t-1}' Q_{t-1}^{*\,1/2} + \beta Q_{t-1}, \qquad (5.4)

where \bar{Q} is the unconditional correlation matrix, Q_t^* replaces the off-diagonal

elements of Q_t with zeros but maintains its principal diagonal2, and the following

conditions must hold to ensure stationarity: α > 0, β > 0, α + β < 1.
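The cDCC recursion can be sketched compactly as below. For illustration, the unconditional matrix is proxied by the sample correlation of the standardised returns; this is an assumption of the sketch, and Aielli (2009) discusses the appropriate estimator.

```python
import numpy as np

def cdcc_filter(std_resid, alpha, beta):
    """Filter the cDCC correlation matrices R_t from T x n
    volatility-standardised returns (Equations 5.3 and 5.4). Sketch only:
    Q_bar is proxied by the sample correlation of the standardised returns."""
    T, n = std_resid.shape
    Q_bar = np.corrcoef(std_resid, rowvar=False)
    Q = Q_bar.copy()
    R = np.empty((T, n, n))
    for t in range(T):
        q_star = np.sqrt(np.diag(Q))            # diag(Q_t)^{1/2}
        R[t] = Q / np.outer(q_star, q_star)     # rescale Q_t to a correlation matrix
        e = q_star * std_resid[t]               # Q*_t^{1/2} applied to r_bar_t
        Q = Q_bar * (1 - alpha - beta) + alpha * np.outer(e, e) + beta * Q
    return R
```

Note that each step requires full n × n matrix operations, which is the computational burden the equicorrelation assumption removes.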

Similar in structure to the univariate GARCH model, the cDCC model al-

lows for an unconditional correlation matrix, or correlation targeting, as well

as an innovation term on the lagged volatility-standardised residuals, and a

persistence term for lagged values of Qt. The cDCC model is attractive given

its analytical tractability, flexibility, and low number of parameters; however,

for the practical applications for which portfolio managers require solutions,

the cDCC model begins to falter. As can be seen in Equation 5.2, the optimi-

sation process requires finding the inverse and determinant of potentially very

large matrices. While alternatives may be found for taking the inverse, such

as Gauss-Jordan elimination, the author is not aware of any such alternative

pathways for the determinant. As the calculation of these functions must be

repeated at each time step for each iteration of the optimisation algorithm, the

estimation procedure can quickly become cumbersome.

2 Q_t^* = Q_t ◦ I, where I is the n × n identity matrix, and ◦ denotes the Hadamard product.


5.2.1 The Linear Dynamic Equicorrelation Model

For some of the reasons outlined in Section 5.1, and to circumvent the computa-

tional issue just described, Engle and Kelly (2008) make use of the simplifying

assumption of equicorrelation in proposing an alternative means of modeling

the conditional correlation matrix. For each point in time, all off-diagonal ele-

ments of the conditional correlation matrix are assumed to have the same value,

the equicorrelation scalar ρt; the dynamics of this equicorrelation are then the

object of interest. Engle and Kelly (2008) propose that this variable may be

modelled by firstly using the cDCC specification to generate the conditional

matrices, Qt, and then taking the mean of the off-diagonal elements of these

matrices; this results in a time-series vector of estimated equicorrelations. This

approach is termed the Dynamic Equicorrelation (cDCC-DECO) model and

results in the equicorrelation scalar being given by

\rho_t^{DECO} = \frac{2}{n(n-1)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{q_{i,j,t}}{\sqrt{q_{i,i,t}\, q_{j,j,t}}}, \qquad (5.5)

where q_{i,j,t} is the (i, j)th element of the matrix Q_t from the cDCC model given

before. The resulting equicorrelation scalar may then be used to create the

conditional correlation matrix

R_t = (1 - \rho_t) I_n + \rho_t J_n, \qquad (5.6)

where Jn is the n× n matrix of ones.
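Equations 5.5 and 5.6 translate directly into code:

```python
import numpy as np

def equicorrelation_from_Q(Q):
    """Mean of the off-diagonal correlations implied by a cDCC matrix Q_t
    (Equation 5.5)."""
    n = Q.shape[0]
    d = np.sqrt(np.diag(Q))
    corr = Q / np.outer(d, d)       # q_ij / sqrt(q_ii q_jj)
    off_diag_sum = corr.sum() - n   # the diagonal entries are all 1
    return off_diag_sum / (n * (n - 1))

def equicorrelation_matrix(rho, n):
    """R_t = (1 - rho) I_n + rho J_n (Equation 5.6)."""
    return (1 - rho) * np.eye(n) + rho * np.ones((n, n))
```

Summing all off-diagonal correlations and dividing by n(n − 1) is equivalent to the double sum in Equation 5.5, since each pair (i, j) appears twice in a symmetric matrix.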

The assumption of equicorrelation employed by the DECO model significantly

decreases estimation time by allowing for analytical solutions to both the inverse

and determinant of the conditional correlation matrix, Rt, to be substituted into

the log-likelihood function given in Equation 5.2; these are given respectively


by Equations 5.7 and 5.8 below

R_t^{-1} = \frac{1}{1 - \rho_t} \left( I_n - \frac{\rho_t}{1 + [n-1]\rho_t} J_n \right), \qquad (5.7)

and

|R_t| = (1 - \rho_t)^{n-1} (1 + [n-1]\rho_t), \qquad (5.8)

where \rho_t is the equicorrelation from Equation 5.5, and I_n is the n-dimensional

identity matrix; the inverse R_t^{-1} exists iff \rho_t ≠ 1 and \rho_t ≠ -1/(n-1), and R_t is

positive definite iff \rho_t ∈ (-1/(n-1), 1).

While this has the advantage of simplifying estimation, it still possesses some

drawbacks that prevent it from being the model of choice here. Again consider

the practical perspective of a portfolio manager: an important limitation of the

cDCC, DECO, and MGARCH models in general is that they are unable to han-

dle changes in portfolio composition or the number of assets in the portfolio.

For example, this occurs quite regularly for the S&P 500 Index, where the portfolio

constituents may change frequently3. Hence, Engle and Kelly (2008) propose a

variation of the DECO model which allows for such changes in constituency to

be accommodated; the Linear DECO (LDECO) model may be written generally

as

\rho_t = \omega + \alpha X_{t-1} + \beta \rho_{t-1}, \qquad (5.9)

where Xt is a measure of equicorrelation on day t. It may be observed that the

LDECO model of conditional equicorrelation is similar in functional form to

the GARCH(1,1) model of univariate volatility; the coefficient α is the weight

placed on the innovation term X_{t-1}, and β is the weight placed on the

persistence term, ρt−1.

3 In the 12 months from June 1st 2010 to June 1st 2011, 11 firms were removed from the Index.
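Since Equation 5.9 mirrors a GARCH(1,1) recursion, multi-step forecasts can be built the same way. The sketch below assumes the measure X_t is unbiased for ρ_t, so beyond one step X is replaced by the model's own forecast; the parameter values are illustrative only, not estimates from this Chapter.

```python
def ldeco_forecast(rho_t, x_t, omega, alpha, beta, horizon):
    """Iterate rho_{t+1} = omega + alpha * X_t + beta * rho_t (Equation 5.9).
    Beyond one step the unobserved measure X is replaced by the model's own
    equicorrelation forecast, as in standard GARCH multi-step forecasting."""
    forecasts = []
    rho_next = omega + alpha * x_t + beta * rho_t   # one-step-ahead
    forecasts.append(rho_next)
    for _ in range(horizon - 1):
        rho_next = omega + (alpha + beta) * rho_next
        forecasts.append(rho_next)
    return forecasts

# Illustrative parameters; forecasts revert toward omega / (1 - alpha - beta)
path = ldeco_forecast(rho_t=0.30, x_t=0.35, omega=0.02,
                      alpha=0.10, beta=0.85, horizon=22)
```

The 22-step horizon matches the forecast horizon used in the empirical exercise of this Chapter.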


The model may also be considered independent of the cDCC model; whereas

the DECO model is built on the matrices Qt from the cDCC specification, the

LDECO model is an autoregressive form estimated on historical measures of

equicorrelation alone, no output from the cDCC or any other MGARCH model

is required.

The specification given in Equation 5.9 is general in that it defines Xt only

as a measure of equicorrelation; the calculation of this variable may take a

number of forms. In proposing the LDECO model, Engle and Kelly (2008)

note that “key in this approach is extracting a measurement of the equicorre-

lation in each time period using a statistic that is insensitive to the indexing of

assets in the return vector” (Engle and Kelly, 2008, p. 13); that is, what is the

best choice of the measure Xt? Engle and Kelly (2008) propose a statistic that

they argue fulfills this criterion, their statistic and the alternatives proposed by

this Chapter are now discussed.

The measure of equicorrelation proposed by Engle and Kelly (2008), which may

be substituted into Equation 5.9 for the variable Xt, is based on the volatility-

standardised daily closing price returns of each of the portfolio constituents and

is given by

u_t = \frac{\left[ \left( \sum_i r_{i,t} \right)^2 - \sum_i r_{i,t}^2 \right] / [n(n-1)]}{\sum_i r_{i,t}^2 / n}. \qquad (5.10)

The equicorrelation innovation term, ut, can be decomposed into an estimate

of the covariance of returns, the numerator, and an estimate of the variance

for all assets, the denominator. As rt are volatility-standardised returns they

should have unit variance and, therefore, the numerator should be a correla-

tion estimate. However, the numerator is not technically restricted to lie in the

range that ensures positive definiteness of Rt, and it lacks robustness to devia-

tions from unity for the conditional variance estimate (Engle and Kelly, 2008).


However, Engle and Kelly (2008) then demonstrate that the denominator of ut

standardises this covariance estimate by an estimate of the common variance;

this ensures that ut lies within the range necessary for positive definiteness of

the correlation matrix.
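As a concrete illustration, the calculation in Equation 5.10 may be sketched in Python (a sketch only; the vector of volatility-standardised returns is assumed to be supplied, e.g. from first-stage univariate GARCH models):

```python
import numpy as np

def engle_kelly_u(r):
    """Engle-Kelly daily equicorrelation innovation u_t (Equation 5.10).

    r : 1-D array of the n volatility-standardised closing-price returns
        for day t (hypothetical input; standardisation happens upstream).
    """
    n = r.size
    # Numerator: cross products averaged over the n(n-1) ordered pairs,
    # i.e. a covariance estimate across constituents.
    num = (r.sum() ** 2 - (r ** 2).sum()) / (n * (n - 1))
    # Denominator: common variance estimate; standardising by it keeps u_t
    # inside the range needed for positive definiteness of R_t.
    den = (r ** 2).sum() / n
    return num / den
```

For perfectly comoving standardised returns the statistic equals one, while for n = 2 exactly offsetting returns give the lower bound of −1/(n−1).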

It is here that the first adaptation to the LDECO model is proposed. It has been

demonstrated in previous Chapters that the realised volatility methodology is

the superior approach for measuring latent univariate volatility. Similar to

the univariate case, recent work by, among others, Barndorff-Nielsen, Hansen,

Lunde, and Shephard (2010), and Corsi and Audrino (2007) has demonstrated

that the utilisation of high-frequency intraday data provides superior measures

of the interrelationships between assets relative to alternatives such as clos-

ing price returns. These findings motivate an examination of whether high-

frequency measures of equicorrelation generate superior forecasts relative to

the daily returns measure proposed by Engle and Kelly (2008).

The argument for using a realised measure of correlation within a time-series

model is not new, and has been applied by Corsi and Audrino (2007) in their

multivariate Heterogeneous Autoregressive model for conditional correlation

matrices, although this is the first occasion where the argument has been applied

in the context of equicorrelation. To implement the proposed modification, the first requirement is to adapt the realised (co)variance technologies to provide an estimate of the realised equicorrelation; three methods are utilised here and are now discussed in turn.

The first proposed realised measure of equicorrelation may be estimated by

using the entire covariance matrix constructed from a multivariate realised mea-

sure. While several alternative multivariate realised measures exist, such as the

realised kernel approach of Barndorff-Nielsen, Hansen, Lunde, and Shephard


(2010), the approach adopted here is similar to that used in Laurent, Rom-

bouts, and Violante (2010) in their empirical study4 and is based on results

from Andersen, Bollerslev, Diebold, and Labys (2003) and Barndorff-Nielsen and Shephard (2004c); the realised covariance (RCOV) on a given day may be

calculated as follows.

Partition each trading day, t, into Lt distinct, non-overlapping trading inter-

vals, denoted by lt = 1, ..., Lt, and define the n-vector of asset returns for the

interval lt by rlt,t. The length of each of these lt periods is allowed to vary

such that all of the asset returns for a given period are non-zero, with the re-

striction that the minimum window length is 15 minutes5 to minimise market

microstructure impacts, such as the Epps (1979) effect. The varying length of

intervals is reflected in the time subscript notation of Lt, as each day may have

a different number of total trading intervals6. The realised covariance matrix

relating to the trading portion on a given day is the sum of the products of the

rlt,t vectors,

4For the purposes of robustness, it should be noted that Laurent, Rombouts, and Violante (2010) compare their results from the approach given in Equation 5.11 (which is slightly different as they use fixed window lengths of 5 minutes in calculating RCOV rather than the adaptive window length employed here) with a realised kernel estimator and find qualitatively similar results. Hence, no realised kernel estimators are employed in this Chapter. Further, the minimum window length in calculating RCOV was also set at 1-, 5-, and 30-minute horizons with no qualitative impact on the results.

5The choice of 15 minutes is partially motivated by the findings of Sheppard (2006), who finds that a minimum length of 10 minutes is sufficient to get unbiased estimates of the correlation between constituents of the DJIA. The choice of 15 minutes rather than 10 is based on it resulting in a whole number of periods within the day.

6Although varying trading interval lengths are allowed, for the overwhelming majority of cases the trading interval is indeed 15 minutes. The average length of time for all assets to have non-zero returns is 2.57 minutes, with the maximum length of time being 108 minutes. This results in the calculation of the RCOV for the most part being closely aligned with the 15-minute fixed window used in the calculation of univariate realised volatility and Spearman rank correlations, which are discussed shortly.


RCOV_t = \sum_{l_t=1}^{L_t} r_{l_t,t} \, r'_{l_t,t},

\tau_{l_t,t} = \min \tau, \quad \text{for } l_t = 1, \ldots, L_t,

\text{s.t. } r_{i,l_t,t} \neq 0 \ \forall i, \qquad \tau_{l_t,t} - \tau_{l_t-1,t} \geq 15, \qquad (5.11)

where \tau_{l_t,t} is the time of the end of the l_t-th period of day t.
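The adaptive partitioning scheme behind Equation 5.11 can be sketched as follows (a simplified illustration that assumes a regular one-minute return grid; the function name and inputs are hypothetical):

```python
import numpy as np

def realised_covariance(minute_returns, min_len=15):
    """Adaptive-interval realised covariance (Equation 5.11) -- a sketch.

    minute_returns : (T, n) array of one-minute log-returns for day t
                     (hypothetical input grid).
    Each interval is at least `min_len` minutes long and is stretched until
    every asset shows a non-zero return, mitigating the Epps (1979) effect.
    """
    T, n = minute_returns.shape
    rcov = np.zeros((n, n))
    start = 0
    while start < T:
        end = min(start + min_len, T)
        # extend the window until all assets have a non-zero interval return
        while end < T and np.any(minute_returns[start:end].sum(axis=0) == 0):
            end += 1
        r = minute_returns[start:end].sum(axis=0)  # interval return vector
        rcov += np.outer(r, r)                     # r_{l_t,t} r'_{l_t,t}
        start = end
    return rcov
```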

Similar to Equation 5.5, where \rho_t^{DECO} is calculated from the off-diagonal elements of the matrix Qt, the realised equicorrelation (REC) may be found by taking the mean of the off-diagonal elements of RCOV,

REC_t = \frac{2}{n(n-1)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} \frac{RCOV_{i,j,t}}{\sqrt{RCOV_{i,i,t}\,RCOV_{j,j,t}}}. \qquad (5.12)

As Equation 5.12 produces a scalar level of equicorrelation, it may be substituted in as the variable Xt in place of the Engle and Kelly measure ut in Equation 5.9.
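Given any realised covariance matrix, the averaging in Equation 5.12 reduces to a few lines; the sketch below assumes RCOV is supplied as a positive definite numpy array:

```python
import numpy as np

def realised_equicorrelation(rcov):
    """Mean of the off-diagonal correlations of RCOV (Equation 5.12)."""
    d = np.sqrt(np.diag(rcov))
    corr = rcov / np.outer(d, d)   # element-wise correlation matrix
    n = rcov.shape[0]
    iu = np.triu_indices(n, k=1)   # the i < j pairs
    return corr[iu].mean()         # equals 2/(n(n-1)) times the double sum
```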

An alternative approach to calculating a realised measure of equicorrelation

is not to use the realised covariance methodology but to utilise the individual

realised volatilities of each of the assets within the relevant portfolio. That is,

instead of directly measuring each correlation pair and then taking an average,

it is possible to form a realised equicorrelation measure from individual RVs

alone. As has been discussed, the RV of an asset is a well-known superior measure of latent volatility relative to alternatives such as daily squared returns (see Section 2.2.1) and is calculated as the sum of squared intraday returns; the RV of an asset on day t is given by

RV_t^{(m)} \equiv \sum_{k=1}^{m} r_{k,t}^2, \qquad k = 1, \ldots, m, \qquad (5.13)


where r_{k,t}^2 is the squared intraday log-return from period k−1 to k for each of the m fixed-length7 periods within day t. Noting that one may calculate the

RV of an index as well as the RVs for each of the index’s constituents, a realised

measure of equicorrelation may be constructed by using the portfolio variance

identity

\sigma^2 = \sum_{i=1}^{n} w_i^2 s_i^2 + 2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} w_i w_j s_i s_j \rho_{i,j}, \qquad (5.14)

and making the assumption of equicorrelation; re-arranging yields

DREC = \frac{\sigma^2 - \sum_{i=1}^{n} w_i^2 s_i^2}{2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} w_i w_j s_i s_j}, \qquad (5.15)

where \sigma^2 is the RV of the index, w_i the weight in that index placed on asset i, and s_i^2 is the RV of asset i. This realised equicorrelation measure is defined

here as DREC, where the D denotes that it only uses the individual RVs, which

are the diagonal elements of a covariance matrix. By using just the individual

RVs to estimate the equicorrelation, a potential benefit of the DREC measure

is that it avoids the Epps effect (Epps, 1979) altogether, whereby estimates of

covariance may be biased downwards due to asynchronous trading. This ap-

proach again produces an equicorrelation scalar that may be substituted in for

the variable Xt discussed previously.
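Under the stated assumptions, Equation 5.15 can be computed from the index RV and the constituent RVs alone; the following sketch uses the square-of-sums identity to avoid the explicit double sum (the function name and inputs are hypothetical):

```python
import numpy as np

def diagonal_realised_equicorrelation(rv_index, rv_assets, weights):
    """DREC from index and constituent RVs alone (Equation 5.15).

    rv_index : realised variance of the index, sigma^2.
    rv_assets: array of constituent realised variances, s_i^2.
    weights  : index weights w_i (hypothetical inputs).
    """
    rv_assets = np.asarray(rv_assets)
    w = np.asarray(weights)
    s = np.sqrt(rv_assets)
    num = rv_index - np.sum(w ** 2 * rv_assets)
    # sum over i<j of w_i w_j s_i s_j via the square-of-sum identity
    ws = w * s
    cross = 0.5 * (ws.sum() ** 2 - (ws ** 2).sum())
    return num / (2.0 * cross)
```

As a sanity check, feeding in an index variance built from the identity in Equation 5.14 with a common correlation recovers that correlation exactly.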

As well as enabling the use of realised (co)variance technologies, the availabil-

ity of high-frequency intraday data allows for a third approach to measuring

equicorrelation. As both the REC and DREC measures are calculated from

raw intraday returns, they may be excessively influenced by large shocks in

returns or excess volatility in a small number of the constituent stocks. A non-

parametric approach insensitive to the magnitude of the largest and smallest

7Based on the research of Hansen and Lunde (2006) and related articles, the RV is calculated based on 15-minute intervals; 1-, 5-, and 30-minute intervals are also used for robustness with no qualitative effect on the results.


returns is the Spearman rank correlation, which examines how the rankings of

returns are related throughout the course of the trading day8. The Spearman

rank correlation between two assets i and j, SRi,j , is calculated from 15-minute

log-returns

SR_{i,j} = 1 - \frac{6 \sum_{k=1}^{m} d_k^2}{m(m^2 - 1)}, \qquad (5.16)

where dk is the difference in rankings of returns for period k for each of the

m 15-minute periods of day t. For use in the current context, the mean of the

off-diagonal elements of this matrix of Spearman rank correlations is then taken

to generate the Spearman rank equicorrelation (SREC)

SREC = \frac{2}{n(n-1)} \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} SR_{i,j}. \qquad (5.17)

The above discussion presents alternative measures of equicorrelation to the

original measure proposed by Engle and Kelly (2008). Rather than utilising

daily closing price returns, three measures based on high-frequency intraday

data may be used instead; REC uses a realised covariance approach, DREC

uses the index and individual asset realised volatilities, and SREC uses a non-

parametric ranking of returns. Given the functional form of the original LDECO

model, each of these approaches may be substituted into Equation 5.9 in place

of the original measure, ut. Essentially, in the functional form ρt = ω + αXt−1 + βρt−1, there now exist four alternative measures of the equicorrelation Xt−1;

8Thanks go to Andrew Harvey for suggesting this measure.


all of this is summarised below:

LDECO: ρt = ω + αut−1 + βρt−1,

REC: ρt = ω + αRECt−1 + βρt−1,

DREC: ρt = ω + αDRECt−1 + βρt−1,

SREC: ρt = ω + αSRECt−1 + βρt−1.

(5.18)
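Each specification in Equation 5.18 is the same linear recursion applied to a different measure; a minimal sketch of that recursion follows (the function name and the choice of starting value are hypothetical):

```python
import numpy as np

def linear_deco_filter(x, omega, alpha, beta, rho0):
    """Fitted path of rho_t = omega + alpha * X_{t-1} + beta * rho_{t-1}.

    x    : series of lagged equicorrelation measures (u_t, REC, DREC or SREC).
    rho0 : starting value (hypothetical choice, e.g. the sample mean of x).
    """
    rho = np.empty(x.size)
    prev = rho0
    for t, x_lag in enumerate(x):
        prev = omega + alpha * x_lag + beta * prev  # one step of Equation 5.9
        rho[t] = prev
    return rho
```

When the measure is constant and the recursion starts at its fixed point, the fitted path stays flat, a quick check on the filter.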

This Chapter examines the empirical performance of each of the above specifi-

cations as well as any benefit that may exist from the incorporation of measures

of equicorrelation implied by the options market; how such information may be

utilised is now discussed.

5.2.2 Incorporating Implied Equicorrelation

As well as proposing realised measures of equicorrelation in place of the ut mea-

sure defined in Equation 5.10, this Chapter also investigates whether a measure

of equicorrelation based on information implicit in options markets is able to

offer superior forecasting power to models based on historical returns alone.

The motivation for this extension to the original LDECO model of Engle and

Kelly (2008) again stems from the univariate volatility literature. Prior Chap-

ters have documented a significant amount of research into whether implied

volatility contains information incremental to that of historical returns, as well

as contributing novel research to this literature.

Similar to the case of the VIX in the univariate context, it is possible to

construct a model-free measure of the level of implied equicorrelation (IC) from


options market data; this variable may be added as an exogenous regressor to

the LDECO model in a similar fashion to the VIX being added as an exogenous

regressor to the GARCH model. The calculation of the IC, its background, and

then how it may be incorporated into the LDECO specification is now described.

For a market index on which options trade, the DJIA for example, it is well

known that a model-free estimate of the implied volatility of the index can be

constructed, i.e. the VXD9. For each of the constituent stocks of an index

on which options trade, a similar model-free estimate of its IV may be found.

Recalling the portfolio variance identity used in calculating the DREC measure

and again invoking the assumption of equicorrelation, a similar calculation may

be made using the model-free estimates of index and individual asset IVs to

form the IC,

IC = \frac{\sigma^2 - \sum_{j=1}^{n} w_j^2 s_j^2}{2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} w_i w_j s_i s_j}, \qquad (5.19)

where σ is now the annualised implied 22-day-ahead standard deviation of the

index, wi is the portfolio weight given to asset i, and si is the annualised implied

22-day-ahead standard deviation of asset i.
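Equation 5.19 is the same algebra as the DREC calculation, applied to implied rather than realised quantities; a minimal sketch, with all inputs assumed to be supplied:

```python
import numpy as np

def implied_equicorrelation(index_iv, asset_ivs, weights):
    """Implied equicorrelation IC (Equation 5.19) -- a sketch.

    index_iv : annualised implied standard deviation of the index
               (e.g. VXD/100 for the DJIA).
    asset_ivs: annualised implied standard deviations s_i of the constituents.
    weights  : index weights w_i (hypothetical inputs).
    """
    s = np.asarray(asset_ivs)
    w = np.asarray(weights)
    ws = w * s
    num = index_iv ** 2 - np.sum(w ** 2 * s ** 2)
    # sum over i<j of w_i w_j s_i s_j via the square-of-sum identity
    cross = 0.5 * (ws.sum() ** 2 - (ws ** 2).sum())
    return num / (2.0 * cross)
```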

In their paper introducing the LDECO model, Engle and Kelly (2008) cal-

culate the Dow Jones Industrial Average (DJIA) IC and show that it closely

matches the fitted equicorrelation from both the cDCC-DECO and LDECO

models, although they do not use the IC directly in model estimation or fore-

casting. Further, the IC has been used previously by Castren and Mazzotta (2005) in a bivariate setting of exchange rates; they find that a combination forecast of the IC and an MGARCH model is preferred, although these conclusions are based on in-sample adjusted R2 values and they do not conduct a forecasting exercise.

9The VXD is the DJIA equivalent of the better-known VIX for the S&P 500 Index; a model-free, risk-neutral, option-implied forecast of the mean annualised volatility of the index over a fixed 22-trading-day horizon.

These results, and the publishing of IC for the S&P 500 Index by the

CBOE10, encourage an investigation of the incremental information content of

the IC relative to the various proposed measures of the equicorrelation, Xt; how

such information may be incorporated is now discussed.

Following Blair, Poon and Taylor (2001), who consider the role of the VIX

in a univariate GARCH model of volatility, it is proposed that the LDECO

specification be extended to include the IC as an exogenous regressor. This

amendment combines the historical information contained within the historical

returns series of the portfolio constituents with the information implied by the

options market,

ρt = ω + αXt−1 + βρt−1 + γICt−1, (5.20)

where Xt may be any of the previously proposed measures of equicorrelation.

The proposed model retains the attractive property of analytical solutions for the inverse and determinant of Rt, as given by Equations 5.7 and 5.8 respectively, as equicorrelation is assumed in calculating the IC and Equation 5.20 is a linear combination of two equicorrelation measures. Hence, the proposed model

is easily incorporated into the previously defined general framework in Section

5.2 and may be estimated by quasi-maximum likelihood methods by optimising

Equation 5.2.

In a fully efficient options market, the coefficient on Xt is expected to be statistically indistinguishable from zero, as the historical time-series information should be incorporated by options market participants in generating their IC forecast. It is worth noting that even if restrictions are placed on Equation

10Details on the implied equicorrelation published by the Chicago Board Options Exchange are available online at the CBOE S&P 500 Implied Correlation Index micro site (2009).


5.20 such that it utilises information from the options market only, α = β = 0, one would not expect a coefficient of unity for γ as, similar to the volatility literature, one might expect a correlation risk premium. Hence, even if the options market generates perfect forecasts through the IC measure, one would not expect the typical Mincer-Zarnowitz (1969) result of a zero constant and a unit slope coefficient. This style of specification shall also be examined in this Chapter.

Finally, it should be noted that in generating equicorrelation forecasts, the

IC variable will be combined with each of the alternative measures of Xt.

LDECO-IC: ρt = ω + αut−1 + βρt−1 + γICt−1,

REC-IC: ρt = ω + αRECt−1 + βρt−1 + γICt−1,

DREC-IC: ρt = ω + αDRECt−1 + βρt−1 + γICt−1,

SREC-IC: ρt = ω + αSRECt−1 + βρt−1 + γICt−1,

IC: ρt = ω + γICt−1.

(5.21)

For a similar choice of the Xt measure, the model descriptions in Equation 5.21

above clearly nest those in Equation 5.18; the REC-IC model nests the REC

model. A standard likelihood ratio test of these two models then allows for an

examination of the improvement in model fit yielded by the inclusion of the

IC, but only for the REC measure. Any improvement of the REC-IC model over the DREC model may not be attributable to the IC term and may not be measured by the standard likelihood ratio test, as one model does not nest the other; an alternative method of comparing these models is required.


To compare the in-sample fit of non-nested models, the non-nested likelihood

ratio test of Vuong (1989) is employed, and is now briefly described. In the

situation where two non-nested models are competing to explain the same vari-

able, ρt in the current context, Vuong (1989) demonstrates that under certain

regularity conditions the variable

T^{-1/2} LR_T / \xi_T \xrightarrow{D} N(0, 1), \qquad (5.22)

where LR_T = L_T^i - L_T^j, the difference in log-likelihood between models i and

j, and \xi_T^2 is the variance of the per-observation log-likelihood ratio,

\xi_T^2 = \frac{1}{T} \sum_{t=1}^{T} \left[\log \frac{f_i(\rho_t)}{f_j(\rho_t)}\right]^2 - \left[\frac{1}{T} \sum_{t=1}^{T} \log \frac{f_i(\rho_t)}{f_j(\rho_t)}\right]^2, \qquad (5.23)

and f_i(\rho_t) here is the calculated LCorr component of Equation 5.2 for model i for each of its fitted values of \rho_t.
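The Vuong statistic in Equations 5.22 and 5.23 may be sketched directly from the two models' per-observation log-likelihood contributions (the function name and inputs are hypothetical):

```python
import numpy as np

def vuong_statistic(loglik_i, loglik_j):
    """Vuong (1989) non-nested likelihood ratio statistic (Eqs 5.22-5.23).

    loglik_i, loglik_j : per-observation log-likelihood contributions
                         log f_i(rho_t) and log f_j(rho_t) of the two models.
    Under the null of equal fit the statistic is asymptotically N(0, 1).
    """
    d = np.asarray(loglik_i) - np.asarray(loglik_j)  # log[f_i / f_j] per obs
    T = d.size
    lr = d.sum()                                     # LR_T
    xi = np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2)  # xi_T from Equation 5.23
    return lr / (np.sqrt(T) * xi)
```

A large positive value favours model i, a large negative value model j.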

The above discusses how the in-sample fit of the various specifications may be evaluated; how the relative out-of-sample forecast accuracy of the candidate models shall be compared is now described.

5.3 Forecast Evaluation

In this Section, details are provided regarding the procedure by which point forecasts of correlation are generated, and the Model Confidence Set methodology for comparing the statistical performance of the respective forecasts is described.


5.3.1 Generating Forecasts

As well as an in-sample comparison of the log-likelihood values and parameter

estimates of the models considered, multi-step-ahead point forecasts of equicor-

relation are generated for up to K periods ahead, where K is the 22-day horizon

over which the IC is constructed. Unlike variance and covariance, however, one cannot aggregate correlation through time, so each point forecast must be evaluated individually rather than the mean 22-day correlation. That is, evaluations are conducted on the forecast performance of each of the models for each k-day-ahead forecast, for k = 1, ..., 22 = K days.

To generate a multi-period forecast, one must assume that Et[Xt+k] ≈ Et[ρt+k],

which can then be utilised to generate recursive forecasts,

Et[ρt+k] = ω + (α+ β)Et[ρt+k−1] + γEt[ICt+k−1]. (5.24)

With regard to forecasting the implied equicorrelation forward through time,

no a priori guidance exists as to its dynamics and a simple AR(1) time-series

forecasting model is chosen here11. Under such dynamics, the K-period forecast

of IC is given by

E_t[IC_{t+K}] = \theta_1^K IC_t + \mu (1 - \theta_1^K), \qquad (5.25)

where \mu is the unconditional mean of the AR(1) process and \theta_1 is the coefficient acting on the lagged value of IC_t. Recursively substituting Equation 5.25 into Equation 5.24 leads to the following expression for multi-step-ahead forecasts,

11This choice facilitates an analytical solution for multi-step-ahead forecasts that would not have been possible had more complicated dynamics been chosen.


\rho_{t+K} = \omega \left[ \frac{1 - (\alpha + \beta)^{K-1}}{1 - \alpha - \beta} \right] + (\alpha + \beta)^{K-1} \rho_{t+1} + \gamma \sum_{k=0}^{K-2} (\alpha + \beta)^{K-2-k} \left[ \mu \left(1 - \theta_1^{k+1}\right) + \theta_1^{k+1} IC_t \right].

In discussing potential avenues for forecasting equicorrelation, the focus thus far

has been on utilising alternative historical measures of equicorrelation directly

in the estimation procedure; however, there does exist a natural alternative to

this approach. Rather than estimating prior levels of equicorrelation and fore-

casting forward using Equation 5.9, it is possible to generate forecasts of a more

general covariance matrix without the equicorrelation restriction imposed. One

may then calculate the forecast equicorrelation as the mean of the off-diagonal

elements of this less restricted matrix; that is, the equicorrelation restriction

may be imposed post hoc to the estimation procedure. Even though the fore-

cast object will still be the level of equicorrelation, it may be argued that this

is a more flexible approach in generating the forecast; each of the correlation

pairs is allowed to evolve in a less restricted framework. The chosen model for

this alternative approach is the cDCC model of Aielli (2009) given its bench-

mark status in the literature; forecasts generated in this fashion will be denoted

cDCC.

Finally, it should be noted that each of the models is estimated over a rolling fixed-length estimation window12 of 1,000 trading days. After allowing for this initial estimation window, 941 out-of-sample forecasts are generated for the 22-day-ahead horizon; while more forecasts could have been generated for the shorter forecast horizons, it was decided to keep the sample size the same across all statistical analyses.

12Expanding window estimation was also carried out with no qualitative difference in results.


5.3.2 Statistical Evaluation of Forecasts

In order to statistically evaluate the relative forecast performance of the mod-

els considered, a measure of the “true” equicorrelation on each of the days for

which point forecasts are generated is required. Some considerable time has

been spent in prior Sections arguing that the realised (co)variance technologies

produce superior measures of latent variables relative to measures based on,

say, closing price returns. Therefore, the realised equicorrelation, REC, is the

preferred measure of the “true” level of equicorrelation for the remainder of this

Chapter. However, as a robustness check, each of the alternative equicorrelation

measures proposed as variables to be utilised in estimation shall also be used

as measures of the “true” level of equicorrelation; that is, the Engle and Kelly

(2008) measure defined in Equation 5.10, ut, the diagonal realised equicorrelation (DREC) defined in Equation 5.15, and the Spearman rank equicorrelation, SREC, defined in Equation 5.17 shall all be used as the target level of "true"

equicorrelation.

Consistent with the previous empirical Chapters, the Model Confidence Set (MCS) methodology of Hansen, Lunde and Nason (2003, 2010) is again employed to examine the forecast performance of each of the models considered.

Again, the loss functions utilised are the mean-square-error (MSE) and QLIKE,

MSE_k^M = (\rho_{t+k} - f_{t,k}^M)^2, \qquad (5.26)

QLIKE_k^M = \log(f_{t,k}^M) + \frac{\rho_{t+k}}{f_{t,k}^M}, \qquad (5.27)

where f_{t,k}^M are individual forecasts (formed at time t for k days ahead) obtained from the individual models, M, and \rho_{t+k} is the measure of "true" equicorrelation. As there are four alternative measures of equicorrelation, two loss functions, and two statistics (the range and semi-quadratic statistics) used, or


sixteen Model Confidence Sets, it is infeasible to present and discuss the MCS results for each set. Rather, the focus is placed on the results of the MSE loss function under the range statistic when the realised equicorrelation is the proxy measure of "true" equicorrelation; the remaining collections of MCSs are available in Appendix A.3. To recap, the MCS contains the best forecasting models, which are of equal predictive accuracy at a given level of confidence.
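The two loss functions in Equations 5.26 and 5.27 may be sketched as follows (a simplified illustration; QLIKE assumes strictly positive forecasts and proxies, and the function name is hypothetical):

```python
import numpy as np

def forecast_losses(true_rho, forecast):
    """MSE and QLIKE losses for a k-step forecast (Equations 5.26-5.27).

    true_rho : the "true" equicorrelation proxy, rho_{t+k}.
    forecast : the model forecast f_{t,k}; must be strictly positive
               for QLIKE (an assumption of this sketch).
    """
    mse = (true_rho - forecast) ** 2
    qlike = np.log(forecast) + true_rho / forecast
    return mse, qlike
```

MSE is minimised at zero when the forecast equals the proxy; QLIKE is also minimised at that point but penalises under-prediction more heavily than over-prediction.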

5.4 Data

The results are based on the DJIA over the period starting on the 1st of November 2001 through to the 30th of October 2009, resulting in 1964 observations13. The dataset is constructed from three distinct data sources: the OptionsMetrics IvyDB US database for calculating model-free implied volatilities for individual stocks, the CBOE for the daily closing values of the VXD index, and Thomson Reuters Tick History for the minute-by-minute intraday prices used in calculating the realised equicorrelation measures.

Similar to the more commonly known VIX for the S&P 500 Index, the VXD is a model-free 22-day-ahead at-the-money implied volatility forecast for the DJIA. To fix ideas, the day t implied equicorrelation is given by

IC_t = \frac{VXD_t^2 - \sum_{j=1}^{n} w_{j,t}^2 s_{j,t}^2}{2 \sum_{i=1}^{n-1} \sum_{j=i+1}^{n} w_{i,t} w_{j,t} s_{i,t} s_{j,t}}, \qquad (5.28)

where the weights and standard deviations now have a t subscript to denote that the constituents of the index vary through time14.

13The DJIA is chosen as it is possible to obtain the implied volatilities of each of its constituent stocks for each day of the sample and therefore to calculate the implied equicorrelation without approximation error. This is not true of the S&P 500 Index, for which the CBOE publishes its IC based on an approximation from the largest 50 stocks within the index, as not all of its constituent stocks have listed options traded.

14Although the DJIA is relatively more stable than, say, the S&P 100, only 17 of the original 30 constituents remain in the index consistently for the sample period.


For comparative purposes, the full in-sample values for each of the five alterna-

tive equicorrelation measures utilised in this Chapter are plotted in Figure 5.1;

the ut measure proposed by Engle and Kelly (2008), the implied equicorrela-

tion, the realised equicorrelation, the realised diagonal equicorrelation and the

Spearman rank equicorrelation.

Figure 5.1: Plots of each of the five equicorrelation measures utilised in this Chapter for the full sample period of 1st November 2001 through to 30th of October 2009. Panel A: Daily Equicorrelation (ut); Panel B: Implied Equicorrelation; Panel C: Realised Equicorrelation; Panel D: Realised Diagonal Equicorrelation; Panel E: Spearman Rank Equicorrelation. [Figure not reproduced.]

From the plot of ut in Panel A of Figure 5.1, it may be observed that the measure proposed by Engle and Kelly (2008) is quite noisy, perhaps noisier than one would expect of the mean correlation of thirty of the largest US firms. However, to demonstrate that this measure is still quite persistent, the centred 44-day moving average, the mean equicorrelation of data one month either side of a given day, is plotted in white.


As can be seen in Panel B of Figure 5.1, the IC of options markets appears

to be significantly less noisy than any of the alternative measures. It also ap-

pears to track the realised measures quite closely, which augurs well for the

out-of-sample forecasting exercise given these measures are used as alternative

“true” equicorrelation proxies. Further anecdotal support for the use of IC in

equicorrelation forecasting comes from the fact that the ICt tends to peak in

times of market turmoil; when large indexes fall, the majority of assets suffer

losses and this is reflected in a high level of correlation across assets.

Panels C, D and E of Figure 5.1 plot the realised equicorrelation measures and the Spearman rank equicorrelation over the sample period; it may be observed that the measures follow similar dynamics, although the realised diagonal equicorrelation is the least noisy of the three. This may suggest that it will possess more power in separating the out-of-sample forecast performance of the competing models.

To reinforce the point regarding the noisiness of ut relative to the alternative

equicorrelation measures, some descriptive statistics are provided for each of the

series in Table 5.1. It can be seen that the ut measure is extremely noisy, with

its standard deviation of 0.2681 larger than its mean of 0.2631; the ut measure

is also weakly correlated with all of the alternative measures. The standard deviations of the other four measures are significantly smaller, with all falling between 0.105 and 0.135, and they are more highly correlated with each other.

The two measures of realised equicorrelation, REC and DREC, are somewhat different in their means, at 0.3556 and 0.2782 respectively, with the standard deviation of the DREC measure slightly smaller; they are unsurprisingly highly correlated with each other, at 0.7237. The mean of the IC measure, at 0.4218,

is higher than all of the other measures of equicorrelation and probably reflects

a correlation risk premium being priced in the derivatives market. Finally, it


should be noted that the measures of equicorrelation from the physical market

are not that highly correlated with the IC from the options market, so there

should not be any adverse effects from multicollinearity by including multiple

measures of equicorrelation.

Table 5.1: Descriptive statistics over the full-sample period of the five equicorrelation measures utilised in this Chapter. The correlation statistic is of the measure for that row with the measure in the column header.

Descriptive Statistics of Equicorrelation Measures

Measure   Mean    Std     ρ(ut)   ρ(ICt)  ρ(RECt) ρ(DRECt) ρ(SRECt)
ut        0.2631  0.2681  1       0.2075  0.2112  0.1624   0.1680
ICt       0.4218  0.1152  0.2075  1       0.4649  0.5047   0.4429
RECt      0.3556  0.1347  0.2112  0.4649  1       0.7237   0.6932
DRECt     0.2782  0.1054  0.1624  0.5047  0.7237  1        0.5523
SRECt     0.3363  0.1221  0.1680  0.4429  0.6932  0.5523   1

5.5 Results

The empirical results of this Chapter are composed of two sub-Sections: the in-sample results are presented first to discuss the relative fit of the candidate models, followed by the out-of-sample forecasting performance.

The in-sample estimation results are based upon the quasi-maximum likeli-

hood estimation (QMLE) of the log-likelihood function given in Equation 5.2.

For each of the candidate models, this is a two stage procedure. The first stage

involves an estimation of a GARCH(1,1) univariate volatility model for each of

212

Page 225: Forecasting Volatility and Correlation: The Role of …eprints.qut.edu.au/53138/1/Christopher_Coleman-Fenn_Thesis.pdf · Forecasting Volatility and Correlation: The Role of Option

the portfolio constituents; this optimisation maximises the LVol component of Equation 5.2, is identical across all competing models, and is therefore of no interest in model comparison. What is of interest is the LCorr term of Equation 5.2: whichever model maximises this term will also maximise the log-likelihood function overall.
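To make the decomposition concrete, the correlation component can be evaluated in closed form under equicorrelation, since Rt = (1 − ρt)In + ρtJn has an analytic determinant and inverse. The sketch below (the function name and input layout are illustrative, not taken from the thesis) computes an LCorr term of this form from fitted equicorrelations and first-stage volatility-standardised residuals:

```python
import numpy as np

def deco_corr_loglik(rho, eps):
    """Correlation component L_Corr of the DECO quasi-log-likelihood,
    using the closed-form determinant and inverse of the equicorrelation
    matrix R_t = (1 - rho_t) I_n + rho_t J_n. Illustrative sketch.

    rho : (T,) fitted equicorrelations, each in (-1/(n-1), 1)
    eps : (T, n) volatility-standardised residuals from the first-stage
          univariate GARCH(1,1) fits
    """
    T, n = eps.shape
    one_m = 1.0 - rho                         # 1 - rho_t
    trace = 1.0 + (n - 1.0) * rho             # 1 + (n - 1) rho_t
    logdet = (n - 1.0) * np.log(one_m) + np.log(trace)
    quad = ((eps ** 2).sum(axis=1)
            - rho * eps.sum(axis=1) ** 2 / trace) / one_m  # e' R^{-1} e
    return -0.5 * np.sum(logdet + quad)
```

Because the first-stage LVol term is common to all candidate models, their comparison reduces to comparing this quantity.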

Following the QMLE procedure, traditional analysis of the parameter estimates

may be conducted, such as t-tests regarding statistical significance of individual

co-efficients. Further, standard likelihood ratio tests may be executed to ex-

amine the statistical significance of any improvement in model fit across nested

models. Here nested models are those that have the same equicorrelation mea-

sure for the variable Xt in Equation 5.20; for instance, the REC-IC specification

nests the REC model. The Vuong likelihood ratio test is used to analyse non-

nested models, such as the REC-IC model and the DREC model.
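In its standard form, the Vuong statistic scales the mean of the pointwise log-likelihood differences by their standard deviation and is asymptotically standard normal under the null of equal fit. A minimal sketch, assuming per-observation log-likelihood contributions are available (this is the standard form; the thesis's exact variant is given in Equation 5.22):

```python
import numpy as np
from math import erfc, sqrt

def vuong_statistic(ll_i, ll_j):
    """Vuong statistic for non-nested models i and j, computed from
    per-observation log-likelihood contributions. Under H0 of equal fit
    it is asymptotically N(0, 1); large positive values favour model i."""
    d = np.asarray(ll_i, dtype=float) - np.asarray(ll_j, dtype=float)
    return sqrt(d.size) * d.mean() / d.std(ddof=0)

def vuong_pvalue(z):
    # One-sided p-value P(Z > z) under the standard normal null.
    return 0.5 * erfc(z / sqrt(2.0))
```

The statistic is antisymmetric in its arguments, matching the sign convention of the row/column tables below: the entry for row i, column j is positive when model i fits better.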

5.5.1 In-Sample Estimation Results

There are two broad questions addressed in this Section. Firstly, of the pro-

posed alternatives, what is the optimal choice for the equicorrelation measure

Xt? Secondly, does the information contained within IC lead to superior model

fit over those models based on historical returns alone? A secondary question

is also addressed: is the incremental value from the addition of the IC term

conditional on the choice of the Xt measure or is model fit improved across all

proposed alternatives?

To begin addressing the question of the optimal choice for the equicorrela-

tion measure Xt, the results for the restricted models are presented first. These

models exclude the information content of the IC by enforcing the restriction of

γ = 0 in Equation 5.20; their parameter estimates and respective LCorr terms


from Equation 5.2 are presented in Panel A of Table 5.2. Of the restricted models, the best in-sample fit is given by the choice of REC as the equicorrelation measure: it generates the highest log-likelihood value, and its co-efficient is statistically significant, lying more than three robust standard errors from zero. The worst in-sample fit is given by the ut measure proposed by Engle and Kelly (2008), whose co-efficient is statistically insignificant at approximately 1.5 robust standard errors from

zero. It is briefly noted that all estimated equicorrelation values lie within the range ρt ∈ (−1/(n − 1), 1), ensuring positive definiteness of the conditional correlation matrix.
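This bound follows because the equicorrelation matrix Rt = (1 − ρt)In + ρtJn has eigenvalue 1 − ρt with multiplicity n − 1 and eigenvalue 1 + (n − 1)ρt, both strictly positive exactly when ρt ∈ (−1/(n − 1), 1). A small illustrative check (the helper names are hypothetical, not from the thesis):

```python
import numpy as np

def equicorr_matrix(rho, n):
    """Equicorrelation matrix R = (1 - rho) I_n + rho J_n."""
    return (1.0 - rho) * np.eye(n) + rho * np.ones((n, n))

def is_positive_definite(rho, n):
    # Eigenvalues of R are (1 - rho), multiplicity n-1, and 1 + (n-1)*rho,
    # so R is positive definite exactly when rho lies in (-1/(n-1), 1).
    return -1.0 / (n - 1) < rho < 1.0
```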

The relative log-likelihood values of the restricted models may be assessed

through the Vuong likelihood ratio test results in Panel A of Table 5.3. At

traditional levels of significance, only one claim may be made: that utilising

the REC measure offers statistically significant improvement over the ut and

DREC measures of Xt; it cannot be statistically separated from the Spearman

rank equicorrelation measure. No other proposed measure of equicorrelation

offers a significant difference in the LCorr term relative to its competitors.

When information contained within IC is incorporated by relaxing the restriction that γ = 0 in Equation 5.20, an interesting finding emerges from the results

presented in Panel B of Table 5.2. Firstly, it may be observed that none of

the estimated co-efficients of the proposed measures of equicorrelation, Xt, are

statistically significant; in each case the standard error is of larger magnitude

than the parameter estimate. This would suggest that the choice of the Xt mea-

sure is irrelevant as they all lack significant explanatory power in this setting.

This result is confirmed by the Vuong likelihood ratio test results presented in


Table 5.2: In-sample parameter estimates (robust standard errors are given in parentheses beneath the parameter estimates) and log-likelihood values for the nine candidate models; likelihood ratio test p-values for the nested models are also presented. Presented results are based on the QMLE of the log-likelihood function given in Equation 5.2. Models may be categorised into two sub-sets of the general specification: ρt = ω + αXt−1 + βρt−1 + γICt, where Xt−1 is a measure of equicorrelation and ICt is the implied equicorrelation. Panel A presents the restricted models where γ = 0 and Panel B presents the unrestricted model results; one exception exists where a model of IC alone is used: ρt = ω + γICt. In Panel B, likelihood ratio test p-values are given in parentheses under the LCorr value and compare the log-likelihoods of the nested models where both models have the same measure of Xt−1, e.g. REC-IC and REC.

In-Sample Results

Panel A: Comparison of Models with Restriction γ = 0

Xt measure    ω          α          β          γ         LCorr
ut            0.0778     0.0798     0.7523     -         -8756.1362
             (0.0592)   (0.0535)   (0.178)     -
REC           0.0422     0.1240     0.7769     -         -8722.9397
             (0.0182)   (0.0399)   (0.0636)    -
DREC          0.1058     0.2181     0.5832     -         -8730.4046
             (0.0616)   (0.1311)   (0.2372)    -
SREC          0.0365     0.0686     0.8497     -         -8747.8650
             (0.0120)   (0.0213)   (0.0392)    -

Panel B: Comparison of Models Relaxing Restriction γ = 0

ut-IC         0.0210     0.0121     0.7462     0.1920    -8613.6772
             (0.0150)   (0.0260)   (0.0907)   (0.0626)   (0.0375)
REC-IC        0.0164     0.0199     0.7613     0.1779    -8613.1755
             (0.0140)   (0.0288)   (0.0671)   (0.0517)   (0.0989)
DREC-IC       0.0202     0.0258     0.7615     0.1872    -8614.3331
             (0.0340)   (0.0345)   (0.2551)   (0.0625)   (0.0870)
SREC-IC       0.0161     0.0143     0.7728     0.1737    -8613.6491
             (0.0147)   (0.0255)   (0.0681)   (0.0587)   (0.0493)
IC            0.1362     -          -          0.6563    -8638.6055
             (0.0407)    -          -         (0.0875)

Panel B of Table 5.3. Upon the inclusion of the IC term, none of the values for LCorr are statistically different from each other. The only specification that


Table 5.3: In-sample fit of variations of the equicorrelation model, ρt = ω + αXt−1 + βρt−1 + γICt, where Xt−1 is a measure of equicorrelation and ICt is the implied equicorrelation. The focus here is on the relative performance of the various proposed measures of Xt; restricted models (γ = 0) are not compared with unrestricted models. As the models are non-nested, the comparison is conducted using the Vuong likelihood ratio statistic outlined in Equation 5.22; p-values are given in parentheses. The Vuong statistic of row i and column j is positive if model i has a superior in-sample fit to model j. In each case, H0: L^i_Corr = L^j_Corr, or that the in-sample fit of each model is equal; H1: L^i_Corr > L^j_Corr, or that model i offers superior in-sample fit.

Vuong Likelihood Ratio Statistics Comparing Xt Measures

Panel A: Comparison of Models with Restriction γ = 0

Xt measure     ut         REC        DREC       SREC
ut             -         -1.4368    -1.1868    -0.4648
                         (0.9246)   (0.8823)   (0.6789)
REC            1.4368     -          1.6566     0.3182
              (0.0753)              (0.0488)   (0.3751)
DREC           0.4648    -1.6566     -         -0.7740
              (0.3210)   (0.9511)              (0.7805)
SREC           1.1868    -0.3182     0.7740     -
              (0.1176)   (0.6248)   (0.2194)

Panel B: Comparison of Models Relaxing Restriction γ = 0

               ut-IC      REC-IC     DREC-IC    SREC-IC    IC
ut-IC          -         -0.1592     0.2472    -0.0103     1.4104
                         (0.5632)   (0.4023)   (0.5041)   (0.0792)
REC-IC         0.1592     -          0.3504     0.2270     1.4523
              (0.4367)              (0.3630)   (0.4102)   (0.0732)
DREC-IC       -0.2472    -0.3504     -         -0.2539     1.4230
              (0.5976)   (0.6369)              (0.6002)   (0.0773)
SREC-IC        0.0103    -0.2270     0.2539     -          1.3788
              (0.4958)   (0.5897)   (0.3997)              (0.0839)
IC            -1.4104    -1.4523    -1.4230    -1.3788     -
              (0.9207)   (0.9267)   (0.9226)   (0.9160)

is consistently dominated is the specification that utilises information from the options market alone, ρt = ω + γICt. The various Xt measures are all statistically insignificant, yet this simple linear regression is statistically inferior to the other models that incorporate IC; this suggests that a persistence term is required for the adequate modelling of ρt.
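The role of the persistence term can be seen directly from the recursion in Equation 5.20: with β > 0, the fitted path smooths shocks to Xt−1 and ICt, and the unconditional level is (ω + αX̄ + γĪC)/(1 − β). A sketch of computing the fitted path (the function name and inputs are illustrative arrays, not thesis data):

```python
import numpy as np

def fitted_equicorrelation(omega, alpha, beta, gamma, X_lag, IC, rho0):
    """Fitted path from rho_t = omega + alpha * X_{t-1} + beta * rho_{t-1}
    + gamma * IC_t (Equation 5.20). Setting gamma = 0 gives the restricted
    models; beta is the persistence term. X_lag holds the lagged measure
    X_{t-1} aligned with time t."""
    rho = np.empty(len(X_lag))
    prev = rho0
    for t in range(len(X_lag)):
        prev = omega + alpha * X_lag[t] + beta * prev + gamma * IC[t]
        rho[t] = prev
    return rho
```

With constant inputs the path converges to (ω + αX̄ + γĪC)/(1 − β), which is the smoothing that the static ρt = ω + γICt specification forfeits.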

Overall, the above discussion demonstrates that the choice for the Xt mea-

sure is important only if the IC term is excluded from model estimation. If

the IC term is included, then the choice of the Xt measure is irrelevant as all

of these models will possess statistically indistinguishable in-sample fits. If the

model is restricted to utilise historical information alone, then a choice of the

REC measure will dominate the ut and DREC measures, but not the SREC

measure. As the relevance of the choice of Xt measure is dependent on the in-

clusion of the IC term, whether this term warrants inclusion is now addressed.

As noted in the discussion of the parameter estimates for the unrestricted models given in Panel B of Table 5.2, none of the proposed Xt measures are

statistically significant. However, the estimated co-efficients acting on the IC

term are statistically significant across all models. Further, all of the likelihood

ratio tests of the nested models show statistically significant improvement in

model fit; all of the relevant p-values are smaller than 0.10. That is, the inclu-

sion of the IC term appears to subsume the information content of all of the

alternative measures of equicorrelation that are based on historical data alone.

In addition, the Vuong likelihood ratio test results in Table 5.4 reinforce the

fact that the inclusion of the IC term improves model fit. The Vuong statistics

in Panel A of Table 5.3 compared the relative performance of the restricted

models given in Equation 5.18, while the results presented in Panel B of Ta-

ble 5.3 were for unrestricted models given in Equation 5.21; both of these sets

of results focus solely on the choice of Xt measure. Comparing the restricted

against the unrestricted models via the Vuong likelihood ratio test allows for an


examination of the statistical improvement in model fit offered by the inclusion

of the IC. Table 5.4 compares the log-likelihood values of those models that

include the IC term with those that do not; it is found that all of the models

which include the IC term dominate those that do not. Even the ρt = ω+γICt

specification outperforms all of the restricted models; each of the calculated

p-values is less than 0.01, suggesting clear rejection of the null hypothesis that

the models have equal log-likelihood.

Table 5.4: The panels below present the Vuong likelihood ratio statistics, with p-values in parentheses, for the purposes of comparing the in-sample fit of non-nested candidate models. The likelihood ratio of row i and column j is positive if model i has a superior in-sample fit to model j. In each case, H0: L^i_Corr = L^j_Corr, or that the in-sample fit of each model is equal; H1: L^i_Corr > L^j_Corr, or that model i offers superior in-sample fit.

Vuong Likelihood Ratio Statistics
Comparison of Restricted and Unrestricted Models

               ut         REC        DREC       SREC
ut-IC          -          3.1116     3.1911     3.2282
                         (0.0009)   (0.0007)   (0.0006)
REC-IC         3.3672     -          3.1847     3.2563
              (0.0003)              (0.0007)   (0.0005)
DREC-IC        3.3182     3.0345     -          3.1733
              (0.0004)   (0.0012)              (0.0007)
SREC-IC        3.3860     3.1643     3.1831     -
              (0.0003)   (0.0007)   (0.0007)
IC             2.7071     2.3342     2.6358     2.5764
              (0.0033)   (0.0097)   (0.0041)   (0.0049)

The above results demonstrate that the inclusion of the IC term improves model

fit. The unrestricted models all have insignificant co-efficient estimates for the

historical measures of equicorrelation and significant parameter estimates for

the IC term. Further, all of the Vuong likelihood ratio test results suggest that


the inclusion of the IC term leads to a statistically significant improvement in model fit over those

models that do not include information from the options market.

In addition to the purely statistical analysis, a visual comparison of the fitted equicorrelations, ρt, from some of the candidate models15 is given in Figure 5.2. The plots reinforce the prior statistical finding that the choice of the Xt measure is irrelevant if the IC term is included; Panels B, D, and F are remarkably similar. However, there is a clear distinction between the REC measure and the ut and DREC measures, reinforcing the results from the earlier Vuong

tests presented in Table 5.3. It is apparent that, given the higher log-likelihood

of its competitors reported in Table 5.2, the fitted ρt when ut is chosen as the

Xt measure appears to miss variations in the object of interest. While obviously

not a constant conditional correlation model, this apparent inability to follow

prevailing market conditions does not bode well for its out-of-sample forecast

performance.

To summarise the in-sample results, all of the proposed adaptations and exten-

sions to the original LDECO model of Engle and Kelly (2008), which utilises the

daily returns based measure of equicorrelation ut, yield a higher log-likelihood

function value. However, only the REC measure offers statistically significant

improvement over the original measure among the restricted models that do

not include information from the options market. If the implied equicorrelation

measure is included, then this also results in a statistically significant improve-

ment over the original Engle and Kelly (2008) specification. However, it also

means the choice of equicorrelation measure is rendered redundant; none of the

unrestricted models can be separated by the Vuong likelihood ratio test.

15 The univariate model of IC is excluded as it is obvious from the results in Table 5.2 that a persistence term is highly significant. Further, models incorporating the SREC measure are also excluded as they are qualitatively similar to the REC models.


Figure 5.2: The in-sample fitted equicorrelations, ρt, of six of the competing models. Panel A: ut; Panel B: ut-IC; Panel C: REC; Panel D: REC-IC; Panel E: DREC; Panel F: DREC-IC. [Six time-series panels, each plotting fitted ρt on a 0 to 0.5 scale over 01-Nov-2001 to 30-Oct-2009.]

In answering the two questions posed at the beginning of this Section, the

results are clear. The inclusion of the IC term clearly improves model fit and

does so regardless of the choice of equicorrelation measure. In fact, the in-

clusion of the IC term means that the choice of the equicorrelation measure

for Xt is irrelevant. It is interesting to relate these findings to the univariate

volatility literature. Similar to results in that field, the use of realised measures

has been shown to offer improvements over measures that utilise daily returns

alone (REC dominates ut). Further, the information implicit in options mar-

kets has been found to be incremental to that contained in, firstly, the daily

measure ut and, secondly, the realised measures. Hence, the in-sample results

tend to suggest that the usefulness of options markets in generating forecasts

applies equally in the multivariate context of conditional correlation forecast-

ing as it does in univariate volatility forecasting; whether these results hold

out-of-sample is now examined.


5.5.2 Out-of-sample Forecast Results

The focus now turns to evaluating the relative out-of-sample forecast perfor-

mance of the various measures of Xt and whether the addition of the IC term

leads to improvements. It should be repeated that as well as the models based

on historical and implied measures of equicorrelation, another approach exists

for generating equicorrelation forecasts. Rather than assuming equicorrelation

in the estimation process and then forecasting this scalar forward, one may

forecast a more general conditional correlation matrix with the mean of the

off-diagonal elements then being used to calculate an alternative equicorrela-

tion forecast. This alternative means of forecasting equicorrelation is achieved

through the cDCC model of Aielli (2009) for reasons outlined previously. The

forecast equicorrelation from this alternative approach may then be compared

with those forecasts based on the autoregressive forms already presented and

is denoted by cDCC; this comparison is achieved by means of the Model Con-

fidence Set methodology of Hansen, Lunde, and Nason (2003, 2010).
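Collapsing a forecast correlation matrix to an equicorrelation amounts to averaging its off-diagonal elements; a minimal sketch (the function name is illustrative, not from the thesis):

```python
import numpy as np

def equicorrelation_from_matrix(R):
    """Mean of the off-diagonal elements of an n x n correlation matrix,
    turning e.g. a cDCC forecast matrix into a scalar equicorrelation
    forecast."""
    n = R.shape[0]
    return (R.sum() - np.trace(R)) / (n * (n - 1))
```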

The results from the MCS procedure are presented in Table 5.5, with forecast performance evaluated using the Mean Square Error loss function under the range statistic, and with REC as the measure of “true” equicorrelation16.

This table presents a summary of the MCS p-values for each model; the higher the p-value, the more likely that the relevant specification belongs in the set of statistically superior

models. The statistics are summarised by a spectrum of “ticks” reflecting the

probability of a model being included in the MCS; a blank entry indicates a

p-value between 0.00 and 0.05, X indicates a p-value between 0.05 and 0.10,

XX between 0.10 and 0.20, and XXX greater than 0.20. From the summarised

16 Results are qualitatively similar for both the QLIKE and MSE loss functions under both the range and semi-quadratic test statistics. Further, the use of the alternative measures of “true” equicorrelation, the DREC, SREC and ut measures, does not qualitatively alter the results. Hence, for brevity’s sake only the results for one sub-set of the possible combinations are presented here; the remainder are given in Appendix A.3.


results in Table 5.5, two main observations may be made.

Firstly, the REC measure of equicorrelation almost universally generates the

best out-of-sample forecasts; it outperforms all the competing specifications at

every forecast horizon considered under both loss functions, both test statistics, and

all measures of “true” equicorrelation; however, there is one exception to this

case that is discussed shortly. Secondly, as the forecast horizon increases, it be-

comes increasingly difficult to statistically distinguish between the competing

models. Beyond the 5 day forecast horizon, the only models to be excluded un-

der any loss function, test statistic or measure of equicorrelation are the cDCC

model, and those models that use either the ut or DREC measures of equicorre-

lation; that is, those models that use either daily data or the diagonal realised

equicorrelation17.

Note that in the robustness check of using four measures of “true” equicor-

relation, there is one exception to the REC specification providing the best

forecast. In results reported in Appendix A.3, the DREC specification provides

the best forecast under the DREC measure of equicorrelation at the one-day

horizon under both loss functions and test statistics. For all other loss functions,

test statistics, time horizons, and measures of equicorrelation18, the REC measure provides the superior out-of-sample forecast; it is therefore believed that

the finding that REC is the superior measure is a robust result.

It is important to note from Table 5.5 that those models relying on daily closing

price returns for their equicorrelation measure are typically the worst perform-

ing. Both the original LDECO model that utilises the measure ut, and the

process of taking the mean of the forecast correlation matrix from the cDCC

17Recall that the cDCC model uses daily closing price volatility-standardised returns in itsestimation.

18These results are presented in Appendix A.3.


model, generally yield inferior forecasts to those specifications that use realised

and implied equicorrelation. In fact, a simple linear regression on IC typically

yields superior forecasts to those models utilising daily returns measures. These

out-of-sample forecasting results confirm the in-sample estimation results, as

well as corroborating a large body of the univariate literature that shows

that daily measures of latent variables perform poorly relative to realised and

implied measures.

However, when examining those models that incorporate realised measures of

equicorrelation (the DREC, REC or SREC measures), forecast performance

generally deteriorates over the longer term upon the inclusion of the IC mea-

sure. This perhaps reflects that the chosen AR(1) specification for IC does not

adequately match the true longer term dynamics of IC. However, the fact that

the REC specification dominates at all time horizons suggests that even an im-

proved forecasting method for IC would not reverse the rankings of the models.

Perhaps if the REC-IC model dominated for shorter horizons before the REC

specification became the superior forecasting tool, a more thorough investiga-

tion of the dynamics of IC would be warranted, but this is not believed to be

the case here. In either scenario, the superior in-sample fit by including IC is

not replicated in the out-of-sample forecasting exercise, where information from

the physical market alone generates the best forecasts.
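The concern about the AR(1) specification can be illustrated directly: iterated AR(1) forecasts revert geometrically toward the unconditional mean c/(1 − φ), so if the longer-term dynamics of IC are richer than this, multi-step forecasts will be systematically misspecified. A sketch (parameter values are illustrative, not estimates from the thesis):

```python
def ar1_forecast(c, phi, ic_last, horizon):
    """Iterated multi-step forecast means from an AR(1):
    IC_{t+1} = c + phi * IC_t + e_{t+1}. The h-step forecast reverts
    geometrically toward the unconditional mean c / (1 - phi)."""
    path, level = [], ic_last
    for _ in range(horizon):
        level = c + phi * level   # E_t[IC_{t+h}]
        path.append(level)
    return path
```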

5.6 Conclusions

Accurate forecasts of multivariate volatility are of important practical use in

financial applications, such as optimal portfolio allocation, and have been an

area of active research for many years. Recently, the concept of equicorrela-

tion has been of increasing interest in the financial economics and econometrics

literature. This Chapter analysed the in-sample fit and out-of-sample forecast-


ing performance of ten candidate models of equicorrelation after adapting the

LDECO model to utilise realised and implied measures of equicorrelation.

It was found that the measure of equicorrelation based on the realised co-

variance technology provided superior in-sample fit to all of the alternative his-

torical measures. This difference was statistically significant in the restricted

models where no option implied information was included, but the inclusion of

the options based measure of equicorrelation rendered the choice of Xt measure

irrelevant. In fact, all of the historical measures of equicorrelation were statisti-

cally insignificant when the implied equicorrelation was added as an exogenous

regressor to the LDECO specification. Further, this finding is similar to the

majority of research in the univariate volatility forecasting literature where op-

tion implied measures have information incremental to historical measures of

volatility, particularly squared daily returns.

In the out-of-sample forecasting results, the measure of equicorrelation based

on the realised covariance technology again provided superior performance; it

was the best performing model at all horizons, under both the range and semi-

quadratic test statistics, and under both the QLIKE and MSE loss functions;

with only one exception to its dominance. Further, its superior performance also

included typically generating more accurate forecasts than models that utilised

the implied equicorrelation measure. That is, the in-sample benefits of in-

cluding the implied equicorrelation measure did not hold out-of-sample against

the realised measures. However, the implied equicorrelation based models did

typically outperform those models based on daily measures of equicorrelation.

Again, these results resemble those in the univariate volatility literature where

implied measures typically outperform daily returns based measures, but do

not dominate realised measures of latent variables.


In addressing the key research question of this Chapter, “Does the use of im-

plied measures of mean correlation lead to superior multivariate volatility fore-

cast performance?”, the answer is an unequivocal yes based on the in-sample

estimation results, and a qualified yes for the out-of-sample forecasting. The

use of implied measures does lead to superior forecasting ability against the

standard daily returns based measure of equicorrelation. However, if a realised

measure of equicorrelation based on high-frequency intraday returns is utilised

as the historical measure, then the implied measure of mean correlation does

not offer superior multivariate volatility forecast performance.


Table 5.5: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the range statistic when realised equicorrelation is the measure of “true” equicorrelation. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Summary of Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 XXX

ρt+3 XXX

ρt+4 XXX X

ρt+5 XX XX XX XXX XXX XXX XX XX XX XXX

ρt+6 X XX XXX XX XX X X XX

ρt+7 XX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+8 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+9 XX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+10 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+13 XXX XXX XXX XXX XXX XXX XX XXX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+19 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+20 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+22 XX XXX XXX XXX XXX XXX XXX XX XX XX


Chapter 6

Conclusion

This thesis was motivated by the need for accurate forecasts of the volatility

and correlation of financial assets. Such inputs are crucial for practical finan-

cial applications such as portfolio optimisation. Broadly speaking, there are two

ways of producing forecasts of these important variables. Firstly, time-series

models apply a statistical weighting scheme to historical measurements of the

variable of interest. The alternative methodology extracts forecasts from the

market traded value of option contracts. As an option is priced in reference to a future-dated payoff, the information set used in generating

forecasts from options markets should be larger than that of time-series models

and efficient use of this larger information set should lead to better forecasts.

Although much research has previously been conducted on the relative merits

of these approaches, three avenues for further study were identified. The main

conclusions from these studies will now be summarised, including the overar-

ching result from this thesis; some avenues for further research will also be

discussed.

Firstly, Chapter 3 demonstrated that adjusting the level of implied volatility

for the volatility risk premium leads to improved out-of-sample forecast perfor-


mance. Due to the fact that the implied volatility is generated under the risk-

neutral measure and the target level of volatility is generated under the physical

measure, the level of implied volatility is typically upwardly biased. This thesis

documented a method for estimating the unconditional volatility risk premium

which was then used to transform the level of the implied volatility into its risk-

adjusted equivalent. This risk-adjusted implied volatility was then compared

with the raw level of implied volatility and several popular time-series models,

as well as some recent innovations in univariate volatility modelling. It was

found that this approach led to out-of-sample forecasts of volatility over 1-, 5-,

and 22-day horizons that were statistically superior for periods of low volatility.

That is, the risk-adjusted implied volatility forecast dominated both the raw

level of implied volatility and all competing time-series models across all time

horizons when the level of volatility was relatively low. Unconditionally, or over

the full sample period, the risk-adjusted implied volatility again dominated the

raw level of implied volatility. It also dominated several popular time-series

models such as the GARCH, SV, and GJR specifications. It could not, how-

ever, be separated from more recent innovations based on the realised volatility. In

high volatility periods, the risk-adjusted implied volatility was typically inferior

to the competing models. This result motivates an avenue of further research

that is discussed shortly.

Secondly, Chapter 4 conducted a study of intraday volatility forecasting and

made two important contributions. A framework for intraday volatility was

utilised that decomposes high-frequency volatility into its daily, diurnal, and

intraday stochastic components, and several alternative specifications of the

intraday stochastic component were examined. The first main result of this

chapter came from the demonstration that accurate forecasts of this intraday

stochastic component of volatility lead to superior out-of-sample forecasts of

intraday total volatility. That is, there are statistically significant benefits to


modelling the evolution of this component of volatility rather than modelling

the daily and diurnal components only. The second main result concerns how

best to forecast the intraday level of volatility. A semi-parametric model that

has been successful at forecasting volatility at the daily horizon was also ap-

plied in the intraday setting but was found to be inferior to a GARCH style

model. However, four measures of information from the derivatives market were

proposed and found to lead to statistically significant improvement in forecast-

ing intraday volatility. These variables were standardised measures of the level

of implied volatility, and futures trading activity on the level of this implied

volatility. The results reinforced prior research that shocks to trading volume

in the futures market lead volatility in the spot market, and also that an asymmetric

relationship exists where the lead from the futures market is stronger in

times of bad news.
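The multiplicative decomposition used in that framework, in which total intraday volatility is the product of daily, diurnal, and intraday stochastic components, can be illustrated with a short simulation. The data, the U-shaped diurnal pattern, and the simple bin-averaging estimator below are illustrative assumptions, not the chapter's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 5-minute returns: 250 days x 78 intraday bins. Total intraday
# volatility is the product of a (here constant) daily component and a
# U-shaped diurnal factor; all numbers are illustrative placeholders.
n_days, n_bins = 250, 78
u = np.linspace(-1.0, 1.0, n_bins)
diurnal = 0.5 + u ** 2            # elevated at the open and close
daily_vol = 0.01                  # constant daily volatility for simplicity
returns = rng.standard_normal((n_days, n_bins)) * daily_vol * np.sqrt(diurnal / n_bins)

# Estimate the diurnal factor as the cross-day average of squared returns
# per intraday bin, normalised to average one across the day.
s_hat = (returns ** 2).mean(axis=0)
s_hat /= s_hat.mean()

# The estimate should track the true normalised diurnal shape.
s_true = diurnal / diurnal.mean()
print(np.corrcoef(s_hat, s_true)[0, 1])
```

With the diurnal factor estimated this way, the remaining variation in squared returns can then be attributed to the intraday stochastic component that the chapter models explicitly.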

Thirdly, Chapter 5 extended the analysis of option implied information to the

case of conditional correlation matrix forecasting. An existing framework for

forecasting the level of equicorrelation of a portfolio was adapted to include the

level of option implied equicorrelation. Further, three high-frequency measures

of equicorrelation that utilise intraday data were proposed as an alternative to

the existing measure based on daily returns; these were the realised equicorrelation

from the realised covariance technology, the realised diagonal equicorrelation

from the realised variance technology, and the Spearman rank equicorrelation,

which is a nonparametric estimate. In the in-sample analysis, it was found

that the implied measure of equicorrelation totally subsumed the information

content of all of the historical measures of equicorrelation. In an out-of-sample

forecasting exercise, the implied measure of equicorrelation dominated all mod-

els that utilised a daily returns based measure of equicorrelation as well as the

realised diagonal equicorrelation measure. However, the realised equicorrela-

tion measure was universally the best out-of-sample forecasting method and


the addition of the implied equicorrelation was not of incremental value.
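The realised equicorrelation measure referred to above can be sketched as the average of the off-diagonal entries of the correlation matrix implied by the realised covariance matrix. The simulated intraday returns, asset count, and scaling below are hypothetical placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated intraday returns for N assets over one day with a common
# equicorrelation rho (hypothetical data; the thesis uses equity portfolios).
n_obs, n_assets, rho = 390, 10, 0.4
corr = np.full((n_assets, n_assets), rho)
np.fill_diagonal(corr, 1.0)
chol = np.linalg.cholesky(corr)
returns = rng.standard_normal((n_obs, n_assets)) @ chol.T * 0.001

# Realised covariance: sum of outer products of intraday return vectors.
rcov = returns.T @ returns

# Convert to a correlation matrix and average the off-diagonal entries:
# this average is the realised equicorrelation.
d = np.sqrt(np.diag(rcov))
rcorr = rcov / np.outer(d, d)
off_diag = rcorr[~np.eye(n_assets, dtype=bool)]
realised_equicorr = off_diag.mean()
print(realised_equicorr)  # close to 0.4
```

The realised diagonal variant replaces the realised covariances with products of realised variances and daily return cross-products, so it requires only the univariate realised variance technology.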

Overall, this thesis has demonstrated the benefits of options based measures

of volatility and correlation in three separate empirical investigations. Information

extracted from the options market was shown to improve forecast performance

for volatility over both intraday and daily horizons, as well as in the case of

conditional correlation forecasting. These improvements are particularly strik-

ing when compared to historical measures based on daily closing price returns

rather than high-frequency measures. The success of the options based informa-

tion across three separate datasets and applications highlights the usefulness of

the general approach of filtering forecasts from the options market rather than

relying on historical measures alone. The majority of previous studies into the

use of options measures for the purposes of forecasting have been confined to

pairwise comparisons of volatility over the daily horizon or longer. Hence, this

dissertation extends the literature by furthering the understanding of the use

of options market based information in this daily volatility forecasting context,

as well as broadening the use of implied measures to the related fields of intraday

volatility and conditional correlation matrix forecasting.

Given the results and conclusions of this thesis, some further avenues for re-

search are now proposed. It has been discussed that taking into account the

volatility risk premium leads to superior forecasts in low volatility periods, but

inferior forecasts in high volatility periods. It is posited that this is related to

the estimation process yielding an unconditional estimate of the volatility risk

premium, rather than of the premium in the current state. As it is likely that

the volatility risk premium is strongly related to the level of volatility, perhaps

an estimate of the conditional volatility risk premium is more appropriate for

the purposes of forecasting. If an accurate and simple estimation framework

for the conditional volatility risk premium can be found, and may be updated


daily, then it would be interesting to see whether this leads to a risk-adjusted implied

volatility forecast also performing well in more volatile periods. However, a

macroeconomic variable that reflects the conditional volatility risk premium

and updates at a daily frequency is not currently known, which limits such

an investigation. Further, the premise that the jump risk premium is also fixed is

highly unlikely to hold. Unfortunately, relaxing this restriction would require significant

progress in theoretical econometrics before it could be utilised in an empirical

setting such as in this thesis. Another avenue for potential further research

concerns the usefulness of correlation forecasting generally. This dissertation

showed that realised and implied measures of correlation were superior for out-of-sample

forecasting relative to daily measures. However, in portfolio allocation

problems, one must also consider the vector of expected returns as well as the

expected variances of each of the individual assets and their correlations.

While forecasting methods for equicorrelation were evaluated, it would

be interesting to examine the economic value of these forecasts in a practical

setting, and their importance compared to expected returns and variances.


Appendix A

Appendix

A.1 Decomposition of Expectation of Squared Realised Volatility

\begin{align}
E\left[\left(RV_t^{(m)}\right)^2 \,\middle|\, \mathcal{F}_{t-1}\right]
&= E\left[\left(\sum_{i=1}^{m} r_{i,m,t}^{2}\right)^{2} \,\middle|\, \mathcal{F}_{t-1}\right] \nonumber \\
&= \sum_{i=1}^{m} E\left[r_{i,m,t}^{4} \,\middle|\, \mathcal{F}_{t-1}\right]
 + 2\sum_{i=1}^{m-1}\sum_{j=i+1}^{m} E\left[r_{i,m,t}^{2}\, r_{j,m,t}^{2} \,\middle|\, \mathcal{F}_{t-1}\right] \nonumber \\
&= \sum_{i=1}^{m} \frac{\sigma_t^{4}}{m^{2}}\, E\left[\frac{r_{i,m,t}^{4}}{\sigma_t^{4}/m^{2}} \,\middle|\, \mathcal{F}_{t-1}\right]
 + 2\sum_{i=1}^{m-1}\sum_{j=i+1}^{m} E\left[r_{i,m,t}^{2}\, r_{j,m,t}^{2} \,\middle|\, \mathcal{F}_{t-1}\right] \nonumber \\
&= \frac{3\sigma_t^{4}}{m} + \frac{2\sigma_t^{4}}{m^{2}} \cdot \frac{1}{2}\, m(m-1) \nonumber \\
&= \frac{2+m}{m}\,\sigma_t^{4}
\tag{A.1}
\end{align}
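The final expression in (A.1) can be checked by Monte Carlo simulation under the maintained assumption of i.i.d. Gaussian intraday returns with conditional variance $\sigma_t^2/m$; the sample sizes below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

# Monte Carlo check of (A.1): with m i.i.d. Gaussian intraday returns of
# variance sigma^2 / m, E[(RV)^2] = (2 + m) / m * sigma^4.
m, sigma2, n_sims = 48, 1.0, 200_000
r = rng.standard_normal((n_sims, m)) * np.sqrt(sigma2 / m)
rv = (r ** 2).sum(axis=1)          # realised volatility for each simulated day

simulated = (rv ** 2).mean()
theoretical = (2 + m) / m * sigma2 ** 2
print(simulated, theoretical)      # the two should agree closely
```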


A.2 Statistical Loss Functions and their Derivatives

\[
\begin{array}{llll}
\text{Function} & \text{Functional Form} & \dfrac{\partial}{\partial f_t^i} & \left.\dfrac{\partial^2}{\partial (f_t^i)^2}\right|_{\phi_t = f_t^i} \\[2ex]
\text{MSE}_{i,t} & (\phi_t - f_t^i)^2 & -2(\phi_t - f_t^i) & 2 \\[1ex]
\text{MSE-LOG}_{i,t} & (\log\phi_t - \log f_t^i)^2 & -\dfrac{2}{f_t^i}(\log\phi_t - \log f_t^i) & \dfrac{2}{(f_t^i)^2} \\[1ex]
\text{MSE-SD}_{i,t} & \left(\sqrt{\phi_t} - \sqrt{f_t^i}\right)^2 & \dfrac{\sqrt{f_t^i} - \sqrt{\phi_t}}{\sqrt{f_t^i}} & \dfrac{1}{2 f_t^i} \\[1ex]
\text{MSE-prop}_{i,t} & \left(\dfrac{\phi_t}{f_t^i} - 1\right)^2 & \dfrac{2\phi_t(f_t^i - \phi_t)}{(f_t^i)^3} & \dfrac{2}{(f_t^i)^2} \\[1ex]
\text{MAE}_{i,t} & \left|\phi_t - f_t^i\right| & \dfrac{-(\phi_t - f_t^i)}{\left|\phi_t - f_t^i\right|} & 0 \\[1ex]
\text{MAE-LOG}_{i,t} & \left|\log\phi_t - \log f_t^i\right| & \dfrac{-(\log\phi_t - \log f_t^i)}{f_t^i\left|\log\phi_t - \log f_t^i\right|} & 0 \\[1ex]
\text{MAE-SD}_{i,t} & \left|\sqrt{\phi_t} - \sqrt{f_t^i}\right| & \dfrac{1 - \sqrt{\phi_t}/\sqrt{f_t^i}}{2\left|\sqrt{\phi_t} - \sqrt{f_t^i}\right|} & 0 \\[1ex]
\text{MAE-prop}_{i,t} & \left|\dfrac{\phi_t}{f_t^i} - 1\right| & \dfrac{\phi_t(f_t^i - \phi_t)}{(f_t^i)^3\left|\dfrac{\phi_t}{f_t^i} - 1\right|} & 0 \\[1ex]
\text{QLIKE}_{i,t} & \log f_t^i + \dfrac{\phi_t}{f_t^i} & \dfrac{f_t^i - \phi_t}{(f_t^i)^2} & \dfrac{1}{(f_t^i)^2}
\end{array}
\]

Table A.1: Statistical loss functions and their derivatives
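The tabulated first derivatives can be verified numerically against a finite-difference approximation. A minimal sketch for the QLIKE loss follows; the function names are illustrative, not part of the thesis code:

```python
import numpy as np

# Two of the statistical loss functions from Table A.1, with a central
# finite-difference check of the tabulated first derivative of QLIKE.
def mse(phi, f):
    return (phi - f) ** 2

def qlike(phi, f):
    return np.log(f) + phi / f

def qlike_grad(phi, f):
    # Tabulated derivative with respect to the forecast f: (f - phi) / f^2
    return (f - phi) / f ** 2

phi, f, h = 1.3, 0.9, 1e-6
numeric = (qlike(phi, f + h) - qlike(phi, f - h)) / (2 * h)
print(numeric, qlike_grad(phi, f))  # the two should match closely
```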


A.3 Model Confidence Set results for remaining equicorrelation measures, loss functions, and test statistics
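The range and semi-quadratic test statistics reported in the tables below are built from pairwise loss differentials between models. A minimal sketch of the range statistic on simulated loss series follows; the data and model set are placeholders, and the full Model Confidence Set procedure additionally bootstraps this statistic's distribution and iteratively eliminates inferior models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of the range statistic: the largest absolute t-statistic over all
# pairwise loss differentials among the models currently in the set.
n_obs, n_models = 500, 4
losses = rng.standard_normal((n_obs, n_models)) ** 2  # hypothetical loss series
losses[:, 0] *= 1.2                                   # make model 0 slightly worse

t_stats = []
for i in range(n_models):
    for j in range(i + 1, n_models):
        d = losses[:, i] - losses[:, j]  # loss differential series d_{ij,t}
        t_stats.append(d.mean() / (d.std(ddof=1) / np.sqrt(n_obs)))

T_R = max(abs(t) for t in t_stats)  # range statistic
print(T_R)  # large values suggest at least one model is inferior
```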


Table A.2: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the semi-quadratic statistic when realised equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XX XX

ρt+2 XXX X

ρt+3 XXX

ρt+4 XXX X X X

ρt+5 X XX XX XXX XX XX XXX X X XX

ρt+6 XX XX XXX XX XX XX X XX

ρt+7 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+8 XX XX XXX XXX XXX XXX XX X X XXX

ρt+9 XX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+10 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+11 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+13 XX XX XXX XXX XXX XXX XX XX XX XX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+19 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+20 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+21 X XXX XXX XXX XXX XXX XXX X X XX

ρt+22 XX XXX XXX XXX XXX XXX XXX XX XX XX


Table A.3: Summary of Model Confidence Set p-values using the QLIKE loss function under the range statistic when realised equicorrelation is the Xt

measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 XXX

ρt+3 XXX

ρt+4 XXX

ρt+5 XX XXX XXX XXX XXX XXX XXX XXX XX XX

ρt+6 XX XX XX XXX XX XX XX XX XX XX

ρt+7 XX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+8 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+9 XX XXX XXX XXX XXX XXX XXX XX X XXX

ρt+10 XX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+12 XX XXX XXX XXX XXX XXX XXX XXX XX XXX

ρt+13 XX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+19 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+20 XX XXX XXX XXX XXX XXX XXX XX XX XX

ρt+21 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+22 XX XXX XXX XXX XXX XXX XXX XX XX XX


Table A.4: Summary of Model Confidence Set p-values using the QLIKE loss function under the semi-quadratic statistic when realised equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 XXX

ρt+3 XXX

ρt+4 XXX

ρt+5 X XX XX XXX XXX XXX XX X X XX

ρt+6 X XX XX XXX XX XX XX X XX XX

ρt+7 X XXX XXX XXX XXX XXX XXX X XX XXX

ρt+8 X XX XXX XXX XXX XXX XX X XXX

ρt+9 X XXX XXX XXX XXX XXX XXX X X XXX

ρt+10 X XX XX XXX XX XX XX X XX XX

ρt+11 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+12 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+13 X XXX XXX XXX XXX XXX XXX X X XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+17 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+18 XX XX XXX XXX XXX XXX XXX XX XX XX

ρt+19 XX XXX XXX XXX XXX XXX XXX XX XX XX

ρt+20 XX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+21 XX XXX XXX XXX XXX XXX XXX XX XX XX

ρt+22 XX XXX XXX XXX XXX XXX XXX XX XX XX


Table A.5: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the range statistic when realised diagonal equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX

ρt+2 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+3 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+4 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+5 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+6 XXX XX XXX XXX XXX XXX XXX XXX

ρt+7 XXX XX XXX XXX XXX XXX XXX XXX

ρt+8 X XXX XXX XXX XXX XXX XXX XXX

ρt+9 XXX XXX XXX XXX XXX XXX XXX

ρt+10 XXX XXX XXX XXX XXX XXX XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX

ρt+13 XXX XXX XXX XXX XXX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX

ρt+19 XXX XXX XXX XXX XXX XXX XXX

ρt+20 XXX XXX XXX XXX XXX XXX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX

ρt+22 XXX X XXX XXX XXX XXX XXX X XXX


Table A.6: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the semi-quadratic statistic when realised diagonal equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX

ρt+2 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+3 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+4 XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+5 XXX XXX XXX XXX XXX XXX XXX XX XXX

ρt+6 XXX XX XXX XXX XXX XXX XXX XXX XXX

ρt+7 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+8 X XXX XXX XXX XXX XXX XXX XX XXX

ρt+9 XXX XX XXX XXX XXX XXX XXX XXX XXX

ρt+10 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+11 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+12 XXX XX XXX XXX XXX XXX XXX XXX XXX

ρt+13 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+14 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+15 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+16 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+17 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+18 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+19 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+20 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+21 XXX X XXX XXX XXX XXX XXX X XXX

ρt+22 XXX X XXX XXX XXX XXX XXX XX XXX


Table A.7: Summary of Model Confidence Set p-values using the QLIKE loss function under the range statistic when realised diagonal equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX

ρt+2 XX XX XX XXX XX XX XX XXX XX XXX

ρt+3 XXX XXX XXX XXX XXX XXX XXX

ρt+4 XX XX XX XXX XXX XX XX XXX

ρt+5 XXX X XXX XXX XXX XXX XXX XXX

ρt+6 XXX XXX XXX XXX XXX XXX XXX

ρt+7 XXX XXX XXX XXX XXX XXX XXX

ρt+8 XXX XXX XXX XXX XXX XXX XXX

ρt+9 XXX XXX XXX XXX XXX XXX XXX

ρt+10 XXX XXX XXX XXX XXX XXX XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX

ρt+13 XXX XXX XXX XXX XXX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX X XXX

ρt+19 XXX XXX XXX XXX XXX XXX XXX

ρt+20 XXX XXX XXX XXX XXX XXX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX

ρt+22 XXX XXX XXX XXX XXX XXX XXX


Table A.8: Summary of Model Confidence Set p-values using the QLIKE loss function under the semi-quadratic statistic when realised diagonal equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX

ρt+2 XX XX XX XXX XXX XX XX XX XX XXX

ρt+3 XX XXX XXX XXX XXX XXX XXX

ρt+4 XX X XX XXX XXX XXX XXX XXX

ρt+5 XXX XX XXX XXX XXX XXX XXX X XXX

ρt+6 XXX XXX XXX XXX XXX XXX X XXX

ρt+7 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+8 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+9 XXX X XXX XXX XXX XXX XXX X XXX

ρt+10 XXX XXX XXX XXX XXX XXX X XXX

ρt+11 XXX X XXX XXX XXX XXX XXX X XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX

ρt+13 XXX X XXX XXX XXX XXX XXX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX X XXX

ρt+15 XXX XXX XXX XXX XXX XXX X XXX

ρt+16 XXX XXX XXX XXX XXX XXX XX XXX

ρt+17 XXX XXX XXX XXX XXX XXX X XXX

ρt+18 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+19 XXX XXX XXX XXX XXX XXX XX XXX

ρt+20 XXX XXX XXX XXX XXX XXX X XXX

ρt+21 XXX XXX XXX XXX XXX XXX X XXX

ρt+22 XXX XXX XXX XXX XXX XXX X XXX


Table A.9: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the range statistic when Spearman equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 X X XXX XX XX X XX XX

ρt+3 XXX XX XX XX

ρt+4 X X X XXX XX XX X X X XX

ρt+5 XX XX XX XXX XX XX XX X X XX

ρt+6 XX XX XX XXX XX XX XX X XX XX

ρt+7 XX XX XX XXX XX XXX XX X XX XXX

ρt+8 XX XX XXX XXX XXX XX XX XX XX XXX

ρt+9 X X XXX XXX XXX XXX X X X XXX

ρt+10 XX XXX XXX XXX XXX XX XXX

ρt+11 XXX X XXX XXX XXX XXX XXX X X XXX

ρt+12 XXX X XXX XXX XXX XXX XXX X X XXX

ρt+13 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+14 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+15 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+17 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+18 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+19 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+20 XXX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+22 XXX XX XXX XXX XXX XXX XXX XX XX XXX


Table A.10: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the semi-quadratic statistic when Spearman rank equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 X X X XXX X X X X X

ρt+3 X XXX XX XX XX

ρt+4 X X X XXX XX XX X XX

ρt+5 XX XX XX XXX XX XX XX X XX

ρt+6 XX XX XX XXX XX XX XX X X XX

ρt+7 XX XX XX XXX XXX XXX XX XX XXX

ρt+8 XX XX XXX XXX XXX XXX XX X XX XXX

ρt+9 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+10 XXX XX XXX XXX XXX XXX XX X X XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+12 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+13 XXX XX XXX XXX XXX XXX XXX X XX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX X XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX X XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+19 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+20 XXX XX XXX XXX XXX XXX XXX X XX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+22 XXX XX XXX XXX XXX XXX XXX X X XXX


Table A.11: Summary of Model Confidence Set p-values using the QLIKE loss function under the range statistic when Spearman rank equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 XXX X X XXX X X

ρt+3 XXX

ρt+4 XXX

ρt+5 XX XX XX XXX XX XX XX XX XX

ρt+6 X XX XX XXX XX XX X X X XX

ρt+7 XX XX XX XXX XX XX XX X XX XX

ρt+8 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+9 XX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+10 X XX XXX XXX XXX XXX XX XX XXX

ρt+11 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+12 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+13 XX XX XXX XXX XXX XXX XX XX XX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+19 XX XX XXX XXX XXX XXX XXX XX XX XXX

ρt+20 XXX XXX XXX XXX XXX XXX XXX XX XX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX XX XXX XXX

ρt+22 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX


Table A.12: Summary of Model Confidence Set p-values using the QLIKE loss function under the semi-quadratic statistic when Spearman rank equicorrelation is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX

ρt+2 XXX X X XXX X X

ρt+3 XXX

ρt+4 XXX

ρt+5 XX XX XX XXX XX XX XX X XX

ρt+6 X X X XXX X X X X

ρt+7 XX XX XX XXX XX XX XX X XX

ρt+8 XX XXX XXX XXX XXX XX XXX X XXX

ρt+9 XX XXX XXX XXX XXX XXX XXX X XXX

ρt+10 XX XX XX XXX XXX XXX XX XX XXX

ρt+11 XX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+12 XX XX XXX XXX XXX XXX XX X XXX

ρt+13 XXX XX XXX XXX XXX XXX XX X XX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+15 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+16 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+18 XXX XX XXX XXX XXX XXX XXX X XXX

ρt+19 XX XX XXX XXX XXX XXX XXX X XXX

ρt+20 XX XX XXX XXX XXX XXX XXX X XXX

ρt+21 XX XX XXX XXX XXX XXX XXX X XXX

ρt+22 XX XX XXX XXX XXX XXX XXX X XX


Table A.13: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the range statistic when ut is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 X X XXX XXX XXX X XXX XXX X

ρt+2 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+3 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+4 XXX XXX XXX XXX XXX XXX XXX X X XXX

ρt+5 XXX XXX XXX XXX XXX XXX XXX X X XXX

ρt+6 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+7 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+8 XXX XX XXX XXX XXX XXX XXX XXX

ρt+9 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+10 XXX XX XXX XXX XXX XXX XXX X XX XXX

ρt+11 XXX XXX XXX XXX XXX XXX XXX

ρt+12 XXX X XXX XXX XXX XXX XXX X XXX

ρt+13 XXX XXX XXX XXX XXX XXX XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX

ρt+15 XXX X XXX XXX XXX XXX XXX X XXX

ρt+16 XXX X XXX XXX XXX XXX XXX X XXX

ρt+17 XXX XXX XXX XXX XXX XXX XXX

ρt+18 XXX X XXX XXX XXX XXX XXX X XXX

ρt+19 XXX XXX XXX XXX XXX XXX XX XXX

ρt+20 XXX XXX XXX XXX XXX XXX XXX

ρt+21 XXX XXX XXX XXX XXX XXX XXX

ρt+22 XXX X XXX XXX XXX XXX XXX X XXX


Table A.14: Summary of Model Confidence Set p-values using the Mean-Square-Error loss function under the semi-quadratic statistic when ut is the Xt

measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

Mean Square Error Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XX XXX XXX XXX XXX XXX XXX XX

ρt+2 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+3 XXX XXX XXX XXX XXX XXX XXX XXX XXX XXX

ρt+4 XXX XXX XXX XXX XXX XXX XXX X XXX XXX

ρt+5 XXX XXX XXX XXX XXX XXX XXX X XX XXX

ρt+6 XXX XXX XXX XXX XXX XXX XXX XX XXX

ρt+7 XXX XXX XXX XXX XXX XXX XXX XX XXX

ρt+8 XXX XXX XXX XXX XXX XXX XXX XX XXX

ρt+9 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+10 XXX XX XXX XXX XXX XXX XXX X XXX XXX

ρt+11 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+12 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+13 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+14 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+15 XXX XX XXX XXX XXX XXX XXX XX XXX

ρt+16 XXX X XXX XXX XXX XXX XXX X XXX

ρt+17 XXX X XXX XXX XXX XXX XXX X XXX

ρt+18 XXX X XXX XXX XXX XXX XXX X XXX

ρt+19 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+20 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+21 XXX X XXX XXX XXX XXX XXX XX XXX

ρt+22 XXX X XXX XXX XXX XXX XXX XX XXX


Table A.15: Summary of Model Confidence Set p-values using the QLIKE loss function under the range statistic when ut is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX XXX XXX

ρt+2 XX XX XX XXX XX XX XX XX X XX

ρt+3 XXX XXX XX XXX XXX XXX XXX XX XX XXX

ρt+4 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+5 XXX XXX X XXX XXX XXX XXX XXX

ρt+6 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+7 XXX XXX XXX XXX XXX XXX XXX XXX

ρt+8 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+9 XXX XX XX XXX XXX XXX XXX X XXX

ρt+10 XXX X XXX XXX XXX XXX XXX X XXX

ρt+11 XXX XXX XXX XXX XXX XX XXX

ρt+12 XX X XX XXX XXX XXX XX X XX

ρt+13 XXX XXX XXX XXX XXX XXX XXX

ρt+14 XXX XX XXX XXX XXX XXX XXX XXX

ρt+15 XXX X XXX XXX XXX XXX XXX X XXX

ρt+16 XX X XX XXX XXX XX XX X XX

ρt+17 XXX X XX XXX XXX XXX XXX X XXX

ρt+18 XXX XXX XXX XXX XXX XXX XXX

ρt+19 XX X XX XXX XXX XX XX X XX

ρt+20 XXX XXX XXX XXX XXX XXX XXX

ρt+21 XX XX XXX XXX XXX XXX XXX

ρt+22 XXX X XXX XXX XXX XXX XXX X XXX


Table A.16: Summary of Model Confidence Set p-values using the QLIKE loss function under the semi-quadratic statistic when ut is the Xt measure. X indicates a p-value between 0.05 and 0.10, XX between 0.10 and 0.20, and XXX greater than 0.20.

QLIKE Tr p-values of Model Confidence Set

Horizon cDCC ut ut-IC REC REC-IC SREC SREC-IC DREC DREC-IC IC

ρt+1 XXX XXX XXX XXX XXX

ρt+2 X X X XXX XX XX X X X

ρt+3 X X X XXX XX XXX XXX X XXX

ρt+4 XX XX XX XXX XXX XXX XX XX

ρt+5 XXX XX X XXX XXX XXX XXX XXX

ρt+6 XXX XX XXX XXX XXX XXX XXX X XXX

ρt+7 XX XX XX XXX XXX XXX XX X XX

ρt+8 XX XX XX XXX XXX XXX XX X XXX

ρt+9 X X X XXX XXX XX XX X XX

ρt+10 XX X XX XXX XXX XXX XX X XXX

ρt+11 XXX XXX XXX XXX XXX XX X XXX

ρt+12 XX XX XXX XXX XXX XX XX

ρt+13 XXX X XXX XXX XXX XXX XXX X XXX

ρt+14 XXX XXX XXX XXX XXX XXX XXX X XXX

ρt+15 XXX XX XXX XXX XXX XXX X XXX

ρt+16 X X X XXX XXX X X X X

ρt+17 XX X XXX XXX XX XX XX

ρt+18 XX XX XXX XXX XXX XXX X XXX

ρt+19 XX X XX XXX XXX XXX XX X XXX

ρt+20 XXX XXX XXX XXX XXX XXX X XXX

ρt+21 X XX XXX XXX XX XXX XX

ρt+22 XXX XXX XXX XXX XXX XXX X XXX


Bibliography

Aielli, G. P., 2009, “Dynamic Conditional Correlations: On Properties and Estimation”, SSRN Working Paper, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1507743

Amin, K. and Ng, V. 1997, “Inferring Future Volatility from the Information

in Implied Volatility in Eurodollar Options: A New Approach”, Review of

Financial Studies, 10, 333-367.

Andersen, T., 1996, “Return volatility and trading volume: an information

flow interpretation of stochastic volatility”, Journal of Finance, 51, 169-204.

Andersen, T. G. and Bollerslev, T. 1997, “Intraday periodicity and volatility

persistence in financial markets”, Journal of Empirical Finance, 4, 115-158.

Andersen, T. G. and Bollerslev, T. 1998, “Answering the Skeptics: Yes, Stan-

dard Volatility Models Do Provide Accurate Forecasts”, International Eco-

nomic Review, 39:4, 885-905.

Andersen, T. G., Bollerslev, T., Christoffersen, P. F., and Diebold, F. X., 2006,

“Volatility and Correlation Forecasting”, In Elliott, G., Granger, C. W. J., and

Timmermann, A., Handbook of Economic Forecasting, Volume 1, 778-878.

Andersen, T. G., Bollerslev, T., Diebold, F. X., and Labys, P., 2000, “Great

Realizations”, Risk, March, 2000, 105-108.


Andersen, T. G., Bollerslev, T., Diebold, F. X., and Labys, P., 2001. “The

distribution of exchange rate volatility”. Journal of the American Statistical

Association, 96, 42-55.

Andersen, T. G., Bollerslev, T., Diebold, F. X., and Labys, P., 2003, “Modeling

and Forecasting Realized Volatility”, Econometrica, 71, 579-625.

Andersen, T. G., Bollerslev, T., Meddahi, N., 2004. “Analytical Evaluation of

Volatility Forecasts”, International Economic Review, 45, 1079-1110.

Andersen, T. G., Bollerslev, T., Diebold, F. X., and Vega, C., 2007, “Real-time

price discovery in global stock, bond and foreign exchange markets”, Journal

of International Economics, 73:2, 251-277.

Andersen, T. G., Nam, K., and Vahid, F., 1999, “Asymmetric nonlinear

smooth transition GARCH models”, In: Rothman, P. (Ed.): nonlinear time

series analysis of economic and financial data, 191-207. Kluwer, Boston.

Anthony, J. H., 1988, “The Interrelation of Stock and Options Market Trading

Volume Data”, Journal of Finance, 43, 949-964.

Arteche, J., 2004, “Gaussian semiparametric estimation in long memory in

stochastic volatility and signal plus noise models”, Journal of Econometrics,

119, 131-154.

Asai, M., and McAleer, M., 2003, “Dynamic leverage and threshold effects in

stochastic volatility models”, unpublished paper, Faculty of Economics, Tokyo

Metropolitan University.

Baillie, R. T., Bollerslev, T. and Mikkelsen, H. O., 1996, “Fractionally Integrated

Generalized Autoregressive Conditional Heteroscedasticity”, Journal of

Econometrics, 74:1, 3-30.


Baillie, R. T., and Morana, C., 2009, “Modelling Long Memory and Structural

Breaks in Conditional Variances: An Adaptive FIGARCH Approach”, Journal

of Economic Dynamics and Control, 33, 1557-1592.

Barndorff-Nielsen, O. E., Shephard, N., 2001, “Superposition of Ornstein-

Uhlenbeck type processes”, Theory of Probability and its Applications, 46,

175-194.

Barndorff-Nielsen, O. E., and Shephard, N., 2002, “Econometric analysis of re-

alised volatility and its use in estimating stochastic volatility models”, Journal

of the Royal Statistical Society Series B, 64, 253-280.

Barndorff-Nielsen, O. E., and Shephard, N., 2003, “Realised power variation

and stochastic volatility”, Bernoulli, 9, 243-265.

Barndorff-Nielsen, O. E., and Shephard, N., 2004, “Power and bipower vari-

ation with stochastic volatility and jumps (with discussion)”, Journal of Fi-

nancial Econometrics, 2, 1-48.

Barndorff-Nielsen, O. E., and Shephard, N., 2004, “A Feasible Central Limit

Theory for Realised Volatility under Leverage”, Manuscript, Nuffield College,

Oxford University.

Barndorff-Nielsen, O. E., and Shephard, N., 2004, “Econometric Analysis of

Realized Covariation: High Frequency Based Covariance, Regression, and Cor-

relation in Financial Economics”, Econometrica, 72:3, 885-925.

Barndorff-Nielsen, O. E., and Shephard, N., 2006, “Econometrics of Testing for

Jumps in Financial Economics Using Bipower Variation”, Journal of Financial

Econometrics, 4:1, 1-30.

Barndorff-Nielsen, O., Hansen, P. R., Lunde, A., and Shep-

hard, N., 2010, “Multivariate realised kernels: consistent posi-

tive semi-definite estimators of the covariation of equity prices

with noise and non-synchronous trading”, SSRN Working Paper,

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1154144

Bates, D. S., 1996, “Jumps and Stochastic Volatility: Exchange Rate Processes

Implicit in Deutsche Mark Options”, Review of Financial Studies, 9:1, 69-107.

Bates, D. S., 2000, “Post-‘87 Crash Fears in the S&P500 Futures Option Mar-

ket”, Journal of Econometrics, 94, 181-238.

Bauwens, L., Laurent, S., and Rombouts, J. V. K., 2006, “Multivariate GARCH

Models: A Survey”, Journal of Applied Econometrics, 21, 79-109.

Becker, R., and Clements, A. E., 2008, “Are combination forecasts of S&P

500 volatility statistically superior?”, International Journal of Forecasting,

24, 122-133.

Becker, R., Clements, A. E., and Hurn, S., 2011, “Semi-parametric forecast-

ing of realized volatility”, forthcoming in Studies in Nonlinear Dynamics and

Econometrics.

Becker, R., Clements, A. E., and McClelland, A. J., 2009, “The jump com-

ponent of the S&P 500 volatility and the VIX index”, Journal of Banking &

Finance, 33:6, 1033-1038.

Becker, R., Clements, A. E., and White, S. I., 2006, “Does implied volatil-

ity provide any information beyond that captured in model-based volatility

forecasts?”, Journal of Banking & Finance, 31, 2535-2549.

Becker, R., Clements, A. E., and White, S. I., 2007, “On the informational ef-

ficiency of S&P500 implied volatility”, North American Journal of Economics

and Finance, 17, 139-153.

Beckers, S. 1981, “Standard Deviations Implied in Option Prices as Predictors

of Future Stock Price Variability”, Journal of Banking & Finance, 5, 363-382.

Bera, A. K., and Kim, S., 2002, “Testing constancy of correlation and other

specifications of the BGARCH model with an application in international eq-

uity returns”, Journal of Empirical Finance, 9, 171-195.

Berger, D., Chaboud, A., and Hjalmarsson, E., 2009, “What drives volatility

persistence in the foreign exchange market?”, Journal of Financial Economics,

94, 192-213.

Bessembinder, H., and Seguin, P. J., 1992, “Futures trading activity and stock

price volatility”, Journal of Finance, 47, 2015-2034.

Black, F., 1975, “Fact and Fantasy in the Use of Options”, Financial Analysts

Journal, 31, 36-41.

Black, F., 1986, “Noise”, Journal of Finance, 41, 529-543.

Black, F. and Scholes, M. 1973, “The Pricing of Options and Corporate Lia-

bilities”, Journal of Political Economy, 81, 637-659.

Blair, B. J., Poon, S.-H., and Taylor, S. J., 2001, “Forecasting S&P 100 Volatility:

The Incremental Information Content of Implied Volatilities and High Fre-

quency Index Returns”, Journal of Econometrics, 105, 5-26.

Bollerslev, T., 1986, “Generalized Autoregressive Conditional Heteroskedas-

ticity”, Journal of Econometrics, 31, 307-327.

Bollerslev, T., 1987, “A conditionally heteroscedastic time series model for

speculative prices and rates of return”, Review of Economics and Statistics,

69, 542-547.

Bollerslev, T., 1990, “Modelling the Coherence in Short-Run Nominal Ex-

change Rates: A Multivariate Generalized ARCH model”, Review of Eco-

nomics and Statistics, 72, 498-505.

Bollerslev, T., Engle, R. F, and Wooldridge, J. M., 1988, “A Capital Asset

Pricing Model with Time-Varying Covariances”, Journal of Political Economy,

96:1, 116-131.

Bollerslev, T., Ghysels, E., 1996, “Periodic Autoregressive Conditional Het-

eroscedasticity”, Journal of Business & Economic Statistics, 14:2, 139-151.

Bollerslev, T., Gibson, M., Zhou, H., 2011, “Dynamic Estimation of Volatility

Risk Premia and Investor Risk Aversion from Option-Implied and Realized

Volatilities”, Journal of Econometrics, 160:1, 235-245.

Bollerslev, T. and Mikkelsen, H. O. 1996, “Modelling and Pricing Long Mem-

ory in Stock Market Volatility”, Journal of Econometrics, 73:1, 151-184.

Bollerslev, T., Tauchen, G., and Zhou, H., 2009, “Expected Stock Returns

and Variance Risk Premia.” Review of Financial Studies, 22:11, 4463-4492.

Bollerslev, T., and Zhou, H., 2002, “Estimating Stochastic Volatility Diffusion

Using Conditional Moments of Integrated Volatility”, Journal of Econometrics,

109, 33-65.

Brailsford, T. J. and Faff, R. W. 1996, “An Evaluation of Volatility Forecasting

Techniques”, Journal of Banking & Finance, 20:3, 419-438.

Breeden, D. T., and Litzenberger, R. H., 1978. “The Pricing of State-

Contingent Claims Implicit in Option Prices.”, Journal of Business, 51, 621-

651.

Breidt, F. J., Crato, N. and de Lima, P. 1998, “The detection and estimation

of long memory stochastic volatility”, Journal of Econometrics, 83, 325-348.

Britten-Jones, M., and Neuberger, A., 2000, “Option Prices, Implied Price

Processes, and Stochastic Volatility.”, Journal of Finance, 55:2, 839-866.

Campbell, J. Y., and Hentschel, L., 1992, “No News is Good News: an asym-

metric model of changing volatility in stock returns”, Journal of Financial

Economics, 31, 281-318.

Campbell, J. Y., Lo, A. W., MacKinlay, A. G., 1997, “The Econometrics of

Financial Markets”, Princeton University Press, Princeton NJ.

Canina, L. and Figlewski, S. 1993, “The Informational Content of Implied

Volatility”, Review of Financial Studies, 6:3, 659-681.

Cao, C., Chen, Z. , and Griffin, J., 2005, “Informational Content of Option

Volume Prior to Takeovers”, Journal of Business, 78, 1073-1109.

Cappiello, L., Engle, R. F., and Sheppard, K., 2006, “Asymmetric dynamics

in the correlations of global equity and bond returns”, Journal of Financial

Econometrics, 4, 537-572.

Castrén, O., and Mazzotta, S., 2005, “Foreign Exchange Option

and Returns based Correlation Forecasts: Evaluation and Two Ap-

plications”, European Central Bank Working Paper Series, 447,

http://ideas.repec.org/p/ecb/ecbwps/20050447.html

Chakravarty, S., Gulen, H., and Mayhew, S., 2004, “Informed Trading in Stock

and Option Markets”, Journal of Finance, 59, 1235-1258.

Chan, K., 1990, “Information in the Cash Market and Stock Index Futures

Market”, unpublished Ph.D. dissertation, Ohio State University, Columbus.

Chan, K., 1992, “A Further Analysis of the Lead–Lag Relationship Between

the Cash Market and Stock Index Futures Market”, The Review of Financial

Studies, 5:1, 123-152

Chan, K., Chung, Y. P. and Fong, W., 2002, “The Informational Role of Stock

and Option Volume”, Review of Financial Studies, 15, 1049-1075.

Chatrath, A., Ramchander, S., and Song, F., 1996, “The role of futures trading

activity in exchange rate volatility”, Journal of Futures Markets, 16, 561-584.

Chernov, M., and Ghysels, E. 2000, “A Study towards a Unified Approach to

the Joint Estimation of Objective and Risk Neutral Measures for the Purposes

of Options Valuation”, Journal of Financial Economics, 56:3, 407-458.

Chib, S., Omori, Y., and Asai, M., 2009, “Multivariate Stochastic Volatility”,

In: Andersen, T. G., Davis, R. A., Kreiss, J.-P., Mikosch, T. (Eds.),

Handbook of Financial Time Series. Springer, pp. 365-402.

Chicago Board Options Exchange, 2009, “CBOE S&P 500 Implied Correlation Index”,

http://www.cboe.com/micro/impliedcorrelation/ImpliedCorrelationIndicator.pdf

Chicago Board Options Exchange, 2003, “The CBOE Volatility Index”,

http://www.cboe.com/micro/vix/vixwhite.pdf

Chiras, D., and Manaster, S., 1978, “The Information Content of Option Prices

and a Test of Market Efficiency”, Journal of Financial Economics, 6, 213-234.

Christensen, T. M., Hurn, A. S., and Lindsay, K. A. 2008, “The Devil is in

the Detail: Hints for Practical Optimisation”, Economic Analysis and Policy,

38:2, 345-368.

Christensen, B. J., and Prabhala, N. R., 1998, “The Relation between Implied

and Realized Volatility”, Journal of Financial Economics, 50:2, 125-150.

Christoffersen, P., Feunou, B., Jacobs, K., and Meddahi, N., 2010,

“The Economic Value of Realized Volatility”, SSRN Working Paper,

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1572756

Christoffersen, P., Heston, S. L., and Jacobs, K., 2006, “Option valuation with

conditional skewness”, Journal of Econometrics, 131, 253-284.

Christoffersen, P., Heston, S. L., and Jacobs, K., 2009, “The Shape and Term

Structure of the Index Option Smirk: Why Multifactor Stochastic Volatility

Models Work So Well”, CREATES Working paper, 2009-34.

Clark, P. K., 1973, “A subordinated stochastic process model with finite vari-

ance for speculative prices”, Econometrica, 41, 135-156.

Clements, A. E., Hurn, A. S., White, S. I., 2003, “Discretised Non-Linear

Filtering of Dynamic Latent Variable Models with Application to Stochas-

tic Volatility”, Discussion Paper No 179, School of Economics and Finance,

Queensland University of Technology.

Cleveland, W. S., 1979, “Robust locally weighted regression and smoothing

scatterplots”, Journal of the American Statistical Association, 74, 829-836.

Cochrane, J.H., 2001. “Asset Pricing”, Princeton University Press: Princeton,

NJ.

Comte, F., and Renault, E., 1998, “Long memory in continuous-time stochas-

tic volatility models”, Mathematical Finance, 8, 291-323.

Cootner, P. H. 1964, “The Random Character of Stock Market Prices”, Risk

Books, Risk Publications, Haymarket, London, UK.

Corsi, F., 2009, “A Simple Approximate Long-Memory Model of Realized

Volatility”, Journal of Financial Econometrics, 7:2, 174-196.

Corsi, F., and Audrino, F., 2007, “Realized Correlation Tick-by-Tick”,

Universität St. Gallen Discussion Paper 2007-02,

http://www.vwa.unisg.ch/RePEc/usg/dp2007/DP02-Au.pdf

Darrat, A., and Rahman, S., 1995, “Has futures trading activity caused stock

price volatility?”, Journal of Futures Markets, 15, 537-557.

Day, T. E., and Lewis, M. 1992, “Stock Market Volatility and the Information

Content of Stock Index Options”, Journal of Econometrics, 52, 267-287.

de Jong, F., and Donders, M. W. M., 1998, “Intraday lead-lag relationships

between the futures-, options and stock market”, European Finance Review,

1, 337-359.

Derman, E., and Kani, I., 1994. “Riding on a Smile.” RISK, 7, 32-39.

Diebold, F. X., 1986, “Modeling the Persistence of Conditional Variances: A

Comment”, Econometric Reviews, 5, 51-56.

Diebold, F. X., and Inoue, A., 2001, “Long Memory and Regime Switching”,

Journal of Econometrics, 105, 131-159.

Diebold, F. X., and Mariano, R. S., 1995, “Comparing Predictive Accuracy”,

Journal of Business and Economic Statistics, 13, 253-263.

Ding, Z., Granger, C. W. J., and Engle, R. F., 1993, “A long memory property of

stock market returns and a new model”, Journal of Empirical Finance, 1,

83-106.

Doidge, C., and Wei, J. 1998, “Volatility Forecasting and the Efficiency of

the Toronto 35 Index Options Market”, Canadian Journal of Administrative

Sciences, 15:1, 28-38.

Driessen, J., Maenhout, P., and Vilkov, G., 2009, “Option Implied Correlations

and Correlation Risk”, Journal of Finance, 64:3, 1061-1519.

Duffie, D., Pan, J, and Singleton, K. 2000, “Transform Analysis and Asset

Pricing for Affine Jump-Diffusions”, Econometrica, 68:6, 1343-1376.

Dumas, B., Fleming, J., and Whaley, R. E., 1998, “Implied Volatility Func-

tions: Empirical Tests”, Journal of Finance, 53:6, 2059-2106.

Dupire, B., 1994, “Pricing with a Smile.”, RISK, 7, 18-20.

Easley, D., O’Hara, M., and Srinivas, P. S., 1998, “Option Volume and Stock

Prices: Evidence on Where Informed Traders Trade”, Journal of Finance, 53,

431-465.

Ederington, L. H. and Guan, W. 1999, “The Information Frown in Option

Prices”, working paper, University of Oklahoma.

Ederington, L. H. and Guan, W. 2000, “Forecasting Volatility”, working paper,

University of Oklahoma.

Ederington, L. H. and Guan, W. 2002, “Is Implied Volatility an Information-

ally Efficient and Effective Predictor of Future Volatility”, Journal of Risk,

4:Spring, 29-46.

Ellul A., Holden, C. W., Jain, P., and Jennings, R., 2003, “Determinants of

Order Choice on the New York Stock Exchange”, manuscript, Indiana Uni-

versity.

Elton, E. J., Gruber, M. J., 1973, “Estimating the Dependence Structure

of Share Prices - Implications for Portfolio Selection”, Journal of Finance, 28,

1203-1232.

Engle, R. F. 1982, “Autoregressive Conditional Heteroscedasticity with Es-

timates of the Variance of United Kingdom Inflation”, Econometrica, 50:4,

987-1007.

Engle, R. F., 2002, “Dynamic Conditional Correlation: A Simple Class of Mul-

tivariate Generalized Autoregressive Conditional Heteroskedasticity Models”,

Journal of Business and Economic Statistics, 20:3, 339-350.

Engle, R. F., 2005, “Integrating Investment Risk and Trading Risk”,

manuscript, Department of Finance, NYU.

Engle, R. F., and Bollerslev, T., 1986, “Modelling the Persistence of Condi-

tional Variances”, Econometric Reviews, 5, 1-50.

Engle, R. F., and Gallo, G. M., 2006, “A multiple indicators model for volatil-

ity using intra-daily data”, Journal of Econometrics, 131, 3-27.

Engle, R. F., and Kelly, B. T., 2008, “Dynamic Equicorrelation”, NYU Working

Paper, http://pages.stern.nyu.edu/~rengle/Dynamic%20Equicorrelation.pdf

Engle, R. F., and Kelly, B. T., 2009, “Dynamic Equicorrelation”, SSRN Working

Paper, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1354525

Engle, R. F., and Kroner, F. K., 1995, “Multivariate simultaneous generalized

ARCH”, Econometric Theory, 11, 122-150.

Engle, R. F., and Lee, G. G. J., 1999, “A permanent and transitory component

model of stock return volatility”, In: Engle, R. F., White, H. (Eds.), Coin-

tegration, Causality, and Forecasting: A Festschrift in Honor of Clive W. J.

Granger, Oxford University Press, Oxford, pp. 475-497.

Engle, R. F., and Ng, V. K., 1993, “Measuring and Testing the Impact of

News on Volatility”, Journal of Finance, 48, 1749-1777.

Epps, T. W., 1979, “Comovements in Stock Prices in the Very Short Run”,

Journal of the American Statistical Association, 74, 291-298.

Evans, K. P., and Speight, A. E. H., 2010, “Intraday periodicity, calendar and

announcement effects in Euro exchange rate volatility”, Research in Interna-

tional Business and Finance, 24, 82-101.

Fackler, P. L., and King, R. P., 1990, “Calibration of option-based proba-

bility assessments in agricultural commodity markets”, American Journal of

Agricultural Economics, 72, 73-83.

Fama, E. F., 1965, “The Behaviour of Stock Market Prices”, Journal of Busi-

ness, 38:1, 34-105.

Fleming, J., 1998, “The quality of market volatility forecasts implied by S&P

100 index option prices”, Journal of Empirical Finance, 5:4, 317-345.

Fleming, J., Ostdiek, B., and Whaley, R. E., 1995, “Predicting Stock Market

Volatility: A New Measure”, Journal of Futures Markets, 15:3, 265-302.

Frennberg, P., and Hansson, D., 1996, “An Evaluation of Alternative Models

for Predicting Stock Volatility”, Journal of Financial Markets, Institutions &

Money, 5, 117-134.

Fung, H-G., Lie, C-J. and Moreno, A., 1990, “The Forecasting Performance of

the Implied Standard Deviation in Currency Options”, Managerial Finance,

16:3, 24-29.

Fung, W. K. H. and Hsieh, D. A. 1991, “Empirical Analysis of Implied Volatil-

ity: Stocks, Bonds and Currencies”, working paper, Department of Finance,

Fuqua School of Business.

Gallant, A.R., Hsu, C.-T., Tauchen, G., 1999, “Using daily range data to

calibrate volatility diffusions and extract the forward integrated volatility”,

Review of Economics and Statistics, 81, 617-631.

Gemmill, G., 1986, “The Forecasting Performance of Stock Options on the

London Traded Options Markets”, Journal of Business, Accounting & Fi-

nance, 13:4, 535-546.

Ghysels, E., Santa-Clara, P., Valkanov, R., 2002, “The MIDAS touch: mixed

data sampling regression”, Discussion Paper UCLA and UNC available at:

http://www.unc.edu/~eghysels

Ghysels, E., Santa-Clara, P., Valkanov, R., 2006, “Predicting volatility: get-

ting the most out of return data sampled at different frequencies”, Journal of

Econometrics, 131, 59-95.

Giot, P., and Laurent, S., 2007, “The information content of implied volatility

in light of the jump/continuous decomposition of realized volatility”, Journal

of Futures Markets, 23, 289-305.

Glosten, L. R., Jagannathan, R., and Runkle, D. E., 1993, “On the relation

between the Expected Value and the Volatility of the Nominal Excess Return

on Stocks”, Journal of Finance, 48:5, 1779-1801.

Gonzalez-Rivera, G., 1998, “Smooth Transition GARCH Models”, Studies in

Nonlinear Dynamics and Econometrics, 3, 161-178.

Gourieroux, C., Jasiak, J., 2001, “Financial Econometrics”, Princeton Univer-

sity Press: Princeton NJ.

Granger, C. W. J., 2001, “Long Memory Processes - An Economist’s View-

point”, working paper 99-14, UC San Diego.

Granger, C. W. J., and Newbold, P., 1977, “Forecasting Economic Time Se-

ries”, Orlando, Florida: Academic Press.

Gray, S. F., 1996, “Modeling the Conditional Distribution of Interest Rates as

a Regime-Switching Process”, Journal of Financial Economics, 42, 27-62.

Griffiths, M., Smith, B., Turnbull, D., and White, R. W., 2000, “The costs and

the determinants of order aggressiveness”, Journal of Financial Economics,

56, 65-88.

Guo, D., 1996, “The Information Content of Implied Stochastic Volatility from

Currency Options”, Canadian Journal of Economics, 29:S, 559-561.

Haas, M., Mittnik, S., and Paolella, M. S., 2004, “A New Approach to Markov-

Switching GARCH Models”, Journal of Financial Econometrics, 2:4, 493-530.

Hagerud, G., 1997, “A New Non-Linear GARCH Model”, EFI Economic Re-

search Institute, Stockholm.

Hamilton, J. D., and Susmel, R., 1994, “Autoregressive Conditional Het-

eroskedasticity and Changes in Regime”, Journal of Econometrics, 64, 307-333.

Hansen, P. R., 2005, “A Test for Superior Predictive Ability”, Journal of Busi-

ness and Economic Statistics, 23, 365-380.

Hansen, P. R., Huang, Z., and Shek, H. H., 2010, “Realised GARCH: A Com-

plete Model of Returns and Realized Measures of Volatility”, CREATES Re-

search Paper 2010-13, Aarhus University.

Hansen, P. R., and Lunde, A., 2006, “Realised Variance and Market Mi-

crostructure Noise”, Journal of Business and Economic Statistics, 24, 127-

218.

Hansen, P. R., and Lunde, A., 2005, “Forecast Comparison of Volatility Mod-

els: Does Anything Beat a GARCH(1,1)?”, Journal of Applied Econometrics,

20, 873-889.

Hansen, P. R., Lunde, A., Nason, J. M., 2003. “Choosing the best volatility

models: The model confidence set approach”, Oxford Bulletin of Economics

and Statistics 65, 839-861.

Hansen, P. R., Lunde, A., and Nason, J. M., 2010, “The Model

Confidence Set”, SSRN Working Paper,

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=522382

Harris, F., McInish, T., Shoesmith, G., and Wood, R., 1995, “Cointegration,

error correction and price discovery on informationally-linked security mar-

kets”, Journal of Financial and Quantitative Analysis, 30, 563-581.

Harris, L., 1990, “Estimation of stock variance and serial covariance from

discrete observations”, Journal of Financial and Quantitative Analysis, 25,

291-306.

Harris, L., 1991, “Stock price clustering and discreteness”, Review of Financial

Studies, 4:3, 389-415.

Harvey, A. C., 1998, “Long-memory in stochastic volatility”, In: Forecasting

Volatility in Financial Markets (ed. J. Knight and S. E. Satchell). London:

Butterworth-Heinemann.

Harvey, A. C., Ruiz, E., and Shephard, N., 1994, “Multivariate stochastic vari-

ance models”, Review of Economic Studies, 61, 247-264.

Harvey, A. C., and Shephard, N., 1996, “Estimation of an asymmetric stochastic

volatility model for asset returns”, Journal of Business and Economic Statis-

tics, 14, 429-434.

Hasbrouck, J., 2004, “Empirical Market Microstructure: Economic and Sta-

tistical Perspectives on the Dynamics of Trade in Securities Markets”. Lecture

notes, Stern School of Business, New York University.

Hasbrouck J., and G. Saar, 2002, “Limit Orders and Volatility in a Hybrid

Market: The Island ECN”, Stern School of Business Working Paper, July

2002.

Hayashi, T., and Yoshida, N., 2005, “On covariance estimation of non-

synchronously observed diffusion processes”, Bernoulli, 11, 359-379.

Heston, S. L. 1993, “A Closed-Form Solution for Options with Stochastic

Volatility with Applications to Bond and Currency Options”, Review of Fi-

nancial Studies, 6:2, 327-343.

Heston, S. L. and Nandi, S. 2000, “A Closed-Form GARCH Option Valuation

Model”, Review of Financial Studies, 13:3, 585-625.

Hull, J. C. and White, A., 1987, “The Pricing of Options on Assets with

Stochastic Volatilities”, Journal of Finance, 42:2, 281-300.

Hull, J. C. and White, A., 1988, “An Analysis of the Bias in Option Pricing

Caused by a Stochastic Volatility”, Advances in Futures and Options Research,

3, 27-61.

Hurvich, C. M., and Soulier, P., 2009, “Stochastic Volatility Models with Long

Memory”, In: Andersen, T. G., Davis, R. A., Kreiss, J.-P., Mikosch, T. (Eds.),

Handbook of Financial Time Series. Springer, pp. 345-354.

Hwang, S. and Satchell, S. 1998, “Implied Volatility Forecasting: A Com-

parison of Different Procedures Including Fractionally Integrated Models with

Applications to U.K. Equity”, in Forecasting Volatility in Financial Markets.

John Knight and Stephen Satchell, eds. Butterworth Heinemann, ch. 7, 193-

225.

Ingersoll, J. E. Jr., 1975, “A Theoretical and Empirical Investigation of the

Dual Purpose Funds: An Application of Contingent Claim Analysis”, Sloan

School of Management Working Paper No. 782-75, MIT, Cambridge, Mass.

Jackwerth, J. C., 2000, “Recovering Risk Aversion from Option Prices and

Realised Returns.”, Review of Financial Studies, 13:2, 433-451.

Jacod, J., 1994, “Limit of random measures associated with the increments of a

Brownian semimartingale”, unpublished paper, Laboratoire de Probabilités,

Université P. et M. Curie, Paris.

Jacquier, E., Polson, N., and Rossi, P., 2004, “Bayesian analysis of stochastic

volatility models with fat-tails and correlated errors”, Journal of Economet-

rics, 122, 185-212.

Jarrow, R. A., and Eisenberg, L. K., 1991, “Option Pricing with Random

Volatilities in Complete Markets”, Federal Reserve Bank of Atlanta Working

Paper 91-16.

Jeantheau, T., 1998, “Strong consistency of estimators for multivariate ARCH

models”, Econometric Theory, 14, 70-86.

Jiang, G. J., and Tian, Y. S., 2003, “Model-free implied volatility and its infor-

mation content”, Unpublished manuscript.

Jiang, G. J., and Tian, Y. S., 2007, “Extracting Model-Free Volatility from Op-

tion Prices: An Examination of the VIX Index”, Journal of Derivatives, 14:3,

35-60.

Johnson, H. and Shanno, D., 1987, “Option Pricing when the Variance is

Changing”, Journal of Financial and Quantitative Analysis, 22, 143-151.

Johannes, M., and Dubinsky, A., 2006, “Earnings An-

nouncements and Equity Options”, Working Paper,

www0.gsb.columbia.edu/faculty/mjohannes/PDFpapers/Earnings 2006.pdf

Johannes, M., Polson, N., and Stroud, J. 2002, “Se-

quential Optimal Portfolio Performance”, Working Paper.

http://ljsavage.wharton.upenn.edu/~stroud/papers/port.pdf

Jorion, P., 1995, “Predicting Volatility in the Foreign Exchange Market”, Jour-

nal of Finance, 50:2, 507-528.

Jorion, P., 1996, “Risk and Turnover in the Foreign Exchange Market”, in The

Microstructure of Foreign Exchange Markets. J.A. Frankel, G. Galli, and A.

Giovannini, Chicago: U. Chicago Press.

Karatzas, I. and Shreve, S. E. 1988, “Brownian Motion and Stochastic Calcu-

lus”. NY: Springer Verlag.

Kawaller, I. G., Koch, P. D., and Koch, T. W., 1987, “The temporal price

relationship between S&P 500 futures and the S&P 500 index”, Journal of

Finance, 42:5, 1309-1329.

Klaassen, F., 2002, “Improving GARCH Volatility Forecasts with Regime-

Switching GARCH”, Empirical Economics, 27, 363-394.

Koopman, S.J., Jungbacker, B., Hol, E., 2005. “Forecasting daily variability

of the S&P 100 stock index using historical, realised and implied volatility

measurements”, Journal of Empirical Finance, 12, 445-475.

Latane, H. A., and Rendleman, R. J., 1976, “Standard Deviations of Stock

Price Ratios Implied in Option Prices”, Journal of Finance, 31:2, 369-381.

Lamoureux, C. G., and Lastrapes, W. D., 1993, “Forecasting Stock-Return

Variance: Toward an Understanding of Stochastic Implied Volatilities”, Review

of Financial Studies, 6:2, 293-326.

Laurent, S., Rombouts, J. V. K., and Violante, F., 2010, “On the Forecast-

ing Accuracy of Multivariate GARCH Models”, CIRPEE Working Paper,

http://www.cirpee.org/fileadmin/documents/Cahiers_2010/CIRPEE10-21.pdf

Lee, C. M. C., and Ready, M. J., 1991, “Inferring trade direction from intraday

data”, Journal of Finance, 46:2, 733-746.

Li, K. 2002, “Long-Memory versus Option-Implied Volatility Prediction”,

Journal of Derivatives, 9:3, 9-25.

Liesenfeld, R., and Jung, R. C., 2000, “Stochastic volatility models: condi-

tional normality versus heavy-tailed distributions”, Journal of Applied Econo-

metrics, 15, 137-160.

Liu, X., Shackleton, M. B., Taylor, S. J. and Xu, X. X. 2007, “Closed-

form transformations from risk-neutral to real-world distributions”, Journal

of Banking & Finance, 31:5, 1501-1520.

Lo, A. W., MacKinlay, A. C., and Zhang, J., 2002, “Econometric models of

limit order execution”, Journal of Financial Economics, 65, 31-71.

Lynch, D. P. G., and Panigirtzoglou, N., 2003, “Option Im-

plied and Realised Measures of Variance”, SSRN Working Paper,

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=497547

Manaster, S., and Rendleman, R., 1982, “Option Prices as Predictors of Equi-

librium Stock Prices”, Journal of Finance, 37, 1043-1057.

Mandelbrot, B., 1963, “The Variation of Certain Speculative Prices”, Journal

of Business, 36:4, 394-419.

Martins, M., and Zein, J., 2002, “Predicting Financial Volatility: High-

Frequency Time-Series Forecasts vis-a-vis Implied Volatility”, working paper,

Erasmus University.

McAleer, M., and Medeiros, M., 2008, “Realised Volatility: A Review”, Econo-

metric Reviews, 27, 10-45.

Meddahi, N., 2002, “Theoretical Comparison Between Integrated and Realized

Volatility”, Journal of Applied Econometrics, 17, 479-508.

Merton, R. C. 1973, “Theory of Rational Option Pricing”, Bell Journal of

Economics and Management Science, 4:1, 141-183.

Merton, R. C. 1980, “On estimating the expected return on the market: an

exploratory investigation”, Journal of Financial Economics, 8, 323-361.

Merton, R. C. and Samuelson, P. A., 1974, “The Fallacy of the Log-Normal

Approximation of Optimal Portfolio Decision-Making Over Many Periods”,

Journal of Financial Economics, 1, 67-94.

Meese, R. A., and Rogoff, K., 1988, “Was it Real? The Exchange Rate -

Interest Differential Relation Over the Modern Floating-Rate Period”, Journal

of Finance, 43, 933-948.

Mincer, J., and Zarnowitz, V., 1969, “The Evaluation of Economic Fore-

casts and Expectations”, in: Mincer, J. (Ed.), Economic Forecasts and Expectations,

NBER, New York.

Mizrach, B., 1992, “Multivariate nearest-neighbour forecasts of EMS exchange

rates”, Journal of Applied Econometrics, 7, S151-S163.

Moodley, N., 2005, “The Heston Model: A Practical Ap-

proach”, Honours Thesis, University of the Witwatersrand,

http://math.nyu.edu/~atm262/files/fall06/compmethods/a1/nimalinmoodley.pdf

Moran, M. T., and Dash, S., 2007, “VIX Futures and Options: Pricing and

Using Volatility Products to Manage Downside Risk and Improve Efficiency

in Equity Portfolios”, The Journal of Trading, 2:3 96-105.

Morgan, W. A., 1939-1940, “A Test for the Significance of the Difference

Between the Two Variances in a Sample From a Normal Bivariate Population”,

Biometrika, 31, 13-19.

Muller, U., Dacorogna, M., Dav, R., Pictet, O., Olsen, R., and J. Ward,

1993, “Fractals and intrinsic time A challenge to econometricians”, 39th In-

ternational AEA Conference on Real Time Econometrics, 1415 October 1993,

Luxembourg.

Nelson, D. B. 1991, “Conditional Heteroscedasticity in Asset Returns: A New

Approach”, Econometrica, 59:2, 347-370.

Newey, W. K., and West, K. D., 1987, “A Simple Positive Semi-Definite Het-

eroskedasticity and Autocorrelation Consistent Covariance Matrix”, Econo-

metrica, 55, 703-708.

Noh, J., Engle, R. F., and Kane, A., 1994, “Forecasting Volatility and Option

Prices of the S&P 500 Index”, Journal of Derivatives, 2, 17-30.

Pagan, A. R., 1996, “The econometrics of financial markets”, Journal of Em-

pirical Finance, 3, 15-102.

Pagan, A. R. and Schwert, G. W., 1990, “Alternative Models for Conditional

Stock Volatility”, Journal of Econometrics, 45:1-2, 267-290.

Pan, J., 2002, “The jump-risk premia implicit in options: evidence from an

integrated time-series study”, Journal of Financial Economics, 63, 3-50.

Pan, J., and Poteshman, A., 2006, “The information in option volume for stock prices”, Review of Financial Studies, 19:3, 871-908.

Pantula, S. G., 1989, “Testing for Unit Roots in Time Series Data”, Econometric Theory, 5, 256-271.

Patton, A. J., 2011, “Volatility Forecast Comparison using Imperfect Volatility Proxies”, Journal of Econometrics, 160:1, 246-256.

Patton, A., and Sheppard, K., 2007, “Evaluating Volatility and Correlation Forecasts”, in Andersen, T. G., Davis, R. A., Kreiss, J.-P., and Mikosch, T. (Eds.), Handbook of Financial Time Series, Springer, pp. 801-838.

Pelletier, D., 2006, “Regime switching for dynamic correlations”, Journal of Econometrics, 131, 445-473.

Piqueira, N., and Hao, H., 2011, “Short Sales and Put Options: Where is the Bad News First Traded?”, 2011 Annual Meeting of the Midwest Finance Association, 25 March 2011, Chicago.

Pollet, J. M., and Wilson, M., 2010, “Average correlation and stock market returns”, Journal of Financial Economics, 96, 364-380.

Pong, S., Shackleton, M. B., Taylor, S. J., and Xu, X., 2004, “Forecasting Currency Volatility: A Comparison of Implied Volatilities and AR(FI)MA Models”, Journal of Banking and Finance, 28, 2541-2563.

Poon, S-H., 2009, “The Heston Option Pricing Model”, available online at http://www.personal.mbs.ac.uk/spoon/documents/SV_Heston.pdf

Poon, S-H., and Granger, C. W. J., 2003, “Forecasting Volatility in Financial Markets: A Review”, Journal of Economic Literature, 41, 478-539.

Poon, S-H., and Granger, C. W. J., 2005, “Practical Issues in Forecasting Volatility”, Financial Analysts Journal, 61, 45-56.

Poteshman, A. M., 2000, “Forecasting Future Volatility from Option Prices”, working paper, University of Illinois at Urbana-Champaign.

Praetz, P. D., 1972, “The Distribution of Share Price Changes”, Journal of Business, 45:1, 49-55.

Romano, J. P., and Wolf, M., 2005, “Stepwise Multiple Testing as Formalized Data Snooping”, Econometrica, 73, 1237-1282.

Rourke, T., and Maraachlian, H., 2011, “Sign- and Volatility-Motivated Trading in Stock and Option Markets”, 2011 Annual Meeting of the Midwest Finance Association, 25 March 2011, Chicago.

Rubinstein, M., 1985, “Nonparametric Tests of Alternative Option Pricing Models Using All Reported Trades and Quotes on the 30 Most Active CBOE Option Classes from August 23, 1976 through August 31, 1978”, Journal of Finance, 40:2, 455-480.

Rubinstein, M., 1994, “Implied Binomial Trees”, Journal of Finance, 49:3, 771-818.

Schmalensee, R., and Trippi, R. R., 1978, “Common Stock Volatility Expectations Implied by Option Premia”, Journal of Finance, 33:1, 129-147.

Scott, D. W., 1992, Multivariate Density Estimation: Theory, Practice and Visualization, New York: John Wiley.

Scott, L. O., 1987, “Option Pricing when the Variance Changes Randomly: Theory, Estimation and an Application”, Journal of Financial and Quantitative Analysis, 22:4, 419-438.

Sentana, E., 1995, “Quadratic ARCH Models”, Review of Economic Studies, 62, 217-244.

Shephard, N., and Andersen, T. G., 2009, “Stochastic Volatility: Origins and Overview”, in Andersen, T. G., Davis, R. A., Kreiss, J.-P., and Mikosch, T. (Eds.), Handbook of Financial Time Series, Springer, pp. 233-254.

Shephard, N., and Sheppard, K., 2010, “Realising the future: Forecasting with high frequency based volatility (HEAVY) models”, Journal of Applied Econometrics, 25, 197-231.

Sheppard, K., 2006, “Realized Covariance and Scrambling”, working paper, http://www.kevinsheppard.com/images/7/7e/Scrambling.pdf

Shreve, S., 2004, Stochastic Calculus for Finance, Vol. 2, New York: Springer.

Silvennoinen, A., and Teräsvirta, T., 2005, “Multivariate autoregressive conditional heteroskedasticity with smooth transitions in conditional correlations”, SSE/EFI Working Paper Series in Economics and Finance, 577.

Silvennoinen, A., and Teräsvirta, T., 2009, “Multivariate GARCH models”, in Andersen, T. G., Davis, R. A., Kreiss, J.-P., and Mikosch, T. (Eds.), Handbook of Financial Time Series, Springer, pp. 201-232.

Silverman, B. W., 1986, Density Estimation for Statistics and Data Analysis, London: Chapman and Hall.

Singleton, K., 2006, Empirical Asset Pricing, New Jersey: Princeton University Press.

Sloyer, M., and Tolkin, R., 2008, “The VIX as a Fix: Equity Volatility as a Lifelong Investment Enhancer”, Duke University, North Carolina.

So, M., Li, W., and Lam, K., 2002, “A threshold stochastic volatility model”, Journal of Forecasting, 21, 473-500.

Sokalska, M. E., Chanda, A., and Engle, R. F., 2002, “High Frequency Multiplicative Component GARCH”, NYU Working Paper, available at SSRN: http://ssrn.com/abstract=1293633

Stein, E. M., and Stein, J. C., 1991, “Stock Price Distributions with Stochastic Volatility: An Analytic Approach”, Review of Financial Studies, 4:4, 727-752.

Stoll, H. R., and Whaley, R. E., 1990, “The dynamics of stock index and stock index futures returns”, Journal of Financial and Quantitative Analysis, 25:4, 441-468.

Szado, E., 2009, “VIX Futures and Options - A Case Study of Portfolio Diversification During the 2008 Financial Crisis”, University of Massachusetts.

Szakmary, A., Ors, E., Kim, J. K., and Davidson, W. D., 2003, “The Predictive Power of Implied Volatility: Evidence from 35 Futures Markets”, Journal of Banking & Finance, 27, 2151-2175.

Tauchen, G., and Pitts, M., 1983, “The price variability-volume relationship on speculative markets”, Econometrica, 51, 485-505.

Tauchen, G., and Zhou, H., 2006, “Realized jumps on financial markets and predicting credit spreads”, SSRN Working Paper, http://papers.ssrn.com/sol3/papers.cfm?abstract_id=920773

Taylor, J. W., 2001, “Volatility Forecasting with Smooth Transition Exponential Smoothing”, working paper, Oxford University.

Taylor, N., 2004, “Trading intensity, volatility, and arbitrage activity”, Journal of Banking & Finance, 28, 1137-1162.

Taylor, N., 2004, “Modeling discontinuous periodic conditional volatility: evidence from the commodity futures market”, Journal of Futures Markets, 24, 805-834.

Taylor, S. J., 1986, Modelling Financial Time Series, Wiley.

Taylor, S. J., 2005, Asset Price Dynamics, Volatility, and Prediction, Princeton University Press.

Taylor, S. J., Yadav, P. K., and Zhang, Y., 2010, “The information content of implied volatilities and model-free volatility expectations: Evidence from options written on individual stocks”, Journal of Banking & Finance, 34, 871-881.

Taylor, S. J., and Xu, X., 1997, “The Incremental Volatility Information in One Million Foreign Exchange Quotations”, Journal of Empirical Finance, 4:4, 317-340.

Teräsvirta, T., 2009, “An introduction to univariate GARCH models”, in Andersen, T. G., Davis, R. A., Kreiss, J.-P., and Mikosch, T. (Eds.), Handbook of Financial Time Series, Springer, pp. 17-42.

Thorp, E. O., 1973, “Extensions of the Black-Scholes Option Model”, 39th Session of the International Statistical Institute, Vienna.

Tse, Y. K., 1991, “Stock Return Volatility in the Tokyo Stock Exchange”, Japan and the World Economy, 3, 285-298.

Tse, Y. K., 2000, “A test for constant correlations in a multivariate GARCH model”, Journal of Econometrics, 98, 107-127.

Tse, Y. K., and Chan, W-S., 2010, “The lead-lag relation between the S&P 500 spot and futures markets: An intraday-data analysis using a threshold regression model”, The Japanese Economic Review, 61:1, 133-144.

Tse, Y. K., and Tung, S. W., 1992, “Forecasting Volatility in the Singapore Stock Market”, Asia Pacific Journal of Management, 9:1, 1-13.

Vasilellis, G. A., and Meade, N., 1996, “Forecasting Volatility for Portfolio Selection”, Journal of Business Finance & Accounting, 23:1, 125-143.

Vuong, Q. H., 1989, “Likelihood ratio tests for model selection and non-nested hypotheses”, Econometrica, 57, 307-333.

Watanabe, T., 1999, “A non-linear filtering approach to stochastic volatility models with an application to daily stock returns”, Journal of Applied Econometrics, 14, 101-121.

Watanabe, T., and Asai, M., 2001, “Stochastic volatility models with heavy-tailed distributions: A Bayesian analysis”, IMES Discussion Paper Series, http://www.math.chuo-u.ac.jp/~sugiyama/15/15-02.pdf

West, K. D., 1996, “Asymptotic inference about predictive ability”, Econometrica, 64, 1067-1084.

White, H., 2000, “A reality check for data snooping”, Econometrica, 68, 1097-1126.

White, S. I., 2006, “Stochastic Volatility: Maximum Likelihood Estimation and Specification Testing”, PhD thesis, Queensland University of Technology.

Wiggins, J. B., 1987, “Option Values Under Stochastic Volatility: Theory and Empirical Estimates”, Journal of Financial Economics, 19, 351-377.

Wilmott, P., Howison, S., and Dewynne, J., 2005, The Mathematics of Financial Derivatives, Cambridge University Press.

Wood, R. A., McInish, T. H., and Ord, J. K., 1985, “An investigation of transaction data for NYSE stocks”, Journal of Finance, 40, 723-739.

Xu, X., and Taylor, S. J., 1994, “The term structure of volatility implied by foreign exchange options”, Journal of Financial and Quantitative Analysis, 29, 57-74.

Xu, X., and Taylor, S. J., 1995, “Conditional Volatility and the Informational Efficiency of the PHLX Currency Options Market”, Journal of Banking & Finance, 19:5, 803-821.

Yang, J., Balyeat, R. B., and Leatham, D. J., 2005, “Futures Trading Activity and Commodity Cash Price Volatility”, Journal of Business Finance & Accounting, 32:1-2, 297-323.

Yu, J., 2005, “On leverage in a stochastic volatility model”, Journal of Econometrics, 127:2, 165-178.
