
EUROPEAN TRANSACTIONS ON ELECTRICAL POWER
Euro. Trans. Electr. Power 2009; 19:15–38
Published online 1 October 2007 in Wiley InterScience (www.interscience.wiley.com) DOI: 10.1002/etep.205

Modelling volatility clustering in electricity price return seriesfor forecasting value at risk

R. G. Karandikar*,y, N. R. Deshpande, S. A. Khaparde and S. V. Kulkarni

Power Electronics and Power System Group, Department of Electrical Engineering, IIT-Bombay, Mumbai 400 076,India

SUMMARY

Modelling of non-stationary time series using regression methodology is challenging. Wavelet transforms can be used to model non-stationary time series having volatility clustering. The traditional risk measure is variance; nowadays, Value at Risk (VaR) is widely used in finance. In a competitive environment, prices are volatile and price risk forecasting is necessary for the market participants. The forecasting period may be 1 week or longer, depending upon the requirement. In this paper, a model is developed for volatility clustering in electricity price return series and its application to forecasting VaR is demonstrated. The first model uses GARCH (1, 1). The VaR of the variance rate series, that is, the worst-case volatility, is calculated by the variance method using the wavelet transform. The model is used to forecast the variance rate (volatility) for a sample case of a 1-week half-hourly price return series. The second model is developed for forecasting VaR for a price return series of 440 days. This model is developed using wavelets via multi-resolution analysis and uses a regime-switching technique. The historical data of daily average prices is obtained from the 100% pool type New South Wales (NSW) market, a zonal market of the National Electricity Market (NEM), Australia. Copyright © 2007 John Wiley & Sons, Ltd.

key words: electricity price return series; heteroscedasticity; multi-resolution analysis; Value at Risk (VaR); volatility clustering; wavelet transform

1. INTRODUCTION

The restructured power sector has caused electricity to become a market commodity. Markets are classified as physical markets or financial markets. Generally, these markets are settled in three different ways, viz. pool, bilateral, and exchange [1]. In case of bidding in a pool system, the market participants submit their bids in terms of prices and quantities. The bids are accepted in order of increasing price until the total demand is met. Therefore, a Generator Company (GENCO) which is able to forecast the pool prices can adjust its own price/production schedule depending on hourly pool prices and its own production costs. In a bilateral contract, buyer and seller agree upon a certain amount of energy to be transferred through the network at a certain fixed price. Both sides agree this price beforehand, and it is based on price forecasting. In some markets, financial institutes participate in energy trading through

*Correspondence to: R. G. Karandikar, Department of Electrical Engineering, IIT-Bombay, Mumbai 400 076, India. E-mail: [email protected]

Copyright © 2007 John Wiley & Sons, Ltd.


power exchange. Most of the restructured electricity markets use a combination of pool and bilateral contracts. Normally, the pool prices are volatile in nature. Hence, GENCOs can optimise their production schedules so that they can hedge pool price volatility via bilateral contracts. Thus, good knowledge of future pool prices helps to valuate bilateral contracts more accurately. In such competitive markets, participants need an effective tool to model and forecast price risk so that the returns in the given time horizon are protected while doing energy transactions.

Different approaches adopted for modelling electricity prices and price return series are based on Expert Systems, Stochastic Methods, Neural Networks, Hidden Markov Models, Time Series, etc. The price return series is derived from the price time series by taking the log return, as explained in Section 3. The structure of the Expert System discussed in Reference [2] consists of two parts: 'price model base' and 'knowledge interface and management'. By adopting fuzzy estimation and a forward-deduction reasoning system, the electricity prices are determined. Several price models related to electricity pricing are stored in the model base system. These models have the characteristics of single function, simple practicality and completeness. The user can choose one of these models corresponding to a structured decision from the model base for reference and analysis. The 'knowledge interface and management system' includes two subsystems, viz. the 'man–machine interface and management system' and the 'knowledge acquisition and inference and interpretation system'. The expert system-based model needs extensive analysis of the system; thus, it is very difficult to develop a single global model. In Reference [3], the problem of modelling electricity price dynamics in deregulated markets is studied. The stochastic price model is developed by considering fluctuations in prices. These fluctuations are modelled based on the stochastic nature of the demand, market participants' strategic behaviour, and the power system reliability indices. In Reference [4], Input Output Hidden Markov Models (IOHMM) are proposed. In electricity markets, the spot price series reflects a switching nature related to discrete changes in participants' strategies. The Markov model is helpful to interpret and to understand the evolution of the market. To achieve this, the model should be capable of discovering hidden market states and labelling them by their most relevant explanatory variables. It can be extended for forecasting based on past information with an acceptable level of accuracy. Generally, these models are very computation intensive. The short-term forecasting of the system marginal price (SMP) using the Artificial Neural Network (ANN) computing technique is discussed in Reference [5]. This approach uses a three-layered ANN paradigm with back-propagation. The training and testing of the ANN model is done on retrospective SMP real-world data. The Neural Network-based model is very complex and normally needs feature-extraction pre-processing.

The other, comparatively less computation intensive, modelling tool is time series analysis. An approach based on time series analysis is better suited for modelling irregularity in electricity prices [6]. A complete development of time and frequency domain analysis of time series is given in Reference [7]. The analysis of time series is based on a statistical point of view, without examining the underlying physical processes. Although traditional time series models are based on the hypothesis of stationarity, a significant non-stationary component can be observed in electricity price time series. Such a modelling approach can be effectively used to predict risks associated with payoff. Application of the efficient frontier, one of the portfolio management tools, to assess the risk associated with expected payoff is demonstrated in References [8,9]. The main thrust of recent research is modelling the volatility of electricity prices [10]. A time series having volatility clustering can be modelled as a single global model or a switching model [11]. In a single global model, time-varying variance (volatility) and low/high volatility clustering (regimes) are captured in one model. On the other hand, regime-switching models use a set of models. Depending upon the regime, the appropriate model is used. In this method, two separate models are developed, one for low and the other for high volatility


regimes. The switching of the models is based on the regime of analysis. Electricity price return series can be characterised by seasonality, mean reversion and infrequent but large jumps. The stochastic part can be modelled by a diffusion-type Stochastic Differential Equation (SDE) (1),

dX_t = μ(X, t)dt + σ(X, t)db_t   (1)

where μ(X, t)dt is the drift, σ(X, t)db_t is the volatility term and db_t is the increment of a standard Brownian motion. Mean reversion is typically introduced into the model through the drift term μ(X, t)dt. The second main feature of electricity prices and their return series is their 'jumpy' character. The prices after a jump tend to remain high for several time periods. The regime-switching models are used to capture this behaviour. The switching mechanism is typically assumed to be governed by a random variable that follows a Markov chain with different possible states.
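The SDE in Equation (1) can be illustrated with a small simulation. The sketch below is a minimal Euler–Maruyama discretisation under assumed, hypothetical parameters; the paper itself does not fix a particular drift, so a mean-reverting drift μ(X, t) = κ(θ − X_t) with constant volatility σ is chosen here purely for illustration:

```python
import math
import random

def simulate_mean_reverting(x0, kappa, theta, sigma, dt, n, seed=0):
    """Euler-Maruyama discretisation of dX_t = kappa*(theta - X_t)*dt + sigma*db_t,
    a mean-reverting special case of the SDE in Equation (1)."""
    random.seed(seed)
    x = [x0]
    for _ in range(n):
        db = random.gauss(0.0, math.sqrt(dt))  # Brownian increment over dt
        x.append(x[-1] + kappa * (theta - x[-1]) * dt + sigma * db)
    return x

# Hypothetical parameters: start at 10, revert towards a long-run level of 2
path = simulate_mean_reverting(10.0, kappa=1.0, theta=2.0, sigma=0.3, dt=0.01, n=1000)
```

With σ = 0 the path decays deterministically towards θ, which is exactly the mean-reversion behaviour that the drift term introduces.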

The aim of the proposed work is to develop non-parametric models for forecasting, as compared to conventional parametric models based on HMM, ANN, etc. The main disadvantage of such models is that they are parametric, and forecasting depends on the accuracy and consistency of the dependency of market prices on the independent variables; that is, they are data driven. The study of the dependency of market prices on such variables and its forecast is a separate issue and is well established in the literature using conventional models. Among non-parametric models, time series analysis is one of the basic approaches. The advantage of heteroscedasticity-based time series models like generalised auto regressive conditional heteroscedasticity (GARCH) (p, q) is that they model not only the heteroscedastic (time-varying variance) nature of the price time series but also the volatility clustering. The main disadvantage of this model is that forecasting is possible only over shorter durations. Therefore, there is a need for a non-parametric model which takes into consideration heteroscedasticity and volatility clustering and also allows a longer forecasting period. Multi-resolution analysis using the wavelet transform can be used for such non-parametric modelling.

In this paper, a model is developed for volatility clustering in electricity price return series and its application to forecasting Value at Risk (VaR) is demonstrated. The first model uses GARCH (1, 1), and the VaR of the variance rate series (worst-case volatility) is calculated by the variance method using the wavelet transform for a sample case of short duration, a 1-week half-hourly price return series. The second model is developed for forecasting VaR for a price return series of longer duration, that is, of 440 days. This model is developed using wavelets via multi-resolution analysis and uses a regime-switching technique. The historical data of daily average prices is obtained from the 100% pool type New South Wales (NSW) market, a zonal market of the National Electricity Market (NEM), Australia [17].

The rest of the paper is organised as follows: Section 2 presents the concept of VaR. The model for forecasting VaR for the variance rate in case of a 1-week half-hourly price return series is discussed in Section 3. Section 4 presents modelling of volatility clustering using multi-resolution analysis via the wavelet transform based on a regime-switching approach. The main conclusions of this work are summarised in Section 5.

2. FUNDAMENTALS OF VALUE AT RISK (VAR)

VaR is used to express and summarise the total risk associated with financial assets. The VaR is accompanied by two parameters: the time horizon (h) and the probability level (C). It means that with C percent certainty, the loss on the asset/portfolio will not be more than VaR in the next time horizon of h. Losses greater


than the VaR are only suffered with a very small specified probability of (1 − C). The VaR measures the worst expected loss over a given horizon under normal market conditions at a given confidence level.

Consider that the value of an asset at the end of a particular day, say day n, is f_n. Then the return r_n is the continuously compounded return of this portfolio/asset during day n (Equation (2)):

r_n = ln(f_n / f_{n−1})   (2)

Assume now that the initial value of the asset is f_0. Then the value of the portfolio/asset at the end of the day is given by

f = f_0 e^{r_n}   (3)

To determine the value of the asset (f) at the end of the day, Equation (3) can be approximated as

f = f_0 (1 + r_n)   (4)

Now assume that the worst possible return on the asset is r*_n. Thus, the worst possible value of the asset is

f = f_0 (1 + r*_n)

Then, VaR can be defined as the loss relative to the mean (m):

VaR(m) = f_0 (1 + m) − f_0 (1 + r*_n) = f_0 (m − r*_n)

There are different approaches to VaR estimation: (1) historical simulation, (2) variance method, (3) delta normal method, (4) Monte Carlo simulation [8]. The variance method is a parametric approach to determine VaR.

2.1. VaR under normality assumption

- Let f_t be the return series.
- Let Δf_{t+h} follow a Normal distribution, that is N(μ, σ²), where μ is the expected return (mean) and σ² the variance.
- Define Z = (Δf_{t+h} − μ)/σ. So Z follows the N(0, 1) distribution. The value of Z at a given probability can be obtained from a standard normal distribution table.
- Let α = 0.05, that is, confidence level = 95%.
- Now the probability P(Z ≤ −1.65) = 0.05. This means P(Δf_{t+h} ≤ μ − 1.65σ) = 0.05.
- So the required VaR relative to zero mean = (0 − (μ − 1.65σ)). As σ > 0, (μ − 1.65σ) < 0, which means that VaR > 0.

2.2. VaR from return distribution

Estimation using the distribution of portfolio/asset returns r:

- Let f_0 be the current value of the portfolio and r the h-period return.
- So the value f after the h-period is f = f_0 (1 + r).
- Let r follow N(μ, σ²). So Z = (r − μ)/σ is an N(0, 1) variable.
- Let α = 0.05, that is, confidence level = 95%.
- The probability P(Z ≤ −1.65) = 0.05.


- This means P(r ≤ μ − 1.65σ) = 0.05. Let r* = μ − 1.65σ, with r* < 0.
- Now, P(f ≤ f_0 (1 + r*)) = 0.05 = P(Δf ≤ f_0 r*).
- So the required 1-day (h) VaR relative to the mean μ = f_0 (μ − r*) = (f_0)(1.65)(σ), and

VaR for h days = (f_0)(1.65)(σ)√h   (5)

- When r is not a Normal variable, we need to estimate r* based on the actual non-normal distribution.
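The scaling in Equation (5) can be sketched as follows. The 1.65 quantile and 95% confidence level follow the text, while the asset value and volatility figures are hypothetical:

```python
import math

def var_normal(f0, sigma, h=1):
    """h-day VaR relative to the mean under normality, Equation (5):
    VaR = f0 * 1.65 * sigma * sqrt(h) at the 95% confidence level."""
    return f0 * 1.65 * sigma * math.sqrt(h)

# Hypothetical example: asset worth 100 units, daily return standard deviation 2%
one_day = var_normal(100.0, 0.02)        # 1-day VaR
four_day = var_normal(100.0, 0.02, h=4)  # doubles the 1-day VaR, since sqrt(4) = 2
```

The square-root-of-time rule is what makes conversion of VaR between horizons straightforward when returns are (assumed) Normal.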

3. MODEL FOR FORECASTING VAR FOR VARIANCE RATE IN CASE OF 1-WEEK HALF-HOURLY PRICE RETURN SERIES

The historical data used for the development of this model is half-hourly 1-week prices from 1 January 2006 to 6 January 2006 of the 100% pool type NSW market, Australia [17]. Normally, prices are quoted in the relevant currency used in the market. Therefore, logarithmic prices are used, which facilitates processing of data irrespective of its unit. In financial analysis, log return prices are used as base data, and the same is applied for this model. Logarithmic scaling also results in smoothening of the data. Let X_t be a discrete stochastic process. The relative increase, or the return, of the process is r_t = ln(X_t / X_{t−1}). The other advantage of log return prices from a risk manager's point of view is the easy conversion of risk parameters like VaR from one time scale to another.
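The log-return transform r_t = ln(X_t / X_{t−1}) and its time-scale property can be sketched as follows (the price values here are hypothetical, not the NSW data):

```python
import math

def log_returns(prices):
    """r_t = ln(X_t / X_{t-1}); yields one fewer element than the price series."""
    return [math.log(prices[t] / prices[t - 1]) for t in range(1, len(prices))]

# Hypothetical half-hourly prices
r = log_returns([19.67, 18.56, 20.10])

# Easy time-scale conversion: log returns aggregate by simple summation,
# i.e. the sum of the half-hourly returns equals the return over the whole span.
total = sum(r)  # equals ln(20.10 / 19.67)
```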

The first model is a heteroscedasticity-based GARCH (p, q) model. The VaR of the variance rate is determined using a MATLAB™ standard function and using wavelet variance. The historical log return prices are given in Table I. The data size selected is 256, as wavelet analysis requires a data size on the 2^n scale.

Figure 1 gives the log return historical and the actual (future) price return series. The future returns are for the period of half-hourly prices from hour 9.00 of 6/1/06 to hour 16.30 of 11/01/06.

3.1. Heteroscedasticity

The time series is heteroscedastic if the observations have time-dependent variances. Homoscedasticity is the complement of heteroscedasticity. The term 'scedastic' is Greek for 'variance', and 'hetero' means 'different', giving heteroscedastic, or different variance. Many times in statistical analysis we assume that the observation series (vector) has a constant variance, that is, the series is stationary. This will be true if the observations are assumed to be drawn from identical distributions. Heteroscedasticity is a violation of this assumption. Typically, in the case of a non-stationary time series, the variance varies and usually increases with time.

The analysis of a stationary time series is simple, as the mean and the variance are constant. The stationarity of a series is a property which ensures the stability of the probability distribution of the series at different time points and hence permits identification and estimation of time series models based on a single

Table I. Log returns historical prices.

Sample               1        2         …    256
Date                 1/1/06   1/1/06    …    6/1/06
Time                 0.30     1.00      …    8.30
Price                19.67    18.56     …    16.98
ln (price)           2.9790   2.9210    …    2.8320
returns (LP2 − LP1)  —        −0.05809  …    0.00827


Figure 1. Historical and actual log return prices.


realisation of the time series. In the case of such a series, the series oscillates about a fixed mean value, the size of the oscillations is approximately constant, and the correlation between separated observations depends only on the time distance between them. The concept of stationarity is defined in two forms: (1) strong stationarity, (2) weak/covariance stationarity. In practice, we look at the latter. A time series X_t is said to be covariance stationary, that is, weakly stationary, if (1) E(X_t) = μ, a constant, for all time points t, (2) Variance(X_t) = σ², constant for all t, (3) Covariance(X_t, X_{t−k}) = ρ(k), a function of k only, for all (t, k).

If a given time series is non-stationary, we first transform it to a stationary one and carry out all analysis based on the transformed (stationary) series. Finally, we apply the inverse transformation to get the results in relation to the original (non-stationary) series. In the case of a non-stationary series, a natural way of limiting the effects on stationarity caused by a linear trend is to fit a straight line through the data using simple linear regression. The model used for the estimation is

X_t = b_0 + b_1 t + w_t   (6)

where X_t is the time series, b_0, b_1 are constants and w_t is a white noise series. Using least square estimation, we can estimate the constants b_0 and b_1.
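The least-squares estimation of b_0 and b_1 in Equation (6) can be sketched in closed form as follows (the data here is synthetic, not the price series):

```python
def fit_linear_trend(x):
    """Least-squares fit of X_t = b0 + b1*t + w_t (Equation (6)) over t = 0..n-1.
    Returns the estimated constants and the residual (detrended) series."""
    n = len(x)
    t_mean = (n - 1) / 2.0
    x_mean = sum(x) / n
    num = sum((t - t_mean) * (x[t] - x_mean) for t in range(n))
    den = sum((t - t_mean) ** 2 for t in range(n))
    b1 = num / den
    b0 = x_mean - b1 * t_mean
    residuals = [x[t] - (b0 + b1 * t) for t in range(n)]
    return b0, b1, residuals
```

Subtracting the fitted line leaves the residual series w_t, on which stationarity-based analysis can proceed.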

The linear regression model (Equation (6)) assumes that the observations are time independent, that is, there is no correlation between past observations and future ones. Least square estimation is used to determine the constants, and the modified white noise series is the series of innovations or residuals. The residuals must be within limits or must follow a particular trend. The model is inappropriate if the series is non-stationary. Hence, there is a need to model volatility in the transformation of a non-stationary series to a stationary one. Heteroscedasticity-based auto regressive models are used to model non-stationary time series.

3.2. Model based on heteroscedasticity

Volatility clustering is a typical form of heteroscedasticity which is observed in the case of volatile pool type market price series. The volatility/variability of the price also changes over time: a period of high (low) volatility is followed by a period of low (high) volatility. This phenomenon is known as 'volatility clustering'.


The term 'conditional' means dependence on the past data. Two common methods of time series analysis are time domain and frequency domain analysis. Time domain analysis is based on the assumption that there exists correlation between adjacent points in time. The time domain approach focuses on modelling a future value of the time series as a parametric function of current and past values. The regression or prediction of time series is classified in three cases of financial market movement:

1. No volatility clustering.
2. Volatility clustering under zero expected return.
3. Changing returns and volatility clustering jointly.

Methods like auto regressive conditional heteroscedasticity (ARCH), GARCH, Exponential GARCH (EGARCH), etc. are applied for electricity price time series analysis for forecasting and risk analysis [7,13]. A comparison of these global models is given in Reference [14]. Different approaches to predict next-day electricity prices based on the GARCH and auto regressive integrated moving average (ARIMA) methodology are presented in References [15,16].

Let X_t be the price return series. At time point (t), X_t is unknown, and to forecast X_t using all information available up to time (t − 1), the linear AR (p) model is given by

X_t = a_0 + a_1 X_{t−1} + … + a_p X_{t−p} + w_t   (7)

where p is a positive integer, the a_i are constants, and w_t is the residual series with zero mean and constant variance, that is, white noise. The ARCH (p) model is for series in which the time-varying variance, that is volatility, is modelled. 'Conditional' implies dependence on the observations of the immediate past. Under the ARCH (p) process, the volatility clustering phenomenon is captured through the following relationship:

σ_t² = a_0 + Σ_{i=1}^{p} a_i X_{t−i}²,  where a_0 > 0, a_i ≥ 0 and Σ_{i=1}^{p} a_i < 1   (8)

The parameters a_1, a_2, …, a_p are unknown and are estimated from the historical data. The GARCH (p, q) model is an extension of the ARCH (p) model: a generalised ARCH model. This model uses the past observations as well as past variances to forecast future values. In the GARCH (p, q) model, if q = 0 we get the ARCH (p) model. The conditional variance rate using the GARCH (p, q) model is given by

σ_t² = a_0 + Σ_{i=1}^{p} a_i X_{t−i}² + Σ_{j=1}^{q} b_j σ_{t−j}²   (9)

where

a_0 = γV_L,
a_0 > 0,


a_i, b_j ≥ 0 for all i and j,

(Σ_{i=1}^{p} a_i + Σ_{j=1}^{q} b_j) < 1,

V_L = long-run average variance,
γ = weight assigned to V_L.

The GARCH (1, 1) model is given by

σ_t² = a_0 + a_1 X_{t−1}² + b_1 σ_{t−1}²   (10)

σ_t² = γV_L + a_1 X_{t−1}² + b_1 σ_{t−1}²,  with γ + a_1 + b_1 = 1   (11)

The unconditional long-run constant variance in the case of the GARCH (1, 1) model is given by

σ² = a_0 / (1 − a_1 − b_1)   (12)

We can forecast future volatility, or the variance rate, using GARCH (1, 1). The model used to forecast the volatility on day (n + k) using information available at the end of day (n − 1) is

E(σ_{n+k}²) = V_L + (a_1 + b_1)^k (σ_n² − V_L)   (13)

In the case of the GARCH (1, 1) model, if (a_1 + b_1) < 1, we get the variance rate using Equation (13). The forecast converges exponentially towards the mean-reversion level V_L. The quantity (a_1 + b_1) is termed the persistence rate [20]. For the development of the first model, the GARCH Toolbox available with MATLAB™ is used. The GARCH (p, q) model parameters for the historical and actual (future) return series are given in Tables II and III, respectively.
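The GARCH (1, 1) recursion of Equation (10) and the k-step forecast of Equation (13) can be sketched as below. The parameters are hypothetical, chosen to satisfy a_1 + b_1 < 1 as Equations (12) and (13) require:

```python
def garch11_variance(returns, a0, a1, b1, s2_init):
    """Conditional variance recursion, Equation (10):
    s2_t = a0 + a1 * x_{t-1}^2 + b1 * s2_{t-1}."""
    s2 = [s2_init]
    for x in returns:
        s2.append(a0 + a1 * x * x + b1 * s2[-1])
    return s2

def garch11_forecast(s2_n, a0, a1, b1, k):
    """k-step-ahead variance forecast, Equation (13):
    E(s2_{n+k}) = VL + (a1 + b1)^k * (s2_n - VL)."""
    vl = a0 / (1.0 - a1 - b1)  # long-run variance, Equation (12)
    return vl + (a1 + b1) ** k * (s2_n - vl)

# Hypothetical parameters: a0 = 0.0001, a1 = 0.1, b1 = 0.8 (persistence rate 0.9)
forecast = garch11_forecast(0.01, 0.0001, 0.1, 0.8, k=10)
```

As k grows, (a_1 + b_1)^k shrinks and the forecast converges to V_L, the mean-reversion level.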

3.3. Forecasting value at risk (VaR) using GARCH (1, 1)

Let hu_n and hs_n be the return and variance rate of the historical return series (hu) on the nth day. Using Equation (10) and the parameters of GARCH (1, 1) from Table II, the variance rate for the historical return series is given by

hs_n(historical) = ha_0 + ha_1 (hu_{n−1} − hu_ave)² + hb_1 (hs_{n−1})²   (14)

Table II. GARCH parameters for historical return series.

Model        Kappa (ha0)  Alpha1 (ha1)  Alpha2 (ha2)  Beta1 (hb1)  Beta2 (hb2)
GARCH (1,1)  0.0001236    0.5904        —             0.4096       —
GARCH (2,1)  0.00017573   0.0308        0.3797        0.5895       —
GARCH (1,2)  0.0001236    0.5903        —             0.4097       0.0
GARCH (2,2)  0.00016993   0.0           0.3861        0.5453       0.0687


Table III. GARCH parameters for actual (future) return series.

Model        Kappa (fa0)  Alpha1 (fa1)  Alpha2 (fa2)  Beta1 (fb1)  Beta2 (fb2)
GARCH (1,1)  0.00047597   0.5042        —             0.4958       —
GARCH (2,1)  0.00053668   0.2496        0.1706        0.5798       —
GARCH (1,2)  0.00047597   0.5042        —             0.4958       0.0
GARCH (2,2)  0.00053667   0.2496        0.1706        0.5798       0.0


Similarly, the variance rate (fs_n) of the actual (future) return series (fu) is given by

fs_n(actual) = fa_0 + fa_1 (fu_{n−1} − fu_ave)² + fb_1 (fs_{n−1})²   (15)

To evaluate the performance of the proposed model, a verification approach based on standard time series forecasting errors, namely cumulative forecast error (CFE), mean absolute deviation (MAD), mean square error (MSE) and mean absolute percentage error (MAPE), is used.

Let

(1) x(t) be the historical data at time t = 1, …, n; in the present case, x(t) = hs_n.
(2) f(t) be the forecast (prediction) for time t; in the present case, f(t) = fs_n.
(3) e(t) be the residual or forecast error at time t: e(t) = x(t) − f(t).

Cumulative Forecast Error (CFE) = Σ_{t=1}^{n} e(t)

Mean Absolute Deviation (MAD) = (Σ_{t=1}^{n} |e(t)|) / n

Mean Square Error (MSE) = (Σ_{t=1}^{n} (e(t))²) / n

Mean Absolute Percentage Error (MAPE) = ((1/n) Σ_{t=1}^{n} |e(t)/x(t)|) × 100
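The error measures above can be sketched directly (the series here are synthetic, not the Table IV values):

```python
def forecast_errors(x, f):
    """CFE, MAD, MSE and %MAPE for historical series x(t) and forecasts f(t),
    with e(t) = x(t) - f(t), t = 1..n."""
    n = len(x)
    e = [x[t] - f[t] for t in range(n)]
    cfe = sum(e)
    mad = sum(abs(v) for v in e) / n
    mse = sum(v * v for v in e) / n
    mape = 100.0 * sum(abs(e[t] / x[t]) for t in range(n)) / n
    return cfe, mad, mse, mape
```

Note that CFE can be near zero even when individual errors are large, since positive and negative errors cancel; MAD, MSE and MAPE do not share this weakness.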

Table IV gives the standard error terms for the historical and future variance rate series of Equations (14) and (15). The VaR for both variance rate series is calculated using a standard MATLAB™ function; its value is relative to the mean variance rate. The forecasted VaR based on the historical variance rate series, that is, the forecasted worst-case volatility (variance rate), and the actual (future) VaR are given in Table IV.

3.4. Value at risk estimation using wavelet energy distribution

Wavelets are being used for the analysis of financial data. Wavelets facilitate representing low and high volatility clustering of data without knowing the underlying functional form. In addition, wavelets give the precise location of discontinuities and the isolation of shocks. Furthermore, the


Table IV. Standard forecasting errors and VaR.

CFE     MAD     SSE     MSE     %MAPE
0.1206  0.0362  2.6171  0.0102  0.8984

Forecasted VaR using historical variance rate series: VaR = 0.0978 (worst-case variance rate/volatility)
VaR for the actual (future) variance rate series: VaR = 0.0916 (worst-case variance rate/volatility)


process of smoothening found in the time-scale decomposition causes a reduction of noise in the original signal. The steps followed in wavelet analysis are (1) decomposing the signal into its wavelet components, (2) eliminating all values with a magnitude below a certain threshold and (3) reconstructing the original signal with the inverse wavelet transform. For the original signal and the inverse wavelet transform, the preservation of energy ensures that the variances of the individual levels can be summed up to the variance of the original time series.

The conventional time-varying volatility models for VaR estimation are based on historical data, and they assess the time series at its complete resolution, regardless of the time horizon chosen. In reality, not all the high and low frequency components hidden in the complete signal are relevant for every horizon. Risk managers estimating risk for a short horizon are interested in short-term behaviour only, whereas risk managers estimating risk for a long horizon are also interested in low-frequency behaviour. Thus, the appropriate resolution is necessary to extract the right amount of information for a specific horizon. Therefore, a tool is needed which enables one to decompose the signal into all of its components, separating higher-frequency behaviour from lower-frequency behaviour.

Wavelet analysis, as a tool, is able to perform such a separation, which is referred to as the Discrete Wavelet Transform (DWT). When performing the DWT, two types of filters are applied to the original signal. One type captures the trend of the signal and is referred to as a scaling filter or scaling signal; the other captures deviations from this trend and is referred to as a wavelet filter or wavelet. This is similar to passing a signal through low-pass and high-pass filters. If 256 samples of a given signal are taken, the DWT yields two series of 128 coefficients. The series capturing the trend of the signal is denoted s1 and the series capturing all deviations from the trend is denoted d1; the '1' indicates a 1-level DWT. Thus, two series of 128 coefficients are found by a process in which each coefficient is obtained by taking one value for every two values in the original signal, by averaging successive values. This process is called down sampling. The way the averaging is performed depends on the scaling signal and wavelet. The s1 series captures the trend of the signal with much less noise than the original signal, while the d1 series captures only the high-frequency noise. The 'averaging' means that local behaviour is captured by shifting a window along the signal, similar to the Moving Average Moving Window (MAMW), a well-established time series modelling method. In fact, applying a scaling signal or wavelet is equivalent to shifting a window along the signal. Capturing local behaviour is possible because the scaling signals and wavelets are locally defined in time. Each window is obtained by shifting, or translating, the scaling signal or wavelet to the right.
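The 256-to-128 down-sampling just described can be made concrete. This minimal sketch uses the Haar pair (pairwise averages and differences) rather than the paper's db2 filters, which is an assumption made for brevity.

```python
import numpy as np

def dwt_level1(x):
    """1-level DWT: s1 captures the trend, d1 the deviations from it.
    Each output series is half the length of the input (down sampling)."""
    x = np.asarray(x, dtype=float)
    even, odd = x[0::2], x[1::2]
    s1 = (even + odd) / np.sqrt(2.0)   # scaling-filter output (low-pass)
    d1 = (even - odd) / np.sqrt(2.0)   # wavelet-filter output (high-pass)
    return s1, d1

signal = np.random.default_rng(1).standard_normal(256)
s1, d1 = dwt_level1(signal)   # two series of 128 coefficients each
```

Because the filter pair is orthonormal, the energy of s1 and d1 together equals the energy of the signal, the preservation-of-energy property used later for the variance decomposition.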

In practice, this results in multiplying the original signal by a translated version of the scaling signal or wavelet. This complete process is only a first step. For further refinement, the complete process is iterated, each time on the previously obtained series capturing the trend, so that at each level a new separation is made between trend and deviation from the trend. This is how the wavelet decomposition tree is obtained; going deeper into the tree entails separating lower-frequency from higher-frequency behaviour. At the top level, the original signal can be found, including both low- and high-frequency behaviour. Going a level deeper, the original signal is separated into a series capturing the trend, s1, and a series capturing deviations from the trend, d1; this is the 1-level decomposition. Repeating the same step on the series capturing the 1-level trend, s1, trend is again separated from detail, yielding s2 and d2, that is, a 2-level decomposition. The number of levels in the wavelet decomposition depends on the nature and size of the signal in 2^n scaling.

It can be shown that s1 and d1, or s2 and d2, for example, can be obtained directly by multiplying the original signal by a stretched, or scaled, version of the original wavelet or scaling signal. The more stretched the wavelet is, the more lower-frequency behaviour it captures; the amount of stretching depends on the level of decomposition. The mathematical term for scaling is dilation. The complete collection of translated and dilated wavelets forms a wavelet family, where each member is derived from the original, or mother, wavelet. In the same way, there is also a family of scaling signals. By using translated and dilated versions of the mother wavelet or scaling signal, a window variable in both time and scale can be shifted along the signal. Translating, or shifting, ensures that all elements in the signal are captured, while dilating ensures that all behaviour in the signal is captured: the dilated, or stretched, wavelet or scaling signal can capture the lowest-frequency behaviour. This is the way the wavelet transform is performed in order to analyse, or decompose, signals. The reverse process is called reconstruction, or synthesis, and the mathematical manipulation that effects synthesis is called the inverse discrete wavelet transform (IDWT). The mathematical formulation of the decomposition is given in References [18,19].

Wavelets are introduced into the domain of risk by examining the variance at multiple scales. A vector of wavelet coefficients is associated with changes at a particular scale; essentially, each wavelet coefficient is constructed from a difference of two weighted averages. Thus, when the DWT is applied to a stochastic process, it produces a decomposition on a scale-by-scale basis, and therefore the variance of the underlying process can be decomposed on a scale-by-scale basis as well. In order to decompose the variance at multiple levels, it is necessary to show that the DWT is energy preserving, that is, that the sum of all squared wavelet coefficients plus the sum of squared scaling coefficients at scale J equals the sum of squared elements of the underlying process f. Let w be the vector containing the wavelet coefficients of levels j = 1, 2, ..., J and the scaling coefficients of level J:

\|w\|^2 = \sum_{j=1}^{J} \sum_{k=1}^{N/2^j} D_{j,k}^2 + S_{J,0}^2 = \sum_{t=0}^{N-1} f_t^2 = \|f\|^2    (16)

If the signal/time series f(t) is decomposed at one level, then f(t) = S1 + D1. S1 is referred to as the level-1 approximation of the original signal, whereas D1 is referred to as the level-1 detail. If the signal is decomposed to level 3, then f(t) = A3 + D3 + D2 + D1. If the number N of signal values is divisible J times by two, then a J-level multi-resolution analysis (MRA) yields f(t) = AJ + DJ + ... + D2 + D1. Thus, the signal is built up from a low-resolution signal AJ, to which the detail signals DJ, ..., D2, D1 are added to form the original signal.
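The additive form f(t) = A1 + D1 can be verified directly. The sketch below builds full-length level-1 approximation and detail signals with a Haar-style average/difference split, an assumption made for brevity (the paper's examples use db2/db4).

```python
import numpy as np

def mra_level1(f):
    """Split f into a level-1 approximation A1 and detail D1 (both at
    full length), so that f = A1 + D1 exactly."""
    f = np.asarray(f, dtype=float)
    even, odd = f[0::2], f[1::2]
    avg = (even + odd) / 2.0          # local trend
    dif = (even - odd) / 2.0          # local deviation from the trend
    A1 = np.empty_like(f)
    D1 = np.empty_like(f)
    A1[0::2], A1[1::2] = avg, avg     # upsampled approximation signal
    D1[0::2], D1[1::2] = dif, -dif    # upsampled detail signal
    return A1, D1
```

Applying the same split to A1 again yields A2 and D2, giving f = A2 + D2 + D1, and so on down to the J-level form quoted in the text.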

The variance is a basic measure of risk; it gives the amount by which the expected return will vary. If the variance is time varying, that is, different for different time horizons or scales, multi-resolution analysis via wavelets is a better solution. For a time series at its complete resolution, one would expect to estimate the variance at a specific resolution; with wavelets it is possible to estimate the variance for each level j. This is possible due to a specific property of wavelet analysis, the preservation of energy, which ensures that the variances of the individual levels sum to the variance of the original time series, mirroring the components of the multi-resolution analysis:

\|f\|^2 = \sum_{j=1}^{J} \|D_j\|^2 + \|A_J\|^2    (17)

The return series considered in this model is on an hourly basis. The DWT is dyadic, that is, based on powers of two. Therefore, the level-1 detail corresponds to a 2-hour horizon, whereas the level-2 and level-j details correspond to 4-hour and 2^j-hour horizons, respectively. The percentage relative amount of energy associated with level j is the ratio of the energy of the level-j signal to the energy of the complete signal:

e_r = \frac{\|D_j\|^2}{\|f\|^2} \times 100    (18)

If j = 8, for a data size of 256 hours, the ratio is for 256 hours. Taking j = 7 in (18) gives the ratio for 128 hours, and so on.

\sigma_{moderated} = (e_r)(\sigma)

Using Equation (6), the VaR at the confidence level α = 95%, for expected/mean return W0 and horizon h, is given by

VaR = (W_0)(1.65)(\sigma_{moderated})(\sqrt{h})(\gamma)(\lambda)    (19)

In Equation (19), γ is a multiplying factor representing per-day volatility, and the scaling factor λ is assumed depending on the expected low- or high-volatility regime. For the first case of 256 hours, it is assumed to be one. Thus, both factors depend on the trend in the historical data and the trend expected in the future. Equations (6) and (19) are similar, but the moderated variance using wavelets can be determined for horizons in steps of 2^j hours; that is, to forecast volatility for 128 hours, in the present case the ratio of energy must be estimated with j = 7. The factor γ also facilitates forecasting depending on the expected period of low or high volatility. The sample calculation further illustrates this approach using wavelets.

3.5. Steps for calculation of VaR

- Get an equation for the preservation of energy using the DWT in terms of a scale-by-scale decomposition.
- Then, express the relative amount of energy associated with detail level j.
- Then, apply the total relative energy to the original variance to obtain a moderated variance.
- Then, estimate VaR for the given weight corresponding to the confidence level, with the initial portfolio value W0 and horizon h.

3.6. Sample calculation

- The GARCH (1, 1) is applied to both the historical and the actual data points of 256 hours.
- The value of the scaling factor λ is assumed to be unity.

- The sample calculation is for the historical data and the db2 wavelet. The VaR is estimated for 256 hours; hence, take the percentage relative energy at detail level j = 8. If estimating for 128 hours, then j = 7.

e_r = \frac{\|D_8\|^2}{\|f\|^2} \times 100 = \frac{\sum_{i=1}^{256} (D_8(i))^2}{\sum_{i=1}^{256} (f(i))^2} \times 100 = \frac{0.2451}{1.4110} \times 100 = 17.37

- The moderated variance is

\sigma_{moderated} = (e_r)(\sigma) = (17.37)(0.0051) = 0.0878

- The VaR for the historical series at the 95% confidence level, for 256 hours, is determined using

VaR = (W_0)(1.65)(\sigma_{moderated})(\sqrt{h})(\gamma) = (0.0200)(1.65)(0.0878)(\sqrt{256})(1.6) = 0.1047
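The sample calculation can be replayed end to end. The sketch below uses a Haar decomposition on a synthetic 256-point series (the paper uses db2 on NSW price returns), with σ = 0.0051, W0 = 0.02, γ = 1.6 and λ = 1 taken from the sample calculation; the energy ratio follows Equations (18) and (19) as printed, so the numbers produced will differ from the paper's.

```python
import numpy as np

def detail_energies(x, levels):
    """||D_j||^2 for j = 1..levels plus the final approximation energy."""
    s = np.asarray(x, dtype=float)
    out = []
    for _ in range(levels):
        even, odd = s[0::2], s[1::2]
        s, d = (even + odd) / np.sqrt(2.0), (even - odd) / np.sqrt(2.0)
        out.append(float(np.sum(d ** 2)))
    return out, float(np.sum(s ** 2))

x = np.random.default_rng(3).standard_normal(256)   # synthetic hourly returns
d_energy, a_energy = detail_energies(x, 8)

e_r = 100.0 * d_energy[7] / float(np.sum(x ** 2))   # Equation (18), j = 8
sigma_mod = e_r * 0.0051                            # sigma(moderated) = (e_r)(sigma)
W0, h, gamma, lam = 0.02, 256, 1.6, 1.0
var = W0 * 1.65 * sigma_mod * np.sqrt(h) * gamma * lam   # Equation (19)
```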

Table V gives the forecasted VaR and the actual VaR for the variance rate, that is, the worst-case volatility, for a return series of 256 data points. The value of VaR is relative to the mean. The values are comparable, and thus both models can be used for comparison and verification. The VaR forecasted from the historical data depends on the scaling factor γ. The forecasted and actual values of VaR are comparable in the case of 256 hours. In the case of 128 hours (Table VI), the error is high, but one can forecast based on the expected regime of low or high volatility. Referring to Figure 1, the historical data contain a high-volatility regime in the first 128 hours, whereas the actual (forecast) data contain a low-volatility regime. Thus, appropriate use of the scaling factor will reduce the forecasting error. In the analysis discussed so far, the complete series was considered, and regimes of low and high volatility were not separated for modelling. Section 4 discusses the implementation of a regime-switching approach using wavelets via multi-resolution analysis.

Table V. VaR using wavelet energy distribution (256 hours).

Type of wavelet   Forecasted VaR for 256 hours
                  Historical data                 Actual (future) data
                  γ = 2.0   γ = 1.8   γ = 1.6     γ = 2.0   γ = 1.8   γ = 1.6
db2               0.0760    0.0684    0.0608      0.0744    0.0670    0.0595
db4               0.0928    0.0835    0.0743      0.0862    0.0776    0.0690
GARCH (1,1)       0.0978                          0.0916


Table VI. VaR using wavelet energy distribution (128 hours).

Type of wavelet   Forecasted VaR for 128 hours
                  Historical data                 Actual (future) data
                  γ = 2.0   γ = 1.8   γ = 1.6     γ = 2.0   γ = 1.8   γ = 1.6
db2               0.0656    0.0591    0.0525      0.0031    0.0028    0.0025
db4               0.0803    0.0723    0.0643      0.0027    0.0025    0.0022
GARCH (1,1)       0.1048                          0.0029


4. MODELLING VOLATILITY CLUSTERING USING MULTI-RESOLUTION ANALYSIS VIA WAVELET TRANSFORM BASED ON REGIME-SWITCHING APPROACH

The fundamentals of wavelet analysis and multi-resolution are discussed in Section 3. Recent trends in the application of multi-resolution analysis via the wavelet transform in power systems engineering are surveyed in Reference [12]. In the first model discussed, the worst-case rate of volatility is forecasted. The proposed second model forecasts VaR relative to mean returns for an electricity price return series over a period of 440 days. In the case of bilateral contracts, the volatility of prices is minimised; pool prices, on the other hand, are volatile in nature.

Hence, to model volatility clustering, a better approach is to use pool prices, with a regime-switching approach to model the low and high clustering of prices. For the case study, the daily average prices for the period December 2001 to November 2004 from the NSW market are used as historical data [17]. This market is 100% pool type and is therefore the most volatile market. From the risk manager's point of view, the proposed model is used to forecast VaR for 440 days of daily average prices. The daily average prices in the NSW market are in Australian Dollars and are converted to a log-return basis. The plots of the year-wise log return prices are shown in Figure 2. For the development of the model, the data are analysed for trends, seasonality (month-to-month variation), mean-reverting behaviour, jumpy nature, duration of low- and high-volatility time slots (regimes), etc.

For the regime-switching approach, the trend of variation is important for defining regimes. The trend observed in Figure 3 is very complex in nature, and the periodicity of clustering is not on an annual basis. Therefore, the data are concatenated for the entire period of 3 years; the plot of these data points (log return prices) is shown in Figure 3. Referring to Figure 3, the periodicity of the regimes is observed to be less than 365 days. Therefore, the data are split into a number of time slots depending on the regimes of clusters. Detailed analysis of the data showed that a slot of 220 data points results in an appropriate representation of the regimes of volatility clustering; the data are therefore divided into five slots. The first three slots (data points 1–660) are considered as historical data, and the last two slots (440 data points) are used for verification of the model. The plot of the data for slots/regimes 1 to 3 is shown in Figure 4.

It is observed that there are very few wild fluctuations in the historical data. There are three such points, where the price is higher than 30 times the average price. These prices are very unpredictable and beyond the limits of any forecasting methodology, so these data points are replaced by values in line with the rest of the data. The regime-switching model for a time series using multi-resolution is not based on other system data/parameters, which are included in other types of models such as ANN, expert systems, etc. Thus, it is not possible to introduce such variables into the price return series to account for sudden increases in prices.


Figure 2. Year-wise daily average log return prices.


From the analysis of the data in the first three slots, considerable fluctuations are observed. There are regimes of slow-moving as well as rapid fluctuations. The rapid ones are due to unpredictable causes such as congestion. Although exact modelling of these issues is very difficult, an attempt is made to incorporate the presence of such rapid fluctuations in the net energy considering all points. Therefore, it is essential to separate the slow-moving fluctuations from the rapid ones. The wavelet transform technique is best suited for this application, as it supports perfect reconstruction of the signal. The steps adopted for the development of the model are as follows.

Figure 3. Daily average log return prices for the years 2002–2004.

Figure 4. Log return prices for the first three time slots/regimes.

4.1. Selection of wavelet

Different wavelets have different scaling signals. The important characteristics of wavelets are (1) support width, (2) vanishing moments, (3) symmetry and (4) regularity. The Haar and Daubechies transforms are defined in terms of running averages and differences. However, these averages and differences are computed with different scaling signals and wavelets; as the support of these wavelets is larger, the averages and differences use more values of the signal, providing a tremendous improvement in the capabilities of these transforms. The selection of the wavelet is a very critical step, as it affects a variety of parameters, viz. the output at each level, the number of calculations, the data compression capability, etc. The Daubechies family is a family of compactly supported wavelets with the highest number of vanishing moments for a given support width. The db1 wavelet is identical to the Haar wavelet, and the dbN wavelet has a support width of 2N − 1 and N vanishing moments. Daubechies wavelets are not symmetric. The Wavelet Toolbox available with MATLAB™ is used for the development of the model. Figure 5 shows the Daubechies 2 wavelet and the associated scaling function, which are used in this paper.

Figure 5. Daubechies 2 wavelet and scaling function.

4.2. Multi-resolution analysis

From the volatile data in each slot, features are extracted at different frequency resolutions using the wavelet transform. Detailed analysis showed that the optimum number of frequency resolutions is three; using more resolutions produces only a minimal change in the energy error. Using three resolutions, the features are manipulated and the signal is reconstructed for each slot. The algorithm is applied to all three time slots of historical data (data points 1–660), and three signals, say f1, f2 and f3, of 220 points each are synthesised. These signals are averaged with appropriate weight factors k1, k2 and k3, and a coarse model of 220 data points, say f(n), is developed such that

f(n) = \frac{f_1(n) \cdot k_1 + f_2(n) \cdot k_2 + f_3(n) \cdot k_3}{3}

The series f(n) is generated such that the energy error is minimised. f(n) is an approximate representation of the time series; hence, fine tuning is required, depending on the duration of the regimes of high and low clustering.
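The weighted average above can be written down directly. In this sketch, f1, f2 and f3 are placeholder random signals standing in for the synthesised slot signals, and the weights are the optimal multiplying factors later reported in Table VII.

```python
import numpy as np

rng = np.random.default_rng(4)
# stand-ins for the three synthesised 220-point slot signals
f1, f2, f3 = (rng.standard_normal(220) for _ in range(3))
k1, k2, k3 = 0.2, 1.2, 3.8    # optimal multiplying factors (Table VII)

# coarse 220-point model: weighted average of the three synthesised slots
f = (f1 * k1 + f2 * k2 + f3 * k3) / 3.0
```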

4.3. Tuning of the model

The tuning can be done at different frequency resolutions. A specific repetitive trend, that is, regimes of low and high volatility clustering, exists in the historical data. The model is tuned for trends according to the volatility behaviour of each regime. A suitable weighting factor is therefore used so that the difference between the energy levels of the forecasted values and the actual data is minimised for each regime. From the analysis of the data, it is seen that there exist two distinct clusters (regimes) of volatility, viz. low volatility followed by high volatility. These clusters are viewed as regimes of behaviour. Let the approximated wave of the first regime of f(n) be represented as g1(n) and that of the second regime as g2(n), and let the weighted model be represented as F(n). The weighted model is then given by Equation (20):

F(n) = v_1 g_1(n) + v_2 g_2(n)    (20)

where v1 and v2 are the weight factors associated with regime 1 and regime 2, respectively. The evaluation of optimised values for the regime length, k1, k2, k3, v1 and v2 is necessary for prediction.

Different regime lengths are tested, from 70 data points in the first regime of low volatility to 150 data points, in steps of 10 data points. This correspondingly assigns (220 − 70) down to (220 − 150) data points to the high-volatility regime. The optimised values of k1, k2, k3, v1 and v2 are evaluated by minimising the difference between the energy levels of the forecasted values and the actual data in the two regimes, between the F(n) and f1(n) waves, for the various regime lengths. A similar exercise is repeated for f2(n) and f3(n) with F(n) as the base wave. The optimised values of k1, k2, k3, v1 and v2 are evaluated for each case, and a matrix of optimised parameters is obtained. Trends in the values of k1, k2, k3, v1 and v2 are studied, and a set of parameters is identified from this matrix. The final set of optimised parameters for the model is given in Table VII.

Referring to Table VII, the duration of the forecast is 440 days and three resolutions are used. The optimal regime lengths are 130 and 90 days of low and high volatility, respectively. The algorithm developed optimises the parameters and the regime length such that the standard energy errors are minimised, the energy errors being those between the forecasted series (based on the historical data) and the actual data.
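The tuning loop described above can be sketched as a brute-force grid search. The regime-length range (70 to 150 in steps of 10) follows the text, while the energy-error objective and the candidate weight grid are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def energy(x):
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def tune_regimes(f_slot, target, lengths=range(70, 151, 10),
                 weights=np.arange(0.5, 4.05, 0.1)):
    """Pick the regime length and the weights v1, v2 of Equation (20) that
    minimise the energy error between the weighted model and the target."""
    best = None
    for L in lengths:                       # low-volatility regime length
        g1, g2 = f_slot[:L], f_slot[L:]     # regime 1 / regime 2 of f(n)
        e1, e2 = energy(target[:L]), energy(target[L:])
        for v1 in weights:
            for v2 in weights:
                err = abs(energy(v1 * g1) - e1) + abs(energy(v2 * g2) - e2)
                if best is None or err < best[0]:
                    best = (err, L, float(round(v1, 1)), float(round(v2, 1)))
    return best
```

On a toy target whose two regimes are scaled copies of f(n), the search recovers both the regime boundary and the scaling weights.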

4.4. Forecasting VaR using proposed model

Normally, it is convenient for a risk manager to forecast Value at Risk for the returns on an annual basis. Therefore, forecasting for more than 220 days is required. The model developed is based on repetitive units of 220 days; to achieve forecasting on an annual basis, the developed model is concatenated for two units of 220 days, resulting in 440 days. Any transform is based on the assumption that the information content must be unaltered, so that when the inverse transform is taken, the original signal can be retrieved. The information content is measured in terms of the energy in the given signal. In order to forecast VaR, the error between the energy levels of the forecasted and actual series is minimised; hence, the approach adopted is to minimise the error in the energy of the forecasted series, based on the historical data, relative to the actual series. To evaluate the performance of the proposed model, a verification approach based on standard time series forecasting errors such as CFE, MAD, MSE and MAPE is used. Table VIII gives the energy error and the values of the other standard errors for the historical and forecasted price return series. As the tuning of the model is done by minimising the error in energy level, the values are well within limits.

The VaR approach to risk quantification is widely used. It gives the worst-case loss in returns at a given probability level. The 5% probability means that, out of the total 440 days, on 5% of the days the estimated maximum loss in the average (expected) return will equal the value of VaR. If VaR is zero, the loss is zero or negligible compared with the expected or average value of the asset or return. This parameter is useful to the risk manager for deciding the market-dealing strategy based on the forecasted VaR values of the return series. Table IX gives the VaR for the historical and forecasted return series.

Table VII. Parameters of model.

Sr. No.  Description                                    Value
1        Duration of forecast                           440 days
2        Resolution for wavelet analysis                3
3        Optimal k1                                     0.2
4        Optimal k2                                     1.2
5        Optimal k3                                     3.8
6        Optimal v1                                     2.0
7        Optimal v2                                     2.62
8        First regime of low volatility (of 220 days)   130 days
9        Second regime of high volatility (of 220 days) 90 days
10       Total regimes for 220 days                     2
11       Total regimes for 440 days                     4

Table VIII. Standard error terms.

Sr. No.  Description                                                       Value
1        Energy of forecasted price return wave for regime 1 of 440 days   0.2276
2        Energy of historical price return wave for regime 1 of 440 days   0.2289
3        Energy of forecasted price return wave for regime 2 of 440 days   0.5449
4        Energy of historical price return wave for regime 2 of 440 days   0.5466
5        Energy of forecasted price return wave for regime 3 of 440 days   0.2276
6        Energy of historical price return wave for regime 3 of 440 days   0.1861
7        Energy of forecasted price return wave for regime 4 of 440 days   0.5449
8        Energy of historical price return wave for regime 4 of 440 days   0.5640
9        Energy of forecasted price return wave of 440 days                0.3750
10       Energy of historical price return wave of 440 days                0.3731
11       CFE                                                               2.4450
12       MAD                                                               0.3410
13       MSE                                                               0.2863
14       %MAPE                                                             15.4451

Table IX. Forecasted VaR.

Sr. No.  Description                                Value
1        VaR of historical price return wave (5%)   0.6172
2        VaR of forecasted price return wave (5%)   0.6112
3        VaR of historical price return wave (1%)   0.8731
4        VaR of forecasted price return wave (1%)   0.8668
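A historical-simulation VaR of the kind reported in Table IX can be computed with an empirical quantile. The return series below is synthetic (a stand-in for the NSW log returns), so the numbers it yields are illustrative only.

```python
import numpy as np

def var_relative_to_mean(returns, p):
    """Worst-case loss relative to the mean return at probability level p
    (e.g. p = 0.05 for 5% VaR), from the empirical distribution."""
    r = np.asarray(returns, dtype=float)
    return float(np.mean(r) - np.quantile(r, p))

# synthetic 440-day daily log-return series
returns = np.random.default_rng(5).normal(0.0, 0.3, size=440)
var_5 = var_relative_to_mean(returns, 0.05)   # 5% VaR
var_1 = var_relative_to_mean(returns, 0.01)   # 1% VaR
```

As in Table IX, the 1% VaR is larger than the 5% VaR, since a rarer loss level lies further out in the tail.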

5. CONCLUSION

In this paper, the problem of modelling a non-stationary electricity price time series is studied, and its application to forecasting VaR for a return series is demonstrated. The first, heteroscedasticity-based GARCH (p, q) model forecasts VaR for the half-hourly price return series. This model forecasts the worst-case variance rate relative to the mean variance rate, that is, VaR for the variance rate.

The heteroscedasticity-based regression models are based on historical data and assess the time series at its complete resolution, regardless of the time horizon chosen. Therefore, decomposition of the signal into all of its components, that is, separating higher-frequency from lower-frequency behaviour, is required in order to analyse which of these components carry relevant information. Wavelets can be used for the decomposition, and the regime-switching approach then leads to proper modelling of the clusters in the time series.

Electricity pool prices are volatile and exhibit volatility clustering. The second model forecasts VaR for the price return series of a pool market. It is based on the decomposition of the time series using multi-resolution analysis via the wavelet transform, and it takes into account the regimes of low and high clustering while tuning the model to minimise the net energy error. The model predicts VaR relative to mean returns for the price return series over a period of 440 days, based on historical data of 660 days. The standard forecast error measures are well within limits. The model is a parametric representation, as it is based on variables such as slot length and regime length.

6. LIST OF SYMBOLS AND ABBREVIATIONS

6.1. Abbreviations

ANN           artificial neural networks
AR (p)        auto regressive model
ARCH (p)      auto regressive conditional heteroscedasticity model
C             probability level
CFE           cumulative forecast error
DWT           discrete wavelet transform
GARCH (p, q)  generalised auto regressive conditional heteroscedasticity model
GENCO         Generation Company
IDWT          inverse discrete wavelet transform
MAD           mean absolute deviation
MAMW          moving average moving window
MAPE          mean absolute percentage error
MRA           multi-resolution analysis
MSE           mean square error
NSW           New South Wales
SMP           system marginal price
SDE           stochastic differential equation
VaR           Value at Risk

6.2. Symbols

db2                     one type of wavelet of the Daubechies family
db_t                    increment of a standard Brownian motion
E(σ²_{N+K})             expected volatility on the (N + K)th day
e(t)                    residual or forecast error at time t
f_n                     value of the asset at the end of day n
f_0                     initial value of the asset
fu_n and fs_n           return and variance rate of the future return series (fu)
f(t)                    forecasted (predicted) time series
h                       time horizon
hu_n and hs_n           return and variance rate of the historical return series (hu)
k1, k2, k3, v1 and v2   multiplying and weight factors
r                       return on the investment
r_n                     continuously compounded return of the portfolio/asset during day n
s_n and d_n             the nth-level decomposition of the wavelet tree
S_n and D_n             level-n approximation and detail of the original signal
V_L                     long-run average variance
w                       vector containing the wavelet coefficients of levels j = 1, 2, ..., J and the scaling coefficients of level J
X_t                     discrete stochastic process
x(t)                    historical time series
μ(X, t)dt               drift
μ                       mean
σ(X, t)dt               price volatility
σ                       standard deviation
γ                       weight assigned to V_L
α, β                    constants

7. APPENDIX A

7.1. Multi-resolution analysis using wavelet transform

Multi-resolution analysis provides a natural framework for understanding wavelet bases and constructions of the wavelet transform. The signal decomposition is a multi-resolution analysis (MRA), which consists of a sequence of successive approximation spaces V_j with the following properties:

V_j \subset V_{j+1} \quad \text{and} \quad V_{j+1} = V_j \oplus W_j    (A.1)

where W_j is the orthogonal complement of V_j in V_{j+1}, and \oplus denotes the orthogonal sum of two subspaces. Orthonormal scaling functions (\phi_{j,k}, k \in Z) are included in V_j, and the orthonormal wavelet bases (\psi_{j,k}; j, k \in Z) are in W_j. The dilated and translated wavelet basis \psi_{j,k} becomes

\psi_{j,k}(x) = 2^{j/2} \psi(2^j x - k) = \sum_n g_{n-2k} \phi_{j+1,n}(x)    (A.2)

g_{n-2k} = \langle \psi_{j,k}, \phi_{j+1,n} \rangle \quad \text{and} \quad \sum_n |g_{n-2k}|^2 = 1    (A.3)

An efficient hierarchical algorithm for computing the wavelet coefficients of a given function f has been developed. The wavelet coefficient at the jth level and kth time is

d_{j,k} = \langle f, \psi_{j,k} \rangle = \sum_n g_{n-2k} \langle f, \phi_{j+1,n} \rangle = \sum_n g_{n-2k} c_{j+1,n}    (A.4)

The coefficient d_{j,k} is obtained by convolving the sequence (c_{j+1,n})_{n \in Z} with (g_{-n})_{n \in Z} and then decimating by a factor of 2. The scaling function coefficient at the jth level and kth time becomes

c_{j,k} = \langle f, \phi_{j,k} \rangle = \sum_n h_{n-2k} \langle f, \phi_{j+1,n} \rangle = \sum_n h_{n-2k} c_{j+1,n}    (A.5)

h_{n-2k} = \langle \phi_{j,k}, \phi_{j+1,n} \rangle \quad \text{and} \quad \sum_n |h_{n-2k}|^2 = 1    (A.6)


Figure A.1. Two level decomposition using wavelets.


Figure A.1 depicts a two-level decomposition using wavelet analysis. The high-pass filter g_{-k} is constructed from the low-pass filter h_{-k} by reversing the order of the real coefficients and reversing the sign of alternate coefficients. This makes the high-pass magnitude response the mirror image of the low-pass response about the filter bank's middle frequency of π/2, resulting in a dyadic (divide-by-two) splitting of the frequency bands.
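The construction of the high-pass filter from the low-pass filter (reverse the order, flip alternate signs) can be checked numerically. The db2 coefficients below are the standard Daubechies-2 values.

```python
import numpy as np

# Daubechies-2 (db2) low-pass (scaling) filter coefficients
h = np.array([1 + np.sqrt(3), 3 + np.sqrt(3),
              3 - np.sqrt(3), 1 - np.sqrt(3)]) / (4 * np.sqrt(2))

# high-pass filter: reverse the coefficient order, then flip alternate signs
g = h[::-1] * np.array([1.0, -1.0, 1.0, -1.0])
```

Both filters have unit energy and are orthogonal to each other, which is what makes the dyadic band splitting invertible.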

The filter implementation of the wavelet transform is not an efficient algorithm, because of the down sampling that follows each filtering step. To enhance computational efficiency, the lifting scheme is adopted. The lifting scheme is a space-domain construction of bi-orthogonal wavelets developed by Sweldens. Daubechies and Sweldens proved that any analysis or synthesis filter pair can be factored into lifting steps. The lifting technique consists of the following steps:

(1) Splitting: Splitting means separating the data into two distinct data sets. The data $x(i)$ is separated into an even-indexed subset $x_e(i)$ and an odd-indexed subset $x_o(i)$:

$$x_e(i) = x(2i) \quad \text{and} \quad x_o(i) = x(2i - 1) \qquad (A.7)$$

(2) Prediction $d(i)$: Also known as dual lifting, it is the difference between $x_o(i)$ and $x_e(i)$ weighted by the prediction operator $P$:

$$d(i) = x_o(i) - P\,x_e(i) \qquad (A.8)$$

(3) Update $c(i)$: Also known as primal lifting, it represents a coarse approximation of the original signal $x(i)$. It is calculated from $x_e(i)$ and $d(i)$ weighted by the update operator $U$:

$$c(i) = x_e(i) + U\,d(i) \qquad (A.9)$$

(4) Normalisation: $d(i)$ and $c(i)$ are normalised as

$$d_n(i) = n_L\,d(i) \quad \text{and} \quad c_n(i) = n_H\,c(i) \qquad (A.10)$$

The number of prediction operators $P$ and update operators $U$, and their values, depend on the type of wavelet. The inverse transformation is obtained by reversing the sequence of the operations along with the signs:

$$x_e(i) = c(i) - U\,d(i) \quad \text{and} \quad x_o(i) = d(i) + P\,x_e(i) \qquad (A.11)$$

Figures A.2 and A.3 show the diagrammatic representation of the lifting technique.
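As a concrete sketch of steps (A.7)–(A.11) (not taken from the paper), the Haar wavelet can be lifted with the identity prediction operator ($P = 1$), the update operator $U = 1/2$, and normalisation constants $\sqrt{2}$ and $1/\sqrt{2}$:

```python
import math

def haar_lift_forward(x):
    """Forward Haar lifting: split (A.7), predict (A.8), update (A.9),
    normalise (A.10). Assumes len(x) is even."""
    xe, xo = x[0::2], x[1::2]                    # split
    d = [o - e for e, o in zip(xe, xo)]          # predict with P = 1
    c = [e + di / 2 for e, di in zip(xe, d)]     # update with U = 1/2
    s = math.sqrt(2.0)
    return [ci * s for ci in c], [di / s for di in d]  # normalise

def haar_lift_inverse(c, d):
    """Inverse transform: reverse the operations and the signs (A.11)."""
    s = math.sqrt(2.0)
    c = [ci / s for ci in c]
    d = [di * s for di in d]
    xe = [ci - di / 2 for ci, di in zip(c, d)]   # undo update
    xo = [e + di for e, di in zip(xe, d)]        # undo predict
    x = []
    for e, o in zip(xe, xo):                     # interleave even/odd
        x.extend([e, o])
    return x

c, d = haar_lift_forward([4.0, 6.0, 10.0, 12.0])
recovered = haar_lift_inverse(c, d)              # gives back the input
```

Note how each in-place step (predict, update) is trivially invertible, which is the source of the lifting scheme's efficiency over the convolve-and-decimate implementation.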



Figure A.2. Lifting scheme: forward transformation.

Figure A.3. Lifting scheme inverse transformation.


ACKNOWLEDGEMENTS

The authors thank the Ministry of Human Resource Development, Government of India, for its support in carrying out this research work (Project No. F.27-16/2003.TS.V).

REFERENCES

1. Stoft S. Power System Economics: Designing Markets for Electricity. John Wiley & Sons: New York, 2002.
2. Yadong Q. An expert system for electricity pricing. Proceedings of TENCON '93 IEEE Region 10 Conference, 1993; 5:383–384.
3. Kian A, Keyhani A. Stochastic price modelling of electricity in deregulated energy markets. Proceedings of the 34th Annual Hawaii International Conference on System Sciences, 2001: 7.
4. González AM, San Roque AM, García-González J. Modelling and forecasting electricity prices with input/output hidden Markov models. IEEE Transactions on Power Systems 2005; 20(1):13–25.
5. Szkuta BR, Sanabria LA, Dillon TS. Electricity price short-term forecasting using artificial neural networks. IEEE Transactions on Power Systems 1999; 14(1):851–857.
6. Nogales FJ, Contreras J, Conejo AJ, Espínola R. Forecasting next-day electricity prices by time series models. IEEE Transactions on Power Systems 2002; 17(2):342–348.
7. Shumway RH, Stoffer DS. Time Series Analysis and Its Applications, 1st edn. Springer-Verlag: New York, 2000.
8. Risk assessment and financial management. IEEE Tutorial, 1999.
9. Karandikar RG, Abhyankar AR, Khaparde SA, Nagaraju P. Assessment of risk involved with bidding strategies of a Genco in a day-ahead market. International Journal of Emerging Electric Power Systems, The Berkeley Electronic Press, vol. 1, no. 1, Article 1006, 2004.
10. Stevenson M. Filtering and forecasting spot electricity prices in the increasingly deregulated Australian electricity market. Quantitative Finance Research Papers, University of Technology, Sydney, September 2001.
11. Knittel CR, Roberts MR. An empirical examination of deregulated electricity prices. University of California Press: Berkeley, CA, 2001.
12. Deshpande NR, Kulkarni SV, Gadre VM, Khaparde SA. Recent trends in applications of wavelet transform to power system engineering. Proceedings of the 36th North American Power Symposium, NAPS04, Idaho, USA, 2004.
13. Bollerslev T. Generalized autoregressive conditional heteroscedasticity. Journal of Econometrics 1986; 31(3):307–327.
14. Hansen PR, Lunde A. Does anything beat a GARCH(1,1)? A comparison based on test for superior predictive ability. CIFR 2003; 301–307.
15. Garcia RC, Contreras J, van Akkeren M, Garcia JBC. A GARCH forecasting model to predict day-ahead electricity prices. IEEE Transactions on Power Systems 2005; 20(2):867–874.
16. Contreras J, Espínola R, Nogales FJ, Conejo AJ. ARIMA models to predict next-day electricity prices. IEEE Transactions on Power Systems 2003; 18(3):1014–1020.
17. Available online: www.nemmco.com
18. Daubechies I. Ten Lectures on Wavelets. SIAM: Philadelphia, PA, 1992.
19. Jensen A, la Cour-Harbo A. Ripples in Mathematics: The Discrete Wavelet Transform. Springer-Verlag: Berlin, 2001.
20. Hull JC. Options, Futures, and Other Derivatives. Pearson Education: Singapore, 2003.

AUTHORS’ BIOGRAPHIES


R. G. Karandikar, born in 1965, is currently pursuing a Ph.D. at the Indian Institute of Technology-Bombay, Mumbai, India. He is a postgraduate in Electrical Engineering from V.J.T.I., Mumbai, India. His research interests include risk assessment in restructured power systems and distributed generation.

N. R. Deshpande, born in 1975, worked at Crompton Greaves Limited for 5 years in R&D. He completed his M.Tech. in Electrical Engineering at the Indian Institute of Technology-Bombay, Mumbai, India. His research interests include the application of wavelet transforms to power systems.

S. A. Khaparde is a Professor in the Department of Electrical Engineering, Indian Institute of Technology-Bombay, Mumbai, India. His research interests include power system computations, analysis, and deregulation in the power industry.

S. V. Kulkarni is an Associate Professor in the Department of Electrical Engineering, Indian Institute of Technology-Bombay, Mumbai, India. Previously, he worked at Crompton Greaves Limited, where he specialised in the design and development of transformers up to the 400 kV class. He has authored the book 'Transformer Engineering: Design and Practice', published by Marcel Dekker, Inc. He is the recipient of the Young Engineer Award (2000) from the Indian National Academy of Engineering. His research interests include distributed generation, transformer design and analysis, and computational electromagnetics.
