Pricing CDOs using the Intensity Gamma Approach

Christelle Ho Hio Hen, Aaron Ipsa, Aloke Mukherjee, Dharmanshu Shah

Computing Methods, December 19, 2006


Table of Contents

Introduction
    Advantages of Intensity Gamma Approach
    Default Correlation in Intensity Gamma
    Implementation Overview
Survival Curve
Construction of the Business Time Path
Calculating IG forward intensities (ci)
Getting the Default Times from the Business Time Paths
CDO pricer
Validation
    Roundtrip Test
    A Fast Analytic Price Approximation in the Intensity-Gamma Framework
Calibration
Future work
References


Introduction

It has become fashionable to attack David Li’s copula model [Li 2000] as an inadequate tool for pricing CDO tranches and, more generally, any security exposed to default correlation. This criticism obscures the importance of the model: despite its inadequacies, it has framed and motivated much of the current research.

Mark Joshi and Alan Stacey’s Intensity Gamma model [Joshi 2005] attempts to address some of the issues of the standard Gaussian copula model by combining two important concepts from past models in option pricing and credit modeling: conditional independence and business time. Conditional independence refers to the fact that events are independent conditional on some random variable; in copula models, default times are independent conditional on a systematic factor. Business time refers to the idea that changes in economic variables are driven by the arrival of information and not simply by the passage of time. Both of these ideas underlie the Variance Gamma model [Madan 1998], to which the authors pay homage in the model’s name.

In a nutshell: “Conditional on the information arrival or business time process, (I_t), defaults of different names are independent.”

Advantages of Intensity Gamma Approach

1) The market does not believe in the Gaussian copula: Different tranches of a CDO have different base correlations; we observe an increasing base correlation with increasing detachment point. The market thus implies that there is no single constant correlation among the names in the CDO, since the number varies from tranche to tranche. The Intensity Gamma approach does not need a base correlation to price a CDO.

2) Pricing non-standard tranches of CDOs: If the Gaussian copula is to be used to price a non-standard tranche such as 0-35%, we would need to extrapolate from the market-observed base correlation curve of standard tranches. Such extrapolation can produce inconsistencies, for example a 0-34% tranche priced at a lower spread than the 0-35% tranche. The Intensity Gamma approach does not suffer from this.

3) Pricing exotic credit derivatives: Portfolio credit derivatives with complicated payoffs can be priced easily by Intensity Gamma. The Gaussian copula needs a base correlation to generate default times, and hence is ill-suited to exotic credit derivatives.

4) Time homogeneity: The Intensity Gamma approach is not tied to a particular time frame. Once the method is calibrated, it can be used to price portfolio credit derivatives of any maturity. In contrast, the Gaussian copula approach depends on the time frame in which we are working: the base correlation for pricing a 3-year CDO differs from that for a 5-year CDO, making it more awkward to work with than the Intensity Gamma approach. There is a caveat, however: although the Intensity Gamma approach implies a lower base correlation at longer maturities, which is desired, the change is more pronounced than observed in the market. Hence, it is not advisable to price products across different maturities using the Intensity Gamma approach.

Default Correlation in Intensity Gamma

How does default correlation occur in such a model? Imagine a string marked at uniformly distributed intervals, each mark representing a default. On average each segment has about the same number of defaults. This represents the concept of conditional independence: for a given business time path, events occur independently. Now imagine the string is kinked at various points. Although the marks remain equally spaced along the string, when we look at the string edgewise we see that the marks (defaults) have moved closer together. This is illustrated in the diagram below.

[Figure: defaults marked along a string, viewed in business time (along the string) and in calendar time (edgewise).]

The economic intuition is that information arrives in bursts. In periods of high information arrival we can expect more events to occur than when no information has arrived. The amount of business time which arrives between defaults is uniform but the amount of calendar time is not: we clearly see periods of clustered defaults.

Implementation Overview

Implementation of the IG model consists of the following steps:

1. Survival curve construction
2. Calculating IG forward default intensities
3. Simulation of business time paths
4. Calculation of default times
5. Pricing CDO tranches
6. Calibration

Calibrating the model requires iterating steps 2 through 5 with different model parameters in order to match model prices to market prices. The relationship between the blocks is depicted schematically below:

[Figure: block diagram of the implementation — CDS spreads (per name, 6mo/1y/2y/…/5y) feed Survival Curve Construction, which feeds IG Default Intensities; a parameter guess drives the business time path generator, default time calculator and tranche pricer, whose output is compared against market tranche quotes (0-3%, 3-6%, 6-9%, …) in an objective function; if the error exceeds tolerance the parameters are updated, otherwise the final parameters are returned.]

The implementation was validated using the following tests:

1. Round trip test: verify CDS spreads are recovered
2. Homogeneous portfolio approximation

Survival Curve

a) Intuition



A CDO is composed of a collection of CDSs, and each CDS is a contract based on the chances that a firm will survive to the maturity of the CDS. A survival curve is essentially a mapping of the probability of survival of an entity as time progresses. It is essential to the pricing of a CDO since it captures how likely the firm is to default.

Default is commonly modeled as a Cox process, which is a doubly stochastic Poisson process: the forward intensity λ of the Poisson process is not constant in time. While for a Cox process λ is stochastic, in our case we treat λ as piecewise constant in time. The probability that a firm survives to time T is given by

$$\Pr(\tau > T) = \exp\left(-\int_0^T \lambda(u)\,du\right)$$

Thus the forward intensity curve λ(t) carries the same information as the survival probability X(0,T): both describe the firm’s chances of surviving through time. Hence, in practice the λ(t) curve is referred to as the survival curve, and it is integral to the pricing of CDOs, CDSs and other credit products.
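For piecewise-constant λ this integral reduces to a sum over intervals. A minimal MATLAB sketch (the grid and values here are hypothetical):

% Survival probabilities X(0,T(i)) from piecewise-constant intensities:
% lam(i) applies on the interval (T(i-1), T(i)], with T(0) = 0.
lam = [0.002 0.003 0.004 0.005];   % hypothetical hazard rates
T   = [0.5 1 2 5];                 % interval endpoints in years
X   = exp(-cumsum(lam .* diff([0 T])));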

[Figure: the survival curve expressed in terms of λ — a piecewise-constant intensity (roughly 0.001 to 0.006) plotted against time, 0 to 6 years.]

b) CDS Pricing

A CDS is a contract where the “protection buyer” pays a running coupon c to the “protection seller”, in return for protection against default on some firm. If default of the specified firm takes place before the final CDS maturity, the protection seller pays the protection buyer 1−R at the time of default, where R is the observed recovery rate. At the time of default, coupon payments cease. For a CDS to be fairly priced, the present values of the fixed coupon leg and the floating contingent payment at default must equal each other.

The present value of the protection buyer’s coupon payment stream, considering a notional of $1, is

$$PV_{coup}(0) = c\sum_{i=1}^{N} \delta_i\, B(0, T_i)$$

where B(0,T_i) is the price of a risky bond. A risky bond can be priced by multiplying a risk-free bond by the probability that the bond survives to maturity:

$$B(0,T_i) = P(0,T_i)\, X(0,T_i)$$

(using notation and terminology as in Prof. Leif Andersen’s lecture notes at NYU).

The present value of the protection seller’s payment is

$$PV_{prot}(0) = C(0; 1-R)$$

where C(0; 1−R) represents the present value of the contingent payment of 1−R at the time of default.

Hence, the present value of the CDS can be expressed as the difference of the two legs described above

$$PV_{CDS}(0) = C(0; 1-R) - c\sum_{i=1}^{N} \delta_i\, B(0,T_i)$$

Equating the present value to zero, we obtain the fair spread on the CDS as

$$C_{par} = \frac{C(0; 1-R)}{\sum_{i=1}^{N} \delta_i\, B(0,T_i)}$$

Calculation of the Present Value of the Contingent Claim: C(0; 1−R)

At the time of default, the protection seller pays the protection buyer 1−R. It is our goal to find the present value of this contingent payment. Using elementary probability theory, we know that the present value is the discounted value of 1−R multiplied by the probability of default. Since we do not know the exact time of default, we work with expectations.

$$C(0;1-R) = E\left[(1-R)\, e^{-\int_0^{\tau} r(u)\,du}\, \mathbf{1}_{\tau<T}\right]$$

where 1_{τ<T} is 1 if τ<T and 0 otherwise.

Now, let us first find this value assuming default takes place in an infinitesimal interval [z, z+dz]:

$$C(0;1-R) = E\left[(1-R)\, e^{-\int_0^{z} r(u)\,du}\, \mathbf{1}_{\tau\in[z,z+dz]}\right]$$

$$C(0;1-R) = E\left[(1-R)\,\lambda(z)\, e^{-\int_0^{z} \left(r(u)+\lambda(u)\right)du}\, dz\right]$$


Now adding up over the entire interval [0,T], we obtain the general form as

$$C(0;1-R) = E\left[\int_0^T (1-R)\,\lambda(z)\, e^{-\int_0^{z}\left(r(u)+\lambda(u)\right)du}\, dz\right]$$

But

$$X(0,t) = \exp\left(-\int_0^t \lambda(u)\,du\right)$$

Hence we can write

$$\frac{dX}{dt} = -\lambda(t)\, \exp\left(-\int_0^t \lambda(u)\,du\right)$$

Substituting into the equation for C(0;1−R), and noting that taking the expectation turns the discount factor into the risk-free bond price P(0,z), we obtain

$$C(0;1-R) = -\int_0^T (1-R)\, P(0,z)\, \frac{dX(0,z)}{dz}\, dz$$

where P(0,z) is the price of a risk-free bond of maturity z.

However, in practice we cannot integrate continuously. It is natural to discretize the equation above into time intervals equal to those between successive coupon dates. We also make the practical assumption that, irrespective of exactly when default takes place, it is treated as occurring in the middle of the relevant period. Under these assumptions the equation can be restated as

$$C(0;1-R) = (1-R) \sum_{t_j \le T} P\!\left(0,\ \tfrac{t_{j-1}+t_j}{2}\right) \left[X(0,t_{j-1}) - X(0,t_j)\right]$$

c) Bootstrapping the Survival Curve

A bootstrapping procedure is commonly used to build a survival curve in practice. Assume we observe CDSs of maturities T1, T2, T3, …, TN in the market. The survival curve is essentially a chart of λ versus time. It is customary to treat it as piecewise constant, although in some cases piecewise linear may make more sense. Bootstrapping works with the following steps:

1) Assume λ(1) is the intensity for the survival curve from 0 to T1. Using this assumption of λ(1), we calculate the fair price of a CDS of maturity T1.

2) If the calculated fair spread is same as the spread observed in the market, we accept our guess of λ(1). Otherwise we need to make another guess of λ(1) so as to find a fair price that matches the market. In MATLAB we can simply use the fzero function to find this root.

3) After we have found λ(1), we make a guess for λ(2), which is the intensity from T1 to T2. Using the known λ(1) and the guessed λ(2), we compute the fair spread for the CDS with maturity T2. If the calculated fair spread matches that seen in the market, we accept our guess of λ(2); otherwise we use a root-finding method to find the value of λ(2) that matches the observed CDS price.

4) We keep doing this for subsequent λ’s.
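A minimal sketch of this bootstrap in MATLAB, assuming a hypothetical helper cdsFairSpread(lambda, T, R, r) that prices a CDS from piecewise-constant intensities using the formulas above:

function lambda = bootstrapSurvival(T, mktSpreads, R, r)
% Bootstrap piecewise-constant intensities from observed CDS spreads.
% T(k) are the CDS maturities, mktSpreads(k) the market fair spreads.
N = numel(T);
lambda = zeros(1, N);
for k = 1:N
    % Solve for the intensity on (T(k-1), T(k)] that reprices the k-th
    % CDS, holding the previously bootstrapped intensities fixed.
    obj = @(lk) cdsFairSpread([lambda(1:k-1) lk], T(1:k), R, r) ...
          - mktSpreads(k);
    lambda(k) = fzero(obj, 0.01);  % initial guess of 100bp
end
end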


Construction of the Business Time Path

The business time process was modeled as two gamma processes (Γ) and a drift as follows:

$$I_t = at + \Gamma(t; \gamma_1, \lambda_1) + \Gamma(t; \gamma_2, \lambda_2)$$

A gamma process is a positive, pure-jump process whose increments over each interval Δt are independently gamma distributed as f(x; γΔt, λ). The parameter γ controls the rate of jump arrivals and λ inversely controls the size of the jumps. The naïve way to generate a gamma process is to break an interval into small subintervals and draw a gamma random variable ~ f(x; γΔt, λ) for each period Δt. However, this would be computationally intensive, since one would have to generate many gamma random variables, which are typically obtained via acceptance-rejection methods.

The gamma probability density function is defined as

$$f(x; \gamma, \lambda) = \frac{\lambda^{\gamma}\, x^{\gamma-1}\, e^{-\lambda x}}{\Gamma(\gamma)}$$

…where Γ(γ) is the gamma function, which can be thought of as a function that “fills in” the factorial function: if z is an integer, then

$$\Gamma(z+1) = z!$$

Note that if γ = 1, the gamma density reduces to the exponential density with rate λ:

$$f(x; 1, \lambda) = \lambda e^{-\lambda x}$$

Furthermore, a gamma distributed random variable can be thought of as the sum of γ exponential random variables, even though γ is not restricted to being a whole number.

From Cont and Tankov [Cont 2003] we have the following, more efficient, expression for the gamma process on [0, T] as an infinite series:

$$X_t = \sum_{i=1}^{\infty} \frac{1}{\lambda}\, e^{-\Gamma_i/(\gamma T)}\, V_i\, \mathbf{1}_{U_i \le t},\qquad \Gamma_i = \sum_{j=1}^{i} T_j$$


T_i, V_i – standard exponential random variables (contribute to the determination of the jump sizes)
U_i – standard uniform on [0, T] (determines the jump times)

Cont and Tankov suggest truncating the series at a point when Γi exceeds a threshold τ. Joshi suggests that the remaining small jumps can then be approximated by a drift term. The algorithm we employed is as follows, from Cont and Tankov, where Tenor is the length in real time of our gamma process (5 years for a typical CDO, for example.)

Initialize k = 0;
REPEAT WHILE $\sum_{i=1}^{k} T_i < \tau$
    Set k = k + 1;
    Simulate T_k, V_k: standard exponentials.
    Simulate U_k: uniform on [0, Tenor].
END

Set G = cumulative sum of T.
Sort U in ascending order – these can be thought of as the jump times in a compound Poisson process.
Sort G in the same order as U.

Then the gamma process increments are:

$$X_{t_i} = \frac{V_i}{\lambda}\, e^{-G_i/(\gamma\,\cdot\,\mathrm{Tenor})},\qquad i = 1,\dots,k$$

Noticing that the maximum $G_i$ will be approximately equal to τ, the terms we are truncating in this series will be of the order $e^{-\tau/(\gamma \cdot \mathrm{Tenor})}$ or less. What is the total magnitude of this truncation error? We can express the residual series R as:

$$X_{t_i} = \frac{1}{\lambda}\, e^{-\Gamma_i/\gamma}\, V_i,\qquad k = \inf\{i : \Gamma_i > \tau\},\qquad R = \sum_{i=k+1}^{\infty} \frac{1}{\lambda}\, e^{-\Gamma_i/\gamma}\, V_i$$

The expected value of the residual can be computed; this is the magnitude of the compensating drift term. Given that Γ_k is the first point at which the sum of exponential variables exceeds the threshold τ, we can write it as τ + δ. What is the expected value of δ? In fact δ is also exponentially distributed with mean 1. This is related to the memorylessness property of exponential random variables: given that we have waited to a certain point in time, the expected time until the next arrival is unchanged. We therefore approximate Γ_k as τ + 1.

$$
\begin{aligned}
E[R] &= \sum_{i=k+1}^{\infty} E\!\left[\frac{1}{\lambda}\, e^{-\Gamma_i/\gamma}\, V_i\right]
= \frac{1}{\lambda} \sum_{i=k+1}^{\infty} E\!\left[e^{-\Gamma_i/\gamma}\right] E[V_i] \\
&= \frac{1}{\lambda}\, e^{-(\tau+1)/\gamma} \sum_{i=k+1}^{\infty} E\!\left[e^{-(\Gamma_i-\Gamma_k)/\gamma}\right]
= \frac{1}{\lambda}\, e^{-(\tau+1)/\gamma} \sum_{m=1}^{\infty} \left(1+\frac{1}{\gamma}\right)^{-m} \\
&= \frac{\gamma}{\lambda}\, e^{-(\tau+1)/\gamma}
\end{aligned}
$$

Recall that all of the random variables are independent, which allows us to factorize the expectations above. The geometric terms follow by substituting into the moment generating function of a standard exponential, (1 − t)^{−1}, with t = −1/γ, once for each of the i − k exponentials making up Γ_i − Γ_k. The final equality is the standard result for a geometric series.

What τ should we choose if we’d like to use a constant compensating drift b for all paths? Equating E[R] to b gives the following expression:

$$\tau = \gamma \log\!\left(\frac{\gamma}{\lambda b}\right) - 1$$

For generating gamma paths to an arbitrary time T, substitute γ with γT.
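Putting the pieces together, here is a sketch of one business time path built from the gammaPathJumps helper above. The parameter values, and our reading of b as the mean truncated mass per process (spread over [0, Tenor] as a drift), are assumptions:

% One business time path: drift a plus two truncated gamma processes,
% each compensated by drift b/Tenor so the truncated mass b is restored.
a = 1; Tenor = 5; b = 0.001;
gam = [0.5 1]; lam = [0.3 0.1];
tau1 = gam(1)*Tenor * log(gam(1)*Tenor/(lam(1)*b)) - 1;  % gamma -> gamma*T
tau2 = gam(2)*Tenor * log(gam(2)*Tenor/(lam(2)*b)) - 1;
[t1, x1] = gammaPathJumps(gam(1), lam(1), Tenor, tau1);
[t2, x2] = gammaPathJumps(gam(2), lam(2), Tenor, tau2);
drift = a + 2*b/Tenor;                      % total deterministic drift rate
I = @(t) drift*t + sum(x1(t1 <= t)) + sum(x2(t2 <= t));  % business time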

Business Time Sample Results

The function for generating Business Time Paths was tested by ensuring that the means and variances were correct. Sample results are shown below for the following parameters:

γ1 = 0.5, λ1 = 0.3, γ2 = 1, λ2 = 0.1, drift a = 1, Tenor = 5, number of paths = 100,000.

Sample Mean    Std Err of the Mean    Expected Mean
63.2672        0.0723                 63.3333

Sample Var     Expected Var
522.3126       527.7778
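These expected moments follow directly from the gamma distributions of the two processes; a quick check in MATLAB:

% Mean and variance of I_Tenor = a*Tenor + two gamma processes:
Tenor = 5; a = 1; gam = [0.5 1]; lam = [0.3 0.1];
meanI = a*Tenor + Tenor*sum(gam ./ lam)     % = 63.3333
varI  = Tenor*sum(gam ./ lam.^2)            % = 527.7778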


Sample business time plots are shown in Figure 1 and Figure 2. Notice that the higher γ in the second plot results in an increased number of recorded jumps (the actual number of jumps is infinite, of course).

Figure 1: Sample Business Time Path.


Figure 2: Sample Business Time Path

The truncation error was added back to the process in order to ensure the final business time distribution had the correct mean. How much computation time was saved by this implementation? As noted above, the truncation error was controlled by setting τ appropriately. Table 1 shows that the speed enhancement due to this method was approximately 20%.

                                                   Time to Generate All Paths
No Truncation Error Adjustment (Error = 0.001)     42 seconds
With Truncation Error Adjustment (Error = 0.05)    34 seconds

Table 1: Speed enhancement achieved by truncating the gamma process earlier and adding the truncation-error correction term. Parameters: γ1 = 0.5, λ1 = 0.3, γ2 = 1, λ2 = 0.1, drift a = 1, Tenor = 5, number of paths = 100,000. Note that the truncation error of 0.05 is relative to the business-time scale of the CDO Tenor, which is 5 here, so 0.05 represents a 1% error.


Calculating IG forward intensities (ci)

As discussed above, the survival curve construction process produces a set of piecewise-constant λ values for each name in the portfolio. These represent forward default intensities, or hazard rates, for the company. With the standard assumption of Poisson arrivals, survival probabilities for each name i are constructed using the expression:

$$X_i(0,T) = e^{-\int_0^T \lambda_i(t)\,dt}$$

One way of looking at this is that each company’s survival probability decays at a rate proportional to elapsed calendar time, the proportion being λ. In the IG model, survival probabilities instead decay at a rate proportional to elapsed business time, the proportion being c. In IG, survival probabilities for each name i are constructed using the expression:

$$X_i(0,T) = e^{-\int_0^T c_i(t)\,dI_t}$$

Here dIt represents the incremental arrival of information or business time. Calculating the appropriate value for ci relies on the fact that these quantities must be equal. Put another way the survival probability from the IG model must be consistent with the survival probability implied by the market. If it were not so, the calibrating instruments, single-name CDS, would not be priced consistently with the market: a fatal flaw for any model.

Although we could calculate ci from the survival probability itself, it is much more convenient to use the λ values already calculated. Since they are piecewise constant the first integral above can be replaced with a summation and the survival probability is seen as a product of conditional survival probabilities determined by the λ on each interval. The IG ci values are assumed piecewise constant on the same intervals. This leads to the following identity for c in terms of λ for an arbitrary interval from T1 to T2. To minimize confusion λ̃ refers to the hazard rate and λi refers to the parameter of the ith gamma process.


Writing τ ≡ T₂ − T₁ for an arbitrary interval, the business time increment is

$$I_{T_2} - I_{T_1} \equiv a\tau + \Gamma(\gamma_1 \tau, \lambda_1) + \Gamma(\gamma_2 \tau, \lambda_2)$$

Equating the market-implied and IG survival probabilities over the interval, and using the moment generating function of the gamma distribution, $E[e^{-ct}] = (1 + c/\lambda)^{-\gamma\tau}$ for $t \sim \Gamma(\gamma\tau, \lambda)$:

$$e^{-\tilde{\lambda}\tau} = E\!\left[e^{-c\,(I_{T_2}-I_{T_1})}\right] = e^{-ca\tau} \prod_{i=1,2}\left(1 + \frac{c}{\lambda_i}\right)^{-\gamma_i \tau}$$

$$\therefore\quad \tilde{\lambda} = ca + \sum_{i=1,2} \gamma_i \log\!\left(1 + \frac{c}{\lambda_i}\right)$$

A subtle point noted by Joshi is that one of the parameters is in fact redundant. If the parameter a is multiplied by m and each λi is divided by m this is equivalent to multiplying each gamma path by m. Looking at the above we see that the resulting c values will simply be divided through by m. This means that defaults will occur at the same times and prices will be the same for the parameterization (a, γ1, λ1, γ2, λ2) as for (ma, γ1, λ1/m, γ2, λ2/m). Thus a can be set to 1 without losing any flexibility in the model.

The identity above does not have a simple closed form for c and must be solved using a zero-finding algorithm. The calculation must be done for each name and for each piecewise constant interval. Since the ci values are a function of the IG parameters, this inner calibration must be performed for each iteration of the outer calibration which attempts to find IG parameters which consistently price different CDO tranches. Thus it is important that it be done efficiently.
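Per name and interval this is a one-dimensional root search; below is a scalar sketch using fzero, with hypothetical parameter values (lambdaTilde denotes the hazard rate λ̃ on the interval):

% Solve the identity above for c on one interval (scalar sketch).
a = 1; gam1 = 0.5; lam1 = 0.3; gam2 = 1; lam2 = 0.1; lambdaTilde = 0.02;
f = @(c) c*a + gam1*log1p(c/lam1) + gam2*log1p(c/lam2) - lambdaTilde;
c = fzero(f, 0.01)                 % per-name, per-interval IG intensity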

MATLAB’s fzero routine offers a simple solution for zero-finding. However, it has the limitation that it is not vectorized. Instead we implemented a parallel bisection method which finds the zeros for all names and intervals simultaneously. This is possible because each of the calculations is completely independent. The algorithm is identical to standard bisection except that each operation is performed on the entire array of values; its running time is therefore that of the longest individual root search, whereas a serial method’s execution time is the sum over all searches. This approach allows us to exploit MATLAB’s powerful matrix routines. The main loop from ig_intensity2.m is shown below:

for k = 1:maxiter
    cmid = (cmin + cmax)/2;
    vmid = f(cmid);
    rootfound = (vmid == 0);
    cmin(rootfound) = cmid(rootfound);
    cmax(rootfound) = cmid(rootfound);
    % root is below midpoint
    rootbelow = (vmax.*vmid > 0);
    cmax(rootbelow) = cmid(rootbelow);
    vmax(rootbelow) = vmid(rootbelow);
    % root is above midpoint
    rootabove = ~rootfound & ~rootbelow;
    cmin(rootabove) = cmid(rootabove);
    vmin(rootabove) = vmid(rootabove);
    % global exit condition
    if (max(max(cmax-cmin)) < tol)
        % disp(sprintf('error < tol k = %g\n', k));
        break
    end
end
c = (cmax + cmin)/2;

A similar technique is applied to speed up the calculation of the λ values. In that case all values cannot be calculated simultaneously, since the value of λ at a given time depends on all previous λ, and the parallel bisection relies on the values being independent. Instead a bootstrapping mechanism is employed in which we step forward through time, calculating the hazard rate for all names simultaneously. This code is implemented in intensity_bisection.m and intensity_curve_test.m. It also serves to validate the results of the survival curve generator discussed above, since both should produce identical results.

Getting the Default Times from the Business Time Paths

In Joshi’s Intensity-Gamma model, c_i(t) refers to the default rate for name i per unit of information arrival. Therefore the probability of survival for name i is:

$$X_i(T) = \exp\left(-\int_0^T c_i(t)\,dI_t\right)$$

Since c_i(t) is piecewise constant with respect to t (real time), we can calculate default times using the customary method of drawing a uniform random variable U ~ [0,1] and setting it equal to the survival probability. The default time τ_i can then be found from the following formula:

$$\tau_i = \min\left\{T : -\log(U_i) < \int_0^T c_i(t)\,dI_t\right\}$$

Since the c_i(t) are piecewise constant, the integration must be performed in parts, which can take time; but the process was sped up by noting that most names will not default on any given path. Thus, we first assume the c_i(t) are constant and assign each name its maximum value. Then the earliest business time at which a default could have occurred is:

$$I_{i,\min} = \frac{-\log(U_i)}{c_{i,\max}}$$

If this is greater than the maximum business time, $I_{\mathrm{Tenor}}$, then no default could have occurred within the timeframe of the CDO.
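For names that fail this quick rejection test, the crossing must be located along the path. Below is a sketch for a single name with constant c (the helper and its interface are ours): the business-time threshold is −log(U_i)/c_i, and the crossing can occur either continuously through the drift or discretely at a jump.

function tau = firstCrossing(thresh, drift, Tenor, jumpTimes, jumpSizes)
% Calendar time at which business time first reaches the level thresh.
% drift is the total deterministic drift rate; jumps are sorted by time.
I = 0; tPrev = 0;
for j = 1:numel(jumpTimes)
    dt = (thresh - I) / drift;         % time for drift alone to cross
    if tPrev + dt <= jumpTimes(j)
        tau = tPrev + dt; return       % crossed between jumps
    end
    I = I + drift*(jumpTimes(j) - tPrev) + jumpSizes(j);
    tPrev = jumpTimes(j);
    if I >= thresh
        tau = tPrev; return            % crossed at the jump itself
    end
end
dt = (thresh - I) / drift;             % drift alone after the last jump
if tPrev + dt <= Tenor
    tau = tPrev + dt;
else
    tau = Inf;                         % no default before maturity
end
end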


CDO pricer

The CDO pricer code was adapted from C code written by Aloke Mukherjee for the class Interest Rate and Credit Models, implementing a Monte Carlo simulator of the one-factor Gaussian copula model. The MATLAB port of this code is cdo.m. The default time generation was removed and only the pricing portion of the code was retained in igcdo.m. The name is a slight misnomer, since the code is not specific to IG: it takes the model-dependent simulated default time scenarios as an argument.

Given this (sorted) set of times we simply simulate the effect of each firm’s default on the fixed and floating leg cashflows. Default requires a payment on the fixed leg of the notional associated with the affected fraction of the tranche. This payment must be discounted to the present from the time of default. On the floating leg, default reduces the amount of tranche notional on which the coupon is paid. Calculating the PV of the fixed leg is done by applying the appropriate discount factor from default time to the present, weighted by the proportion of affected notional. For the floating leg we keep track of remaining notional in each discretized time interval and make the simplifying assumption that the coupon is paid on the average notional over each period. The break-even spread for the tranche is then calculated as the fixed spread which would make the expected value of each leg equal: Cbe = E[Vfix]/(E[Vflt]/c), where c is the coupon assumed in the valuation of the fixed leg.
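In code the break-even step is a one-liner. A sketch following the text’s leg naming, where Vfix and Vflt are assumed vectors of per-path leg values and c the coupon assumed when valuing the coupon-paying leg:

% Break-even tranche spread from simulated leg values (sketch).
annuity = mean(Vflt) / c;          % coupon-leg PV per unit of spread
Cbe = mean(Vfix) / annuity;        % spread that equates the two legs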

Validation

Two important validations were performed. First we verify that we can reprice CDS consistently with the input spreads. Secondly we use an analytic approximation suggested in the paper to verify the Monte Carlo simulation.

Roundtrip Test

The model must reproduce the input CDS spreads. The roundtrip test (roundtriptest.m) verifies this by assuming a CDS with a flat spread, recovering the associated (flat) intensities and then using these to price the CDS using the intensity gamma model. A CDS can be thought of as a CDO with only one firm and attachment and detachment points of 0% and (1 – Recovery Rate)% respectively. Additionally, coupons are paid on the entire notional until default at which point the coupons cease: this logic is triggered in the pricer if the number of firms is one.

We verify that the output spread matches the input spread. In addition, the IG pricer outputs a default time for each path. We can use these default times to construct a survival curve simply by summing the number of defaults before a given time and dividing by the total number of paths. This survival curve should be consistent with the survival curve created when bootstrapping the default intensities. A sample run is shown below, as well as a graph of the survival curve. For comparison purposes the results for the one-factor Gaussian copula model simulator are also shown.

EDU>> help roundtriptest
 function pass = roundtriptest(spreadbps, paths);
 Test pricer and intermediate steps by seeing whether we recover
 the input spread: spread -> lambda -> igprice -> spread
 input:  spreadbps - a flat spread in bps - defaults to 40
         paths - number of monte carlo paths to simulate - defaults to 100000
 output: pass - 1 if successful
 Also produces a graph comparing the implied survival probabilities and
 survival probabilities calculated from the generated default times.
 2006 aloke mukherjee

EDU>> roundtriptest(100,100000);
closed form vfix = 0.0421812, vflt = 0.0421812
Gaussian    vfix = 0.0422499, vflt = 0.0428865
IG          vfix = 0.0429348, vflt = 0.0422907
input spread = 100, gaussian spread = 101.507, IG spread = 98.4998

[Figure: survival probability vs. time (0 to 5 years, 0.91 to 1.00) — the implied survival curve against the curves recovered from the Gaussian copula and Intensity Gamma simulations.]


A Fast Analytic Price Approximation in the Intensity-Gamma Framework

Joshi and Stacey suggest a clever method for obtaining a fast, analytic price for a CDO by making some approximations. The default intensities λ_i are assumed to be constant over the period of the CDO and are calculated to match the survival probabilities at maturity; for example, λ_i = (5-year CDS spread for name i) / (1 − Recovery). The business time default intensities c_i are then derived in the regular way. The probability that a specific name survives until the CDO’s maturity, given a business time I_T, is $e^{-c_i I_T}$. Then the probability that there were exactly k defaults, given a business time I_T, can be calculated quickly. The method employed to accomplish this task was borrowed from the Large Pool Model for CDO pricing.

Let X_i be the probability that firm i survives to business time I_T, and P(k, n) the probability that k firms have defaulted by time I_T, given that n firms existed at time zero.

• Start with 1 firm:
  P(k=0, n=1) = X1
  P(k=1, n=1) = 1 − X1
• Add 1 firm:
  P(k=0, n=2) = P(k=0, n=1) · X2
  P(k=1, n=2) = P(k=0, n=1) · (1 − X2) + P(k=1, n=1) · X2
  P(k=2, n=2) = P(k=1, n=1) · (1 − X2)
• And so on…
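This recursion takes only a few lines of MATLAB; a sketch (the function name is ours), where P(k+1) holds the probability of exactly k defaults:

function P = defaultCountProbs(X)
% Distribution of the default count given per-name survival probs X.
P = 1;                             % with no names, surely zero defaults
for i = 1:numel(X)
    % Add name i: it either survives (count unchanged) or defaults
    % (count shifted up by one).
    P = [P * X(i), 0] + [0, P * (1 - X(i))];
end
end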

We assume that all defaults occur exactly at the midpoint of the CDO’s lifetime. Then we can price the floating leg and the fixed leg of a CDO by integrating over all possible business times as follows.

$$V_{float} = \int_0^{\infty} \sum_{k=1}^{N} v_{flt}(K=k)\; p(K=k \mid I_T)\; p(I_T)\, dI_T$$

For a business time process consisting of one gamma process (and possibly a drift term), the probability density of business times is simply a gamma distribution. (What the density is for the sum of two gamma processes is less obvious.) We tested this method and compared the results to our main Intensity Gamma CDO pricer in Table 2, Table 3, and Table 4. The results show that the approximate prices are not terribly accurate. However, Joshi and Stacey suggest using the approximation as a control variate for performance improvement.
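As an illustration for the single-gamma-plus-drift case, the integral can be evaluated by simple quadrature, reusing the defaultCountProbs sketch above. The grid, the vector vflt of floating-leg values per default count, and the per-name intensity vector c are assumptions; gampdf (shape, scale arguments) is from MATLAB’s Statistics Toolbox:

% Approximate floating leg by quadrature over terminal business time.
% c (nNames x 1) and vflt (1 x (nNames+1)) are assumed given.
a = 1; T = 5; gam = 0.1; lam = 0.1;          % clock parameters (assumed)
Igrid = linspace(a*T + 1e-4, 60, 2000);      % terminal business times
dI    = Igrid(2) - Igrid(1);
pI    = gampdf(Igrid - a*T, gam*T, 1/lam);   % density of the gamma part
Vflt  = 0;
for m = 1:numel(Igrid)
    Pk = defaultCountProbs(exp(-c * Igrid(m)));  % P(k defaults | I_T)
    Vflt = Vflt + (vflt * Pk') * pI(m) * dI;     % vflt(k+1) <-> k defaults
end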

Table 3 is particularly informative, because it compares the two pricing methods while using the same constant default intensities. Thus, the pricing differences are due solely to the effect of assuming all defaults occur exactly in the middle of the CDO’s lifetime. It seems that only the equity tranche is substantially affected by this.


Table 4 shows the results if, in the approximate Intensity Gamma pricer, defaults are instead assumed to be evenly distributed over the CDO’s lifetime. The prices match almost exactly, as we would expect.

TRANCHE    Approximate Intensity-Gamma Price (bps)    Complete Intensity Gamma Price (bps)
0-3%       1429                                       1778
3-7%       135                                        187
7-10%      14                                         29
10-15%     1                                          5
15-30%     0                                          0

Table 2: Results from pricing CDO tranches using the Intensity Gamma method compared to the approximate method. Actual default intensity curves were used for the complete Intensity Gamma pricing. Parameters: γ = 0.1, λ = 0.1, a = 1.0, rfr = 0.05.

TRANCHE    Approximate Intensity-Gamma Price (bps)    Complete Intensity Gamma Price (bps)
0-3%       1429                                       1573
3-7%       135                                        133
7-10%      14                                         13
10-15%     1                                          1
15-30%     0                                          0

Table 3: Results from pricing CDO tranches using the Intensity Gamma method compared to the approximate method. In this case, the same approximate (constant) default intensities were used in both runs, along with the same parameters: γ = 0.1, λ = 0.1, a = 1.0, rfr = 0.05.

TRANCHE    Approximate Intensity-Gamma Price (bps)    Complete Intensity Gamma Price (bps)
0-3%       1584                                       1573
3-7%       144                                        133
7-10%      14                                         13
10-15%     1                                          1
15-30%     0                                          0

Table 4: Results from pricing CDO tranches using the Intensity Gamma method compared to the approximate method. The same approximate default intensities were used in both runs, and defaults were assumed to be evenly distributed over the CDO’s lifetime. The parameters were the same: γ = 0.1, λ = 0.1, a = 1.0, rfr = 0.05.


Calibration

We chose to test how well the correlation skew could be fitted using two gamma processes, attempting to replicate market spreads with a two-gamma-process CDO pricer. There is a redundancy in the parameters when calibrating two gamma processes with drift: multiplying the drift a by some factor m and dividing each λ by the same m is equivalent to multiplying the gamma paths by m, which would also multiply all the individual names’ default rates by m and destroy our calibration of default rates. To avoid this problem we set a = 1. Four parameters then remain to estimate. Our goal is to minimize the squared difference between market prices and simulated prices. Mark Joshi advises using a downhill simplex method; however, due to the variability of our simulation results, we considered that an optimization method robust to noise would be better suited. As we had no idea of the scale of the parameters, we also needed a method that allows a large search space. Because of long computation times, a genetic algorithm was not suitable. We chose a simulated annealing algorithm, described in Wikipedia as:

“The name and inspiration come from annealing in metallurgy, a technique involving heating and controlled cooling of a material to increase the size of its crystals and reduce their defects. The heat causes the atoms to become unstuck from their initial positions (a local minimum of the internal energy) and wander randomly through states of higher energy; the slow cooling gives them more chances of finding configurations with lower internal energy than the initial one.

By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random "nearby" solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly "downhill" as T goes to zero. The allowance for "uphill" moves saves the method from becoming stuck at local minima—which are the bane of greedier methods.”
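A minimal annealing loop in this spirit, with a hypothetical objective igObjective returning the squared pricing error for params = [γ1 λ1 γ2 λ2] (the names and tuning constants here are ours, not the values we actually used):

% Simulated annealing over the four IG parameters (sketch).
params = [0.5 0.3 1 0.1];          % e.g. from a one-gamma calibration
curErr = igObjective(params);
Temp   = 1;                        % initial temperature
while Temp > 1e-3
    cand = params .* exp(0.1*randn(size(params)));  % positive proposal
    err  = igObjective(cand);
    if err < curErr || rand < exp(-(err - curErr)/Temp)
        params = cand; curErr = err;  % accept downhill, sometimes uphill
    end
    Temp = 0.99 * Temp;               % cool slowly
end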

We first calibrated a one-gamma-process model to fit our CDO spreads, then used those parameters as initial estimates for a two-gamma-process optimization. Because of our pricer’s computation time, the calibration took 48 hours to reach a good approximation for a standard 125-name, 5-tranche, 5-year-maturity CDO. Mark Joshi reports that his C++ pricer runs a similar calibration in 5 seconds. Part of the difference may come from our use of a MATLAB pricer, although a gap of that size is hard to account for.

After obtaining the implied base correlations from the resulting CDO spreads, we compare the market and simulated base correlation skews.


[Figure: Comparison of Base Correlations — market vs. simulated base correlation (0% to 120%) by tranche: 0-3%, 3-7%, 7-10%, 10-15%, 15-30%.]

The skews do not match perfectly, but we do obtain a correlation skew with this method. Our calibration may have been imperfect, or a third gamma process may be needed; however, given our current computation times, adding a third gamma process is prohibitive.

Future work

We successfully implemented the Intensity Gamma model and calibrated it to market prices. The implementation was validated by repricing CDS and by checking against an analytic approximation. There are a few avenues for improvement.

The performance of the current implementation could be improved by using the homogeneous portfolio approximation described above as a control variate. This, along with the application of quasi-random numbers, is identified in the paper as a source of performance improvement. Additionally, it may be possible to vectorize more of the implementation, though it is already quite parallelized.

Two more advanced techniques to extend the model are also outlined in the paper. Observing that using IG to price products of maturities different from the calibrating instruments produces a greater decrease in correlation than implied by the market, the authors suggest introducing a random delay between the arrival of enough information to trigger default and the default itself. Another enhancement would be to allow different gamma processes to model sector or geographic effects.


References

Cont, R. and Tankov, P. (2003). Financial Modeling with Jump Processes. Chapman and Hall.

Joshi, M. and Stacey, A. (2005). Intensity Gamma: A New Approach to Pricing Portfolio Credit Derivatives. www.quarchome.org/ig.pdf

Li, David X. (2000). On Default Correlation: A Copula Function Approach. Journal of Fixed Income, March 2000, pp. 41-50.

Madan, D., Carr, P. and Chang, E.C. (1998). The Variance Gamma Process and Option Pricing. European Finance Review 2 (1).