COURSE NOTES Financial Mathematics MTH3251 Modelling in Finance and Insurance ETC 3510. Lecturers: Andrea Collevecchio and Fima Klebaner School of Mathematical Sciences Monash University Semester 1, 2016


8/19/2019 Financial Economics pt 2 Modelling in Insurance and Finance
http://slidepdf.com/reader/full/financial-economics-pt-2-modelling-in-insurance-and-finance 1/89
8/19/2019 Financial Economics pt 2 Modelling in Insurance and Finance
http://slidepdf.com/reader/full/financial-economics-pt-2-modelling-in-insurance-and-finance 2/89
1.2 Application in Finance
1.3 Application in Insurance

2 Review of probability
2.1 Distribution of Random Variables. General.
2.2 Expected value or mean
2.3 Variance, Var, and SD
2.4 General Properties of Expectation
2.5 Exponential moments of the Normal distribution
2.6 LogNormal distribution

3 Independence
3.1 Joint and marginal densities
3.2 Multivariate Normal distributions
3.3 A linear combination of a multivariate normal
3.4 Independence
3.5 Covariance
3.6 Properties of Covariance and Variance
3.7 Covariance function

4 Conditional Expectation
4.1 Conditional Distribution and its mean
4.2 Properties of Conditional Expectation
4.3 Expectation as best predictor
4.4 Conditional Expectation as Best Predictor
4.5 Conditional expectation with many predictors

5 Random Walk and Martingales
5.1 Simple Random Walk
5.2 Martingales
5.3 Martingales in Random Walks
5.4 Exponential martingale in the Simple Random Walk (q/p)

6 Optional Stopping Theorem and Applications
6.1 Stopping Times
6.2 Optional Stopping Theorem
6.3 Hitting probabilities in a simple Random Walk
6.4 Expected duration of a game
6.5 Discrete time Risk Model
6.6 Ruin Probability

7 Applications in Insurance
7.1 The bound for the ruin probability. Constant R.
7.2 R in the Normal model
7.3 Simulations
7.4 The Acceptance-Rejection method

8 Brownian Motion
8.1 Definition of Brownian Motion
8.2 Independence of Increments

9 Brownian Motion is a Gaussian Process
9.1 Proof of the Gaussian property of Brownian Motion
9.2 Processes obtained from Brownian motion
9.3 Conditional expectation with many predictors
9.4 Martingales of Brownian Motion

10 Stochastic Calculus
10.1 Non-differentiability of Brownian motion
10.2 Ito Integral
10.3 Distribution of the Ito integral of simple deterministic processes
10.4 Simple stochastic processes and their Ito integral
10.5 Ito integral for general processes
10.6 Properties of the Ito Integral
10.7 Rules of Stochastic Calculus
10.8 Chain Rule: Ito's formula for f(B_t)
10.9 Martingale property of the Ito integral

11 Stochastic Differential Equations
11.1 Ordinary Differential equation for growth
11.2 Black-Scholes stochastic differential equation for stocks
11.3 Solving SDEs by Ito's formula. The Black-Scholes equation.
11.4 Ito's formula for functions of two variables
11.5 Stochastic Product Rule or Integration by parts
11.6 Ornstein-Uhlenbeck process
11.7 Vasicek's model for interest rates
11.8 Solution to Vasicek's SDE
11.9 Stochastic calculus for processes driven by two or more Brownian motions
11.10 Summary of stochastic calculus

12 Options
12.1 Financial Concepts
12.2 Functions x+ and x−
12.3 The problem of the Option price
12.4 One-step Binomial Model
12.5 One-period Binomial Pricing Model
12.6 Replicating Portfolio
12.7 Option Price as expected payoff
12.8 Martingale property of the stock under p
12.9 Binomial Model for Option pricing
12.10 Black-Scholes formula

13 Options pricing in the Black-Scholes Model
13.1 Self-financing Portfolios
13.2 Replication of an Option by a self-financing portfolio
13.3 Replication in the Black-Scholes model
13.4 Black-Scholes Partial Differential Equation
13.5 Option Price as discounted expected payoff
13.6 Stock price S_T under the EMM Q

14 Fundamental Theorems of Asset Pricing
14.1 Introduction
14.2 Arbitrage
14.3 Fundamental theorems of Mathematical Finance
14.4 Completeness of the Black-Scholes and Binomial models
14.5 A general formula for the option price
14.6 Summary

15 Models for Interest Rates
15.1 Term Structure of Interest Rates
15.2 Bonds and the Yield Curve
15.3 General bond pricing formula
15.4 Models for the spot rate
15.5 Forward rates
15.6 Bonds in Vasicek's model
15.7 Bonds in the Cox-Ingersoll-Ross (CIR) model
15.8 Options on bonds
15.9 Caplet as a Put Option on a Bond
1 Introduction
In order to study Finance and Insurance, we need mathematical tools. We start with a review of probability theory: random variables, their expected values, variance, independence, etc. We then introduce Random Walks, Martingales, Brownian motion and stochastic differential equations. These are sophisticated mathematical tools, and we compromise: rather than developing the full theory, we are going to learn how to use these tools, which are also useful in other areas, such as Engineering and Biology.
1.1 Example of models
Let x_t be the amount of money in a savings account. Suppose the interest rate is r, and x_0 > 0.

The evolution of x_t is described by the differential equation

dx_t/dt = r x_t.

We solve this equation as follows. Divide by x_t to get x'_t / x_t = r. The derivative of ln x_t equals x'_t / x_t, so integrating both sides gives

ln x_t = rt + C,

where C is a constant. Hence x_t = e^C e^{rt}. In order to find the value of e^C we need to know x_0. In fact, plugging in t = 0 gives x_0 = e^C. Hence we get

x_t = x_0 e^{rt}.
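The closed-form solution can be checked numerically; a small sketch (the values x0 = 100, r = 0.05, t = 10 are illustrative, not from the notes):

```python
import math

def balance(x0, r, t):
    """Deterministic savings account: x_t = x0 * e^(r t), solving dx/dt = r x."""
    return x0 * math.exp(r * t)

def implied_rate(x0, xt, t):
    """Recover the rate r from observed x0 and x_t: r = ln(x_t / x0) / t."""
    return math.log(xt / x0) / t

x10 = balance(100.0, 0.05, 10.0)          # about 164.87
r = implied_rate(100.0, x10, 10.0)        # recovers 0.05
```

This illustrates both uses mentioned below: predicting x_t at a future time, and recovering r when x_0 and x_t are known.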
What is this for? It allows us to predict x_t at a future time t, or to find the rate r if both x_t and x_0 are known.

What if we introduce a random perturbation?

dX_t = r X_t dt + dξ_t,

where ξ_t is a random process. This is a substantial generalization. We will introduce this class of equations and study how to solve some cases. They are called Stochastic Differential Equations.
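A perturbed growth equation of this kind can be simulated on a discrete time grid; a sketch using the Euler scheme, where taking the perturbation ξ_t to be a Brownian motion is an illustrative assumption (the notes only say ξ_t is a random process):

```python
import math, random

def simulate_perturbed_growth(x0, r, t_end, n_steps, seed=0):
    """Euler scheme for dX_t = r X_t dt + dW_t, with the perturbation
    taken to be Brownian motion (increments ~ N(0, dt)) for illustration."""
    rng = random.Random(seed)
    dt = t_end / n_steps
    path = [x0]
    x = x0
    for _ in range(n_steps):
        dW = rng.gauss(0.0, math.sqrt(dt))  # Brownian increment over [t, t+dt]
        x = x + r * x * dt + dW
        path.append(x)
    return path

path = simulate_perturbed_growth(100.0, 0.05, 10.0, 1000)
# the path fluctuates randomly around the deterministic solution x0 * e^(r t)
```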
1.2 Application in Finance

Observed prices of stocks are functions of time: plot the price S_t at time t on the y-axis and time t on the x-axis. We want a model for such functions. Notice that simulated random functions of time can look like stock prices: these are random functions, continuous but not smooth (not differentiable). Using such models we solve the problem of Option Pricing in Finance. An option is a financial contract that allows one to buy assets in the future for a price agreed at present.

This is the modern approach to risk management in markets, used by Banks and other large Financial Companies.
1.3 Application in Insurance
Consider a sequence of independent games, and suppose that your payoff at the end of each game is X_i, which is a random variable. We assume that the X_i are identically distributed. The Random Walk is simply

S_n = ∑_{i=1}^{n} X_i,  for n ∈ N.

This is the discrete counterpart of Brownian motion. Using the Random Walk to model the Insurance surplus we can calculate the ruin probability.
Figure 2: Computer simulations
The equation for the surplus at the end of year n is

U_n = U_0 + cn − ∑_{k=1}^{n} X_k,

where U_0 is the initial funds, c is the premium collected each year, and X_k is the amount of claims paid out in year k. The insurance company wants to compute the probability of ruin, i.e. the probability that sooner or later the process (U_n, n ≥ 1) hits zero or becomes negative.

This model allows us to find sufficient initial funds to control the probability of ruin.
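The ruin probability for this surplus process can be estimated by simulation; a sketch where the claim distribution (exponential), the parameter values and the finite horizon are illustrative choices, not the notes' model:

```python
import random

def ruin_probability(u0, c, claim_mean, n_years, n_sims, seed=1):
    """Monte Carlo estimate of the probability that the surplus
    U_n = U_0 + c*n - (sum of claims) hits 0 or below within n_years.
    Claims are drawn as exponential with the given mean (an assumption)."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_sims):
        u = u0
        for _ in range(n_years):
            u += c - rng.expovariate(1.0 / claim_mean)  # premium in, claims out
            if u <= 0:
                ruined += 1
                break
    return ruined / n_sims

p_small = ruin_probability(u0=5.0,  c=1.1, claim_mean=1.0, n_years=100, n_sims=2000)
p_large = ruin_probability(u0=20.0, c=1.1, claim_mean=1.0, n_years=100, n_sims=2000)
# larger initial funds should not increase the estimated chance of ruin
```

This is exactly the use of the model described above: choosing U_0 large enough to bring the ruin probability down to an acceptable level.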
2 Review of probability

2.1 Distribution of Random Variables. General.
A random variable refers to a quantity that takes different values with some probabilities. A random variable is completely defined by its cumulative probability distribution function.

Cumulative probability distribution function:

F(x) = Pr(X ≤ x),  x ∈ R.
The probability of observing an outcome in an interval  A  = (a, b] is
Pr(X  ∈ A) = F (b) − F (a).
Sometimes it is more convenient to describe the distribution by the probability density function.
Probability density function for continuous random variables
f(x) = (d/dx) F(x).
Using the relation between the integral and the derivative we can calculate probabilities of outcomes by using the pdf.
The probability of observing an outcome in the range (a, b] (or (a, b)) is

Pr(a < X ≤ b) = F(b) − F(a) = ∫_a^b f(x) dx.
Any probability density is a non-negative function, f(x) ≥ 0, that integrates to 1:

∫ f(x) dx = 1.
The Uniform(0,1) distribution has density

f(x) = 1 for 0 < x < 1, and 0 otherwise,

and cumulative distribution function

F(x) = 0 if x ≤ 0,  F(x) = x if 0 < x < 1,  F(x) = 1 if x ≥ 1.

The Standard Normal distribution N(0, 1) has density

f(x) = (1/√(2π)) e^{−x²/2}.

The General Normal Distribution involves two numbers (parameters), µ and σ. The density of the normal N(µ, σ²) distribution is given by

f(x) = (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)}.
The cumulative probability function of the Standard Normal is denoted by Φ(x):

Φ(x) = ∫_{−∞}^{x} f(u) du.

It cannot be expressed in terms of other elementary functions. It is available in Excel and in Tables.
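Although Φ has no elementary closed form, it can be evaluated through the error function, which is available in most languages' standard libraries; a sketch:

```python
import math

def Phi(x):
    """Standard Normal cdf via the error function:
    Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Phi(0) = 0.5 by symmetry; Phi(1.96) is the familiar ~0.975 from tables
```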
2.2 Expected value or mean
The expected value or the mean is defined as

E(X) = ∫ x f(x) dx.

Interpretation: if f(x) is a mass density, then EX is the centre of gravity.
2.3 Variance, Var, and SD

Var(X) = E(X − EX)².

The standard deviation is

SD = σ = √(E(X − EX)²).

SD shows how far, on average, the values are away from the mean.
It turns out that for the  N (µ, σ2) distribution the mean is  µ and variance  σ2.
Theorem 1   If  X   has  N (µ, σ2)   distribution then 
E (X ) = µ, V ar(X ) = σ2, SD(X ) = σ.
Proof is an exercise in Calculus.
The Normal Distribution N(µ, σ²) is obtained from the standard Normal by a linear transformation.
Theorem 2   If Z has standard Normal distribution N(0, 1), then the random variable
X  = µ + σZ 
has  N (µ, σ2)  distribution. Conversely, if  X   has  N (µ, σ2)   distribution, then 
Z = (X − µ)/σ
has standard Normal distribution  N (0, 1).
Proof. Write  P (X  ≤ x) and differentiate.
Exercise   1. Find distribution of  X  = µ + N (0, 1). 2. Find distribution of  X  = −N (0, 1).
This result allows us to calculate probabilities for any Normal distribution by using tables of the Standard Normal. It also allows us to generate any Normal from a standard Normal.

Example: X ∼ N(1, 2). Find P(X > 0).

P(X > 0) = P((X − 1)/√2 > (0 − 1)/√2) = P(Z > −1/√2) = 1 − Φ(−0.707) ≈ 0.76.
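The standardisation in this example can be carried out numerically; a sketch reusing the erf-based cdf (here N(1, 2) means mean 1 and variance 2, as in the calculation above):

```python
import math

def Phi(x):
    """Standard Normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

mu, var = 1.0, 2.0
# P(X > 0) = P(Z > (0 - mu)/sigma) = 1 - Phi((0 - mu)/sigma)
p = 1.0 - Phi((0.0 - mu) / math.sqrt(var))   # about 0.76
```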
Example: Consider the process X_t = √t Z, where Z is N(0, 1). Give the distribution of X_t. Give the distribution of the increments of X_t.
2.4 General Properties of Expectation
1. Expectation is linear: E(aX + bY) = aE(X) + bE(Y).

2. If X ≥ 0, then E(X) ≥ 0.

3. If X = c is a constant, then E(X) = E(c) = c.

4. Expectation of a function of a rv:

Eh(X) = ∫ h(x) f_X(x) dx.

5. Expectation of an indicator I_A(X) (I_A(X) = 1 if X ∈ A and 0 if X ∉ A):

EI_A(X) = P(X ∈ A).
These properties are established from the definition of expectation. Remark that if h(x) = x^n, then Eh(X) = E(X^n) is called the n-th moment of the rv X.
2.5 Exponential moments of Normal distribution
The exponential moment of a random variable X is Ee^{uX} for a number u. It is also known as the moment generating function of X when considered as a function of the argument u.

Theorem 3   The exponential moment of the N(µ, σ²) distribution is given by e^{µu + σ²u²/2}.
Proof: By Property 4 of the expectation (expectation of a function of a rv, Eh(X) = ∫ h(x) f_X(x) dx) with h(x) = e^{ux},

Ee^{uX} = ∫ e^{ux} (1/(√(2π) σ)) e^{−(x−µ)²/(2σ²)} dx.

The rest is an exercise in integration. Putting the exponential terms together and completing the square,

ux − (x−µ)²/(2σ²) = µu + σ²u²/2 − (x − µ − σ²u)²/(2σ²).

Taking the term which does not involve x outside the integral,

Ee^{uX} = e^{µu + σ²u²/2} ∫ (1/(√(2π) σ)) e^{−(x − µ − σ²u)²/(2σ²)} dx.

Recognising that the function under the integral is the probability density of the N(µ + σ²u, σ²) distribution, so that its integral equals 1,

Ee^{uX} = e^{µu + σ²u²/2} × 1 = e^{µu + σ²u²/2}. □
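Theorem 3 can be verified numerically by approximating the integral Ee^{uX} = ∫ e^{ux} f(x) dx directly; a sketch where the parameter values and the integration range are illustrative assumptions:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2)."""
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

def mgf_numeric(u, mu, sigma, lo=-40.0, hi=40.0, n=80000):
    """Trapezoidal approximation of E e^{uX} = int e^{ux} f(x) dx.
    The range [lo, hi] is chosen wide enough that the tails are negligible."""
    h = (hi - lo) / n
    total = 0.5 * (math.exp(u * lo) * normal_pdf(lo, mu, sigma)
                   + math.exp(u * hi) * normal_pdf(hi, mu, sigma))
    for i in range(1, n):
        x = lo + i * h
        total += math.exp(u * x) * normal_pdf(x, mu, sigma)
    return total * h

mu, sigma, u = 1.0, 2.0, 0.5
closed_form = math.exp(mu * u + sigma ** 2 * u ** 2 / 2)  # Theorem 3: e^{mu*u + sigma^2 u^2 / 2}
numeric = mgf_numeric(u, mu, sigma)
# the two agree to high accuracy
```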
2.6 LogNormal distribution

X is Lognormal LN(µ, σ²) if log X is Normal N(µ, σ²). In other words,

X = e^Y,

where Y is Normal N(µ, σ²). Since e^x > 0 for any x, a lognormal variable is always positive. The lognormal density is given by the formula: for x > 0,

f(x) = (1/(x √(2π) σ)) e^{−(ln x − µ)²/(2σ²)}.

Exercise   Derive this formula by using the definition and the normal density.

Example: X ∼ LN(1, 2). Find P(X > 1). X = e^Y where Y ∼ N(1, 2). Then

P(X > 1) = P(e^Y > 1) = P(Y > 0) ≈ 0.76,

where the last value is obtained from the previous example.
Theorem 4   If X has LN(µ, σ²) distribution, then its mean is

EX = e^{µ + σ²/2}.

Proof: This is just the mgf of N(µ, σ²) evaluated at u = 1.
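The formula EX = e^{µ + σ²/2} can be checked by simulation; a sketch with illustrative parameter values (µ = 0, σ = 0.5) and a fixed seed:

```python
import math, random

rng = random.Random(42)
mu, sigma = 0.0, 0.5
n = 200_000
# sample mean of X = e^Y with Y ~ N(mu, sigma^2)
sample_mean = sum(math.exp(rng.gauss(mu, sigma)) for _ in range(n)) / n
theory = math.exp(mu + sigma ** 2 / 2)  # Theorem 4: e^{mu + sigma^2/2}
```

Note that the mean e^{µ + σ²/2} is strictly larger than e^µ (the median of X), a useful reminder that E e^Y > e^{EY} for a non-degenerate normal Y.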
3 Independence.

The concept of independence of random variables involves their joint distributions.

3.1 Joint and marginal densities

When we model many Random Variables together we can look at them as a vector

X = (X_1, X_2, ..., X_n).

It takes values

x = (x_1, x_2, ..., x_n)

according to some probability distribution, called the joint distribution. Its probability density is a function of n variables, f(x), which is non-negative and integrates to 1.

Similarly to the one-dimensional case, probabilities are given, by definition of f(x), by the multiple integral: for a set B in Rⁿ,

Pr(X ∈ B) = ∫_B f(x) dx_1 dx_2 ... dx_n.

Note that this formula is only sometimes used for calculations. The probability density functions of the individual X_i are called marginal density functions.
Consider the case n = 2.

Theorem 5   If X and Y have a joint density f(x, y), then the marginal densities are given by integrating out the other variable:

f_X(x) = ∫_{−∞}^{∞} f(x, y) dy,   and   f_Y(y) = ∫_{−∞}^{∞} f(x, y) dx.

Proof: The joint cumulative distribution function is

F(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv.

Letting y → ∞ gives F_X(x), and differentiating with respect to x gives the formula for the marginal density of X.
3.2 Multivariate Normal distributions

The Multivariate Normal distribution is a collection of a number of Normal distributions, which are correlated with each other.

Definition   The Multivariate Normal distribution is determined by its mean vector and its covariance matrix: X = (X_1, X_2, ..., X_d) is N(µ, Σ), where

µ = (EX_1, EX_2, ..., EX_d),   Σ = (Cov(X_i, X_j))_{i,j=1,...,d},

if its probability density function is given by

f_X(x) = (1/((2π)^{d/2} √(det(Σ)))) exp(−(1/2)(x − µ) Σ^{−1} (x − µ)^T).

det(Σ) is the determinant of the square matrix Σ and Σ^{−1} is the inverse, ΣΣ^{−1} = I.

Example   A bivariate normal:

µ = 0  and  Σ = ( 1  ρ )
               ( ρ  1 ).

Standard Multivariate Normal: Z is N(0, I).

It is easy to see that Z ∼ N(0, I) is a vector of independent standard Normals Z_i: |I| = 1, I^{−1} = I, and

f_Z(z) = (1/(2π)^{d/2}) e^{−z z^T/2} = ∏_{i=1}^{d} (1/√(2π)) e^{−z_i²/2}.
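For the bivariate example with Σ = ((1, ρ), (ρ, 1)) the density can be written out explicitly and checked to integrate to 1; a sketch (the choice ρ = 0.5 and the grid over [−6, 6]² are illustrative assumptions):

```python
import math

def bvn_pdf(x, y, rho):
    """Standard bivariate normal density with correlation rho
    (mean 0, unit variances: the Sigma = [[1, rho], [rho, 1]] example).
    The quadratic form (x^2 - 2 rho x y + y^2)/(1 - rho^2) is (x) Sigma^{-1} (x)^T."""
    det = 1.0 - rho ** 2
    q = (x * x - 2.0 * rho * x * y + y * y) / det
    return math.exp(-q / 2.0) / (2.0 * math.pi * math.sqrt(det))

# Riemann-sum check that the density integrates to (approximately) 1
rho, h = 0.5, 0.05
grid = [i * h for i in range(-120, 121)]   # covers [-6, 6] in each coordinate
total = sum(bvn_pdf(x, y, rho) for x in grid for y in grid) * h * h
```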
Exercise   If the random variables (X_1, X_2, ..., X_d) are jointly Normal, then they are independent if and only if they are uncorrelated.
A multivariate Normal is a linear transformation of the standard multivariate Normal, just like in one dimension.
Theorem 6   If X = (X_1, X_2, ..., X_d) is N(µ, Σ), then

X = µ + AZ,

where Z is standard multivariate Normal, and A is a matrix square root of Σ satisfying Σ = AA^T.

A matrix square root is not unique (cf. √4 = ±2).

Proof: This is an exercise in multivariable calculus, using the change-of-variables formula for the probability density. □
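The representation X = µ + AZ is also how multivariate normal samples are generated in practice; a sketch for the bivariate case, using the Cholesky square root of Σ (the parameter values and seed are illustrative):

```python
import math, random

def chol2(s1, s2, rho):
    """Cholesky square root A of Sigma = [[s1^2, rho s1 s2], [rho s1 s2, s2^2]],
    so that Sigma = A A^T."""
    return [[s1, 0.0],
            [rho * s2, s2 * math.sqrt(1.0 - rho ** 2)]]

def sample_bvn(mu, s1, s2, rho, n, seed=7):
    """Draw X = mu + A Z, with Z a vector of independent standard normals."""
    rng = random.Random(seed)
    a = chol2(s1, s2, rho)
    out = []
    for _ in range(n):
        z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
        out.append((mu[0] + a[0][0] * z1,
                    mu[1] + a[1][0] * z1 + a[1][1] * z2))
    return out

pts = sample_bvn((0.0, 0.0), 1.0, 1.0, 0.6, 20_000)

# empirical correlation of the sample, which should be close to rho = 0.6
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
cov = sum((x - mx) * (y - my) for x, y in pts) / len(pts)
vx = sum((x - mx) ** 2 for x, _ in pts) / len(pts)
vy = sum((y - my) ** 2 for _, y in pts) / len(pts)
corr = cov / math.sqrt(vx * vy)
```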
3.3 A linear combination of a multivariate normal

A linear combination of the components of a multivariate vector is aX for a nonrandom a. It is a scalar random variable

aX = a_1 X_1 + a_2 X_2 + ... + a_d X_d.

Theorem 7   If X is multivariate Normal N(µ, Σ), then aX is N(aµ, aΣa^T).

This theorem can be proved by using transforms of distributions, given later. Note that in this theorem the joint distribution is multivariate normal; it is not enough that the marginal distributions (i.e. the distribution of each X_i) are normal.

A counterexample: Z is standard normal; let X_1 = Z and X_2 = −Z. Then both X_1 and X_2 are standard normal also, but X_1 + X_2 = 0.
Example. Find the distribution of X_1 + X_2, and specify its variance, where X_1, X_2 are correlated normals: X = (X_1, X_2) is N(µ, Σ),

Σ = ( σ_1²      ρσ_1σ_2 )
    ( ρσ_1σ_2   σ_2²    ).

Note that the sum can be written as a scalar product, X_1 + X_2 = aX, where a = (1, 1). By Theorem 7, X_1 + X_2 is Normal with variance

aΣa^T = σ_1² + 2ρσ_1σ_2 + σ_2²,

as it should be, since it can be verified directly that Var(X_1 + X_2) = σ_1² + 2ρσ_1σ_2 + σ_2².
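The quadratic form aΣa^T in this example can be computed mechanically; a sketch (the values σ_1 = 1, σ_2 = 2, ρ = 0.3 are illustrative):

```python
def var_of_sum(s1, s2, rho):
    """Var(X1 + X2) = a Sigma a^T with a = (1, 1):
    expands to sigma1^2 + 2 rho sigma1 sigma2 + sigma2^2."""
    sigma = [[s1 * s1, rho * s1 * s2],
             [rho * s1 * s2, s2 * s2]]
    a = (1.0, 1.0)
    return sum(a[i] * sigma[i][j] * a[j] for i in range(2) for j in range(2))

v = var_of_sum(1.0, 2.0, 0.3)   # 1 + 2*0.3*2 + 4 = 6.2
```

With ρ = −1 and σ_1 = σ_2 = 1 the same formula gives variance 0, which is exactly the counterexample X_1 = Z, X_2 = −Z above.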
Example. The average of Normals, even if they are correlated, is again Normal, but this is not so for LogNormals (e^{N(µ,σ²)}).

If X is N(µ, Σ), find the distribution of

X̄ = (1/d) ∑_{i=1}^{d} X_i,

and compare with the average of the lognormals, (1/d) ∑_{i=1}^{d} e^{X_i}, which is not lognormal.
Remark   Let X be multivariate Normal and U = BX, for a nonrandom matrix B. Using Theorem 7, show that U is multivariate Normal with mean Bµ_X and covariance matrix BΣ_X B^T.
3.4 Independence
Events  A1   and  A2  are independent if the probability that they occur together, is given by the product of their probabilities,
P (A1 ∩ A2) = P (A1)P (A2).
Random variables X and Y are independent if the joint probability distribution is a product of the marginal probabilities; in terms of densities,

f_{X,Y}(x, y) = f_X(x) f_Y(y).
In general it is not enough to know the distribution of each variable  X   and  Y  in order to know the distribution of the random vector (X, Y ).
But if variables   X   and   Y   are independent then their marginal distributions determine their joint distribution (by the product formula).
An important corollary to independence is
Theorem 8   If random variables X and Y are independent, then

E(XY) = E(X)E(Y).

Proof:

E(XY) = ∫∫ xy f_{X,Y}(x, y) dx dy = ∫∫ xy f_X(x) f_Y(y) dx dy = ∫ x f_X(x) dx ∫ y f_Y(y) dy = E(X)E(Y). □
Independence can be formulated as a property of expectations.
Theorem 9   X   and  Y  are independent if and only if for any bounded functions  h and  g
E (h(X )g(Y )) = Eh(X )Eg(Y ).
Independence for many variables
Events  A1, A2, . . . , An  are independent if for any their subcollection the proba- bility that they occur together, is given by the product of their probabilities.
Random variables are independent if the joint probability distribution is a product of marginal probabilities, and the joint density function is a product of the marginal density functions:
f(x) = f_1(x_1) f_2(x_2) ... f_n(x_n).
In general it is not enough to know the distribution of each variable  X i in order to know the distribution of them all, the random vector (X 1, X 2, . . . , X  n).
But if variables  X 1, X 2, . . . , X  n   are independent then their marginal distribu- tions determine their joint distribution (by the product formula).
3.5 Covariance

Let X and Y be two random variables with finite second moments, E(X²) < ∞ and E(Y²) < ∞. Their covariance is defined as

Cov(X, Y) = E[(X − EX)(Y − EY)].

Theorem 10   Cov(X, Y) = E(XY) − E(X)E(Y).

Proof: Expanding the product,

(X − EX)(Y − EY) = XY − Y EX − X EY + EX EY.

Now use the property of expectation that constants can be taken out, E(aX) = aEX:

Cov(X, Y) = E(XY) − EX EY. □
Now Theorem 8 has the following
Corollary   If  X   and  Y  are independent then they are uncorrelated.
3.6 Properties of Covariance and Variance

1. Cov(X, Y) = E(XY) − E(X)E(Y).

2. Covariance is bilinear (as in multiplying polynomials):

Cov(aX + bY, cU + dV) = ac Cov(X, U) + ad Cov(X, V) + bc Cov(Y, U) + bd Cov(Y, V).

3. Var(X) = Cov(X, X).

4. Var(X) = E(X²) − (E(X))² = E((X − E(X))²). It is always nonnegative.

5. Var(X + Y) = Var(X) + 2Cov(X, Y) + Var(Y).

6. If X and Y are independent, or uncorrelated, then

Var(X + Y) = Var(X) + Var(Y).
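Properties 1 and 5 can be verified on any data set using the empirical (population) versions of the formulas; a sketch with small illustrative data:

```python
def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    """Empirical covariance: mean of (x - mean(x)) * (y - mean(y))."""
    mx, my = mean(x), mean(y)
    return mean([(a - mx) * (b - my) for a, b in zip(x, y)])

def var(x):
    return cov(x, x)   # Property 3: Var(X) = Cov(X, X)

x = [1.0, 2.0, 4.0, 7.0]
y = [3.0, 1.0, 5.0, 2.0]
s = [a + b for a, b in zip(x, y)]

lhs = var(s)                                 # Var(X + Y)
rhs = var(x) + 2.0 * cov(x, y) + var(y)      # Property 5
```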
3.7 Covariance function

Definition   The covariance function of a random process X_t is defined by

γ(s, t) = Cov(X_t, X_s) = E[(X_t − EX_t)(X_s − EX_s)] = E(X_t X_s) − EX_t EX_s.
4 Conditional Expectation

4.1 Conditional Distribution and its mean

Recall the expectation or mean,

E(X) = ∫ x f_X(x) dx.

Similarly, the conditional expectation is the integral with respect to the conditional distribution,

E(X | Y = y) = ∫ x f(x|y) dx.

The conditional distribution is defined as follows. Let X, Y have joint density f(x, y) and marginal densities f_X(x) and f_Y(y). The conditional distribution of X given Y = y is defined by the density

f(x|y) = f(x, y) / f_Y(y),

at any point y where f_Y(y) > 0. It is easy to see that f(x|y) so defined is indeed a probability density, as it is nonnegative and integrates to one.

The expectation of this distribution, when it exists, is called the conditional expectation of X given Y = y, and is given by the above formula.
Example   Let X and Y have a standard bivariate normal distribution with parameter ρ. Then

1. The conditional distribution of X given Y = y is normal N(ρy, 1 − ρ²).
2. E(X | Y = y) = ρy, E(X | Y) = ρY.

Proof: 1. The joint density is

f(x, y) = (1/(2π√(1 − ρ²))) exp{−(x² − 2ρxy + y²)/(2(1 − ρ²))},

and the marginal density of Y is f_Y(y) = (1/√(2π)) e^{−y²/2}. Hence the conditional density of X given Y = y is f(x, y)/f_Y(y),

f_{X|y}(x) = (1/√(2π(1 − ρ²))) exp{−(x − ρy)²/(2(1 − ρ²))},

which is the density of N(ρy, 1 − ρ²). □
Conditional expectation as a random variable

E(X | Y = y) is a function of y. If g denotes this function, that is, g(y) = E(X | Y = y), then by replacing y by Y we obtain a new random variable g(Y), which is called the conditional expectation of X given Y,

E(X | Y) = g(Y).

In the above example,

E(X | Y) = ρY.

Similarly to this example, in the case of a multivariate normal vector the conditional expectation E(X | Y) is a linear function of Y; see Theorem 15 below.
4.2 Properties of Conditional Expectation
1. Conditional expectation is linear in X:

E(aX_1 + bX_2 | Y) = aE(X_1 | Y) + bE(X_2 | Y).
2.   E (E (X |Y )) = E (X ).  The law of double expectation.
3. If  X  is a function of  Y  (also said as “Y -measurable”), then
E (X |Y ) = X.
4. If U is Y-measurable, then "it is treated as a constant":

E(XU | Y) = U E(X | Y).

5. If X is independent of Y, then

E(X | Y) = EX,

that is, if the information we know provides no clues about X, then the conditional expectation of X is simply its mean value.
4.3 Expectation as best predictor

Let X denote a random variable. If we predict the outcome of X by a number c, then the difference between the actual and predicted outcomes is X − c. This represents the error in our prediction. If we want the error to be small irrespective of its sign, we minimise the mean-squared error E(X − c)².

Theorem 11   The best mean-square predictor of X is its mean, X̂ = E(X).

Proof: The mean-squared error is a function of c. Minimise over c:

E(X − c)² = E(X² − 2cX + c²) = E(X²) + c² − 2cE(X).

The number c that minimises this is found by differentiating and equating to zero:

(d/dc)[E(X²) + c² − 2cE(X)] = 2c − 2E(X).

Equating to 0 gives c = E(X). □
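Theorem 11 can be seen empirically: over a grid of candidate predictors c, the mean-squared error is smallest at the sample mean; a sketch with illustrative data:

```python
def mse(data, c):
    """Empirical mean-squared error E (X - c)^2 for a constant predictor c."""
    return sum((x - c) ** 2 for x in data) / len(data)

data = [2.0, 3.0, 5.0, 10.0]
m = sum(data) / len(data)                     # the mean, 5.0

candidates = [i * 0.1 for i in range(0, 101)] # grid 0.0 .. 10.0
best = min(candidates, key=lambda c: mse(data, c))
# the grid minimiser coincides with the mean
```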
4.4 Conditional Expectation as Best Predictor

We now look for the best possible predictor of X based on Y, i.e. some function h(Y) of Y. We define the optimal predictor or estimator X̂ as the one that minimises the mean-squared error: for any random variable Z which is a function of Y,

E(X − X̂)² ≤ E(X − Z)².

It turns out that the best predictor of X based on Y is the conditional expectation of X given Y, denoted E(X | Y).
Theorem 12   The best predictor (optimal estimator) X̂ based on Y is given by X̂ = E(X | Y); in other words, for any random variable Z which is Y-measurable (a function of Y),

E(X − X̂)² ≤ E(X − Z)².

For the proof we need the following result.

Theorem 13   Any random variable Z which is Y-measurable (a function of Y) is uncorrelated with X − X̂ = X − E(X | Y).

Proof:

Cov(Z, X − X̂) = E(Z(X − X̂)) − E(Z)E(X − X̂).

The second term is zero because, by the law of double expectation,

E(X − X̂) = E(X) − E(E(X | Y)) = E(X) − E(X) = 0.

Thus Cov(Z, X − X̂) = E(Z(X − X̂)) = E(ZX) − E(Z X̂). By the law of double expectation,

E(ZX) = E(E(ZX | Y)) = E(Z E(X | Y)) = E(Z X̂),

since Z is Y-measurable. Finally, Cov(Z, X − X̂) = 0. □
In particular, we have the

Corollary   X̂ = E(X | Y) and X − X̂ = X − E(X | Y) are uncorrelated.

Proof of the Theorem that the best predictor is X̂ = E(X | Y). Take any Z which is a function of Y. We need to show that

E(X − X̂)² ≤ E(X − Z)².

Writing X − Z = (X − X̂) + (X̂ − Z),

E(X − Z)² = E(X − X̂)² + 2E[(X − X̂)(X̂ − Z)] + E(X̂ − Z)² = E(X − X̂)² + E(X̂ − Z)²,

by the previous result: X̂ − Z is a function of Y, hence uncorrelated with X − X̂, and E(X − X̂) = 0, so the cross term vanishes. Therefore

E(X − Z)² ≥ E(X − X̂)².

Thus X̂ = E(X | Y) is the best predictor. □
4.5 Conditional expectation with many predictors
Let  X, Y 1, Y 2, . . . , Y  n  be random variables. By definition, the optimal predictor  X  minimizes the mean square error, i.e.
for any  Z   function of  Y ’s
E (X −  X )2 ≤ E (X − Z )2.
Theorem 14  The best predictor  X  based on  Y 1, Y 2, . . . , Y  n  is given by 
X  = E (X |Y 1, Y 2, . . . Y  n).
Conditional expectation given many random variables is defined similarly, as the mean of the conditional distribution. It is denoted by E(X|Y1, Y2, . . . , Yn). Notation: if we denote the information generated by Y1, Y2, . . . , Yn by Fn, then

E(X|Y1, Y2, . . . , Yn) = E(X|Fn).
Note that it is often hard to find a formula for the conditional expectation. But in the multivariate Normal case it is known, and is established by direct calculation.
Theorem 15 (Normal Correlation) Suppose X and Y jointly form a multivariate normal distribution. Then the vector of conditional expectations is given by the following:
E(X|Y) = E(X) + Cov(X, Y) Cov⁻¹(Y, Y) (Y − E(Y)).
Cov(X, Y) denotes the matrix with elements Cov(Xi, Yj), and Cov⁻¹(Y, Y) denotes the inverse of the covariance matrix of Y.
Example (Best predictor of X based on Y in the Bivariate Normal) Direct application of the formula gives

E(X|Y) = EX + (Cov(X, Y)/Var(Y)) (Y − EY).
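This formula can be checked numerically. The following Python sketch (the parameter values and the use of NumPy are illustrative assumptions, not part of the notes) simulates a bivariate Normal pair and verifies that the predictor EX + (Cov(X, Y)/Var(Y))(Y − EY) beats the naive predictor EX in mean square error:

```python
import numpy as np

# Sketch: check the bivariate Normal best-predictor formula on simulated data.
# The means and covariance matrix below are arbitrary illustrative choices.
rng = np.random.default_rng(0)
mu_x, mu_y = 1.0, 2.0
var_x, var_y, cov_xy = 4.0, 9.0, 3.0          # a valid covariance matrix (det > 0)
cov = np.array([[var_x, cov_xy], [cov_xy, var_y]])
x, y = rng.multivariate_normal([mu_x, mu_y], cov, size=200_000).T

x_hat = mu_x + (cov_xy / var_y) * (y - mu_y)  # best predictor of X based on Y

mse_best = np.mean((x - x_hat) ** 2)          # theory: var_x - cov_xy**2/var_y = 3
mse_naive = np.mean((x - mu_x) ** 2)          # predictor ignoring Y; theory: var_x = 4
print(mse_best, mse_naive)
```

The theoretical mean square errors are Var(X) − Cov²(X, Y)/Var(Y) = 3 for the best predictor and Var(X) = 4 for the naive one.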
5 Random Walk and Martingales
5.1 Simple Random Walk
A model of pure chance is provided by an ideal coin tossed with equal probabilities for Heads and Tails to come up. Introduce a random variable Y taking values +1 (Heads) and −1 (Tails), each with probability 1/2. If the coin is tossed n times, then a sequence of random variables Y1, Y2, . . . , Yn describes this experiment. All Yi have exactly the same distribution as Y1; moreover, they are all independent. The random walk is the process Xn defined by

Xn = X0 + Y1 + Y2 + · · · + Yn.
Xn gives the fortune of a player in a game of chance after n plays, where a coin is tossed and one wins $1 if Heads come up and loses $1 when Tails come up. The random walk is a central model for stock prices; the standard assumption is that returns on stocks follow a random walk.
Random Walk

A more general Random Walk is Xn = X0 + Y1 + Y2 + · · · + Yn, where the Yi are i.i.d. (not necessarily ±1). The RW is unbiased if EYi = 0 and biased otherwise.
Mean and Variance of the Random Walk

For the simple random walk, E(Yi) = 0 and Var(Yi) = E(Yi²) = 1, so the mean and the variance of the random walk are given by

E(Xn) = X0 + E(∑_{i=1}^n Yi) = X0 + ∑_{i=1}^n E(Yi) = X0,

Var(Xn) = Var(X0 + ∑_{i=1}^n Yi) = ∑_{i=1}^n Var(Yi) = n Var(Y1) = n.
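These formulas are easy to verify by simulation. A short Python sketch (sample sizes are arbitrary choices):

```python
import numpy as np

# Sketch: empirical mean and variance of the simple random walk
# X_n = X_0 + Y_1 + ... + Y_n with Y_i = +/-1 equally likely.
# Theory: E(X_n) = X_0 and Var(X_n) = n.
rng = np.random.default_rng(1)
x0, n, paths = 0, 50, 100_000
steps = rng.choice([-1, 1], size=(paths, n))
xn = x0 + steps.sum(axis=1)                  # X_n for each simulated path
print(xn.mean(), xn.var())                   # approximately 0 and 50
```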
Useful tools: the strong law of large numbers and the central limit theorem. In general, if X1, X2, . . . , Xn, . . . are i.i.d. random variables with finite mean µ, then with probability one

lim_{n→∞} (X1 + X2 + · · · + Xn)/n = µ.

Moreover, if the Xi have finite variance σ², then

lim_{n→∞} P((X1 + · · · + Xn − nµ)/(σ√n) ≤ x) = Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−u²/2} du.
5.2 Martingales
Definition A process (Xn), n = 0, 1, 2, . . ., is called a martingale if for all n, E|Xn| < ∞, and the martingale property holds:

E(Xn+1|X1, X2, . . . , Xn) = Xn.
Martingale property of the Random Walk

Since

E|Xn| ≤ E|X0| + ∑_{i=1}^n E|Yi| = E|X0| + nE|Y1|,

Xn is integrable provided E|Y1| < ∞. For any time n, given Xn,

E(Xn+1|X1, X2, . . . , Xn) = Xn + E(Yn+1|X1, X2, . . . , Xn).

Since Yn+1 is independent of the past, and Xn is determined by the first n variables, Yn+1 is independent of Xn. Therefore E(Yn+1|X1, X2, . . . , Xn) = E(Yn+1). It now follows that if E(Yn+1) = 0, then

E(Xn+1|X1, X2, . . . , Xn) = Xn + 0 = Xn.

Thus Xn is a martingale.
5.3 Martingales in Random Walks
Some questions about Random Walks, such as ruin probabilities, can be answered with the help of martingales.
Theorem 16 Let Xn, n = 0, 1, 2, . . ., be a Random Walk. Then the following processes are martingales.
1. Xn − µn, where µ = E(Y1). In particular, if the Random Walk is unbiased (µ = 0), then it is itself a martingale.
2. (Xn − µn)² − σ²n, where σ² = E(Y1 − µ)² = Var(Y1).
3. For any u, e^{uXn − nh(u)}, where h(u) = ln E(e^{uY1}) (exponential martingales). Using the moment generating function notation m(u) = E(e^{uY1}), this martingale becomes (m(u))^{−n} e^{uXn}.
Proof 1. Since, by the triangle inequality, |a + b| ≤ |a| + |b|,

E|Xn − nµ| ≤ E|X0| + ∑_{i=1}^n E|Yi| + n|µ| = E|X0| + n(E|Y1| + |µ|),

so Xn − nµ is integrable provided E|Y1| < ∞ and E|X0| < ∞. To establish the martingale property, consider for any n
E(Xn+1|Xn) = Xn + E(Yn+1|Xn).

Since Yn+1 is independent of the past, and Xn is determined by the first n variables, Yn+1 is independent of Xn. Therefore E(Yn+1|Xn) = E(Yn+1). It now follows that

E(Xn+1|Xn) = Xn + E(Yn+1|Xn) = Xn + µ,

and subtracting (n + 1)µ from both sides of the equation, the martingale property is obtained:

E(Xn+1 − (n + 1)µ|Xn) = Xn − nµ.
2. This is left as an exercise.

3. Put Mn = e^{uXn − nh(u)}. Since Mn ≥ 0, E|Mn| = E(Mn), which is given by

E(Mn) = E(e^{uXn − nh(u)}) = e^{−nh(u)} E(e^{uXn}) = e^{−nh(u)} E(e^{u(X0 + ∑_{i=1}^n Yi)})

= e^{uX0} e^{−nh(u)} ∏_{i=1}^n E(e^{uYi}) = e^{uX0} e^{−nh(u)} e^{nh(u)} = e^{uX0} < ∞.
The martingale property is shown by using the fact that

Xn+1 = Xn + Yn+1,   (1)

with Yn+1 independent of Xn and of all previous Yi, i ≤ n, i.e. independent of Fn. Using the properties of conditional expectation, we have

E(e^{uXn+1}|Fn) = E(e^{uXn + uYn+1}|Fn) = e^{uXn} E(e^{uYn+1}|Fn) = e^{uXn} E(e^{uYn+1}) = e^{uXn + h(u)}.
Multiplying both sides of the above equation by e^{−(n+1)h(u)}, the martingale property is obtained: E(Mn+1|Fn) = Mn.
5.4 Exponential martingale in the Simple Random Walk: (q/p)^{Xn}

In the special case when P(Yi = 1) = p and P(Yi = −1) = q = 1 − p, choosing u = ln(q/p) in the previous martingale gives e^{uY1} = (q/p)^{Y1} and E(e^{uY1}) = p(q/p) + q(p/q) = q + p = 1. Thus h(u) = ln E(e^{uY1}) = 0, and e^{uXn − nh(u)} = (q/p)^{Xn}. Alternatively, in this case the martingale property of (q/p)^{Xn} is easy to verify directly, and this is left as an exercise.
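A quick numerical check of this identity and of the constant mean of the martingale (a Python sketch; the choice p = 0.6 and the sample sizes are arbitrary):

```python
import numpy as np

# Sketch: for a biased walk with P(Y=1)=p, P(Y=-1)=q, the identity
# E((q/p)**Y) = p*(q/p) + q*(p/q) = q + p = 1 makes M_n = (q/p)**X_n a martingale,
# so E(M_n) = (q/p)**X_0 for every n. Checked by simulation.
rng = np.random.default_rng(2)
p, q, x0 = 0.6, 0.4, 0
assert abs(p * (q / p) + q * (p / q) - 1.0) < 1e-12   # the algebraic identity

n, paths = 20, 200_000
steps = rng.choice([1, -1], p=[p, q], size=(paths, n))
xn = x0 + steps.sum(axis=1)
mn = (q / p) ** xn
print(mn.mean())                             # approximately (q/p)**x0 = 1
```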
6.1 Stopping Times
Let X1, X2, . . . , Xn, . . . be a sequence of random variables. A random time τ is called a stopping time if for any n one can decide whether the event {τ ≤ n} (and hence the complementary event {τ > n}) has occurred by observing the first n variables X1, X2, . . . , Xn.
Another way of expressing the fact that τ is a stopping time is that for any n you can tell whether {τ = n} holds by looking at X1, X2, . . . , Xn. Equivalently, you can tell whether {τ ≤ n} holds by looking at X1, X2, . . . , Xn.
Let us first see an example of a random variable that is not a stopping time. Flip a fair coin 10 times. Denote by τ the last time you observe Heads. Is this a stopping time? No. Looking at X1 and X2 is not enough to tell whether τ = 2. Indeed, if X1 = 0 (here 0 means Tails), X2 = 1 and Xi = 0 for all i ∈ {3, 4, . . . , 10}, then τ = 2. On the other hand, if X1 = 0, X2 = 1 and Xi = 1 for all i ∈ {3, 4, . . . , 10}, then τ = 10. Hence the first two observations X1 and X2 are not enough to tell whether τ = 2 holds.
The time of ruin is a stopping time:

τ = min{n : Xn = 0},

{τ > n} = {X1 ≠ 0, X2 ≠ 0, . . . , Xn ≠ 0}.

If we can tell whether τ > n, we can also tell whether {τ ≤ n}. So by observing the capital at times 1, 2, . . . , n, we can decide whether ruin has occurred by time n; e.g. if X1 ≠ 0, X2 ≠ 0, X3 ≠ 0, then τ > 3.
The time when something happens for the first time is a stopping time, e.g. the first time a Random Walk hits the value 1 (or 100). Say you gamble from 8pm to 11pm, and τ is the first time you win $100. By observing your winnings you can decide whether τ has or has not occurred.
A stopping time is allowed to take the value +∞ with positive probability. For example, if τ is the first time a RW with a positive drift hits 0, then

P(τ = ∞) > 0.
One way to see that a random variable is finite is to establish that it has finite mean.
If  E (τ ) < ∞  then  P (τ < ∞) = 1.
If τ1 and τ2 are stopping times then their minimum,

τ = min(τ1, τ2) = τ1 ∧ τ2,

is also a stopping time. We use this result mainly when one of the stopping times is a constant N. Clearly, any constant N is a stopping time. Then τ ∧ N is a stopping time, which is bounded by N. For example, if τ is the first time one wins $5 in a game of coin tossing, then
τ ∧ 10 is the time of winning $5 if it happens before 10 tosses, or time 10 if $5 were not won by toss 10.
Note that max(τ1, τ2) = τ1 ∨ τ2 and τ1 + τ2 are also stopping times, but we do not use these properties.
6.2 Optional Stopping Theorem
First, prove that a martingale has a constant mean at any deterministic time. For example, if (Mn) is a martingale, prove that

E(M5) = E(M4) = E(M3) = E(M2) = E(M1) = M0.

There is nothing special about time 5; the same can be proved for all fixed times. What if we substitute a random time for a fixed deterministic one? It turns out that the mean of the stopped martingale is still unchanged for some random times, such as bounded stopping times, but what we observed above might fail for a general random time. This is why we need the following theorem.
Theorem 17 (Optional Stopping Theorem) Let Mn be a martingale.
1. If  τ  ≤ K < ∞  is a bounded stopping time then 
E (M τ ) = E (M 0).
2. If the Mn are uniformly bounded, |Mn| ≤ C for all n, then for any stopping time τ (even a non-finite one),
E (M τ ) = E (M 0).
The proof of this theorem is outside this course.
6.3 Hitting probabilities in a simple Random Walk
Unbiased RW
Suppose that you are playing a game of chance by betting on the outcomes of tosses of a fair coin (p = 0.5). You win $1 if heads come up and lose $1 if tails come up. You start with $20. Find the probability of winning $10 before losing all your initial capital of $20.
Solution. Denote the required probability by a. Let X0, X1, . . . , Xn, . . . denote the capital at times 0, 1, . . . , n, . . .. Then X0 = 20 and, for any n, Xn+1 = Xn + Yn+1, where Yn+1 is the outcome of the (n + 1)st toss, Yn+1 = ±1 with probabilities 0.5. Thus Xn is an unbiased random walk. Denote by τ the time when you either win 10 or lose 20; in terms of the process Xn,

τ = min{n : Xn = 30 or Xn = 0}.

Here a is the probability that you win 10 before losing 20, i.e. Xτ = 30, and 1 − a is the probability that Xτ = 0, i.e. you lose 20 before winning 10. Thus the distribution of Xτ is: Xτ = 30 with probability a and Xτ = 0 with probability 1 − a. We have seen that the process Xn is a martingale. Applying the optional stopping theorem (without proving that we can),

E(Xτ) = E(X0) = X0 = 20.

On the other hand, calculating the expectation directly, E(Xτ) = 30 × a + 0 × (1 − a). From these two equations 30a = 20, so a = 2/3. Thus the probability of winning $10 before losing the initial capital of $20 is 2/3.
The same calculation shows that the probability of the process Xn hitting level b before it hits level c, having started at x, b < x < c, is given by

a = (c − x)/(c − b).
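The answer a = 2/3 in the gambling example can be confirmed by Monte Carlo (a Python sketch; the trial count is an arbitrary choice):

```python
import numpy as np

# Monte Carlo sketch of the gambling example: start at x=20, stop on hitting c=30 (win)
# or b=0 (ruin). Optional stopping predicts P(hit 30 first) = (x - b)/(c - b) = 2/3.
rng = np.random.default_rng(3)

def hits_top_first(x=20, b=0, c=30):
    while b < x < c:
        x += 1 if rng.integers(2) else -1    # fair +/-1 step
    return x == c

trials = 5_000
a_mc = sum(hits_top_first() for _ in range(trials)) / trials
print(a_mc)                                  # approximately 2/3
```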
Biased RW
Let a simple random walk move to the right with probability p and to the left with probability q = 1 − p (p ≠ q). We want to find the probability that it hits level b before it hits level c, when started at x, b < x < c. Let τ be the stopping time at which the random walk hits b or c.
Stopping the exponential martingale Mn = (q/p)^{Xn}, we have

E((q/p)^{Xτ}) = (q/p)^x.

But Xτ = b with probability a and Xτ = c with probability 1 − a. Hence

a(q/p)^b + (1 − a)(q/p)^c = (q/p)^x,

which gives

a = ((q/p)^x − (q/p)^c) / ((q/p)^b − (q/p)^c).
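A Monte Carlo check of this formula as well (Python sketch; p = 0.45 and the levels b, x, c are arbitrary choices):

```python
import numpy as np

# Sketch: compare the formula a = ((q/p)**x - (q/p)**c) / ((q/p)**b - (q/p)**c)
# with a direct simulation of a biased walk (up with prob p, down with prob q).
rng = np.random.default_rng(4)
p, q, b, x, c = 0.45, 0.55, 0, 5, 10
r = q / p
a_formula = (r**x - r**c) / (r**b - r**c)    # probability of hitting b before c

def hits_bottom_first(pos=x):
    while b < pos < c:
        pos += 1 if rng.random() < p else -1
    return pos == b

trials = 20_000
a_mc = sum(hits_bottom_first() for _ in range(trials)) / trials
print(a_formula, a_mc)                       # the two should be close
```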
6.4 Expected duration of a game

Unbiased RW. We use the martingale Mn = Xn² − n and stop it at τ. Assuming this is allowed, E(Xτ²) − E(τ) = x², so

E(τ) = ab² + (1 − a)c² − x²,

where a is the hitting probability in the unbiased RW.

Biased RW. We use the martingale Mn = Xn − nµ, where µ = p − q = 2p − 1. Stopping it at τ gives E(Xτ) − µE(τ) = x, so

E(τ) = (ab + (1 − a)c − x)/µ,

where a is the hitting probability in the biased RW.
Exercise: give a proof that optional stopping applies to the martingales above.
6.5 Discrete time Risk Model
Time is discrete, n = 0, 1, 2, . . . (years). The insurer charges a premium of ck > 0 in the kth year; below we take a constant premium, ck = c. Let Xk denote the aggregate claim amount (the sum total of all claims) in the kth year. The insurer has funds x at the start of year 1.

Un denotes the insurance company surplus at time n; U0 = x is the initial fund. The premium in year n is c, and the payout at time n is Xn (the aggregate claim). Then the equation for the surplus at the end of year n is

Un = U0 + cn − ∑_{k=1}^n Xk.
Assumptions.

c − E(Xn) > 0: the premiums are greater than the expected payout.

X1, X2, . . . are identically distributed and independent.
Exercise 1. Find the expected surplus and the sd of the surplus Un.

2. Use the Law of Large Numbers to give an approximate value for Un.

3. Use the Central Limit Theorem to give the approximate distribution of Un.
6.6 Ruin Probability

The probability of ruin is the probability that the surplus becomes negative. More precisely, the time of ruin is T = min{n : Un < 0}, where T = ∞ if Un ≥ 0 for all n = 1, 2, . . .. The probability that ruin has occurred by time n is

P(T ≤ n),

and the probability of eventual ruin is

P(T < ∞).

This probability is the central question of study in actuarial mathematics and insurance.
7 Applications in Insurance

Insurance is an agreement where, for an upfront payment (called the premium), the company agrees to pay the policyholder a certain amount if a specified loss occurs. The individual transfers this risk to an insurance company in exchange for a fixed premium.
Theorem 18 Assume that {c − Xk, k = 1, 2, . . .} are i.i.d. random variables, and there exists a constant R > 0 such that

E(e^{−R(c − X1)}) = 1.

Then for all n,

P(T ≤ n | U0 = x) ≤ e^{−Rx}.
Proof. Step 1: show that Mn = e^{−RUn} is a martingale. Step 2: use the martingale stopping theorem with the stopping time min(T, n) = T ∧ n. Step 3: extract information from the resulting equation.

Step 1. Finite expectation:

E|e^{−RUn}| = E(e^{−RUn}) = e^{−Rx} ∏_{k=1}^n E(e^{−R(c − Xk)}) = e^{−Rx} < ∞.
Proof of the martingale property. Since

Un+1 = Un + c − Xn+1,

we have

E(Mn+1|U1, . . . , Un) = E(e^{−RUn+1}|U1, . . . , Un)   by definition of Mn+1

= E(e^{−RUn − R(c − Xn+1)}|U1, . . . , Un)   by definition of Un+1

= e^{−RUn} E(e^{−R(c − Xn+1)}|U1, . . . , Un)   since Un is known

= e^{−RUn} E(e^{−R(c − Xn+1)})   by independence

= e^{−RUn}   by the definition of R.

This together with finite expectation implies that Mn = e^{−RUn}, n = 0, 1, . . ., is a martingale.
We have seen that T is a stopping time, and T ∧ n = min(T, n) is a stopping time bounded by n. We can apply the martingale stopping theorem, E(M_{T∧n}) = E(M0):

E(e^{−RU_{T∧n}}) = E(e^{−RU0}) = e^{−Rx}.
We now “open” T ∧ n by using indicators: T ∧ n = T·I(T ≤ n) + n·I(T > n). Thus

e^{−RU_{T∧n}} = e^{−RU_T} I(T ≤ n) + e^{−RU_n} I(T > n).

Hence

e^{−Rx} = E(e^{−RU_T} I(T ≤ n)) + E(e^{−RU_n} I(T > n))

≥ E(e^{−RU_T} I(T ≤ n))   since E(e^{−RU_n} I(T > n)) ≥ 0

≥ E(I(T ≤ n))   since U_T < 0 and hence e^{−RU_T} > 1

= P(T ≤ n), as required.
7.1 The bound for the ruin probability. The constant R.

The bound on the ruin probability is e^{−Rx}. We now turn to finding the constant R. It is found from the equation

E(e^{−R(c − X)}) = e^{−Rc} E(e^{RX}) = 1.

Recall that the second factor is the moment generating function of X evaluated at R, so the equation becomes

m_X(R) = E(e^{RX}) = e^{Rc}.
7.2 R in the Normal model

Example. Suppose that the aggregate claims have the N(µ, σ²) distribution. We give a bound on the ruin probability. The mgf of N(µ, σ²) is given by

m_X(R) = E(e^{RX}) = e^{Rµ + R²σ²/2}

(or use the formula for the Normal moment generating function). Thus the equation for R becomes

m_X(R) = e^{Rc}, i.e. Rµ + R²σ²/2 = Rc,

which gives

R = 2(c − µ)/σ².
Remark The aggregate claims in consecutive years X1, X2, . . . , Xn, . . . are assumed to have the same distribution, say that of X1. Suppose that there are n insured individuals. Then each has individual claim distribution Y, so that in one year the aggregate claim is

X1 = ∑_{i=1}^n Yi,

where Yi is the claim of person i. If the individual claim has mean µ_Y and variance σ_Y², then by the Central Limit Theorem

(X1 − nµ_Y)/(√n σ_Y) ≈ N(0, 1),

so X1 is approximately N(nµ_Y, nσ_Y²).
Example Consider a car owner who has an 80% chance of no accidents in a year, a 20% chance of being in a single accident in a year, and no chance of being in more than one accident in a year. For simplicity, assume that after an accident there is a 50% probability that the car will need repairs costing 500, a 40% probability that the repairs will cost 5000, and a 10% probability that the car will need to be replaced, which will cost 15,000. Hence the distribution of the random variable Y, the loss due to accident, is

f(x) = 0.80 if x = 0

f(x) = 0.10 if x = 500

f(x) = 0.08 if x = 5000

f(x) = 0.02 if x = 15000

The car owner's expected loss is the mean of this distribution, E(Y) = 750. The standard deviation of the loss is σ_Y ≈ 2442.
Consider an insurance company that will reimburse repair costs resulting from accidents for 100 such car owners. For the company the loss in one year is the sum of the losses for each car: if the loss to car i is Yi, then

X1 = ∑_{i=1}^{100} Yi.
Note that most of the Yi are zero; this fact is taken into account in the loss (claim) distribution. For the company, the expected loss in one year is the sum of the expected losses,

µ = µ_X = E(∑_{i=1}^{100} Yi) = 100 E(Y) = 75,000,

and the variance is

σ² = 100 σ_Y² ≈ 596,336,400

(using the rounded value σ_Y ≈ 2442; the exact variance is 596,250,000). So the aggregate loss in one year, X, has approximately a Normal distribution with these parameters. Suppose the premium is set to be 30% higher than the expected claim, c = 1.3µ. Then

R = 2(c − µ)/σ² = 2 × 22,500 / 596,336,400 ≈ 7.55 × 10⁻⁵.

So, if the company has an initial fund of x = 100,000 = 10⁵, the ruin probability is less than e^{−7.55} ≈ 0.0005. Note that an initial fund of only x = 10,000 = 10⁴ is not enough: the bound on the ruin probability is then only e^{−0.755} ≈ 0.47.
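The arithmetic of this example can be reproduced in a few lines of Python (the text rounds σ_Y to 2442; the code below keeps the exact values):

```python
import math

# Sketch: reproduce the numbers in the car-insurance example above.
probs = {0: 0.80, 500: 0.10, 5000: 0.08, 15000: 0.02}    # per-owner loss distribution
ey = sum(x * p for x, p in probs.items())                # 750
var_y = sum(x * x * p for x, p in probs.items()) - ey**2 # 5,962,500 (sd about 2442)

n = 100
mu = n * ey                                  # 75,000
sigma2 = n * var_y                           # 596,250,000 exactly (text rounds to 596,336,400)
c = 1.3 * mu                                 # premium with a 30% loading
R = 2 * (c - mu) / sigma2                    # about 7.55e-5

for x0 in (100_000, 10_000):
    print(x0, math.exp(-R * x0))             # ruin probability bounds: ~0.0005 and ~0.47
```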
7.3 Simulations
Suppose you want to simulate from a strictly increasing c.d.f. F. Let U be a uniform random variable on (0, 1). Then

Y = F⁻¹(U)

has c.d.f. F. In fact,

P(Y ≤ x) = P(F⁻¹(U) ≤ x) = P(U ≤ F(x)) = F(x).
Example: simulation of an exponential.
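For the exponential, F(x) = 1 − e^{−λx} gives F⁻¹(u) = −ln(1 − u)/λ, which leads to the following Python sketch (λ = 2 and the sample size are arbitrary choices):

```python
import math
import random

# Sketch: inverse-transform sampling for the Exp(lam) distribution.
# F(x) = 1 - exp(-lam*x), so F^{-1}(u) = -ln(1 - u)/lam applied to
# Uniform(0,1) draws produces Exp(lam) samples.
random.seed(5)
lam = 2.0
samples = [-math.log(1 - random.random()) / lam for _ in range(100_000)]
print(sum(samples) / len(samples))           # approximately 1/lam = 0.5
```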
8 Brownian Motion
The botanist R. Brown described the motion of a pollen particle suspended in fluid in 1828. It was observed that the particle moved in an irregular, random fashion. In 1900 L. Bachelier used Brownian motion as a model for the movement of stock prices in his mathematical theory of speculation. A. Einstein in 1905 explained Brownian motion as the result of bombardment of the particle by the molecules of the fluid. The mathematical foundation for Brownian motion as a stochastic process was laid by N. Wiener in 1931; hence it is also called the Wiener process.
8.1 Definition of Brownian Motion
Defining Properties of Brownian Motion {Bt}. Time t, 0 ≤ t ≤ T.

1. (Normal or Gaussian increments) For all s < t, Bt − Bs has the N(0, t − s) distribution, the Normal distribution with mean 0 and variance t − s.

2. (Independent increments) Bt − Bs is independent of the past, that is, of Bu, 0 ≤ u ≤ s.

3. (Continuity of paths) Bt, t ≥ 0, are continuous functions of t.

The initial point B0 is a constant, often 0. If B0 = x then Bt is BM started at x. We explain these properties below.
Defining Property 1 of Brownian Motion

Bt − Bs is N(0, t − s) for s < t. By Theorem 2 with σ = √(t − s), the distribution of Bt − Bs is the same as the distribution of √(t − s) Z, where Z is standard Normal. In particular,

E(Bt − Bs) = 0, so E(Bt) − E(Bs) = 0.

Thus for all s and t, E(Bt) = E(Bs); in particular,

E(Bt) = E(B0) = B0.
The last equality holds because the expectation of a constant is that constant. Next, for a random variable X with zero mean, EX = 0, we have

Var(X) = E(X − EX)² = E(X²).

Since Bt − Bs has zero mean, by a property of the N(0, σ²) distribution

E(Bt − Bs)² = Var(Bt − Bs) = t − s,   SD(Bt − Bs) = √(t − s).

If we take s = 0 then we obtain E(Bt − B0) = 0 and E(Bt − B0)² = t.
8.2 Independence of Increments

For any times s and t, s < t, the random variable Bt − Bs is independent of Bs and of all the variables Bu, u < s.
Theorem 19 Brownian motion has covariance function min(t, s).
Proof: Take t > s. Then Bt can be written as the sum of Bs and the increment (Bt − Bs):

Bt = Bs + (Bt − Bs).

Hence

E(BsBt) = E(Bs²) + E(Bs(Bt − Bs)).

Now Brownian motion has independent increments: (Bt − Bs) and Bs are independent, therefore the expectation of their product is the product of their expectations (Theorem 8), so that

E(Bs(Bt − Bs)) = E(Bs)E(Bt − Bs).

Brownian motion has Normal increments: (Bt − Bs) is N(0, t − s), so its mean is zero, E(Bt − Bs) = 0. It follows that

E(BsBt) = E(Bs²).

Next, writing Bs = B0 + (Bs − B0) and using independence of the terms, we have

E(Bs²) = E(B0² + (Bs − B0)² + 2B0(Bs − B0)) = E(B0²) + s = B0² + s.

Here we used that E(Bs − B0)² = s, the variance of the N(0, s) distribution; that B0 is non-random, so E(B0²) = B0²; and that E(Bs) = E(B0 + (Bs − B0)) = E(B0) = B0. Therefore

Cov(Bs, Bt) = E(BtBs) − E(Bt)E(Bs) = B0² + s − B0² = s.

If t < s, then similarly (or by exchanging the roles of s and t) Cov(Bs, Bt) = t. Therefore

Cov(Bs, Bt) = min(t, s).
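The covariance min(t, s) can be checked by simulating the decomposition Bt = Bs + (Bt − Bs) used in the proof (a Python sketch with B0 = 0; s = 0.5, t = 1.5 are arbitrary choices):

```python
import numpy as np

# Sketch: sample B_s ~ N(0, s) and B_t = B_s + independent N(0, t - s),
# then check that the empirical Cov(B_s, B_t) is about min(s, t).
rng = np.random.default_rng(6)
s, t, paths = 0.5, 1.5, 1_000_000
bs = rng.normal(0.0, np.sqrt(s), size=paths)            # B_s
bt = bs + rng.normal(0.0, np.sqrt(t - s), size=paths)   # B_t via an independent increment
cov = np.mean(bs * bt)                       # means are 0, so Cov = E(B_s B_t)
print(cov)                                   # approximately min(s, t) = 0.5
```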
9 Brownian Motion is a Gaussian Process
The distributions of  B (t) for any time  t  are called marginal distributions of Brow- nian motion.
The joint distributions of the vector (B(t1), B(t2)) of Brownian motion sampled at two arbitrary times  t1 < t2  are called bivariate distributions.
Similarly for any n  the joint distributions of the vector (B(t1), B(t2), . . . , B(tn)) of Brownian motion sampled at  n  arbitrary times  t1  < t2  < . . . < tn  are called  n- dimensional distributions.
Finite dimensional distributions are the joint distributions for n = 1, 2, 3, . . .. To describe a random process it is not enough to know the distributions of its values at each single time t; one also needs the joint distributions. A stochastic (random) process is called Gaussian if all its finite dimensional distributions are multivariate Normal. In this lecture we prove that Brownian motion is a Gaussian process.
Theorem 20 Brownian Motion is a Gaussian process.
9.1 Proof of Gaussian property of Brownian Motion

Proof: of Theorem 20.
We need to show that all joint distributions of BM at time points  t1, t2, . . . , tn for all  n  = 1, 2, . . . are Multivariate Normal. Take BM started at 0,  B0  = 0.
Start with  n  = 1. By the property of increments of BM, with s  = 0 we have Bt − B0  has  N (0, t) distribution. Hence  Bt  has  N (0, t) distribution.
Now take  n = 2. Write
(B(t1), B(t2)) = (B(t1), B(t1) + (B(t2) − B(t1))).
Denote X = B(t1) and Y = B(t2) − B(t1). By the independence of increments of BM, X and Y are independent, X ∼ N(0, t1), Y ∼ N(0, t2 − t1). Then

(B(t1), B(t2)) = (X, X + Y).

Write X = √t1 Z1 and Y = √(t2 − t1) Z2, where Z1, Z2 are independent standard Normal. Denote σ1 = √t1, σ2 = √(t2 − t1). Then

(X, X + Y) = (σ1Z1, σ1Z1 + σ2Z2) = AZ,

where the matrix

A = ( σ1  0
      σ1  σ2 )

and Z = (Z1, Z2). Hence (B(t1), B(t2)) is bivariate Normal.
Similarly, for n = 3, the joint distribution of the vector (B(t1), B(t2), B(t3)) is trivariate Normal with mean (0, 0, 0) and covariance matrix

( t1  t1  t1
  t1  t2  t2
  t1  t2  t3 ).
For a general n one can complete the proof by induction. Alternatively, write directly

(B(t1), B(t2), . . . , B(tn)) = (Y1, Y1 + Y2, . . . , Y1 + Y2 + · · · + Yn),

where Y1 = B(t1) and, for k > 1, Yk = B(tk) − B(tk−1). Then by the independence of increments of Brownian motion the Yk are independent. They also have Normal distributions, Y1 ∼ N(0, t1) and Yk ∼ N(0, tk − tk−1); B(t2) = Y1 + Y2, etc., B(tk) = Y1 + Y2 + · · · + Yk. The variables Z1 = Y1/√t1 and Zk = Yk/√(tk − tk−1) are independent standard Normal. Thus

(B(t1), B(t2), . . . , B(tn)) = AZ,

where A is the lower-triangular matrix with entries Aij = σj for j ≤ i and 0 otherwise, σ1 = √t1, σk = √(tk − tk−1). Hence (B(t1), B(t2), . . . , B(tn)) is multivariate Normal.
Corollary Brownian motion is a Gaussian process with constant mean function, and covariance function min(t, s).
Example Find the distribution of B(1) + B(2) + B(3) + B(4). Consider X = (B(1), B(2), B(3), B(4)). Since Brownian motion is a Gaussian process, all its finite dimensional distributions are Normal; in particular, X has a multivariate Normal distribution with mean vector zero and covariance matrix given by σij = Cov(Xi, Xj). For example, Cov(X1, X3) = Cov(B(1), B(3)) = 1.
Σ = ( 1  1  1  1
      1  2  2  2
      1  2  3  3
      1  2  3  4 ).
Now let a = (1, 1, 1, 1). Then

aX = X1 + X2 + X3 + X4 = B(1) + B(2) + B(3) + B(4).

aX has a Normal distribution with mean zero and variance aΣaᵀ, which in this case is the sum of all the elements of the covariance matrix. Thus B(1) + B(2) + B(3) + B(4) has a Normal distribution with mean zero and variance 30. Alternatively, we can calculate the variance of the sum by the covariance formula:

Var(X1 + X2 + X3 + X4) = Cov(X1 + X2 + X3 + X4, X1 + X2 + X3 + X4) = ∑_{i,j} Cov(Xi, Xj) = ∑_{i,j} min(i, j) = 30.
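The variance 30 can be checked mechanically (a short Python sketch with NumPy):

```python
import numpy as np

# Sketch: build the covariance matrix sigma_ij = min(i, j) for i, j = 1..4 and check
# that Var(B(1)+B(2)+B(3)+B(4)) = a Sigma a^T, the sum of all matrix entries.
times = np.arange(1, 5)
sigma = np.minimum.outer(times, times)   # [[1,1,1,1],[1,2,2,2],[1,2,3,3],[1,2,3,4]]
a = np.ones(4)
var_sum = float(a @ sigma @ a)
print(var_sum)                           # 30.0
```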
9.2 Processes obtained from Brownian motion

Two processes used in applications are the arithmetic and the geometric Brownian motion.

Arithmetic Brownian motion: Xt = µt + σBt, where µ and σ are constants. This is also known as Brownian motion with drift.

Theorem 21 If Xt is the Brownian motion with drift above, then (Xt − µt)/σ is a standard Brownian motion.

It is easy to show that Xt is a Gaussian process. Calculation of its mean and covariance functions is left as an exercise.

Geometric Brownian motion:

St = S0 e^{µt + σBt}.

What is the distribution of St? Compute its mean and variance.
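As a hint: µt + σBt has the N(µt, σ²t) distribution, so St is lognormal, and by the exponential moment of the Normal distribution (Section 2.5) E(St) = S0 e^{(µ + σ²/2)t}. A simulation sketch confirming this (the parameter values are arbitrary choices):

```python
import numpy as np

# Sketch: S_t = S_0 * exp(mu*t + sigma*B_t) is lognormal with
# E(S_t) = S_0 * exp((mu + sigma**2/2) * t); compare simulation with the formula.
rng = np.random.default_rng(7)
s0, mu, sigma, t = 1.0, 0.05, 0.2, 1.0
bt = rng.normal(0.0, np.sqrt(t), size=1_000_000)        # B_t ~ N(0, t)
st = s0 * np.exp(mu * t + sigma * bt)
theory = s0 * np.exp((mu + sigma**2 / 2) * t)
print(st.mean(), theory)                     # both approximately exp(0.07) = 1.0725
```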
In particular, we have a

Corollary X̂ = E(X|Y) and X − X̂ = X − E(X|Y) are uncorrelated.
Proof: of the Theorem that the best predictor is X̂ = E(X|Y). Take any Z which is a function of Y. We need to show that

E(X − X̂)² ≤ E(X − Z)².

E(X − Z)² = E((X − X̂) + (X̂ − Z))²

= E(X − X̂)² + 2E((X − X̂)(X̂ − Z)) + E(X̂ − Z)²

= E(X − X̂)² + E(X̂ − Z)², by the previous result,

≥ E(X − X̂)².

Thus X̂ = E(X|Y) is the optimal, best predictor/estimator.
9.3 Conditional expectation with many predictors
Let X, Y1, Y2, . . . , Yn be random variables. By definition, the optimal predictor X̂ minimizes the mean square error, i.e. for any Z that is a function of the Y's,

E(X − X̂)² ≤ E(X − Z)².
Theorem 22 The best predictor X̂ based on Y1, Y2, . . . , Yn is given by

X̂ = E(X|Y1, Y2, . . . , Yn).
Conditional expectation given many random variables is defined similarly, as the mean of the conditional distribution. It is denoted by E(X|Y1, Y2, . . . , Yn). Notation: if we denote the information generated by Y1, Y2, . . . , Yn by Fn, then

E(X|Y1, Y2, . . . , Yn) = E(X|Fn).
Note that it is often hard to find a formula for the conditional expectation. But in the multivariate Normal case it is known, and is established by direct calculation.
Theorem 23 (Normal Correlation) Suppose X and Y jointly form a multivariate normal distribution. Then the vector of conditional expectations is given by the following:
E(X|Y) = E(X) + Cov(X, Y) Cov⁻¹(Y, Y) (Y − E(Y)).
Cov(X, Y) denotes the matrix with elements Cov(Xi, Yj), and Cov⁻¹(Y, Y) denotes the inverse of the covariance matrix of Y.
Example (Best predictor of X based on Y in the Bivariate Normal) Direct application of the formula gives

E(X|Y) = EX + (Cov(X, Y)/Var(Y)) (Y − EY).
Example (Best predictor of a future value of Brownian motion based on the present value) Consider the best predictor of Brownian motion Bt+s at the future time t + s if we know the present value Bt. Since (Bt, Bt+s) is bivariate Normal with Var(Bt) = t, Cov(Bt, Bt+s) = min(t, t + s) = t and EBt = 0, we obtain

E(Bt+s|Bt) = Bt.

Further, one can check that even if we know many past values of Brownian motion at times t1 < t2 < . . . < tn = t,

E(Bt+s|Bt1, Bt2, . . . , Bt) = Bt.

This is known as the martingale property of Brownian motion.
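This can be illustrated numerically: the regression slope of Bt+s on Bt estimates Cov(Bt, Bt+s)/Var(Bt) = t/t = 1, so the best predictor of Bt+s is Bt itself (a Python sketch; t = 1, s = 0.5 are arbitrary choices):

```python
import numpy as np

# Sketch: the best linear predictor of B_{t+s} given B_t has slope
# Cov(B_t, B_{t+s})/Var(B_t) = t/t = 1 and intercept 0, i.e. E(B_{t+s}|B_t) = B_t.
rng = np.random.default_rng(8)
t, s, paths = 1.0, 0.5, 200_000
bt = rng.normal(0.0, np.sqrt(t), size=paths)             # B_t ~ N(0, t)
bts = bt + rng.normal(0.0, np.sqrt(s), size=paths)       # add an independent N(0, s) increment
slope = np.cov(bt, bts)[0, 1] / np.var(bt)
print(slope)                                 # approximately 1
```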
A process Mt, t ≥ 0, is a martingale if

• for all t, E|Mt| < ∞

• for all t and s > 0, E(M