
Introduction to Probability Theory and

Stochastic Processes for Finance∗

Lecture Notes

Fabio Trojani

Department of Economics, University of St. Gallen, Switzerland

∗Correspondence address: Fabio Trojani, Swiss Institute of Banking and Finance, University of St. Gallen, Rosenbergstr. 52, CH-9000 St. Gallen, e-mail: [email protected].


Contents

1 Introduction to Probability Theory 4

1.1 The Binomial Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.1 The Risky Asset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.2 The Riskless Asset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.3 A Basic No Arbitrage Condition . . . . . . . . . . . . . . . . . . . . . . . . 5

1.1.4 Some Basic Remarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

1.1.5 Pricing Derivatives: a first Example . . . . . . . . . . . . . . . . . . . . . . 5

1.2 Finite Probability Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.1 Measurable Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2.2 Probability measures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

1.2.3 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

1.2.4 Expected Value of Random Variables Defined on Finite Measurable Spaces 15

1.2.5 Examples of Probability Spaces and Random Variables with Finite Sample Space . . . . . . . 16

1.3 General Probability Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

1.3.1 Some First Examples of Probability Spaces with non finite Sample Spaces . 18

1.3.2 Continuity Properties of Probability Measures . . . . . . . . . . . . . . . . 20

1.3.3 Random Variables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

1.3.4 Expected Value and Lebesgue Integral . . . . . . . . . . . . . . . . . . . . . 25

1.3.5 Some Further Examples of Probability Spaces with uncountable Sample Spaces 28

1.4 Stochastic Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

2 Conditional Expectations and Martingales 33

2.1 The Binomial Model Once More . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

2.2 Sub Sigma Algebras and (Partial) Information . . . . . . . . . . . . . . . . . . . . 34


2.3 Conditional Expectations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.3.1 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.3.2 Definition and Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.4 Martingale Processes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

3 Pricing Principles in the Absence of Arbitrage 44

3.1 Stock Prices, Risk Neutral Probability Measures and Martingales . . . . . . . . . . 45

3.2 Self Financing Strategies, Risk Neutral Probability Measures and Martingales . . . 46

3.3 Existence of Risk Neutral Probability Measures and Derivatives Pricing . . . . . . 48

3.4 Uniqueness of Risk Neutral Probability Measures and Derivatives Hedging . . . . . 50

3.5 Existence of Risk Neutral Probability Measures and Absence of Arbitrage . . . . . 52

4 Introduction to Stochastic Processes 52

4.1 Basic Definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

4.2 Discrete Time Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54

4.3 Girsanov Theorem: Application to a Semicontinuous Pricing Model . . . . . . . . . 57

4.3.1 A Semicontinuous Pricing Model . . . . . . . . . . . . . . . . . . . . . . . . 57

4.3.2 Risk Neutral Valuation in the Semicontinuous Model . . . . . . . . . . . . . 58

4.3.3 A Discrete Time Formulation of Girsanov Theorem . . . . . . . . . . . . . . 60

4.3.4 A Discrete Time Derivation of Black and Scholes Formula . . . . . . . . . . 64

4.4 Continuous Time Brownian Motion . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

5 Introduction to Stochastic Calculus 71

5.1 Starting Point, Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

5.2 The Stochastic Integral . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

5.2.1 Some Basic Preliminaries . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

5.2.2 Simple Integrands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


5.2.3 Squared Integrable Integrands . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.2.4 Properties of Stochastic Integrals . . . . . . . . . . . . . . . . . . . . . . . . 84

5.3 Ito’s Lemma . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

5.3.1 Starting Point, Motivation and Some First Examples . . . . . . . . . . . . . 85

5.3.2 A Simplified Derivation of Ito’s Formula . . . . . . . . . . . . . . . . . . . . 88

5.4 An Application of Stochastic Calculus: the Black-Scholes Model . . . . . . . . . . 93

5.4.1 The Black-Scholes Market . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

5.4.2 Self Financing Portfolios and Hedging in the Black-Scholes Model . . . . . 93

5.4.3 Probabilistic Interpretation of Black-Scholes Prices: Girsanov Theorem once more . . . . . . . 95


1 Introduction to Probability Theory

1.1 The Binomial Model

We start with the binomial model to introduce some basic ideas of probability theory related to

the pricing of contingent claims, basically for the following reasons:

• It is a simple setting where the arbitrage concept and its relation to risk neutral pricing can

be explained

• It is a model used in practice where binomial trees are calibrated to real data, for instance

to price American derivatives

• It is a simple setting to introduce the concept of conditional expectations and martingales,

which are at the heart of the theory of derivatives pricing.

1.1.1 The Risky Asset

St is the price of a risky stock at time t ∈ I, where we start for simplicity with a discrete time

index I = {0, 1, 2}. The dynamics of St is defined by

St = uSt−1 with probability p ,
St = dSt−1 with probability 1 − p ,

where p ∈ (0, 1). We impose for brevity the further condition

u = 1/d > 1 ,

giving a recombining tree.

1.1.2 The Riskless Asset

Bt is the price at time t of a riskless money account. r > 0 is the riskless interest rate on the

money account, implying

Bt = (1 + r)Bt−1


for any t = 1, 2. For simplicity we impose the normalization B0 = 1.

1.1.3 A Basic No Arbitrage Condition

A necessary condition for the absence of arbitrage opportunities in our model is

d < 1 + r < u . (1)

Example 1 In the sequel we will often use a numerical example with parameters S0 = 4, u =

1/d = 2, r = 0.25.

1.1.4 Some Basic Remarks

Notice that to any trajectory TT, TH, HT, HH in the tree we can associate the corresponding

values of S1 and S2. Thus, from the perspective of time 0, both S1 and S2 are random entities

whose value depends on which event/trajectory will be realized in the model. To fully describe

the random behaviour of S1 and S2 we can make use of the space Ω = {TT, TH, HT, HH} of

all random sequences that can be realized on the tree. Basically, Ω contains all the information

about the single outcomes that can be realized in our model.

Definition 2 (i) The set Ω of all possible outcomes in a random experiment is called the sample

space. (ii) Each single element ω ∈ Ω is called an outcome of the random experiment.

Example 3 In the above two period model we had Ω = {TT, TH, HT, HH} and ω = TT or
ω = TH or ω = HT or ω = HH.

Exercise 4 Give the sample space and all single outcomes in a binomial tree with three periods.

1.1.5 Pricing Derivatives: a first Example

Definition 5 A European call option with strike price K and maturity T ∈ I is the right to buy

at time T the underlying stock for the price K. We denote by ct the price of the European call

option at time t.


From the definition we immediately have for the pay-off at maturity of the call option:

cT = ST − K  if ST > K ,
cT = 0  if ST ≤ K ,

or, more compactly:

cT = (ST −K)+ ,

where (x)+ := max (x, 0) is the positive part of x.

Remark 6 Notice that cT depends on ω ∈ Ω only through ST (ω). The goal in any pricing model

is to determine the time 0 price (as for instance the price c0) of a derivative payoff falling due at a
later time T, say (as for instance the pay-off cT = (ST − K)+).

Assumption 7 To illustrate the main ideas we start with T = 1.

Definition 8 A (perfect) hedging portfolio for cT with value V0 at time 0 is a position in ∆0 stock

and V0 −∆0S0 money accounts (recall the normalization B0 = 1), such that

c1(H) = ∆0 S1(H) + (V0 − ∆0 S0)(1 + r)
c1(T) = ∆0 S1(T) + (V0 − ∆0 S0)(1 + r)     (2)

Remark 9 A (perfect) hedging portfolio replicates exactly the future pay-off of the derivative to

be hedged. Therefore, it is a vehicle to fully eliminate the risk intrinsic in the randomness of the

future value of a derivative.

Proposition 10 (i) For T = 1, the quantity ∆0 is given by

∆0 = (c1(H) − c1(T)) / (S1(H) − S1(T)) .     (3)

∆0 is called the "delta" of the hedging portfolio. (ii) The risk neutral valuation formula follows:

c0 = V0 = (1/(1 + r)) [p̃ c1(H) + (1 − p̃) c1(T)] ,

where

p̃ = (1 + r − d) / (u − d) .


Proof. (i) Compute the difference between the first and the second equation in (2) and solve

for ∆0. (ii) Insert ∆0 given by (3) in one of the two equations in (2) and solve for V0. Absence of

arbitrage then implies V0 = c0.

Remark 11 (i) The price V0 = c0 does not depend on the binomial probability p. (ii) Under the
given conditions (cf. (1)) one has p̃ ∈ (0, 1). Therefore the identity

c0 = (1/(1 + r)) [p̃ c1(H) + (1 − p̃) c1(T)]

says that the price c0 is a discounted expectation of the call's future random pay-offs, computed
using the risk adjusted probabilities p̃ and (1 − p̃). More compactly, we could thus write

c0 = Ẽ(c1) / (1 + r) ,

where Ẽ denotes the expectation under the probabilities p̃, 1 − p̃. This is a so called risk adjusted
(or risk neutral) valuation formula.
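For concreteness, the following Python sketch reproduces the one period hedging and pricing computation with the parameters of Example 1; the strike K = 5 is an arbitrary illustrative choice, not taken from the text.

```python
# One-period binomial pricing and hedging (sketch; K = 5 is an assumed strike).
S0, u, r = 4.0, 2.0, 0.25
d = 1.0 / u
K = 5.0  # hypothetical strike for illustration only

S1_H, S1_T = u * S0, d * S0                              # stock in the up and down state
c1_H, c1_T = max(S1_H - K, 0.0), max(S1_T - K, 0.0)      # call pay-off c1 = (S1 - K)^+

delta0 = (c1_H - c1_T) / (S1_H - S1_T)                   # hedge ratio, equation (3)
p_tilde = (1 + r - d) / (u - d)                          # risk neutral probability
c0 = (p_tilde * c1_H + (1 - p_tilde) * c1_T) / (1 + r)   # risk neutral valuation formula
V0 = delta0 * S0 + (c1_H - delta0 * S1_H) / (1 + r)      # cost of the replicating portfolio

print(delta0, p_tilde, c0, V0)   # c0 and V0 coincide in the absence of arbitrage
```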

Exercise 12 (i) For the case T = 1 and for the model parameters in Example 1 compute the

numerical value of c0. (ii) For the case T = 2 compute recursively the hedging portfolio of the

derivative, starting from ∆1 (H), ∆1 (T ), V1 (H), V1 (T ), and finishing with ∆0 and V0.

1.2 Finite Probability Spaces

In the sequel we let Ω ≠ ∅ be a given sample space.

1.2.1 Measurable Spaces

Let F be the family of all subsets of Ω; F is an example of a so called sigma algebra, a concept

that we define in the sequel.

Definition 13 (i) A sigma algebra G ⊂ F is a family of subsets of Ω such that:

1. ∅ ∈ G


2. If A ∈ G then it follows Ac ∈ G

3. If (Ai)i∈N ⊂ G is a countable sequence in G, then it follows ∪i∈N Ai ∈ G

(ii) The couple (Ω,G) is called a measurable space.

Example 14 (i) F is a sigma algebra, the finest one on Ω. Indeed, ∅ ∈ F . Moreover, for any

set A ∈ F the complement Ac is a subset of Ω, i.e. is in F . The same holds for any (not only for

a countable) union of sets in F . (ii) The subfamily G := {∅, Ω} is the coarsest sigma algebra on

Ω. (iii) In the setting of the binomial model of Example 1, it is easy to verify (please do it!) that

the subfamily

G := {∅, Ω, {HT,HH}, {TT,TH}} ,

is a sigma algebra, the sigma algebra generated by the first period price movements in the model.

Remark 15 We make use of sigma algebras to model different information sets at the disposal

of the investor in doing her portfolio choices. For instance, in the setting of the binomial model

of Example 1, the information available at time 0 (before observing prices) can be modelled by the

trivial information set

G0 := {∅, Ω} .

That is, at time 0 investors only know that the possible realized outcome ω has to be an element

of the sample space Ω. At time 1 investors can observe S1. Thus, depending on the value of S1

they will know at time 1 that either

ω ∈ {HT,HH} (if and only if S1(ω) = S0u) ,

or

ω ∈ {TT,TH} (if and only if S1(ω) = S0d) .


Thus at time 1 investors do not have full information about ω, since they still do not know the

direction of the price movement in period 2. However, they can determine to which specific event

of their information set ω belongs. The larger (smaller) this set, the preciser (the rougher) the

information on the realized outcome ω. For instance, while at time 0 investors only know that

the outcome will be an element of the sample space, at time 1 they know that the outcome implies

either an upward or a downward price movement in the first period. Based on these considerations

a natural sigma algebra G1 to model investors price information at time 1 is

G1 := {∅, Ω, {HT,HH}, {TT,TH}} ,

(verify that G1 is indeed a sigma algebra). Similarly, by observing only the price S2 investors will

know at time 2 that either

ω = HH (if and only if S2(ω) = S0u²) ,

or

ω = TT (if and only if S2(ω) = S0d²) ,

or

ω ∈ {TH, HT} (if and only if S2(ω) = S0du) .

On the other hand, by observing the prices S1 and S2 investors will know at time 2

ω = HH (if and only if S2(ω) = S0u²) ,

or

ω = TT (if and only if S2(ω) = S0d²) ,

or

ω = TH (if and only if S1(ω) = S0d and S2(ω) = S0du) ,

or

ω = HT (if and only if S1(ω) = S0u and S2(ω) = S0du) .


Based on these considerations a natural sigma algebra G2 to model investors price information up

to time 2 is the smallest one containing the system of subsets of Ω given by

E2 := {∅, Ω, {HT}, {HH}, {TT}, {TH}} .

We denote this sigma algebra by G2 = σ (E2). Finally, the sigma algebra representing the infor-

mation obtained by observing only the price S2 is

G3 = {∅, Ω, {HH}, {TH,HT,TT}, {TT}, {TH,HT,HH}, {TH,HT}, {TT,HH}}

Notice that while the relation G0 ⊂ G1 ⊂ G2 implies an information set growing over time, we do

not have G1 ⊂ G3 (why?). Therefore, the sequence of sigma algebras G0,G1,G3 is not consistent

with the idea of an investor’s information set growing over time.
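Since Ω is finite, these sigma algebras can be enumerated mechanically: a partition of Ω generates the sigma algebra consisting of all unions of its blocks. A minimal Python sketch (the helper sigma_algebra is a hypothetical illustration) that builds G1, G2 and G3 and checks which inclusions hold:

```python
from itertools import combinations

Omega = {"HH", "HT", "TH", "TT"}

def sigma_algebra(partition):
    """All unions of blocks of a partition of Omega (a finite sigma algebra)."""
    blocks = [frozenset(b) for b in partition]
    events = set()
    for k in range(len(blocks) + 1):
        for combo in combinations(blocks, k):
            events.add(frozenset().union(*combo))
    return events

# Partitions induced by observing S1, (S1, S2) and S2 only, respectively.
G1 = sigma_algebra([{"HH", "HT"}, {"TH", "TT"}])
G2 = sigma_algebra([{"HH"}, {"HT"}, {"TH"}, {"TT"}])
G3 = sigma_algebra([{"HH"}, {"TT"}, {"HT", "TH"}])

print(G1 <= G2)   # True:  G1 is contained in G2 (a filtration)
print(G1 <= G3)   # False: observing S2 alone does not reveal S1
```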

Exercise 16 (Borel sigma algebra on R) Let Ω := R and denote by T the set of all open intervals

in R

T = {(a, b) | a ≤ b, a, b ∈ R} .

1. Show with a simple counterexample that T is not a sigma algebra on R.

2. We know that there does exist a sigma algebra over R containing T (which one?). Thus,

there also exists a "minimal sigma algebra" containing T , the so-called Borel sigma algebra
over R (denoted by B(R)), which has to be of the form

B(R) = ⋂ {G : G is a σ-algebra over R and T ⊂ G} .

To show that B (R) is indeed a sigma algebra over R it is thus sufficient to show that in-

tersections of sigma algebras are sigma algebras. Do this, by verifying the corresponding

definition.

3. Show, using simple set operations, that the events (−∞, a), (a,∞), [a, b], (a, b], {a}, where

a ≤ b, are elements of B (R).


4. Show that any countable subset {ai}i∈N of R is an element of B (R).

As mentioned, a natural way to model a growing amount of information over time is through

increasing sequences of sigma algebras. This is the next definition.

Definition 17 Let (Ω,G) be a measurable space. A sequence (Gi)i=0,1,...,n of sigma algebras over

Ω such that

G0 ⊂ G1 ⊂ ... ⊂ Gn ⊂ G ,

is called a filtration.

Example 18 In Remark 15 the sequence (Gi)i=0,1,2 is a filtration, while the sequence (Gi)i=0,1,3

is not.

1.2.2 Probability measures

For the whole section let (Ω,G) be a measurable space.

Definition 19 We say that an event A ∈ G is realized in a random experiment with sample space

Ω if ω ∈ A.

Example 20 In the two period binomial model we have

{TH, TT} = {The stock price drops in the first period} .

Thus, if at time 1 we observe T, then {TH, TT} is realized. On the other hand, if we observe H, then
{TH, TT} is not realized (i.e. Ac = {HT, HH} is realized).

The next step is to assign in a consistent way probabilities to events that can be realized in a

random experiment.


Definition 21 (i) A probability measure on (Ω,G) is a function P : G → [0, 1] such that:

1. P (Ω) = 1

2. For any disjoint sequence (Ai)i∈N ⊂ G such that Ai ∩ Aj = ∅ for i ≠ j it follows

P(∪i∈N Ai) = Σi∈N P(Ai) .

This property is called sigma additivity.

(ii) We call a triplet (Ω,G, P ) a probability space.

Example 22 In the two period binomial model we set Ω = {TT, TH, HT, HH}, G = F , and
define probabilities with the binomial rule

P({HH}) = p² , P({TT}) = (1 − p)² , P({TH}) = P({HT}) = p(1 − p) .

The sigma additivity then implies, for instance,

P({HT, HH}) = P({HH}) + P({HT}) = p² + p(1 − p) .

More generally, we have, in this finite sample space setting:

P(A) = Σω∈A P({ω}) .

Proposition 23 Let (Ω,G, P ) be a probability space. We have:

1. P (A\B) = P (A)− P (A ∩B)

2. P (A ∪B) = P (A) + P (B)− P (A ∩B)

3. P (Ac) = 1− P (A)

4. If A ⊂ B then P (A) ≤ P (B)


Proof. 1. A\B = A ∩ Bc and A = (A ∩ B) ∪ (A ∩ Bc). By sigma additivity it follows:

P (A) = P (A ∩B) + P (A ∩Bc) = P (A ∩B) + P (A\B) .

2. A ∪B = (A\B) ∪B. Therefore, using 1 and by sigma additivity:

P (A ∪B) = P (A\B) + P (B) = P (A) + P (B)− P (A ∩B) .

3. This is a particular case of 1. with A = Ω and B = A. 4. By 1. we have, under the given

assumption:

P (B) = P (B ∩A) + P (B\A) = P (A) + P (B\A) ≥ P (A) .

Remark 24 In Definition 21, the condition 1. for a probability measure implies the condition,

1’. P (∅) = 0.

In fact, a function µ : G → [0,∞] satisfying condition 1’. and 2. in Definition 21 is called a

measure on the measurable space (Ω,G). Notice, that in this case we can have µ (Ω) = ∞.

Exercise 25 The Lebesgue measure on the measurable space (R,B (R)) (denoted by µ0) is a mea-

sure µ0 : B (R) → [0,∞] such that

µ0 ((a, b)) = b− a

for any open interval (a, b), a ≤ b. It can be shown that Lebesgue measure exists and is unique

(we will not prove this, we will just assume it in the sequel). Show the following properties of

Lebesgue measure, using the general definition of a measure.

1. µ0 (∅) = 0, µ0 (R+) = ∞

2. µ0({a}) = 0 for any a ∈ R

3. For any countable subset {ai}i∈N of R one has µ0({ai}i∈N) = 0.


1.2.3 Random Variables

For the whole section let (Ω,G) be a measurable space such that the cardinality of Ω is finite

(|Ω| < ∞). We will extend the concept of a random variable to non finite sample spaces in a later

section.

Definition 26 Let X : Ω → R be a function from Ω to the real line. (i) The sigma algebra

σ(X) := {X−1(B) : B is a subset of R} ,

where X−1(B) is a short notation for the preimage {ω : X(ω) ∈ B} of B under X, is called the

sigma algebra generated by X. (ii) X is called a random variable on (Ω,G) if it is measurable with

respect to G, that is if

σ (X) ⊂ G .

Remark 27 (i) It is useful to know some properties of preimages. We have for any subset B of

R, and for any (not necessarily countable) family (Bα)α∈A of subsets of R:

X−1(Bc) = (X−1(B))c

X−1(∪α∈A Bα) = ∪α∈A X−1(Bα)

X−1(∩α∈A Bα) = ∩α∈A X−1(Bα)

(ii) σ (X) is a sigma algebra. Indeed, ∅ = X−1 (∅) ∈ σ (X). Moreover, if A = X−1 (B) for some

subset B of R, then

Ac = (X−1(B))c = X−1(Bc) ∈ σ(X) ,

because Bc is a subset of R. Similarly, given a sequence (Ai)i∈N such that Ai = X−1(Bi) for a
sequence of subsets (Bi)i∈N of R we have:

∪i∈N Ai = ∪i∈N X−1(Bi) = X−1(∪i∈N Bi) ∈ σ(X) ,

because ∪i∈N Bi is a subset of R. (iii) σ(X) represents the (partial) information set that is

available about an outcome ω ∈ Ω by observing the values of X.


Example 28 In the two period binomial model S0, S1 and S2 are all (trivially) measurable with

respect to the finest sigma algebra F over Ω. However, since S0 is constant we have

σ(S0) = {∅, Ω} = G0 ,

and S0 is G0 measurable. Further,

σ(S1) = {∅, Ω, {HT,HH}, {TT,TH}} = G1 ,

and S1 is G1 but not G0 measurable. Finally,

σ(S2) = {∅, Ω, {HH}, {TH,HT,TT}, {TT}, {TH,HT,HH}, {TH,HT}, {TT,HH}} = G3 .

Therefore, S2 is G3 but not G1 measurable. On the other hand, S1 is G1 but not G3 measurable

(why?).

1.2.4 Expected Value of Random Variables Defined on Finite Measurable Spaces

For the whole section let (Ω,G, P ) be a probability space such that the cardinality of Ω is finite

(|Ω| < ∞). We will extend the concept of expected value of a random variable to the non finite

sample space setting in a later section. Further, let X : (Ω,G) → R be a random variable.

Definition 29 (i) The expected value E (X) of a random variable X defined on a finite sample

space is given by

E(X) := Σω∈Ω X(ω) P({ω}) .

(ii) The variance Var(X) of X is given by

Var(X) := E[(X − E(X))²] = E(X²) − (E(X))² .


Example 30 In the two period binomial model of Example 1 we have:

S2(HH) = 16 ; P({HH}) = p²
S2(HT) = S2(TH) = 4 ; P({TH}) = P({HT}) = p(1 − p)
S2(TT) = 1 ; P({TT}) = (1 − p)²

Therefore,

E(S2) = 16 · p² + 4 · 2 · p(1 − p) + 1 · (1 − p)² .
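A short numerical check of Definition 29 on this example, with p = 0.5 as an illustrative value:

```python
# Expected value on a finite sample space: E(X) = sum over omega of X(omega) P({omega}).
p = 0.5            # illustrative value; the formula holds for any p in (0, 1)
S0, u, d = 4.0, 2.0, 0.5

prob = {"HH": p * p, "HT": p * (1 - p), "TH": (1 - p) * p, "TT": (1 - p) ** 2}
S2 = {"HH": S0 * u * u, "HT": S0 * u * d, "TH": S0 * d * u, "TT": S0 * d * d}

E_S2 = sum(S2[w] * prob[w] for w in prob)                  # Definition 29 (i)
Var_S2 = sum((S2[w] - E_S2) ** 2 * prob[w] for w in prob)  # Definition 29 (ii)
print(E_S2, Var_S2)   # for p = 0.5: E(S2) = 6.25, Var(S2) = 33.1875
```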

1.2.5 Examples of Probability Spaces and Random Variables with Finite Sample Space

Example 31 The Bernoulli distribution with parameter p is a probability measure P on the mea-

surable space (Ω,G) given by Ω := {0, 1}, G := F , such that:

P({1}) = p ∈ (0, 1) .

Example 32 The Binomial distribution with parameters n and p is a probability measure P on

a measurable space (Ω,G) given below. The sample space is given by

Ω := {n-dimensional sequences with components 0 or 1} .

For instance, a possible element of Ω is

ω = 0010100...1111 (n components) .

Further, we set G := F . Finally, P is given by

P({ω}) = p^(# of 1 in ω) (1 − p)^(# of 0 in ω) .

For instance, using the properties of a probability measure we have:

P(at least a 1 over the n components) = 1 − P(no 1 over the n components) = 1 − (1 − p)^n ,


and so forth.

Example 33 A discrete uniform distribution modelling the toss of a fair die is obtained by setting

Ω := {1, 2, 3, 4, 5, 6}, G := F , and

P({ω}) = 1/6 , ω ∈ Ω .

For instance, using the properties of a probability measure we then have:

P(obtaining an even number) = P({2}) + P({4}) + P({6}) = 1/2 ,

and so forth.

Example 34 A discrete uniform distribution modelling the toss of two independent fair dice is
obtained by setting

Ω := {11, 12, 13, 14, 15, 16, 21, 22, ..., 66} ,

G := F , and

P({ω}) = 1/36 , ω ∈ Ω .

For instance, using the properties of a probability measure we then have:

P(the sum of the two numbers is larger than 10) = P({66}) + P({56}) + P({65}) = 1/12 ,

and so forth. Let X : Ω → {2, 3, 4, .., 12} be the function giving the sum of the numbers on the two
dice. We have:

σ(X) = {∅, Ω, X−1(2) = {11}, X−1(3) = {12, 21}, X−1(4) = {13, 31, 22}, ...} ⊂ F ,

that is X is a random variable on (Ω,F).
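The preimages X−1(k) and the induced probabilities can be enumerated directly; a small Python sketch for this example:

```python
from fractions import Fraction

# Two independent fair dice: Omega = {11, 12, ..., 66}, P({omega}) = 1/36.
Omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]
P_single = Fraction(1, 36)

X = {w: w[0] + w[1] for w in Omega}          # X(omega) = sum of the two numbers

# Preimages X^{-1}(k): the partition of Omega that generates sigma(X).
preimages = {k: {w for w in Omega if X[w] == k} for k in range(2, 13)}

P_sum_gt_10 = sum(P_single for w in Omega if X[w] > 10)
print(preimages[4])      # {(1, 3), (3, 1), (2, 2)}  -- compare X^{-1}(4) above
print(P_sum_gt_10)       # 1/12
```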

1.3 General Probability Spaces

Definition 21 of a probability space does not require the assumption |Ω| < ∞.


1.3.1 Some First Examples of Probability Spaces with non finite Sample Spaces

A first simple example of a probability space defined on a non finite sample space is the following.

Example 35 Let Ω = R,G = B (R) and define

P (A) = µ0 (A ∩ [0, 1]) .

P is a probability measure, the uniform distribution on the interval [0, 1]. Indeed, we have:

1. P (Ω) = µ0 (Ω ∩ [0, 1]) = µ0 ([0, 1]) = 1.

2. For any disjoint sequence (Ai)i∈N ⊂ B (R) it follows

P(∪i∈N Ai) = µ0((∪i∈N Ai) ∩ [0, 1]) = µ0(∪i∈N (Ai ∩ [0, 1])) = Σi∈N µ0(Ai ∩ [0, 1]) = Σi∈N P(Ai) .

More generally, setting

P(A) = µ0(A ∩ [a, b]) / µ0([a, b]) ,

defines a uniform distribution on the interval [a, b].

A famous example of a probability space with non finite sample space is the one underlying a

Poisson distribution on N.

Example 36 Let Ω := N and G := F . Thus in this case Ω is an infinite, countable, sample

space. We define for any ω ∈ Ω

P({ω}) := (λ^ω / ω!) e^(−λ) , λ > 0 .

Setting for A ∈ F

P(A) := Σω∈A P({ω}) ,

one obtains the Poisson distribution on (N,F) with parameter λ. P is a probability measure on

(Ω,F). Indeed, we have

P(Ω) = Σω∈Ω P({ω}) = Σk=0..∞ (λ^k / k!) e^(−λ) = 1 ,


and, for any disjoint sequence (Ai)i∈N ⊂ F ,

P(∪i∈N Ai) = Σω∈∪i∈N Ai P({ω}) = Σi∈N (Σω∈Ai P({ω})) = Σi∈N P(Ai) .
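A quick numerical check of the Poisson example (λ = 2 is an arbitrary illustrative parameter; the infinite sums are truncated):

```python
import math

# Poisson distribution on N: P({k}) = lambda^k / k! * exp(-lambda).
lam = 2.0   # illustrative parameter

def poisson_pmf(k: int) -> float:
    return lam ** k / math.factorial(k) * math.exp(-lam)

total = sum(poisson_pmf(k) for k in range(50))        # truncated version of P(Omega) = 1
P_even = sum(poisson_pmf(k) for k in range(0, 50, 2)) # P(A) for A = {even numbers}, by sigma additivity
print(total, P_even)   # total is 1.0 up to floating point and truncation error
```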

The last example of a probability space with non finite sample space that we present is the

one underlying a Binomial experiment where n →∞.

Example 37 Let Ω := {T,H}∞ be the space of infinite sequences with components T or H. Thus
any outcome ω ∈ Ω is of the form

ω = (ωi)i∈N , ωi ∈ {T,H} .

This is an infinite, uncountable, sample space. Therefore, some caution is needed in constructing
a suitable sigma algebra on Ω, on which we can then, in a second step, extend the binomial
distribution in a consistent way. We define

Gn := the sigma algebra generated by the first n tosses ,

for any n ∈ N. For instance, we obtain for G1:

G1 = {∅, Ω, {ω ∈ Ω : ω1 = T}, {ω ∈ Ω : ω1 = H}} ,

and so on for n > 1. We know that there is a sigma algebra F over Ω such that Gn ⊂ F for

all n ∈ N. However, this sigma algebra is too large to assign binomial probabilities on it in a

consistent way. Therefore, we work in the sequel with the smallest sigma algebra containing all

Gn’s. We define

G := ⋂ {H : H is a sigma algebra over Ω and H ⊃ ∪n∈N Gn} ,

the sigma algebra generated by ∪n∈NGn. Notice that G contains events that can be quite rich and

that do not belong to any Gn, n ∈ N. An example of such an event is

A := {H on every toss} = {ω ∈ Ω : ωi = H for all i ∈ N} = ⋂n∈N {ω ∈ Ω : ωi = H for i ≤ n} ∈ G ,


where

{ω ∈ Ω : ωi = H for i ≤ n} = {H on the first n tosses} ∈ Gn .

We now define a probability measure P on G whose restriction on any Gn is a binomial distribution

with parameters n and p. Precisely, define for any A ∈ Gn and some given n ∈ N

P(A) = p^(# of H in the first n tosses) (1 − p)^(# of T in the first n tosses) .

For instance, for the event

{H on the first 2 tosses} = {ω ∈ Ω : ωi = H for i ≤ 2} ,

we obtain

P(H on the first 2 tosses) = p² ,

and so forth. Using the properties of a probability measure we can then uniquely extend P to all
of G. For instance, we have

P(H on all tosses) ≤ P(H on the first n tosses) = p^n ,

for all n ∈ N. Therefore, for p ∈ (0, 1) it follows

P (H on all tosses) = 0 .
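The decreasing events {H on the first n tosses} can also be illustrated by simulation; the following Python sketch (with p = 0.5 as an illustrative value) estimates their probabilities by Monte Carlo and compares them with p^n:

```python
import random

# Monte Carlo illustration of P(H on the first n tosses) = p^n  (p = 0.5 here).
random.seed(0)
p, n_sims, horizon = 0.5, 100_000, 10

counts = [0] * (horizon + 1)
for _ in range(n_sims):
    run = 0
    while run < horizon and random.random() < p:   # toss is H with probability p
        run += 1
    for n in range(1, run + 1):                     # H on the first n tosses, for all n <= run
        counts[n] += 1

for n in (1, 3, 5, 10):
    print(n, counts[n] / n_sims, p ** n)   # empirical frequency vs. p^n; both vanish as n grows
```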

1.3.2 Continuity Properties of Probability Measures

Two further continuity properties of a probability measure - in addition to the properties in Propo-

sition 23 - are useful when working with countable set operations over monotone sequences of

events. They are given below.

Proposition 38 Let (An)n∈N ⊂ G be a countable sequence of events. It then follows:

1. If A1 ⊂ A2 ⊂ ..., then:

P(An) ↑ P(∪n∈N An) as n → ∞ ,

(continuity from below).


2. If A1 ⊃ A2 ⊃ ..., then:

P(An) ↓ P(∩n∈N An) as n → ∞

(continuity from above).

Proof. 1. Let A := ∪n∈N An. We have

A = ∪n∈N (An\An−1) ,

where A0 := ∅. Thus, under the given assumption the event A is written as a countable, disjoint,
union of subsets of G. It then follows, using the properties of a probability measure,

P(A) = Σn∈N P(An\An−1) = Σn∈N (P(An) − P(An−1)) = limn→∞ (P(An) − P(A0)) = limn→∞ P(An) .

2. We have

P(An) ↓ P(∩n∈N An)  ⇔  P((An)c) ↑ P((∩n∈N An)c) = P(∪n∈N (An)c) ,

by de Morgan’s law. The proof now follows from 1.

1.3.3 Random Variables

For the whole section let (Ω,G, P ) be a probability space and (R,B (R)) be a Borel measurable

space over R.

When working with uncountable sample spaces, the measurability requirement behind Defini-

tion 26 of a random variable for finite sample spaces has to be modified. Basically, we are going to

require measurability only for preimages of any Borel subset of R, rather than measurability for

preimages of any subset of R. This is a necessary step, in order to be able to assign consistently

probabilities to Borel events determined by the images of some random variable on (Ω,G, P ).

Definition 39 Let X : Ω → R be a real valued function. (i) The sigma algebra

σ(X) := {X−1(B) : B ∈ B(R)} ,


is the sigma algebra generated by X. (ii) X is a random variable on (Ω,G) if

σ (X) ⊂ G .

Example 40 For a set A ⊂ Ω let a function 1A : Ω → {0, 1} be defined by

1A(ω) = 1 if ω ∈ A, and 1A(ω) = 0 otherwise.

1A is called the indicator function of the set A. We have (please verify)

σ(1A) = {∅, Ω, A, Ac} .

Hence, 1A is a random variable over (Ω,G) if and only if A ∈ G.

The measurability property in Definition 39 allows us to assign in a natural way probabilities

also to Borel events that are induced by images of random variables, as is illustrated in the next

example.

Example 41 Let X be a random variable on a probability space (Ω,F , P ). For any event B ∈

B (R) we define

LX(B) := P(X−1(B)) .     (4)

LX is a probability measure on B (R), the probability distribution of X (or the probability induced

by X on B (R)). Remark that (4) is well defined, precisely because of the measurability of the

random variable X. Showing that LX is indeed a probability measure is very simple. In fact, we

have:

LX(R) = P(X−1(R)) = P(X ∈ R) = P(Ω) = 1 .

Moreover, for any sequence (Bi)i∈N of disjoint events we obtain:

LX(∪i∈N Bi) = P(X−1(∪i∈N Bi)) = P(∪i∈N X−1(Bi)) = Σi=1..∞ P(X−1(Bi)) = Σi=1..∞ LX(Bi) ,

using in the third equality the fact that (Bi)i∈N (and thus also (X−1(Bi))i∈N) is a sequence of

disjoint events.


Checking measurability of a candidate random variable can be by definition a quite hard and

lengthy task, since we have to check preimages of any Borel subset of R. Fortunately, the next

result offers a much easier criterion by which measurability is easy to verify in many applications.

Proposition 42 For a function X : Ω → R let

E := {X−1((−∞, t)) : t ∈ R} = {{X < t} : t ∈ R} ,

be the set of preimages of open intervals of the form (−∞, t) under X. Then it follows:

E ⊂ G ⇔ σ (X) ⊂ G .

Proof. Define:

H := {B ∈ B(R) : X−1(B) ∈ G} ⊂ B(R) .

It is sufficient to show that under the given conditions B (R) ⊂ H, i.e. B (R) = H. We start by

showing that H is a sigma algebra. We have first

X−1 (∅) = ∅ ∈ G ,

hence ∅ ∈ H. Second, for a set B ∈ H it follows

X−1(Bc) = (X−1(B))c ∈ G ,

since X−1(B) ∈ G.

Finally, for a sequence (Bn)n∈N ⊂ H we have

X−1(∪n∈N Bn) = ∪n∈N X−1(Bn) ∈ G ,

since each X−1(Bn) ∈ G,

showing that H is a sigma algebra as claimed. Since B (R) is by definition the smallest sigma

algebra containing all open intervals on the real line it is sufficient to show that under the given

conditions H contains all open intervals on the real line. To this end, recall that all sets of the

form (−∞, t) are by assumption elements of H. For a general open interval (a, b), a ≤ b it then


follows:

X−1((a, b)) = X−1( (−∞, b) ∩ (∩n∈N (−∞, a + 1/n))c )
= X−1( (−∞, b) ∩ (∪n∈N (−∞, a + 1/n)c) )
= X−1((−∞, b)) ∩ ( ∩n∈N X−1((−∞, a + 1/n)) )c ∈ G ,

since X−1((−∞, b)) ∈ G and each X−1((−∞, a + 1/n)) ∈ G.

This concludes the proof of the proposition.

Example 43 Let (Xn)n∈N be an arbitrary sequence of random variables on (Ω,G). It then follows:

1. aX1 + bX2 is a random variable for any a, b ∈ R

2. supn∈NXn and infn∈NXn are random variables

3. lim sup Xn := limn→∞ supk≥n Xk and lim inf Xn := limn→∞ infk≥n Xk are random vari-

ables.

Proof. We apply several times Proposition 42. 1. For a, b ≠ 0 we have

{aX1 + bX2 < t} = ∪r∈Q ({aX1 < r} ∩ {bX2 < t − r}) = ∪r∈Q ({X1 < r/a} ∩ {X2 < (t − r)/b}) ∈ G ,

since each of the sets {X1 < r/a} and {X2 < (t − r)/b} belongs to G.

For statement 2. we obtain:

{supn∈N Xn < t} = ∩n∈N {Xn < t} ∈ G ,

{infn∈N Xn < t} = ∪n∈N {Xn < t} ∈ G ,

since each {Xn < t} ∈ G.

3. For any n ∈ N it follows that Yn := supk≥n Xk and Zn := infk≥n Xk are random variables,

by 2. Moreover, the sequences (Yn)n∈N and (Zn)n∈N are monotonically decreasing and increasing,

respectively. Therefore:

{lim sup Xn < t} = {limn→∞ Yn < t} = ∪n∈N {Yn < t} ∈ G ,

{lim inf Xn < t} = {limn→∞ Zn < t} = ∩n∈N {Zn < t} ∈ G ,

since each {Yn < t} and each {Zn < t} belongs to G.

This concludes the proof.


1.3.4 Expected Value and Lebesgue Integral

For the whole section let (Ω,G, P ) be a probability space and (R,B (R)) be the Borel measurable

space over R.

The expected value of a general random variable is defined as its Lebesgue integral with

respect to some probability measure P on (Ω,G). More generally, Lebesgue integrals of measurable

functions can be defined with respect to some measure (as for instance Lebesgue measure µ0)

defined on a corresponding measurable space (as for instance the measurable space (R,B (R))).

The construction of the Lebesgue integral for a general random variable X starts by defining the

value of the Lebesgue integral for linear combinations of indicator functions, then extends
the integral to functions that are pointwise monotonic limits of sequences of simple functions, and
finally defines the integral for the more general case of an integrable random variable (see the
precise definition below).

Definition 44 (i) A random variable X is simple if

X = Σi=1..n ci 1Ai ,

where n ∈ N, c1, .., cn ∈ R, and A1, .., An ∈ G are mutually disjoint events. The (vector) space
of simple random variables on (Ω,G) is denoted by S(G). The expected value E(X) of a simple
function X is defined by

E(X) := ∫Ω X dP := Σi=1..n ci P(Ai) .

(ii) Let X ≥ 0 be a non negative random variable. The expected value E (X) of X is defined by

E(X) := ∫Ω X dP := sup { ∫Ω Y dP : Y ≤ X and Y ∈ S(G) } .

(iii) A random variable X is integrable, if

E(X+) < ∞ , E(X−) < ∞ ,

where X+ := max (X, 0) and X− := max (−X, 0) are the positive and negative part of X, respec-

tively. We denote the (vector) space of integrable random variables by L1 (P ). For any X ∈ L1 (P )


the expected value E (X) of X is defined by

E(X) = E(X+) − E(X−) .

(iv) Finally, for a random variable X ∈ L1 (P ) and a set A ∈ G we define

∫A X dP := ∫Ω 1A X dP .

Remark 45 (i) The key point in the definition of E(X) is (ii). In fact, (ii) is a quite reasonable
definition because for any random variable X ≥ 0 there always exists a sequence (Xn)n∈N of simple

random variables converging monotonically pointwise to X from below. Such a sequence is obtained

for instance by setting for any ω ∈ Ω

Xn(ω) = Σk=1..n·2^n ((k − 1)/2^n) · 1{(k−1)/2^n < X ≤ k/2^n}(ω) + n · 1{X>n}(ω) .

Moreover, it can be shown that the limit of the sequence of integrals E (Xn) does not depend on

the choice of the specific approximating sequence. Therefore, (ii) in Definition 44 could also be
equivalently written as

E(X) := limn→∞ E(Xn) := limn→∞ ∫Ω Xn dP ,

for a given approximating sequence (Xn)n∈N. (ii) As mentioned, expected values are by definition

just integrals of measurable functions with respect to some probability measure. In fact, the defini-

tion of the Lebesgue integral of a measurable function with respect to some measure µ, say, follows

exactly the same steps as above, readily by replacing everywhere the probability measure P with

the measure µ in (i), (ii), (iii) and (iv).
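To make the approximating sequence in (i) concrete, the following Python sketch evaluates E(Xn) for X(ω) = ω under the uniform distribution on [0, 1] (an illustrative choice of X, not from the text) and shows the monotone convergence to E(X) = 1/2:

```python
# Approximation of E(X) by simple functions, for X(omega) = omega under the
# uniform distribution on [0, 1] (illustrative choice). X_n takes the value
# (k-1)/2^n on the event {(k-1)/2^n < X <= k/2^n}, which has probability 2^(-n);
# the extra term n*1_{X>n} vanishes here because X <= 1 <= n.

def expected_value_of_simple_approx(n: int) -> float:
    step = 2.0 ** (-n)
    return sum((k - 1) * step * step for k in range(1, 2 ** n + 1))

for n in (1, 2, 5, 10, 15):
    print(n, expected_value_of_simple_approx(n))   # increases monotonically towards E(X) = 0.5
```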

Let us discuss some first (very) simple examples of expected values computed using the above

definitions.

Example 46 Let Ω := R, G := B (R) and set for any A ∈ G

P (A) = µ0 (A ∩ [0, 1]) .


The expected value of X := 1Q is

E(X) = 1 · µ0(Q ∩ [0, 1]) = 0 ,

because Q is a countable set. Notice that this function is not Riemann integrable in the usual
sense. The expected value of

Y(ω) := ∞ if ω = 0, and Y(ω) := 0 otherwise,

can be computed as the limit of the expected values in an approximating sequence (Xn)n∈N of
simple functions given by

Xn(ω) := n if ω = 0, and Xn(ω) := 0 otherwise.

Hence:

E(Y) = limn→∞ E(Xn) = limn→∞ n · µ0({0} ∩ [0, 1]) = limn→∞ n · µ0({0}) = 0 .

Notice that also Y is not Riemann integrable in the usual sense.

The basic properties of the above integral definition are collected in the next proposition.

Proposition 47 Let X,Y ∈ L1 (P ) and a,b ∈ R; it then follows:

1. E (aX + bY ) = aE (X) + bE (Y )

2. If X ≤ Y pointwise, then

E (X) ≤ E (Y )

3. For two sets A,B ∈ G such that A ∩B = ∅ it follows

∫A∪B X dP = ∫Ω 1A∪B X dP = ∫Ω (1A + 1B) X dP = ∫A X dP + ∫B X dP ,

where the last equality uses property 1.


Proof. 1. For brevity we show this property only for indicator functions X = 1A, Y = 1B ,

where A, B ∈ G are disjoint events. We have

E(aX + bY) = E(a 1A + b 1B) = aP(A) + bP(B) = aE(1A) + bE(1B) = aE(X) + bE(Y) ,

using in the second equality the definition of the expected value of a simple random variable.

2. If Y −X ≥ 0, then there exists a sequence of simple approximating functions Xn ≥ 0 converging

monotonically to Y −X. This implies:

E(Y) − E(X) = E(Y − X) = limn→∞ E(Xn) = limn→∞ Σi=1..kn cin P(Ain) ≥ 0 ,

say, because for any n ∈ N we have c1n, .., cknn ≥ 0 (the first equality uses property 1.).

1.3.5 Some Further Examples of Probability Spaces with uncountable Sample Spaces

For the whole section let (Ω,G, P ) be a probability space and (R,B (R)) be the Borel measurable

space over R.

Using Lebesgue integrals we are also able to construct probability measures by integrating a

suitable (density) function over events A ∈ G. A well-known example in this respect arises by

integrating the density function of a standard normal distribution.

Example 48 (Ω,G) := (R,B (R)); φ : R→ R+ is defined by

φ(x) = (1/√(2π)) exp(−x²/2) , x ∈ R .

φ is the density function of a standard normally distributed random variable and is such that

∫R φ(x) dµ0(x) = ∫_{−∞}^{∞} φ(x) dx = 1 ,

i.e. φ ∈ L1 (µ0). A standard normal probability distribution P on (R,B (R)) is obtained by setting

for any A ∈ G:

P(A) := ∫A φ(x) dµ0(x) .

It is straightforward to verify, using the basic properties of Lebesgue integrals together with some

monotone convergence property, that P is indeed a probability measure.
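A crude numerical check of these two facts; the Riemann sums below approximate the Lebesgue integrals, which is legitimate here because φ is continuous (the truncation at ±10 and the grid size are arbitrary choices):

```python
import math

# Standard normal density: check numerically that it integrates to 1 and that
# P(A) := integral of phi over A assigns probabilities to intervals.
def phi(x: float) -> float:
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def integrate(a: float, b: float, n: int = 200_000) -> float:
    h = (b - a) / n
    return sum(phi(a + (i + 0.5) * h) for i in range(n)) * h   # midpoint rule

print(integrate(-10.0, 10.0))        # ~= 1, i.e. phi is a density
print(integrate(-10.0, 1.96))        # ~= 0.975 = P((-infinity, 1.96]) under P
```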


More generally, densities can also be defined on abstract probability spaces, as is demonstrated

in the next final example.

Example 49 Let X ≥ 0 be a random variable on (Ω,G) such that X ∈ L1 (P ), and define

Q(A) := E(1A X) / E(X) = E(1A · X/E(X)) .

It is easy to verify, using the basic properties of Lebesgue integrals together with some monotone

convergence property, that Q is a further probability measure on (Ω,G). Moreover, the absolute

continuity property

P (A) = 0 ⇒ Q (A) = 0 ,

follows from the definition. If, moreover,

P (A) = 0 ⇐⇒ Q (A) = 0

the probabilities Q and P are called equivalent. This property holds when X > 0. The random
variable Z := X/E(X) is called the Radon-Nikodym derivative of Q with respect to P , denoted by
dQ/dP. By construction dQ/dP is a density function on (Ω,G) because dQ/dP ≥ 0 and

E(dQ/dP) = E(X/E(X)) = 1 .
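On a finite sample space the change of measure can be written out explicitly. The following Python sketch, set on the two period binomial outcomes with p = 0.5 and X = S2 as purely illustrative choices, builds Q and dQ/dP and checks that Q(Ω) = 1:

```python
# Change of measure on a finite space: Q(A) = E(1_A X) / E(X).
# Illustrative data: two period binomial outcomes with p = 0.5 and X = S2.
p = 0.5
P = {"HH": p * p, "HT": p * (1 - p), "TH": (1 - p) * p, "TT": (1 - p) ** 2}
X = {"HH": 16.0, "HT": 4.0, "TH": 4.0, "TT": 1.0}    # a positive random variable

E_X = sum(X[w] * P[w] for w in P)
dQ_dP = {w: X[w] / E_X for w in P}                   # Radon-Nikodym derivative Z = X / E(X)
Q = {w: dQ_dP[w] * P[w] for w in P}                  # Q({w}) = Z(w) P({w})

def prob(measure, A):                                # measure of an event A (a set of outcomes)
    return sum(measure[w] for w in A)

print(prob(Q, P.keys()))                             # Q(Omega) = 1
print(prob(Q, {"HH", "HT"}), prob(P, {"HH", "HT"}))  # Q and P weight the same event differently
# Absolute continuity holds by construction: any P-null event is also Q-null.
```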

1.4 Stochastic Independence

For the whole section let (Ω,G, P ) be a probability space.

Definition 50 Two events A, B ∈ G are (stochastically) independent if

P (A ∩B) = P (A)P (B) . (5)

We use the notation A ⊥ B to denote two independent events.


Remark 51 Condition (5) states that two events are independent if and only if their conditional

and unconditional probabilities are the same, i.e.:

P(A|B) := P(A ∩ B) / P(B) = P(A)P(B) / P(B) = P(A) ,

where the second equality uses A ⊥ B

(provided of course P (B) > 0). This property is symmetric in A, B.

Example 52 Stochastic independence is a feature determined by the structure of the underlying

probability P . As an illustration of this fact consider again the two period binomial model of

Example 1. We have there:

P({HH, HT}) P({HT, TH}) = (p² + p(1 − p)) (2p(1 − p)) = 2p²(1 − p) ,     (6)

and

P({HH, HT} ∩ {HT, TH}) = P({HT}) = p(1 − p) .     (7)

Therefore, (6) and (7) are equal if and only if p = 1/2, that is, the only binomial probability under
which the above events are independent is the one implied by p = 1/2.
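This dependence on p is easy to tabulate; a small Python check of (7) against the product in (6) for a few values of p:

```python
# Independence of A = {HH, HT} and B = {HT, TH} depends on the binomial probability p.
def binomial_P(p):
    return {"HH": p * p, "HT": p * (1 - p), "TH": (1 - p) * p, "TT": (1 - p) ** 2}

A, B = {"HH", "HT"}, {"HT", "TH"}
for p in (0.3, 0.5, 0.7):
    P = binomial_P(p)
    def prob(E):
        return sum(P[w] for w in E)
    lhs = prob(A & B)            # equation (7)
    rhs = prob(A) * prob(B)      # equation (6)
    print(p, lhs, rhs, abs(lhs - rhs) < 1e-12)   # equal only for p = 0.5
```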

The concept of stochastic independence between events can be naturally extended to stochastic

independence between information sets, i.e. sigma algebras.

Definition 53 Two sigma algebras G1,G2 ⊂ G are stochastically independent if for all A ∈ G1

and B ∈ G2 one has A ⊥ B. We use the notation G1 ⊥ G2 to denote independent sigma algebras.

Example 54 In the two period binomial model of Example 1 we define the following two sigma
algebras:

G1 := {∅, Ω, {HT,HH}, {TT,TH}} ,

the sigma algebra generated by the first price increment, and

G2 := {∅, Ω, {HH,TH}, {TT,HT}} ,


the sigma algebra generated by the second price movement. We then have, for any p ∈ [0, 1]:

G1 ⊥ G2 .

For instance, for the sets {HT,HH} and {HH,TH} one obtains

P({HT,HH}) P({HH,TH}) = (p² + p(1 − p)) (p² + p(1 − p)) = p² ,

and

P({HT,HH} ∩ {HH,TH}) = P({HH}) = p² .

These features derive directly from the way probabilities are assigned by a binomial distribution,
where

P({ω}) = p^(# of H in ω) (1 − p)^(# of T in ω) .

Finally, we can also define independence between random variables as independence of the

information sets they generate.

Definition 55 Two random variables X,Y on (Ω,G, P ) are independent if

σ (X) ⊥ σ (Y ) .

We use the notation X ⊥ Y to denote independence between random variables.

Example 56 We already discussed that the two sigma algebras G1, G2 of Example 54 are inde-

pendent in the binomial model. Notice that we have (please verify!)

G1 = {∅, Ω, {HT,HH}, {TT,TH}} = σ(S1/S0) ,

and

G2 = {∅, Ω, {HH,TH}, {TT,HT}} = σ(S2/S1) .

Therefore, the stock price returns S1/S0 and S2/S1 in a binomial model are stochastically inde-

pendent.


Example 57 Let A, B ∈ G be two independent events and let the functions

1A(ω) = 1 if ω ∈ A and 0 otherwise ,   1B(ω) = 1 if ω ∈ B and 0 otherwise ,

be the indicator functions of the sets A and B, respectively. We then have (please verify):

σ(1A) = {∅, Ω, A, Ac} ,   σ(1B) = {∅, Ω, B, Bc} .

Therefore, 1A ⊥ 1B if and only if A ⊥ B (please verify).

Some properties related to independence are important. The first one says that independence

is maintained under (measurable) transformations.

Proposition 58 Let X,Y be independent random variables on (Ω,G, P ) and h, g : R→ R be two

(measurable) functions. It then follows:

h (X) ⊥ g (Y ) .

Proof. We give a graphical proof of this statement, which makes use of the fact that preimages

of composite mappings are contained in the preimage of the first function in the composition:

σ(X) ⊥ σ(Y) by assumption, while σ(h(X)) ⊂ σ(X) and σ(g(Y)) ⊂ σ(Y) ;

hence every A ∈ σ(h(X)) and B ∈ σ(g(Y)) satisfy A ⊥ B.

The second important property of stochastic independence is related to the expectation of a

product of random variables.

Proposition 59 Let X,Y be independent random variables on (Ω,G, P ). It then follows

E (XY ) = E (X)E (Y ) .

Proof. For the sake of brevity we give the proof for the simplest case where X = 1A, Y = 1B ,

for events A,B ∈ G such that A ⊥ B. As usual, the extension of this result to a more general


setting requires considering linear combinations of indicator functions, i.e. simple functions, and

pointwise limits of simple functions. For the given simplified setting we have:

E(XY) = E(1A 1B) = E(1A∩B) = 1 · P(A ∩ B) + 0 · P((A ∩ B)c)
= P(A ∩ B) = P(A)P(B) = E(1A)E(1B) = E(X)E(Y) ,

where the equality P(A ∩ B) = P(A)P(B) uses A ⊥ B.

This concludes the proof.

2 Conditional Expectations and Martingales

For the whole section let (Ω,G, P ) be a probability space.

2.1 The Binomial Model Once More

For later reference, we summarize the structure of a general n−period binomial model, since it

will be used to illustrate some of the concepts introduced below.

• I := {0, 1, 2, .., n} is a discrete time index representing the available transaction dates in the

model

• The sample space is given by Ω := {sequences of n coordinates H or T}, with single out-
comes ω of the form

ω = (TTTH..HT) (n coordinates) ,

for instance.

• G := F , the sigma algebra of all subsets of Ω

• Dynamics of the stock price and money account:

St = uSt−1 with probability p ,  St = dSt−1 with probability 1 − p ;  Bt = (1 + r) Bt−1


for given B0 = 1, S0 and where

u = 1/d , u > 1 + r > d .

The sequence (St)t=0,..,n is a sequence of random variables defined on a single probability space

(Ω,G, P ). This is an example of a so called stochastic process on (Ω,G, P ). Associated with

stochastic processes are flows of information sets (i.e. sigma algebras) generated by the process

history up to a given time. For instance, for any t ∈ I we can define

Gt := σ(σ(S0), σ(S1), .., σ(St)) := σ(∪k=0..t σ(Sk)) ,

the smallest sigma algebra containing all sigma algebras generated by S0, S1,...,St. Gt represents

the information about a single outcome ω ∈ Ω which can be obtained exclusively by observing the

price process up to time t. Clearly,

Gt ⊂ Gs ⇐⇒ t ≤ s .

Therefore, the sequence (Gt)t=0,..,n constitutes a filtration, the filtration generated by the process

(St)t=0,..,n.

2.2 Sub Sigma Algebras and (Partial) Information

We model partial information about single outcomes ω ∈ Ω or about single events A ∈ G using

sub sigma algebras of G.

Example 60 Let X be a random variable on (Ω,G). Then σ (X) is (by definition) a sub sigma

algebra of G. σ(X) represents the partial information about an outcome ω ∈ Ω which can be
obtained by observing X(ω). For instance, set n = 3 in the above binomial model and consider
the outcome ω = (TTT). By observing S1, i.e. using σ(S1) as the available information set, we
can only conclude

ω ∈ {TTT, THH, THT, TTH} (⇔ S1(ω) = S0d) .


However, when observing all price movements from t = 0 to t = 3 we can make use of the sigma

algebra

G3 := σ(∪t=0..3 σ(St)) ,

to fully identify ω ∈ Ω. Both σ (S1) and G3 are sub sigma algebras of G, which however represent

different pieces of information about ω ∈ Ω.

Based on the above simple considerations we can now formally define what it means for an

event to be ”realized”.

Definition 61 (i) An event A ∈ G is realized by means of a sub sigma algebra G′ ⊂ G if A ∈ G′.

(ii) Let Gt be a sigma algebra generated by some price process1 up to time t. We say that A is

realized by means of the price information up to time t if A ∈ Gt.

Remark 62 By definition, realization of an event A ∈ G by means of G′ is precisely measurability

of that event with respect to the sub sigma algebra G′. Precisely, given an event A ∈ G we can

determine it uniquely using G′, i.e. we can say that A has been realized, if and only if A ∈ G′. For

instance, in the above 3−period binomial model we can consider the event

A = {TTT} .

Clearly, A /∈ σ (S1) since we do not know using σ (S1) the value of the second and the third coin

tosses. Therefore, A is not realized by means of σ (S1), i.e. it is not realized by means of the price

information up to time 1. However,

A ∈ G3 := σ(∪t=0..3 σ(St)) ,

i.e. A is realized by means of the whole price information available up to time 3.

Example 63 The event {The first two price returns are both positive} is realized by means of the
price information up to time 2, while the event {The total number of positive price returns is 2}
is not.

1 See for instance the above examples.


2.3 Conditional Expectations

For the whole section let X be a random variable on (Ω,G).

2.3.1 Motivation

Given an event A = X−1 (a) ∈ G, for some a ∈ R, we are always able to identify for any ω ∈ A

the corresponding value X (ω) of the random variable X using the information set G. Indeed, we

then have by definition

ω ∈ X−1(a), where X−1(a) ∈ σ(X) ⊂ G , i.e. X(ω) = a ,

for all ω ∈ A. However, using a coarser information set G′ ⊂ σ (X) it may happen that we are

not able to fully determine the value X (ω) that a random variable X associates to a given single

outcome ω ∈ A. Specifically, it may happen that based on the information available in G′ we can

only state for some non singleton set B ∈ B (R)

ω ∈ X−1 (B) , i.e. X (ω) ∈ B . (8)

In this case, the information set G′ is not sufficiently fine to fully determine the precise value of

X (ω) associated with a specific ω ∈ A. Thus, the goal in such a situation is to define a suitable

candidate prediction E (X| G′) (ω) for the unknown value X (ω) based on the information G′.

We will call E(X|G′) the conditional expectation of X conditionally on G′. Notice that a first
necessary requirement on E(X|G′) is that it can be fully determined using the information G′,
that is, it has to be G′-measurable. Further, a natural idea is to compute the prediction E(X|G′)
as an unbiased forecast such that the expectations of E(X|G′) and X agree on all sets A ∈ G′:

∫A E(X|G′) dP = ∫A X dP

(see the precise definition below).


2.3.2 Definition and Properties

Definition 64 Let G′ ⊂ G be a sub sigma algebra. The conditional expectation E (X| G′) of X

conditioned on the sigma algebra G′ is a random variable satisfying:

1. E (X| G′) is G′−measurable

2. For any A ∈ G′:

∫A E(X|G′) dP = ∫A X dP ,

(partial averaging property).

In the sequel, we write for any further random variable Y on (Ω,G):

E (X|Y ) := E (X|σ (Y ))

Remark 65 (i) E (X| G′) exists, provided X ∈ L1 (P ); this is a consequence of the so called

Radon-Nikodym Theorem. (ii) The random variable E (X| G′) is unique, up to events of zero

probability. Precisely, if Y and Z are two candidate G′−measurable random variables satisfying 2.

of the above definition, then:

P (Y = Z) = 1

Example 66 (i) If G′ = {∅, Ω} then E(X|G′) = E(X)1Ω, that is, conditional expectations

conditioned on trivial information sets are unconditional expectations. Indeed, E (X)1Ω is G′

measurable and

∫Ω E(X)1Ω dP = E(X)P(Ω) = E(X) = ∫Ω X dP .

(ii) If X is G′−measurable then E (X| G′) = X, that is if the conditioning information set is

sufficiently fine to determine X completely then conditional expectations of a random variable are

the random variable itself. Indeed, in this case we trivially have:

∫A E(X|G′) dP = ∫A X dP ,

for any set A ∈ G′.
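On a finite probability space E(X|G′) can be computed explicitly: on each atom of the partition generating G′ it equals the P-weighted average of X over that atom. The following Python sketch (the space, the partition and X are illustrative choices) computes such a conditional expectation and verifies the partial averaging property:

```python
# Conditional expectation on a finite space: on each atom B of the partition
# generating G', E(X|G')(w) = sum_{w' in B} X(w') P({w'}) / P(B).
# Illustrative data: two period binomial outcomes, G' generated by S1, X = S2.
p = 0.5
P = {"HH": p * p, "HT": p * (1 - p), "TH": (1 - p) * p, "TT": (1 - p) ** 2}
X = {"HH": 16.0, "HT": 4.0, "TH": 4.0, "TT": 1.0}
partition = [{"HH", "HT"}, {"TH", "TT"}]                  # atoms of G' = sigma(S1)

def cond_exp(X, P, partition):
    Y = {}
    for atom in partition:
        p_atom = sum(P[w] for w in atom)
        avg = sum(X[w] * P[w] for w in atom) / p_atom
        for w in atom:
            Y[w] = avg            # constant on each atom, hence G'-measurable
    return Y

Y = cond_exp(X, P, partition)
A = {"HH", "HT"}                  # an event in G'
print(sum(Y[w] * P[w] for w in A), sum(X[w] * P[w] for w in A))  # partial averaging: equal
print(Y)                          # E(S2 | sigma(S1)) takes one value per atom
```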


Proposition 67 Let G′ ⊂ G be a sub sigma algebra and X, Y ∈ L1 (P ). It then follows:

1. E (E (X| G′)) = E (X) (Law of Iterated Expectations).

2. For any a, b ∈ R:

E (aX + bY | G′) = aE (X| G′) + bE (Y | G′) ,

(Linearity).

3. If X ≥ 0 then E (X| G′) ≥ 0 with probability 1 (Monotonicity).

4. For any sub sigma algebra H ⊂ G′:

E (E (X| G′)|H) = E (X|H) ,

(Tower Property).

5. If σ (X)⊥G′ then

E (X| G′) = E (X)1Ω ,

(Independence).

6. If V is a G′−measurable random variable such that V X ∈ L1 (P ) then

E (V X| G′) = V E (X| G′)

Proof. 1. Set A = Ω ∈ G′; by definition it then follows

E (X) = ∫_Ω X dP = ∫_Ω E (X| G′) dP = E (E (X| G′)) .

2. By construction aE (X| G′) + bE (Y | G′) is G′−measurable. Moreover, for any A ∈ G′:

∫_A (aE (X| G′) + bE (Y | G′)) dP = a ∫_A E (X| G′) dP + b ∫_A E (Y | G′) dP
                                 = a ∫_A X dP + b ∫_A Y dP
                                 = ∫_A (aX + bY ) dP ,


using in the first and the third equality the linearity of Lebesgue integrals and in the second

equality the definition of conditional expectations.

3. Let

A := {E (X| G′) < 0} ∈ G′ .

Then,

∫_A E (X| G′) dP = ∫_A X dP ≥ 0 ,

since X ≥ 0 and by the monotonicity of Lebesgue integrals. Further, the monotonicity of Lebesgue integrals also implies

∫_A E (X| G′) dP ≤ 0 ,

since 1A E (X| G′) ≤ 0. Therefore,

∫_A E (X| G′) dP = 0 ,

implying P (A) = 0.

4. E (X|H) is by definition H−measurable. Further, for any A ∈ H:

∫_A E (X|H) dP = ∫_A X dP = ∫_A E (X| G′) dP =: ∫_A Y dP ,

since A ∈ G′ because H ⊂ G′. By definition, this implies that E (X|H) is the conditional expectation of the random variable Y := E (X| G′) conditioned on the sigma algebra H.

5. E (X)1Ω is trivially G′−measurable. We show the statement for the case X = 1B , where B ∈ G. The extension to the general case follows by standard arguments. We have for any A ∈ G′:

∫_A E (X)1Ω dP = E (X)P (A) = E (1B)P (A) = P (B) P (A) = P (A ∩ B) = E (1A 1B) = ∫_A X dP ,

using in the fourth equality the independence assumption, in the fifth the properties of indicator functions and in the sixth the definition of X.

6. V E (X| G′) is G′−measurable. Again, we show


the statement for the simpler case V = 1B , where B ∈ G′. We have for any A ∈ G′,

∫_A V E (X| G′) dP = ∫_A 1B E (X| G′) dP = ∫_{A∩B} E (X| G′) dP = ∫_{A∩B} X dP = ∫_A 1B X dP = ∫_A V X dP ,

using in the third equality the definition of conditional expectations, and otherwise the properties

of indicator functions.

Example 68 In the n−period Binomial model we have

E (S1|σ (S1)) = S1 ,

by the σ (S1)−measurability of S1. S2 is not σ (S1)−measurable. However, we know that

σ (S2/S1)⊥σ (S1) .

Therefore,

E (S2|σ (S1)) = E ((S2/S1) S1 | σ (S1)) = S1 E (S2/S1 | σ (S1)) = S1 E (S2/S1) = S1 (pu + (1− p) d) .

More generally, we have

σ (St/St−1)⊥Gt−1 ,

where

Gt−1 := σ (⋃_{k=0}^{t−1} σ (Sk)) ,

t = 1, . . . , n. Therefore, by the same arguments:

E (St|Gt−1) = St−1 (pu + (1− p) d) .


Finally, the tower property gives after some iterations:

E (St+k|Gt−1) = E (E (St+k|Gt+k−1) |Gt−1)
             = E (St+k−1 (pu + (1− p) d) |Gt−1)
             = (pu + (1− p) d) E (St+k−1|Gt−1)
             = . . .
             = (pu + (1− p) d)^k E (St|Gt−1)
             = (pu + (1− p) d)^{k+1} St−1 .
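The one-step formula E (St|Gt−1) = St−1 (pu + (1− p) d) is easy to verify numerically. The following sketch (an illustrative check with arbitrarily chosen parameters u, d, p, S0, n; not part of the original notes) enumerates all coin-toss paths of a small binomial tree and checks the formula for every path prefix.

import itertools

# Hypothetical parameters of an n-period binomial model (arbitrary choices).
u, d, p, S0, n = 1.2, 1 / 1.2, 0.6, 100.0, 4

def stock_price(path):
    """Stock price after the tosses in 'path' (a tuple of 'H'/'T')."""
    S = S0
    for toss in path:
        S *= u if toss == "H" else d
    return S

# For every path prefix of length t-1, E(S_t | G_{t-1}) is the probability
# weighted average of S_t over the two possible next tosses.
for t in range(1, n + 1):
    for prefix in itertools.product("HT", repeat=t - 1):
        cond_exp = (p * stock_price(prefix + ("H",))
                    + (1 - p) * stock_price(prefix + ("T",)))
        assert abs(cond_exp - stock_price(prefix) * (p * u + (1 - p) * d)) < 1e-12

print("E(S_t | G_{t-1}) = S_{t-1} (p u + (1-p) d) verified on all prefixes")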

2.4 Martingale Processes

We now introduce a class of stochastic processes that are particularly important in finance: the

class of martingale processes. Indeed, it will turn out in a later chapter that the price processes of

many financial instruments are martingale processes after a suitable change of probability. In this

section we give the necessary definitions and present some first examples of martingale processes.

Definition 69 (i) Let G := (Gt)t=0,..,n be a filtration over (Ω,G, P ). The quadruplet (Ω,G,G,P ) is

called a filtered probability space. (ii) A stochastic process X := (Xt)t=0,..,n on a filtered probability

space (Ω,G,G,P ) is adapted (is G−adapted) if for any t = 0, .., n the random variable Xt is

Gt−measurable. (iii) A G−adapted process is a martingale if for any t = 0, .., n− 1 one has

Xt = E (Xt+1| Gt) , (9)

(martingale condition). The process is a submartingale (a supermartingale) if in (9) the ”≤” sign

(the ”≥” sign) holds.

Remark 70 Notice that in Definition 69 both the filtration G and the relevant probability P are

crucial in determining the validity of the martingale condition (9) for an adapted process. Indeed,


different probabilities and filtrations can imply (9) to be satisfied or not. For instance, in the

n−period binomial model we obtained, using the filtration generated by the stock price process,

E (St|Gt−1) = St−1 (pu + (1− p) d) .

Therefore, the only binomial probability measure under which the stock price process is a martingale

is the one satisfying

pu + (1− p) d = 1 , i.e. p = (1− d)/(u− d) . (10)

The binomial probabilities such that p > (1− d) / (u− d) (p < (1− d) / (u− d)) imply a stock

price process that is a submartingale (a supermartingale).

Being a martingale is a rather strong condition on a stochastic process, since it closely ties future process coordinates to current ones. This is made more explicit below.

Proposition 71 Let (Xt)t=0,..,n be a martingale on the filtered probability space (Ω,G,G,P ).

1. It then follows for any t, s ∈ {0, 1, . . . , n} such that s ≥ t:

Xt = E (Xs| Gt) .

2. If (Yt)t=0,..,n is a further martingale on the filtered probability space (Ω,G,G,P ) and such

that Yn = Xn then Yt = Xt almost surely for all t ∈ {0, 1, . . . , n}.

Proof. 1. The tower property combined with the martingale property implies

Xt = E (Xt+1| Gt) = E (E (Xt+2| Gt+1)| Gt) = E (Xt+2| Gt) = ... = E (Xt+k| Gt) ,

with k = s− t.

2. From 1. we have

Xt = E (Xn| Gt) = E (Yn| Gt) = Yt .

This concludes the proof.


Example 72 AR(1) process: Let (εt)t=1,...,n be an identically distributed, zero mean, adapted

process on a filtered probability space (Ω,G,G,P ) and such that for any t the random variable εt

is independent from the process history up to time t− 1, i.e.:

σ (εt) ⊥ σ (⋃_{i=1}^{t−1} σ (εi)) , t = 1, . . . , n . (11)

An Autoregressive Process of Order 1 (AR(1)) is defined by

Xt = 0 for t = 0 , and Xt = ρXt−1 + εt for t > 0 ,

where ρ ∈ R. It is easily seen that (Xt)t=0,..,n is G−adapted. Furthermore, for any t = 1, . . . , n,

E (Xt| Gt−1) = E (ρXt−1 + εt| Gt−1) = ρE (Xt−1| Gt−1)+E (εt| Gt−1) = ρXt−1+E (εt) = ρXt−1 ,

using in the second equality the linearity of conditional expectations, in the third the Gt−1−measurability of Xt−1 and the independence assumption (11), and in the fourth the zero mean property of

εt (E (εt) = 0). Therefore, an AR(1) process is a martingale if and only if ρ = 1. The process

resulting for ρ = 1 is called a ”Random Walk” process.
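A quick numerical illustration (a sketch with arbitrarily chosen parameters; not part of the original notes): fix a realized history ε1, .., εt−1, redraw the innovation εt many times, and check that the sample average of Xt is close to ρXt−1, so that only ρ = 1 yields the martingale property.

import numpy as np

rng = np.random.default_rng(0)
t, n_draws = 5, 200_000

for rho in (0.5, 1.0):                       # arbitrary illustrative values of rho
    eps_hist = rng.standard_normal(t - 1)    # one fixed history eps_1, .., eps_{t-1}
    X_prev = 0.0                             # X_{t-1} along the fixed history
    for e in eps_hist:
        X_prev = rho * X_prev + e
    # Redraw eps_t many times and average X_t = rho*X_{t-1} + eps_t
    X_t = rho * X_prev + rng.standard_normal(n_draws)
    print(f"rho={rho}: X_(t-1)={X_prev:+.4f}, MC E(X_t|G_(t-1))={X_t.mean():+.4f}")
    # For rho=1 the two numbers agree (martingale); for rho=0.5 they differ.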

Example 73 MA(1) process: Let (εt)t=0,..,n be the same process as in Example 72. A Moving

Average Process of Order 1 (MA(1)) is defined by

Xt = 0 for t = 0 , Xt = ε1 for t = 1 , and Xt = εt + ρεt−1 for t > 1 ,

where ρ ∈ R. It is easily seen that (Xt)t=0,..,n is G−adapted. Furthermore, for any t = 2, .., n we

have, similarly to above,

E (Xt| Gt−1) = E (εt + ρεt−1| Gt−1) = ρE (εt−1| Gt−1) + E (εt| Gt−1) = ρεt−1 + E (εt) = ρεt−1 .

Therefore,

Xt−1 = E (Xt| Gt−1) ⇐⇒ εt−1 + ρεt−2 = ρεt−1 ,


implying that in order to satisfy the martingale condition one must have for all t = 2, .., n

εt−1 = 0 if ρ = 0 , and εt−1 = (ρ/(ρ− 1)) εt−2 if ρ ≠ 0 .

However, this is in evident contradiction with the independence assumption on the process (εt)t=0,..,n.

We thus conclude that MA(1) processes can never be martingales.

3 Pricing Principles in the Absence of Arbitrage

This section considers the pricing problem of a general European derivative in the context of an

n−period Binomial pricing model. The model structure is:

• I := {0, 1, 2, .., n} is a discrete time index representing the available transaction dates in the model

• The sample space is given by Ω := {sequences of n coordinates H or T }, with single outcomes ω of the form

ω = (T T T H .. H T ) (n coordinates),

for instance.

• G := F , the sigma algebra of all subsets of Ω

• Dynamics of the stock price and money account:

St = uSt−1 with probability p , St = dSt−1 with probability 1− p , Bt = (1 + r) Bt−1 ,

for given B0 = 1, S0 and where

u = 1/d , u > 1 + r > d . (12)

• (Gt)t=0,...,n is the filtration generated by the stock price process (St)t=0,...,n


• A binomial probability P on (Ω,F) is obtained by defining for some p ∈ (0, 1),

P (ω) := p^{# of H in ω} (1− p)^{# of T in ω} .

• A binomial risk adjusted probability measure P̃ on (Ω,F) is obtained by defining

P̃ (ω) := p̃^{# of H in ω} (1− p̃)^{# of T in ω} ,

where

p̃ = (1 + r − d)/(u− d) , (13)

is under condition (12) a risk adjusted probability in the one period binomial model.

The characterizing property of a risk adjusted probability measure is to make discounted stock

prices under such measure martingales. Therefore, this measure is also called a risk adjusted (or

risk neutral) martingale measure. Existence of a risk adjusted martingale measure is equivalent to

the absence of arbitrage opportunities. We now show formally all these properties in the setting

of a binomial pricing model.

3.1 Stock Prices, Risk Neutral Probability Measures and Martingales

Discounted stock prices are defined next for completeness.

Definition 74 The stochastic process (St/Bt)t=0,..,n is called the discounted stock price process.

The terminology of Definition 74 is obvious, since one has for any t ∈ {0, .., n} (recall the normalization B0 = 1):

St/Bt = St/(1 + r)^t .

The next proposition shows that under the risk adjusted measure P̃ discounted stock prices are

martingales.


Proposition 75 The discounted stock price process (St/Bt , Gt)t=0,...,n is a martingale under the probability P̃ .

Proof. Since Bt is deterministic, St/Bt is Gt−measurable if and only if St is Gt−measurable.

Therefore (St/Bt,Gt)t=0,...,n is an adapted process. Moreover,

Ẽ (St+1/Bt+1 | Gt) = Ẽ ((St/Bt+1) · (St+1/St) | Gt) = (St/Bt) · (1/(1 + r)) · Ẽ (St+1/St | Gt) ,

using the Gt−measurability of St/Bt+1. Therefore, (St/Bt,Gt)t=0,...,n is a martingale under P̃ if and only if

Ẽ (St+1/St | Gt) = 1 + r .

Indeed, we have

Ẽ (St+1/St | Gt) = Ẽ (St+1/St) = p̃u + (1− p̃) d = 1 + r ,

using in the first equality the independence of the binomial increments and in the second equality

definition (13). This concludes the proof.

3.2 Self Financing Strategies, Risk Neutral Probability Measures and Martingales

The martingale property for discounted stock prices under P̃ is valid more generally for dynamic

portfolios that are self-financed, i.e. portfolios that are rebalanced at any time using only past

capital gains. This fact is very important for the pricing of a derivative, because hedging portfolios

are a particular case of a self-financing portfolio.

Definition 76 (i) An adapted process (∆t,Gt)t=0,...,n with value process (Xt)t=0,...,n defines a

portfolio process if ∆t is the number of stocks held at time t in the portfolio and (Xt −∆tSt) /Bt is the


number of units of the money account. (ii) A portfolio process (∆t,Gt)t=0,...,n is self-financed if

Xt+1 = ∆tSt+1 + (Xt −∆tSt) (1 + r) .

(iii) A self-financed portfolio is an arbitrage opportunity if X0 = 0 and

Xn ≥ 0 , P (Xn > 0) > 0 .

Let us remark on a few important aspects of the above definition of a self-financed portfolio.

Remark 77 (i) The adaptedness condition on a portfolio process ensures that at any time t the

number of stocks in the portfolio is determined using only information available at that time. (ii)

The self-financing condition of a self-financing portfolio says that the portfolio value Xt+1 at any

time t+1 must be obtained as the sum of the stock and money account positions at time t evaluated

at t + 1 prices:

Xt+1 = ∆t · St+1 + (Xt −∆tSt) (1 + r) = ∆t · St+1 + ((Xt −∆tSt)/Bt) · Bt+1 ,

where ∆t is the number of stocks at time t, St+1 the stock price at time t + 1, (Xt −∆tSt)/Bt the number of bonds (units of the money account) at time t, and Bt+1 the bond price at time t + 1.

(iii) The self-financing condition of a self-financed portfolio already implies that the value process

(Xt,Gt)t=0,...,n of a self-financed portfolio is adapted. Indeed, for any t = 0, .., n− 1,

Xt+1 = ∆tSt+1 + (Xt −∆tSt) (1 + r) ,

i.e. Xt+1 is a linear combination of random variables that are Gt+1−measurable, and is therefore

Gt+1−measurable. (iv) An arbitrage portfolio is simply a self-financed strategy of zero initial cost

and with non negative and non zero final value.

The martingale property under P̃ of discounted value processes of a self-financed portfolio is

proved next.

Proposition 78 The discounted portfolio value process (Xt/Bt , Gt)t=0,...,n of any self-financed portfolio process (∆t,Gt)t=0,...,n is a martingale under the probability P̃ .


Proof. We already showed that (Xt,Gt)t=0,...,n is an adapted process. Therefore, it remains to

show that the martingale condition for the discounted value process (Xt/Bt,Gt)t=0,...,n is satisfied.

We have:

Ẽ (Xt+1/Bt+1 | Gt) = Ẽ ((∆t (St+1 − St (1 + r)) + Xt (1 + r)) /Bt+1 | Gt)
                  = (∆t/Bt+1) Ẽ (St+1 − St (1 + r) | Gt) + Ẽ (Xt/Bt | Gt)
                  = (∆t/Bt+1) Ẽ (St+1 − St (1 + r) | Gt) + Xt/Bt ,

using in the first equality the self-financing definition, in the second the Gt−measurability of

∆t/Bt+1 and the linearity of conditional expectations, and in the third the Gt−measurability of

Xt/Bt. Thus, (Xt/Bt,Gt)t=0,...,n is a martingale if and only if

Ẽ (St+1 − St (1 + r) | Gt) = 0 ,

i.e. if and only if

Ẽ (St+1/St | Gt) = 1 + r .

This is precisely what we have shown in the proof of Proposition 75. Therefore, the proof is

completed.

3.3 Existence of Risk Neutral Probability Measures and Derivatives Pricing

Self financed portfolios are precisely the type of dynamic portfolios that can be used to hedge

derivatives. Indeed, the self-financing condition implies that if we are able to fully replicate a

contingent claim by means of a self-financed portfolio then we are also able to fully eliminate the

risk deriving from the random pay-off of the contingent claim.

Definition 79 (i) A European derivative VT with maturity T ∈ I is a GT−measurable random

variable. (ii) A European derivative VT is hedgeable if there exists a self-financed portfolio process


(∆t,Gt)t=0,...,n with value process (Xt,Gt)t=0,...,n such that

XT = VT .

Notice that if a European contingent claim VT is hedgeable, then absence of arbitrage opportunities immediately implies that its price is the value of the corresponding hedging portfolio. We

state this important fact in the next Proposition under point (i) for completeness.

Proposition 80 (i) If a European contingent claim VT is hedgeable, then in the absence of arbitrage opportunities we have for any t ∈ I:

Vt = Xt .

(ii) The risk neutral valuation formula is obtained:

Vt = (Bt/BT ) Ẽ (VT | Gt) = (1 + r)^{−(T−t)} Ẽ (VT | Gt) .

Proof. (i) Without loss of generality assume that V0 > X0, in order to imply a contradiction

with the no arbitrage assumption. Then, a portfolio long one unit of the hedging portfolio and short one unit of the derivative at time 0 costs X0 − V0 < 0, i.e. it generates a cash inflow of V0 − X0 > 0 at time 0. Maintaining the short derivative position until maturity and rebalancing the hedging portfolio according to its self-financing dynamics yields a pay-off XT − VT = 0 at maturity T . Investing the amount V0 − X0 in the money account yields a final pay-off (V0 − X0) (1 + r)^T > 0 at maturity, i.e. an arbitrage opportunity. Therefore, one

must have V0 = X0. (ii) We have, by (i) and the martingale property of (Xt/Bt,Gt)t=0,...,n under P̃ (see also Proposition 78):

Vt/Bt = Xt/Bt = Ẽ (XT /BT | Gt) = Ẽ (VT /BT | Gt) ,

where the last equality arises because (∆t,Gt)t=0,...,n is a hedging portfolio for VT . This concludes the proof.
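As an illustration of the risk neutral valuation formula, the following sketch (an illustrative implementation with arbitrary parameter values; not part of the original notes) prices a European call in the n−period binomial model by computing (1 + r)^{−T} Ẽ (VT ) directly, summing the pay-off over all coin-toss paths weighted by the risk adjusted probabilities.

from itertools import product

# Hypothetical parameters satisfying u > 1+r > d and u = 1/d.
u, r, S0, K, T = 1.1, 0.02, 100.0, 100.0, 5
d = 1 / u
p_tilde = (1 + r - d) / (u - d)          # risk adjusted probability (13)

def price_european(payoff):
    """Time-0 price (1+r)^(-T) * E~[payoff(S_T)] by summing over all paths."""
    value = 0.0
    for path in product("HT", repeat=T):
        n_up = path.count("H")
        S_T = S0 * u**n_up * d**(T - n_up)
        prob = p_tilde**n_up * (1 - p_tilde)**(T - n_up)
        value += prob * payoff(S_T)
    return value / (1 + r)**T

call = price_european(lambda s: max(s - K, 0.0))
print(f"binomial risk neutral call price: {call:.4f}")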


The implication of the results in this section is that any hedgeable derivative has a price given

by a risk neutral valuation formula. But, when is a derivative hedgeable? This is discussed in the

next section.

3.4 Uniqueness of Risk Neutral Probability Measures and Derivatives Hedging

Can a simple European derivative always be hedged? The answer depends on the pricing model

used. For instance, in the standard binomial model this is the case. On the other hand, in the

discrete time/continuous state space model in the next chapter this is not the case.

Basically, the answer depends on the relation between the number of basic instruments available

to construct a hedging portfolio and the number of independent risk factors in the model. Roughly

speaking, if the number of available instruments is sufficiently large then every contingent claim is

perfectly hedgeable and thus obtains a unique price. Models that satisfy this property are called

complete.

Definition 81 In the absence of arbitrage opportunities a pricing model is complete if for any

European derivative VT there exists a hedging portfolio strategy (∆t,Gt)t=0,...,n for VT with value

process (Xt,Gt)t=0,...,n and such that

XT = VT .

As mentioned, the standard binomial model is complete. This statement is made precise in

the next result.

Theorem 82 The binomial model is complete. Precisely, for any European derivative VT there

exists a hedging portfolio strategy (∆t,Gt)t=0,..,T−1 for VT with value process (Xt,Gt)t=0,..,T−1.

For any t = 0, .., T the value Xt at time t is given by

Xt = Bt Ẽ (VT /BT | Gt)


and for any t = 0, .., T − 1 the stock position ∆t is given by

∆t = (Vt+1 (ω1, .., ωt, H)− Vt+1 (ω1, .., ωt, T )) / (St+1 (ω1, .., ωt, H)− St+1 (ω1, .., ωt, T )) . (14)

Proof. Define a self-financed portfolio process with stock position at time t given by ∆t in

(14) and with value process dynamics given recursively by

Xt+1 = ∆tSt+1 + (Xt −∆tSt) (1 + r) ,

where

X0 = Ẽ (VT /BT | G0) .

We show that for any t = 0, .., T one has

Xt = Vt := Bt Ẽ (VT /BT | Gt) .

This statement is correct by construction for t = 0. Thus, assume it is correct for some t < T ,

that is

Xt = Vt = Bt Ẽ (VT /BT | Gt) .

We show by induction that then it is correct also for t + 1, i.e. that

Xt+1 (ω1, .., ωt,H) = Vt+1 (ω1, .., ωt, H)

Xt+1 (ω1, .., ωt, T ) = Vt+1 (ω1, .., ωt, T ) .

For brevity, we show the first of these two equalities. The second follows in a similar way. The

self-financing condition gives:

Xt+1 (H) = ((Vt+1 (H)− Vt+1 (T )) / (St+1 (H)− St+1 (T ))) (St+1 (H)− St (1 + r)) + Vt (1 + r) ,

using the definition (14) of ∆t. Moreover, we know that (Vt/Bt,Gt)t=0,...,n is a martingale under P̃ , implying

Vt (1 + r) = Vt Bt+1/Bt = Ẽ (Vt+1| Gt) = p̃ Vt+1 (H) + (1− p̃) Vt+1 (T ) .


We thus obtain

Xt+1 (H) = ((Vt+1 (H)− Vt+1 (T )) / (St+1 (H)− St+1 (T ))) (St+1 (H)− St (1 + r)) + Ẽ (Vt+1| Gt)
         = ((Vt+1 (H)− Vt+1 (T )) / ((u− d)St)) (u− (1 + r)) St + Ẽ (Vt+1| Gt)
         = ((u− (1 + r)) / (u− d)) (Vt+1 (H)− Vt+1 (T )) + (p̃ Vt+1 (H) + (1− p̃) Vt+1 (T ))
         = (1− p̃) (Vt+1 (H)− Vt+1 (T )) + (p̃ Vt+1 (H) + (1− p̃) Vt+1 (T ))
         = Vt+1 (H) ,

using in the last equality the definition

p̃ = (1 + r − d)/(u− d) .

Finally, for t = T we obtain

XT = BT Ẽ (VT /BT | GT ) = Ẽ (VT | GT ) = VT ,

almost surely, i.e. (∆t,Gt)t=0,..,T−1 is a hedge portfolio, as claimed. This concludes the proof.
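The backward induction behind Theorem 82 translates directly into an algorithm. The sketch below (an illustrative implementation with arbitrary parameters; not part of the original notes) computes the value process on the binomial tree by discounted risk neutral one-step expectations and the hedge ratio (14), and then verifies on every path that the self-financed portfolio ends at XT = VT.

from itertools import product

u, r, S0, K, T = 1.1, 0.02, 100.0, 100.0, 4   # arbitrary illustrative parameters
d = 1 / u
p_t = (1 + r - d) / (u - d)                    # risk adjusted probability (13)

def S(path):                                   # stock price after the tosses in path
    return S0 * u**path.count("H") * d**path.count("T")

# Value process by backward induction: V_t = (p~ V_{t+1}(H) + (1-p~) V_{t+1}(T)) / (1+r)
V = {path: max(S(path) - K, 0.0) for path in product("HT", repeat=T)}
for t in range(T - 1, -1, -1):
    for path in product("HT", repeat=t):
        V[path] = (p_t * V[path + ("H",)] + (1 - p_t) * V[path + ("T",)]) / (1 + r)

def delta(path):                               # hedge ratio (14)
    return (V[path + ("H",)] - V[path + ("T",)]) / (S(path + ("H",)) - S(path + ("T",)))

# Replication check: the self-financed portfolio with X_0 = V_0 ends at V_T on every path
for path in product("HT", repeat=T):
    X = V[()]
    for t in range(T):
        prefix, nxt = path[:t], path[:t + 1]
        X = delta(prefix) * S(nxt) + (X - delta(prefix) * S(prefix)) * (1 + r)
    assert abs(X - V[path]) < 1e-10
print(f"call value V_0 = {V[()]:.4f}; replication verified on all {2**T} paths")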

3.5 Existence of Risk Neutral Probability Measures and Absence of Arbitrage

4 Introduction to Stochastic Processes

For the whole section let (Ω,G, P ) be a probability space.

4.1 Basic Definitions

A stochastic process is a mathematical model describing the realizations of a random experiment at different dates, defined on a time index set I, as for instance I = {0, 1, 2, .., n}, I = N, or I = [0,∞).


Definition 83 Let I be a time index set. (i) A family X := (Xt)t∈I of random variables on

(Ω,G, P ) is called a stochastic process. (ii) If I is countable the process is called a discrete-time

stochastic process. If it is uncountable the process is called a continuous-time stochastic process.

(iii) For any ω ∈ Ω the real valued function t 7−→ Xt (ω) is called a trajectory of the process.

Remark 84 (i) The important thing to note in the above definition is that all random variables

in the family X are defined on a single probability space (Ω,G, P ). In fact, when constructing

a stochastic process satisfying a set of a priori desirable properties one will have to construct

a family X of random variables defined on a single probability space (Ω,G, P ). This puts quite

strong restrictions on the way how stochastic processes can be obtained. (ii) We can also think of

a stochastic process X as a (measurable) function

X : Ω× I → R ; (ω, t) 7→ Xt (ω) ,

i.e. as a random variable defined on a measurable space with sample space Ω× I.

The concepts of a filtration, of an adapted process and of a martingale extend in a natural way to continuous time stochastic processes.

Definition 85 (i) A family G := (Gt)t∈I of sub sigma algebras of G is a filtration if for any

t, s ∈ I

Gt ⊂ Gs ⇔ t < s .

(ii) A stochastic process X := (Xt)t∈I is G−adapted if for any t ∈ I the random variable Xt is

Gt−measurable. (iii) An adapted process (Xt,Gt)t∈I is a martingale if for any t, s ∈ I such that

s > t the martingale condition

Xt = E (Xs| Gt) ,

is satisfied.


4.2 Discrete Time Brownian Motion

A first example of a discrete time process with continuous state space is a random walk process

with normally distributed innovations. This is the discrete time analogue of the continuous time

Brownian motion process.

Example 86 (Discrete Time Brownian Motion). Let Y := (Yt)t=0,..,n be a sequence of iid N (0, 1) random variables² on (Ω,G, P ). We define Z0 = 0 and

Zt = ∑_{i=1}^{t} Yi .

(Zt)t=0,..,n is a random walk with normally distributed innovations and is the discrete time analogue of the (continuous time) Brownian motion process. We immediately have the following properties of discrete time Brownian motion:

Zt ∼ N (0, t) , (15)

i.e. Zt is normally distributed with mean zero and a variance increasing proportionally with time, and for s > t

Zs − Zt ⊥ σ (Zk; k ≤ t) = σ (Yk; k ≤ t) , (16)

i.e. increments of Brownian motions are independent of the past and current history of the process.

Finally, we also have for any two time points s > t:

Cov (Zt, Zs) = E (ZtZs)− E (Zt)E (Zs)
            = E ((Zs − Zt + Zt)Zt)
            = E ((Zs − Zt) Zt) + E (Zt²)
            = E (Zs − Zt)E (Zt) + V ar (Zt)
            = min (t, s) ,

2 This defines already a discrete time stochastic process.


using in the second equality the zero mean property of Brownian motion, in the third the independence of its increments and in the fourth again the zero mean property. The same result arises for the case s < t. Therefore:

Cov (Zt, Zs) = min (t, s) .

Further, define

Gt := σ (Y1, .., Yt) ,

the sigma algebra generated by the process Y up to time t. Notice that we have:

Y1 = Z1 , Y2 = Z2 − Z1 , Y3 = Z3 − Z2, ..., Yn = Zn − Zn−1

Therefore, the sigma algebras generated by Y and by (Zt)t=0,..,n up to a given time are the same.

This implies that Zt is Gt−measurable and that (Zt)t=0,..,n is (Gt)t=0,..,n adapted. Further, for

any time indices s > t we have:

E (Zs| Gt) = E (Zs − Zt + Zt| Gt) = E (Zs − Zt| Gt) + Zt = Zt ,

i.e. discrete time Brownian motion is a martingale process.
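These properties are easy to check by simulation. The sketch below (illustrative, with an arbitrary horizon and sample size; not part of the original notes) simulates many paths of the discrete time Brownian motion and compares the sample variance of Zt and the sample covariance Cov (Zt, Zs) with t and min (t, s).

import numpy as np

rng = np.random.default_rng(1)
n, n_paths = 20, 100_000

Y = rng.standard_normal((n_paths, n))       # iid N(0,1) innovations Y_1, .., Y_n
Z = np.cumsum(Y, axis=1)                    # Z_t = Y_1 + ... + Y_t, one path per row

t, s = 5, 12                                # arbitrary time points (1-based)
print("Var(Z_t)     :", Z[:, t-1].var(), " vs t =", t)
print("Cov(Z_t, Z_s):", np.cov(Z[:, t-1], Z[:, s-1])[0, 1], " vs min(t,s) =", min(t, s))
# Martingale check: the increment Z_s - Z_t is uncorrelated with (independent of) Z_t
print("Cov(Z_s - Z_t, Z_t):", np.cov(Z[:, s-1] - Z[:, t-1], Z[:, t-1])[0, 1], " vs 0")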

Some further examples of a martingale process are obtained by looking at some simple functionals of (discrete time) Brownian motion. The first one arises simply by recentering squared

(discrete time) Brownian motion by its variance.

Example 87 Let (Zt,Gt)t=0,..,n be a discrete time Brownian motion. For the adapted process


(Xt,Gt)t=0,..,n := (Zt², Gt)t=0,..,n it follows for s > t:

E (Xs| Gt) = E (Zs² | Gt)
           = E ((Zs − Zt + Zt)² | Gt)
           = E ((Zs − Zt)² | Gt) + E (Zt² | Gt) + E (2 (Zs − Zt)Zt | Gt)
           = E ((Zs − Zt)²) + Zt² + 2Zt E (Zs − Zt | Gt)
           = (s− t) + Xt + 2Zt E (Zs − Zt)
           = (s− t) + Xt ≥ Xt ,

using in the third equality the linearity of conditional expectations, in the fourth the independence of Brownian increments and the Gt−measurability of Zt, and in the fifth the zero mean property of Brownian motion. This implies that (Xt,Gt)t=0,..,n is a submartingale. However, by similar arguments as those listed above we see that the process (Zt² − t, Gt)t=0,..,n is a martingale.

The last example of a Brownian functional that gives a martingale process is exponential

(discrete time) Brownian motion.

Example 88 (Exponential Brownian Motion) Let (Zt,Gt)t=0,..,n be a discrete time Brownian

motion. For the adapted process (Xt,Gt)t=0,..,n defined by

Xt = exp (σZt − σ²t/2)

it follows for s > t:

E (Xs| Gt) = E (exp (σZs − σ²s/2) | Gt)
           = E (exp (σ (Zs − Zt)− σ² (s− t)/2) exp (σZt − σ²t/2) | Gt)
           = exp (σZt − σ²t/2) E (exp (σ (Zs − Zt)− σ² (s− t)/2) | Gt)
           = Xt exp (−σ² (s− t)/2) E (exp (σ (Zs − Zt))) , (17)

using in the last equality the independence of Brownian increments. Now, since Zs − Zt ∼ N (0, s− t), the expression E (exp (σ (Zs − Zt))) is the moment generating function of a N (0, s− t)


distributed random variable, evaluated at the point σ. Thus,

E (exp (σ (Zs − Zt))) = MZs−Zt (σ) = exp (σ² (s− t)/2) .

With this result, we obtain in (17) the martingale property for the process (Xt,Gt)t=0,..,n.

4.3 Girsanov Theorem: Application to a Semicontinuous Pricing Model

This section considers the pricing problem of a general European derivative in the context of an

n−period semicontinuous pricing model.

4.3.1 A Semicontinuous Pricing Model

The model structure is:

• I := {0, 1, 2, .., n} is a discrete time index representing the available transaction dates in the model

• The sample space is given by Ω := Rn with single outcomes ω of the form

ω = (ω1, ω2, .., ωn) ,

where ωi ∈ R, i = 1, . . . , n.

• G := B (Rn) the Borel sigma algebra on Rn

• Dynamics of the stock price and money account:

St = St−1 exp (σYt − σ²/2) exp (µ) , (18)

Bt = exp (r) Bt−1 ,

for some µ, r, σ > 0, for given B0 = 1, S0 and where (Yt)t=1,...,n is an iid N (0, 1) sequence

of random variables on (Rn,B (Rn)).

• (Gt)t=0,...,n is the filtration generated by (Yt)t=1,...,n, which coincides with the filtration

generated by the stock price process (St)t=0,...,n.


• A probability P on (Ω,G) such that (Yt)t=1,...,n is an iid N (0, 1) sequence.

Some simple properties of the above asset price dynamics can be immediately deduced from the

above definitions. Firstly, the above money account dynamics gives:

Bt = exp (rt) ,

implying a continuous interest rate compounding. Secondly, for the risky asset price dynamics we

get:

E (Ss/St | Gt) = E (exp (σ ∑_{i=t+1}^{s} Yi − σ² (s− t)/2) exp (µ (s− t)) | Gt)
              = E (exp (σ (Zs − Zt)− σ² (s− t)/2) | Gt) exp (µ (s− t))
              = exp (µ (s− t)) ,

because exponential Brownian motion is a martingale process. Therefore, exp (µ (s− t)) is the

expected rate of return on the stock, or alternatively

µ = (log E (Ss| Gt)− log St) / (s− t) ,

is the continuous expected rate of return on the stock. Similarly, for the variance of logarithmic stock returns one gets

V ar (log (Ss/St) | Gt) = V ar (σ (Zs − Zt)− σ² (s− t)/2 | Gt) = σ² V ar (Zs − Zt) = σ² (s− t) ,

i.e.

σ² = V ar (log (Ss/St) | Gt) / (s− t) ,

is the continuous variance rate on the stock.
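A small simulation can confirm these two moment properties. The sketch below (illustrative parameter values; not part of the original notes) simulates the stock dynamics (18) and compares the sample mean of Ss/St and the sample variance of log (Ss/St) with exp (µ (s− t)) and σ² (s− t).

import numpy as np

rng = np.random.default_rng(2)
mu, sigma, S0 = 0.08, 0.2, 100.0              # arbitrary illustrative parameters
n, n_paths = 10, 200_000

Y = rng.standard_normal((n_paths, n))
# S_t = S_{t-1} exp(sigma*Y_t - sigma^2/2) exp(mu), cf. (18)
log_S = np.log(S0) + np.cumsum(sigma * Y - sigma**2 / 2 + mu, axis=1)
S = np.exp(log_S)

t, s = 3, 8                                   # arbitrary dates (1-based columns)
gross = S[:, s-1] / S[:, t-1]
print("E(S_s/S_t)        :", gross.mean(), " vs exp(mu(s-t)) =", np.exp(mu * (s - t)))
print("Var(log(S_s/S_t)) :", np.log(gross).var(), " vs sigma^2(s-t) =", sigma**2 * (s - t))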

4.3.2 Risk Neutral Valuation in the Semicontinuous Model

In the semicontinuous model it is not possible to hedge perfectly any European derivative using

a suitable self-financed hedge portfolio, i.e. this model is not complete. Therefore, the derivation/computation of a suitable risk neutral probability measure for pricing derivatives must follow


other arguments than those adopted in the binomial model setting. Fortunately, a powerful theorem from the theory of stochastic processes, namely Girsanov Theorem, can assist us in constructing such probability measures by using purely probabilistic arguments. We first give for completeness some basic definitions which are the pendant of Definition 76 for the semicontinuous model setting.

Definition 89 (i) An adapted process (∆t,Gt)t=0,...,n with value process (Xt)t=0,...,n defines a

portfolio process if ∆t is the number of stocks held at time t in the portfolio and (Xt −∆tSt) /Bt is the

number of units of the money account. (ii) A portfolio process (∆t,Gt)t=0,...,n is self-financed if

Xt+1 = ∆tSt+1 + (Xt −∆tSt) exp (r) .

(iii) A self-financed portfolio is an arbitrage opportunity if X0 = 0 and

Xn ≥ 0 , P (Xn > 0) > 0 .

(iv) A probability P̃ on the measurable space (Ω,G) which is equivalent to P and such that the discounted price process (St/Bt,Gt)t=0,...,n is a martingale under P̃ , is called a risk adjusted (risk neutral)

probability measure.

Remark 90 Notice that self-financed portfolios and arbitrage strategies in the semicontinuous

model are defined exactly as in the earlier binomial setting, with the only difference that now

interest rates are continuously compounded. By contrast, a risk neutral probability measure P̃ in

the semicontinuous model is explicitly required to be equivalent to the physical probability P . This

ensures that the null sets of these two probabilities coincide. This property was by construction

satisfied in the binomial model, where no null probability events - apart from the trivial empty set

- could arise.

To highlight the properties that a risk neutral measure in the semicontinuous model should

have we consider again the discounted price process (St/Bt,Gt)t=0,...,n, which is given by

St+1/Bt+1 = St exp (σYt+1 − σ²/2 + µ) / (Bt exp (r)) = (St/Bt) exp (σYt+1 − σ²/2 + µ− r) .


Therefore,

Ẽ (St+1/Bt+1 | Gt) = (St/Bt) Ẽ (exp (σYt+1 − σ²/2 + µ− r) | Gt)
                   = (St/Bt) Ẽ (exp (σ (Yt+1 + (µ− r)/σ)) | Gt) exp (−σ²/2)
                   = (St/Bt) Ẽ (exp (σ (Yt+1 + θ)) | Gt) exp (−σ²/2) , (19)

where

θ = (µ− r)/σ ,

is the so called market price of risk. Recall that if under P̃ the random variable Ỹt+1 := Yt+1 + θ is both standard normally distributed and independent of Gt then:

Ẽ (exp (σ (Yt+1 + θ)) | Gt) = Ẽ (exp (σỸt+1)) = exp (σ²/2) ,

and the martingale property follows for time t. Thus, if under a probability P̃ the process (Ỹt)t=1,...,n = (Yt + θ)t=1,...,n is an iid N (0, 1) random sequence, then under P̃ the discounted stock price process is a martingale and P̃ is a risk neutral measure. This is equivalent to stating that under P̃ the process (Z̃t)t=0,..,n given by

Z̃t = Zt + θt = ∑_{i=1}^{t} (Yi + θ) = ∑_{i=1}^{t} Ỹi ,

is a discrete time Brownian motion. Notice that under the physical probability P the process (Z̃t)t=0,..,n is not a Brownian motion but a so called Brownian motion with drift. Therefore, the probability P̃ "reconverts" a process which is a Brownian motion with drift under P into a standard Brownian motion.

How can we construct such a probability measure P̃? The answer is provided by Girsanov’s

Theorem.

4.3.3 A Discrete Time Formulation of Girsanov Theorem

A discrete time formulation of the famous Girsanov Theorem is proved in the sequel.


Theorem 91 Let (Ω,G, P ) be a probability space such that the process (Yt)t=1,...,n is an iid N (0, 1) random sequence (or equivalently the process (Zt)t=0,...,n is a discrete time Brownian motion). Define a further measure P̃ on (Ω,G) by

P̃ (A) = ∫_A (dP̃/dP) dP , A ∈ G , (20)

where

dP̃/dP = exp (−θ ∑_{t=1}^{n} Yt − θ²n/2) = exp (−θZn − θ²n/2) . (21)

It then follows:

1. P̃ is a probability measure equivalent to P ,

2. The process (Ỹt)t=1,...,n = (Yt + θ)t=1,...,n is an iid N (0, 1) random sequence under P̃ ,

3. The process (Z̃t)t=0,..,n = (Zt + θt)t=0,..,n is a discrete time Brownian motion under P̃ .

Proof. 1. By the properties of Lebesgue integrals P̃ is a measure on (Ω,G). Moreover, we have

∫_Ω (dP̃/dP) dP = E (dP̃/dP) = E (exp (−θZn − θ²n/2)) = E (exp (−θZn)) exp (−θ²n/2) .

Recall that Zn ∼ N (0, n) under P . Therefore,

E (exp (−θZn)) = exp (θ²n/2) ,

by the properties of moment generating functions of normally distributed random variables, implying ∫_Ω (dP̃/dP) dP = 1. Thus, dP̃/dP is a strictly positive proper density and P̃ is a probability measure

equivalent to P . 2. To show that under P̃ the random sequence (Ỹt)t=1,...,n := (Yt + θ)t=1,...,n is iid N (0, 1), let us denote by LỸ1,..,Ỹn and LY1,..,Yn (L̃Ỹ1,..,Ỹn and L̃Y1,..,Yn) the distributions induced by Ỹ1, .., Ỹn and Y1, .., Yn, respectively, on (Rn,B (Rn)) under P (under P̃ ), that is

LỸ1,..,Ỹn (B) = P ((Ỹ1, .., Ỹn) ∈ B) , LY1,..,Yn (B) = P ((Y1, .., Yn) ∈ B) ,

L̃Ỹ1,..,Ỹn (B) = P̃ ((Ỹ1, .., Ỹn) ∈ B) , L̃Y1,..,Yn (B) = P̃ ((Y1, .., Yn) ∈ B) ,


for any B ∈ B (Rn). We have, for any B ∈ B (Rn):

L̃Ỹ1,..,Ỹn (B) = P̃ ((Ỹ1, .., Ỹn) ∈ B) = P̃ ((Y1, .., Yn) ∈ B − θ) = L̃Y1,..,Yn (B − θ) ,

where B − θ := {(x1 − θ, .., xn − θ) : x ∈ B}, because for any i = 1, . . . , n, it follows Yi = Ỹi − θ.

Further, recall that the joint distribution of (Yt)t=1,...,n under P is iid N (0, 1), i.e.

LY1,..,Yn (B) = ∫_B (2π)^{−n/2} exp (−(1/2) ∑_{t=1}^{n} yt²) dy1 · · · dyn .

Therefore, we obtain

L̃Y1,..,Yn (B − θ) = ∫_{B−θ} exp (−θ ∑_{t=1}^{n} yt − θ²n/2) dLY1,..,Yn (y1, .., yn)
                  = ∫_{B−θ} exp (−θ ∑_{t=1}^{n} yt − θ²n/2) (2π)^{−n/2} exp (−(1/2) ∑_{t=1}^{n} yt²) dy1 · · · dyn
                  = ∫_B exp (−θ ∑_{t=1}^{n} (yt − θ)− θ²n/2) (2π)^{−n/2} exp (−(1/2) ∑_{t=1}^{n} (yt − θ)²) dy1 · · · dyn
                  = ∫_B (2π)^{−n/2} exp (−(1/2) ∑_{t=1}^{n} yt²) dy1 · · · dyn .

Since the function

f (y) = (2π)^{−n/2} exp (−(1/2) ∑_{t=1}^{n} yt²) ,

is the joint density function of an iid sequence of N (0, 1) random variables, we have shown that L̃Ỹ1,..,Ỹn is such a normal distribution on (Rn,B (Rn)), as claimed. Statement 3. follows directly from statement 2.

Using Girsanov theorem, we are now able to give a risk neutral probability measure for the

above semicontinuous pricing model. We summarize this finding in the next corollary.

Corollary 92 In the semicontinuous pricing model with stock price dynamics (18) a risk adjusted martingale measure P̃ on (Ω,G) is obtained by setting for any A ∈ G,

P̃ (A) := ∫_A exp (−θZn − θ²n/2) dP ,


where

θ = (µ− r)/σ ,

is the market price of risk in the model.
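The change of measure in Theorem 91 can be illustrated numerically: reweighting draws from P by the density (21) reproduces expectations taken under P̃, under which Zn + θn is again centred with variance n. The sketch below (illustrative values of θ and n; not part of the original notes) checks this by Monte Carlo.

import numpy as np

rng = np.random.default_rng(3)
theta, n, n_paths = 0.4, 10, 500_000          # arbitrary illustrative values

Y = rng.standard_normal((n_paths, n))         # iid N(0,1) under P
Z_n = Y.sum(axis=1)
density = np.exp(-theta * Z_n - theta**2 * n / 2)   # dP~/dP, equation (21)

print("E(dP~/dP)            :", density.mean(), " (should be ~1)")
# Under P~, Z~_n = Z_n + theta*n is N(0, n); check mean and second moment by reweighting
Z_tilde = Z_n + theta * n
print("E~(Z~_n)   via dP~/dP:", (density * Z_tilde).mean(), " (should be ~0)")
print("E~(Z~_n^2) via dP~/dP:", (density * Z_tilde**2).mean(), " vs n =", n)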

Inspired by the previous results in the Binomial model, we are now tempted to price European derivatives also in the semicontinuous model by means of a risk neutral valuation formula under P̃ . Notice that since the model is incomplete there could exist more than one risk neutral probability measure for this setting, in addition to the just identified probability measure P̃ . This raises the problem of finding adequate criteria for selecting one of these probabilities to price contingent claims in incomplete markets. Nevertheless, using P̃ we can still compute, at least formally, the corresponding risk neutral pricing formula as a specific expectation. Moreover, we can define the t−time price of a European derivative VT in the semicontinuous model as

Vt := (Bt/BT ) Ẽ (VT | Gt) = exp (−r (T − t)) Ẽ (VT | Gt) .

In the case of a call pay-off VT = (ST −K)+ this yields the famous Black-Scholes pricing formula

using pure probabilistic arguments. In order to motivate this pricing formula completely, we will

have to work out - in a later section - a model where trading can evolve in continuous time, the

Black and Scholes model. In this setting we will also be able to construct hedging strategies for

any European derivative in the model and to show that the above pricing approach is the only

one consistent with the absence of arbitrage opportunities in the Black and Scholes model. To

this end we will have to introduce some continuous time stochastic processes more explicitly and

to develop a stochastic integral calculus, Ito’s calculus, where integrals are defined with respect

to increments of a continuous time Brownian motion (see below).

Before doing that, we conclude this section by computing the Black-Scholes formula in the semicontinuous model by means of a risk neutral valuation formula under P̃ .


4.3.4 A Discrete Time Derivation of Black and Scholes Formula

The famous Black and Scholes pricing formula for a European call option arises in our semicontinuous setting as the discounted risk neutral expectation of the call pay-off at maturity. This is

shown in the next result.

Proposition 93 (Black and Scholes Call Price Formula) The time 0 discounted risk neutral expectation of the call pay-off (ST −K)+ of a call option with maturity T and strike price K is given by:

(B0/BT ) Ẽ ((ST −K)+) = S0 N (d1)−K exp (−rT ) N (d2) ,

where

d1 = (log (S0/K) + (r + σ²/2) T ) / (σ√T) , d2 = d1 − σ√T .

Proof. Writing

(ST −K)+ = (ST −K)1{ST >K} = ST 1{ST >K} −K 1{ST >K} ,

we have

Ẽ ((ST −K)+) = Ẽ ((ST −K)1{ST >K}) = Ẽ (ST 1{ST >K})−K Ẽ (1{ST >K}) . (22)

Moreover,

Ẽ (1{ST >K}) = P̃ (ST > K) .

To compute this probability notice that we have:

{ST > K} = {log (ST /S0) > log (K/S0)} .

Moreover, using the explicit form for the stock price dynamics in the semicontinuous model,

log (ST /S0) = σZT − (σ²/2− µ) T = σ (ZT + θT )− (σ²/2− r) T ,


where θ = (µ− r) /σ, it follows

{ST > K} = {σ (ZT + θT ) > log (K/S0) + (σ²/2− r) T}
         = {(ZT + θT )/√T > (log (K/S0) + (σ²/2− r) T )/(σ√T)}
         = {−(ZT + θT )/√T < (log (S0/K)− (σ²/2− r) T )/(σ√T)}
         = {−(ZT + θT )/√T < d1 − σ√T}
         = {−(ZT + θT )/√T < d2} .

Now, notice that by Girsanov theorem ZT + θT is, under the probability measure P̃ , the time T value of a standard (discrete time) Brownian motion, so that

−(ZT + θT )/√T ∼ N (0, 1) under P̃ .

Therefore,

P̃ (ST > K) = P̃ (−(ZT + θT )/√T < d2) = N (d2) . (23)

We now compute the first term in the difference on the RHS of (22), discounted by BT = exp (rT ).

We have:

Ẽ ((ST /BT )1{ST >K}) = ∫_{ST >K} (ST /BT ) dP̃ = S0 ∫_{ST >K} (ST /(S0BT )) dP̃ = S0 ∫_{ST >K} exp (σ (ZT + θT )− σ²T/2) dP̃ .

Now, recall that (see above)

{ST > K} = {ZT + θT > −d2 √T} ,

and that under P̃ we have

ZT + θT ∼ N (0, T ) .


This gives

Ẽ ((ST /BT )1{ST >K}) = S0 ∫_{ST >K} exp (σ (ZT + θT )− σ²T/2) dP̃
                     = S0 ∫_{−d2√T}^{∞} exp (σz − σ²T/2) (1/√(2πT)) exp (−(1/2)(z/√T)²) dz
                     = S0 ∫_{−d2√T}^{∞} (1/√(2πT)) exp (−(1/2)((z − σT)/√T)²) dz . (24)

Finally, notice that

∫_{−d2√T}^{∞} (1/√(2πT)) exp (−(1/2)((z − σT)/√T)²) dz

is the probability of the interval [−d2√T , ∞) under a N (σT, T ) distribution, which is the same as the probability of the interval [−d2 − σ√T , ∞) under a N (0, 1) distribution. By symmetry, this is also equal to the probability of the interval (−∞, d2 + σ√T ] under a N (0, 1) distribution, i.e.:

S0 ∫_{−d2√T}^{∞} (1/√(2πT)) exp (−(1/2)((z − σT)/√T)²) dz = S0 N (d2 + σ√T) = S0 N (d1) . (25)

Putting terms together we finally obtain:

(B0/BT ) Ẽ ((ST −K)+) = Ẽ ((ST /BT )1{ST >K})− (K/BT ) Ẽ (1{ST >K}) = S0 N (d1)−K exp (−rT ) N (d2) ,

from (23) and (25).
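The closed form can be cross-checked against a risk neutral Monte Carlo estimate of exp (−rT ) Ẽ (ST −K)+, obtained by simulating ZT + θT ∼ N (0, T ) directly. The sketch below (arbitrary illustrative parameters; not part of the original notes) does both.

import math
import numpy as np

S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 2.0   # arbitrary illustrative values

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Black-Scholes formula of Proposition 93
d1 = (math.log(S0 / K) + (r + sigma**2 / 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Risk neutral Monte Carlo: under P~, Z_T + theta*T ~ N(0, T) and
# S_T = S0 exp(sigma*(Z_T + theta*T) - sigma^2 T/2 + r T)
rng = np.random.default_rng(4)
Z_tilde_T = math.sqrt(T) * rng.standard_normal(1_000_000)
S_T = S0 * np.exp(sigma * Z_tilde_T - sigma**2 * T / 2 + r * T)
mc_price = math.exp(-r * T) * np.maximum(S_T - K, 0.0).mean()

print(f"Black-Scholes: {bs_price:.4f}   Monte Carlo: {mc_price:.4f}")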

4.4 Continuous Time Brownian Motion

The starting point to develop a continuous time model for the stock price is the Brownian motion

process.

Definition 94 A continuous time adapted process (Zt,Gt)t≥0 on a probability space (Ω,G, P ) is

a (standard) Brownian motion if

1. Z0 = 0


2. For any s > t it follows

Zs − Zt ∼ N (0, s− t) , Zs − Zt ⊥ Gt .

3. For any ω ∈ Ω the mapping t 7−→ Zt (ω) is continuous.

We shall speak sometimes of Brownian motion (Zt,Gt)0≤t≤T on [0, T ] for some T > 0 and the

meaning of this is apparent.

Remark 95 (i) The filtration (Gt)t≥0 is part of the definition. A natural choice of a filtration is

the one generated by the process coordinates, defined by setting

Gt = GZt := σ (σ (Zu) ; 0 ≤ u ≤ t) .

In some cases³, it is important to work with a larger filtration than the one generated by the

process. In the sequel we will assume Gt to be at least augmented, i.e., for any t ≥ 0:

Gt = σ (GZt ∪ N ) , (26)

where N is the family of all subsets of G having probability 0. Notice that GZt does not contain N , so that (GZt)t≥0 is not augmented. The augmented filtration implied by (26) is sometimes called

the natural filtration of a Brownian motion process. (ii) The fact that a probability space (Ω,G, P )

and an adapted stochastic process (Zt,Gt)t≥0 with the Brownian motion properties indeed can be

constructed is a fundamental result in probability theory. (iii) It can be shown that Brownian

motion is the only process with continuous paths and with independent stationary increments, i.e

satisfying

Zs − Zt ⊥ Gt and LZs−Zt = LZs−t−Z0 ,

3 As for instance when constructing solutions to some particular stochastic differential equations.


for any s ≥ t. (iv) It can also be shown that Brownian motion is the only martingale with

continuous paths such that for any 0 ≤ t ≤ s:

E ((Zs − Zt)² | Gt) = s− t .

The finite dimensional distributions of a Brownian motion process are easily obtained from the

definition.

Proposition 96 For any finite index set 0 ≤ t1 < t2 < ... < tn the finite dimensional distribution

of the random vector (Zt1 , Zt2 , .., Ztn)′ is Gaussian,

LZt1 ,Zt2 ,..,Ztn = N (0,Σ) ,

where Σ is the n× n covariance matrix with entries Σij = min (ti, tj), i.e. the matrix with first row (t1, t1, .., t1), second row (t1, t2, t2, .., t2), and so on, up to the last row (t1, t2, .., tn).

Proof. We have

(Zt1 , Zt2 , .., Ztn)′ = Λ (Zt1 − Z0, Zt2 − Zt1 , .., Ztn − Ztn−1)′ ,

where Λ is the n× n lower triangular matrix with all entries on and below the diagonal equal to 1, i.e. any finite vector of coordinates of a Brownian motion can be written as a linear function of a vector of Gaussian Brownian increments, and is thus also Gaussian with expectation 0 and covariance matrix

Σ = ΛDΛ′ ,


where D is the diagonal matrix

D = diag (t1, t2 − t1, t3 − t2, .., tn − tn−1) .

Explicit computation of Σ concludes the proof.

In some cases, transformations of a Brownian motion give again a Brownian motion process.

Here are some well-known examples.

Example 97 Let (Zt,Gt)t≥0 be a standard Brownian motion on a probability space (Ω,G, P ). The

following transformations give again a Brownian motion:

1. Symmetry: (−Zt,Gt)t≥0

2. Scaling: (c^{−1/2} Zct, Gct)t≥0 , for any c > 0

3. Time reversal: for given T > 0, the process (Vt, GVt)0≤t≤T defined by

Vt = ZT − ZT−t , GVt := σ (Vu; 0 ≤ u ≤ t) .

For instance, to show 3. notice first that for any ω ∈ Ω the map t 7−→ Vt (ω) = ZT (ω) − ZT−t (ω) is continuous, because it consists of sums and compositions of continuous functions

of t. Further, by definition V0 = ZT − ZT−0 = 0 and any finite dimensional distribution of

(Vt)0≤t≤T is Gaussian since coordinates of (Vt)0≤t≤T arise as simple linear transformations of

coordinates of (Zt)0≤t≤T . Finally, for any t ≤ T

E (Vt) = E (ZT − ZT−t) = 0 ,


and for any 0 ≤ u ≤ t ≤ s ≤ T

Cov (Vs, Vu) = Cov (ZT − ZT−s, ZT − ZT−u)

= T + (T − s) ∧ (T − u)− T ∧ (T − u)− (T − s) ∧ T

= T + (T − s)− (T − u)− (T − s) = u .

In particular, then

Cov (Vs − Vt, Vu) = 0 ,

for any 0 ≤ u ≤ t ≤ s ≤ T . Since any finite dimensional distribution of (Vt)0≤t≤T is Gaussian this implies Vs − Vt ⊥ σ (Vu; 0 ≤ u ≤ t) = GVt . Thus, (Vt, GVt)0≤t≤T satisfies the definition of a Brownian

motion process.

As in the discrete time setting, some simple examples of continuous martingales are obtained

by considering some specific functionals of Brownian motion. For completeness, we give in the

next results two examples that are the most relevant to our exposition.

Example 98 Let (Zt,Gt)t≥0 be a standard Brownian motion on a probability space (Ω,G, P ).

Then the processes

1. (Zt² − t, Gt)t≥0

2. (exp (σZt − σ²t/2), Gt)t≥0

are both martingales.

Proof. It is obvious that both processes are (Gt)t≥0−adapted, since they are both simple

measurable functions of Brownian motion. Moreover, the proof of the martingale property is

obtained readily with the same arguments as for the proof of the discrete time case in Examples 87 and 88.


5 Introduction to Stochastic Calculus

For the whole chapter let (Zt,Gt)t≥0 be a Brownian motion on a probability space (Ω,G, P ).

5.1 Starting Point, Motivation

To motivate the introduction of a stochastic integral consider again the discrete time dynamics

for the discounted value process (Xt/Bt)t=0,..,n of a self-financed portfolio (∆t)t=0,..,n−1:

Xt+1/Bt+1 = ∆t (St+1/Bt+1) + ((Xt −∆tSt)/Bt+1) · (Bt+1/Bt)
          = ∆t (St+1/Bt+1 − St/Bt) + Xt/Bt
          = ∆t (St+1/Bt+1 − St/Bt) + ∆t−1 (St/Bt − St−1/Bt−1) + Xt−1/Bt−1
          = ... = ∑_{i=0}^{t} ∆i (Si+1/Bi+1 − Si/Bi) + X0/B0 .

Recall that under the risk neutral measure P̃ the discounted price process (St/Bt)t=0,..,n is a martingale. Thus, we have represented the time variation of the discounted values of a self-financed portfolio as a sum of portfolio exposures (∆t)t=0,..,n−1 weighted by the increments of a martingale process under P̃ :

Xt/Bt − X0/B0 = ∑_{i=0}^{t−1} ∆i (Si+1/Bi+1 − Si/Bi) , (27)

where the left hand side is the change in the discounted portfolio value, ∆i is the portfolio exposure and (Si+1/Bi+1 − Si/Bi) is a martingale increment.

Expression (27) is an example of a so called martingale transform. Martingale transforms are the

discrete analogues of stochastic integrals in which the process (∆t)t=0,..,n−1 is used as the integrand

and the process (St/Bt)t=0,..,n is used as an integrator. Informally, we could thus introduce the

suggestive notation:

Xt/Bt − X0/B0 = ∫_0^t ∆ · d(S/B) ,

to denote such martingale transforms. In a later section this will denote a stochastic integral of

an adapted process ∆ with respect to the martingale process S/B over the time interval [0, t].


Definition 99 Let M = (Mt,Gt)t=0,..,n be a martingale and H = (Ht,Gt)t=0,..,n−1 an adapted process on a probability space (Ω,G, P ). The process X = H •M defined by

Xt = ∑_{i=0}^{t−1} Hi (Mi+1 −Mi) ,

for t > 0 and by X0 = 0 is called the martingale transform of M by H.

Example 100 (i) Equation (27) defines (Xt/Bt)t=0,..,n as the martingale transform of the P̃−martingale (St/Bt,Gt)t=0,..,n by (∆t,Gt)t=0,..,n−1. Therefore, after an appropriate change of probability measure the discounted value processes of self-financed portfolios are martingale transforms. (ii) Let (Zt,Gt)t≥0 be a Brownian motion on a probability space (Ω,G, P ) and H = (Ht,Gt)t=0,..,n−1 be an adapted process. Then the process (Xt)t=0,..,n defined by X0 = 0 and

Xt = ∑_{i=0}^{t−1} Hi (Zi+1 − Zi) ,

is the martingale transform of the Brownian motion process (Zt,Gt)t≥0 by H.

Modulo some integrability conditions (which are for example satisfied if H is a bounded process), martingale transforms are martingales. Indeed, for any s > t:

E ((H •M)s| Gt) = ∑_{i=0}^{t−1} Hi (Mi+1 −Mi) + E (∑_{i=t}^{s−1} Hi (Mi+1 −Mi) | Gt)
                = (H •M)t + ∑_{i=t}^{s−1} E (Hi E (Mi+1 −Mi| Gi) | Gt) = (H •M)t ,

since E (Mi+1 −Mi| Gi) = 0, by the tower property. In fact, we will see in a later section that under some integrability conditions on the integrand H stochastic integrals are martingale processes also in the more general continuous time setting.

Finally, notice that

E (((H •M)t)²) = E (∑_{i=0}^{t−1} Hi (Mi+1 −Mi) ∑_{j=0}^{t−1} Hj (Mj+1 −Mj))
               = ∑_{i=0}^{t−1} ∑_{j=0}^{t−1} E [HiHj (Mi+1 −Mi) (Mj+1 −Mj)] .


Now, for any i < j one has

E [HiHj (Mi+1 −Mi) (Mj+1 −Mj)] = E [Hi (Mi+1 −Mi)Hj E ((Mj+1 −Mj)| Gj)] = 0 ,

since E ((Mj+1 −Mj)| Gj) = 0, again by the tower property, and similarly for j < i. Therefore, we get

E (((H •M)t)²) = E (∑_{i=0}^{t−1} Hi² (Mi+1 −Mi)²) , (28)

i.e. a discrete time version of the so called Ito isometry (see below). Specifically, for the case

where M is a Brownian motion process, equation (28) has the simpler form

E (((H • Z)t)²) = E (∑_{i=0}^{t−1} Hi² E ((Zi+1 − Zi)² | Gi)) = E (∑_{i=0}^{t−1} Hi² (i + 1− i)) = E (∫_0^t Hs² ds) ,

writing for any ω ∈ Ω the expression ∑_{i=0}^{t−1} Hi² (ω) as a standard Lebesgue integral ∫_0^t Hs² (ω) ds.
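A simulation can illustrate both the martingale property and the discrete Ito isometry (28). The sketch below (illustrative, with a bounded adapted integrand chosen purely for the example; not part of the original notes) builds the martingale transform H • Z of a discrete time Brownian motion and compares E (((H • Z)t)²) with E (∑ Hi²).

import numpy as np

rng = np.random.default_rng(5)
n, n_paths = 20, 300_000

dZ = rng.standard_normal((n_paths, n))        # Brownian increments Z_{i+1} - Z_i
Z = np.cumsum(dZ, axis=1)

# A bounded adapted integrand: H_i depends only on the path up to time i
H = np.ones((n_paths, n))
H[:, 1:] = np.cos(Z[:, :-1])                  # H_i = cos(Z_i) for i >= 1, H_0 = 1

t = 15
X_t = (H[:, :t] * dZ[:, :t]).sum(axis=1)      # (H . Z)_t, a martingale transform
print("E((H.Z)_t)   :", X_t.mean(), " (martingale started at 0, so ~0)")
print("E((H.Z)_t^2) :", (X_t**2).mean(),
      " vs E(sum H_i^2) =", (H[:, :t]**2).sum(axis=1).mean())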

There are several important theoretical and applied settings that ask for an extension of the

martingale transform (of the discrete time stochastic integral) concept to a more general class of

integrands defined on a continuous index set and with possibly not piecewise constant paths. From

a more financially oriented perspective, extending the class of integrands suitable for a stochastic

integration with respect to a martingale will allow us to enlarge the set of self-financed portfolios

that can act as a hedging portfolio. Eventually, this will allow us to perfectly replicate some

contingent claims which in the semicontinuous model setting could not be perfectly hedged.

5.2 The Stochastic Integral

The definition and the construction of the stochastic integral for continuous time integrands with

respect to a Brownian motion process is performed in this section. Since with probability one the

trajectories of a Brownian motion process are of unbounded variation, this construction cannot

happen simply by integrating pathwise the Brownian trajectories via a standard Lebesgue–Stieltjes

integral.


5.2.1 Some Basic Preliminaries

The basic idea in constructing the stochastic integral of an adapted integrand H := (Ht,Gt)0≤t≤T

is to interpret such processes as random variables H : [0, T ]× Ω → R on the product measurable

space

([0, T ]× Ω, B ([0, T ])⊗ G) .

This space becomes a measure space ([0, T ]× Ω, B ([0, T ])⊗ G, µT ) when equipped with the product measure µT : B ([0, T ])⊗ G → [0, T ] defined by

µT (A) = E [∫_0^T 1A (t, ω) dt] = ∫_Ω (∫_0^T 1A (t, ω) dt) dP (ω) ,

for any A ∈ B ([0, T ])⊗ G. In particular, for any adapted process H := (Ht,Gt)0≤t≤T one can define the L2−norm of H as

‖H‖2,T := (∫_{[0,T ]×Ω} H² dµT)^{1/2} = (E (∫_0^T Ht² dt))^{1/2} ,

provided of course ‖H‖2,T < ∞. We call the space of measurable processes (Ht,Gt)0≤t≤T such that ‖H‖2,T < ∞ the space of square integrable random variables on ([0, T ]× Ω, B ([0, T ])⊗ G, µT ) and denote it by HT . Precisely:

HT := {B ([0, T ])⊗ G −measurable processes H such that E (∫_0^T Ht² dt) < ∞} . (29)

This space equipped with the norm ‖·‖2,T is a normed vector space.

Example 101 (i) The Brownian motion process Z := (Zt,Gt)0≤t≤T is an element of HT . Indeed,

E (∫_0^T Zt² dt) = ∫_0^T E (Zt²) dt = ∫_0^T t dt = T²/2 < ∞ .

(ii) The squared Brownian motion process Z² := (Zt², Gt)0≤t≤T is also an element of HT . Indeed,

E (∫_0^T (Zt²)² dt) = ∫_0^T E (Zt⁴) dt = ∫_0^T 3 (E (Zt²))² dt = 3 ∫_0^T t² dt = T³ < ∞ ,

using the fact that for a normally N (0, σ²)−distributed random variable X one has E (X⁴) = 3σ⁴.

)= 3σ4.

(iii) In fact, by similar arguments as the ones above any power Zk, k ∈ N, of a Brownian motion

can be shown to be an element of HT .

74

Page 76: Introduction to Probability Theory and Stochastic …Introduction to Probability Theory and Stochastic Processes for Finance∗ Lecture Notes Fabio Trojani Department of Economics,

For simplicity, we fix in the sequel T > 0 and use the notation H := HT . Convergence in the

space H means convergence in the ‖·‖2,T −norm.

Definition 102 (i) A sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{H}$ is said to converge to some process $H\in\mathcal{H}$ if and only if $\|H^n - H\|_{2,T}\to 0$ as $n\to\infty$, i.e. if and only if
$$\left(E\left(\int_0^T (H^n - H)^2\,dt\right)\right)^{\frac{1}{2}} \underset{n\to\infty}{\longrightarrow} 0 ,$$
or, equivalently, if and only if
$$E\left(\int_0^T (H^n - H)^2\,dt\right) \underset{n\to\infty}{\longrightarrow} 0 .$$
(ii) In that case we call $H$ the limit$^4$ of the sequence $(H^n)_{n\in\mathbb{N}}$ and denote it by $H = \lim_{n\to\infty} H^n$.

$^4$ Notice that if $H = \lim_{n\to\infty} H^n$, then $H$ can be modified on a set of $\mu_T$-measure 0 without affecting the value of $\|\cdot\|_{2,T}$. Therefore, every limit in $\mathcal{H}$ is in fact a class of processes that can differ on a set of $\mu_T$-measure 0.

As usual, as for instance when defining standard Lebesgue integrals, one starts from integrands that are piecewise constant with respect to some underlying variable, i.e. simple integrands, and then extends the integral definition to more general integrands by means of a suitable limit argument.

5.2.2 Simple Integrands

Simple integrands in the space $\mathcal{H}$ and stochastic integrals of simple processes are defined as follows.

Definition 103 (i) An adapted process $H = (H_t,\mathcal{G}_t)_{t\geq 0}$ is simple if there exists a partition $0 = t_0 < t_1 < \ldots < t_n = T$ of $[0,T]$ such that
$$H_t(\omega) = \xi_0(\omega)\,1_{\{0\}}(t) + \sum_{i=0}^{n-1}\xi_i(\omega)\,1_{(t_i,t_{i+1}]}(t) , \qquad \text{for all } (t,\omega)\in[0,T]\times\Omega ,$$
and for some $\mathcal{G}_{t_i}$-measurable, bounded random variables $\xi_i(\omega)$, $i = 0,\ldots,n-1$. The vector space of simple processes is denoted by $\mathcal{S}$. (ii) For a simple process $H = (H_t,\mathcal{G}_t)_{0\leq t\leq T}\in\mathcal{S}$, the stochastic integral of $H$ with respect to the Brownian motion $Z := (Z_t,\mathcal{G}_t)_{t\geq 0}$ is the martingale transform of $Z$ by $H$. That is, for any $t\in(t_k,t_{k+1}]$ and $k = 0,\ldots,n-1$, we define:
$$\int_0^t H_s\,dZ_s := \sum_{i=0}^{k-1}\xi_i\left(Z_{t_{i+1}} - Z_{t_i}\right) + \xi_k\left(Z_t - Z_{t_k}\right).$$
Sometimes we will write for brevity $\int_0^t H\,dZ := \int_0^t H_s\,dZ_s$.
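As a small numerical illustration of Definition 103 (ours, not from the notes; the partition, the truncation bound and all names are illustrative assumptions), the following Python sketch evaluates the martingale transform of a simulated Brownian path by a simple integrand with bounded, $\mathcal{G}_{t_i}$-measurable coefficients:

import numpy as np

rng = np.random.default_rng(1)
T, n_fine = 1.0, 10_000
dt = T / n_fine
Z = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n_fine))])

# Partition 0 = t_0 < ... < t_n = T with n = 8; xi_i = Z_{t_i} truncated at +/- 2 (bounded).
n = 8
idx = np.linspace(0, n_fine, n + 1, dtype=int)   # fine-grid indices of t_0, ..., t_n
xi = np.clip(Z[idx[:-1]], -2.0, 2.0)             # G_{t_i}-measurable and bounded

def simple_integral(k):
    """Evaluate sum_i xi_i (Z_{t_{i+1} ^ t} - Z_{t_i ^ t}) at the fine-grid time index k."""
    total = 0.0
    for i in range(n):
        lo, hi = min(idx[i], k), min(idx[i + 1], k)
        total += xi[i] * (Z[hi] - Z[lo])
    return total

print("stochastic integral over [0, T]:  ", simple_integral(n_fine))
print("stochastic integral over [0, T/2]:", simple_integral(n_fine // 2))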

Remark 104 (i) Notice that by construction $\mathcal{S} \subset \mathcal{H}$, since
$$E\left(\int_0^T H^2\,dt\right) = E\left(\int_0^T \left(\xi_0^2\,1_{\{0\}}(s) + \sum_{i=0}^{n-1}\xi_i^2\,1_{(t_i,t_{i+1}]}(s)\right) ds\right) = E\left(\sum_{i=0}^{n-1}\xi_i^2\,(t_{i+1}-t_i)\right) = \sum_{i=0}^{n-1}(t_{i+1}-t_i)\,E\left(\xi_i^2\right) < \infty , \qquad (30)$$
by the boundedness of $\xi_0,\ldots,\xi_{n-1}$. (ii) For any given $t\geq 0$ one can define the Brownian motion process stopped at time $t$ by
$$Z^t := (Z_{s\wedge t})_{s\geq 0} .$$
With this definition we have, for any $t\in(t_k,t_{k+1}]$:
$$Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t} = \begin{cases} Z_{t_{i+1}} - Z_{t_i}, & i < k \\ Z_t - Z_{t_k}, & i = k \\ Z_t - Z_t = 0, & i > k \end{cases},$$

implying
$$\int_0^t H_s\,dZ_s = \sum_{i=0}^{k-1}\xi_i\left(Z_{t_{i+1}} - Z_{t_i}\right) + \xi_k\left(Z_t - Z_{t_k}\right) = \sum_{i=0}^{n-1}\xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right).$$
Moreover, we have
$$E\left(\left.Z^t_{t_{i+1}}\right|\mathcal{G}_{t_i}\right) = E\left(\left.Z_{(t_{i+1})\wedge t}\right|\mathcal{G}_{t_i}\right) = \begin{cases} Z_{t_i} = Z_{t_i\wedge t} = Z^t_{t_i}, & i \leq k \\ Z_t = Z_{t_i\wedge t} = Z^t_{t_i}, & i > k \end{cases},$$
that is, $(Z^t_s,\mathcal{G}_s)_{s = 0, t_1, t_2, \ldots, t_n}$ is a discrete time martingale and $\int_0^t H_s\,dZ_s$ is the martingale transform of $Z^t$ by $H$. Finally, since for any $\omega\in\Omega$ and $t_i\in\{0, t_1, t_2, \ldots, t_n\}$ the mapping $t\longmapsto Z^t_{t_i}(\omega)$ is a continuous one, it follows that the mapping
$$t\longmapsto \int_0^t H_s\,dZ_s = \sum_{i=0}^{n-1}\xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right),$$
is, as a linear combination of continuous functions, also continuous. Therefore, the stochastic integral process $\left(\int_0^t H\,dZ\right)_{t\geq 0}$ is a continuous time process with continuous trajectories. Notice that the integrand $H$ is a process with possibly discontinuous trajectories. Therefore, the stochastic integral is a "regularizing" operator that maps possibly discontinuous processes into processes with continuous trajectories.

It is not surprising that the stochastic integral process $\left(\int_0^t H\,dZ\right)_{t\geq 0}$ is a martingale process, since it can be written as a martingale transform. Moreover, second moments of stochastic integrals can often be easily computed by means of the so called Ito isometry. We summarize these properties in the next result for the case of a simple integrand $H\in\mathcal{S}$. In the next section, such properties will be shown to hold also for stochastic integrals of more general integrands $H\in\mathcal{H}$.

Proposition 105 Let $H, H'\in\mathcal{S}$. Then it follows:

1. The stochastic integral is a linear operator, that is:
$$\int_0^t \left(\alpha H + \beta H'\right) dZ = \alpha\int_0^t H\,dZ + \beta\int_0^t H'\,dZ ,$$
for any $\alpha,\beta\in\mathbb{R}$.

2. $\left(\int_0^t H\,dZ,\ \mathcal{G}_t\right)_{t\geq 0}$ is a martingale with continuous trajectories.

3. The Ito isometry holds:
$$E\left[\left(\int_0^t H\,dZ\right)^2\right] = E\left(\int_0^t H^2\,ds\right).$$

Proof. 1. This is immediate from the definition of the stochastic integral $\int_0^t H\,dZ$ as a (stochastic) linear combination of Brownian increments. To prove 2., it remains to show that $\left(\int_0^t H\,dZ,\ \mathcal{G}_t\right)_{t\geq 0}$ is a martingale (continuity was already established in Remark 104). Thus, let $t\leq s$. We then have

$$E\left(\left.\int_0^s H\,dZ\,\right|\mathcal{G}_t\right) = \sum_{i=0}^{n-1} E\left(\left.\xi_i\left(Z_{(t_{i+1})\wedge s} - Z_{t_i\wedge s}\right)\right|\mathcal{G}_t\right),$$
and
$$E\left(\left.\xi_i\left(Z_{(t_{i+1})\wedge s} - Z_{t_i\wedge s}\right)\right|\mathcal{G}_t\right) = \begin{cases} \xi_i\,E\left(\left.Z_{(t_{i+1})\wedge s} - Z_{t_i\wedge s}\right|\mathcal{G}_t\right) = \xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge s}\right) = \xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right), & t \geq t_i \\ E\left(\left.\xi_i\,E\left(\left.Z_{(t_{i+1})\wedge s} - Z_{t_i\wedge s}\right|\mathcal{G}_{t_i}\right)\right|\mathcal{G}_t\right) = 0 = \xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right), & t < t_i \end{cases}$$
because for $t \leq t_i$ it follows that $Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t} = 0$. Therefore,
$$E\left(\left.\int_0^s H\,dZ\,\right|\mathcal{G}_t\right) = \sum_{i=0}^{n-1} E\left(\left.\xi_i\left(Z_{(t_{i+1})\wedge s} - Z_{t_i\wedge s}\right)\right|\mathcal{G}_t\right) = \sum_{i=0}^{n-1} \xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right) = \int_0^t H\,dZ\,.$$

To prove 3., note first that we have
$$E\left[\left(\int_0^t H\,dZ\right)^2\right] = E\left[\sum_{i=0}^{n-1}\xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)\sum_{j=0}^{n-1}\xi_j\left(Z_{(t_{j+1})\wedge t} - Z_{t_j\wedge t}\right)\right] = \sum_{i=0}^{n-1}\sum_{j=0}^{n-1} E\left(\xi_i\xi_j\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)\left(Z_{(t_{j+1})\wedge t} - Z_{t_j\wedge t}\right)\right).$$
Further, if $i < j$ (i.e. if $t_{i+1}\leq t_j$) it follows that
$$E\left(\xi_i\xi_j\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)\left(Z_{(t_{j+1})\wedge t} - Z_{t_j\wedge t}\right)\right) = E\left(\xi_i\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)\xi_j\,E\left(\left.Z_{(t_{j+1})\wedge t} - Z_{t_j\wedge t}\right|\mathcal{G}_{t_j}\right)\right) = 0 ,$$
by the independence of Brownian increments. A similar argument applies to the case $j < i$. Moreover, for $i = j$ we get
$$E\left(\xi_i^2\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)^2\right) = E\left(\xi_i^2\,E\left(\left.\left(Z_{(t_{i+1})\wedge t} - Z_{t_i\wedge t}\right)^2\right|\mathcal{G}_{t_i}\right)\right) = \begin{cases} 0 = E\left(\xi_i^2\left(t_{i+1}\wedge t - t_i\wedge t\right)\right), & t\leq t_i \\ E\left(\xi_i^2\left(t_{i+1}\wedge t - t_i\wedge t\right)\right), & t > t_i \end{cases}.$$


Consequently,
$$E\left[\left(\int_0^t H\,dZ\right)^2\right] = \sum_{i=0}^{n-1} E\left(\xi_i^2\left(t_{i+1}\wedge t - t_i\wedge t\right)\right) = E\left(\sum_{i=0}^{n-1}\xi_i^2\left(t_{i+1}\wedge t - t_i\wedge t\right)\right) = E\left(\int_0^t H^2\,ds\right),$$
as claimed. This concludes the proof.
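A quick Monte Carlo check of Proposition 105 (our own sketch, not part of the notes; the integrand choice, sample size and names are illustrative assumptions), for a simple integrand with coefficients $\xi_i = +1$ if $Z_{t_i}\geq 0$ and $-1$ otherwise:

import numpy as np

rng = np.random.default_rng(2)
T, n, n_paths = 1.0, 4, 200_000
dt = T / n

dZ = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n))
Z = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dZ, axis=1)])   # Z_{t_0}, ..., Z_{t_n}

xi = np.where(Z[:, :-1] >= 0.0, 1.0, -1.0)   # bounded, G_{t_i}-measurable coefficients
integral = (xi * dZ).sum(axis=1)             # sum_i xi_i (Z_{t_{i+1}} - Z_{t_i})

print("E[ int H dZ ]    ~", integral.mean())                      # martingale: mean 0
print("E[(int H dZ)^2]  ~", (integral ** 2).mean())               # Ito isometry ...
print("E[ int H^2 dt ]  ~", (xi ** 2).sum(axis=1).mean() * dt)    # ... both close to T = 1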

Basically, one can think of the problem of defining a stochastic integral for more general integrands than simple processes as the problem of extending smoothly the integral definition from a process $H\in\mathcal{S}$ to a larger space of adapted integrands. Smoothness of the extension procedure is desirable in order to maintain the integral properties in Proposition 105, which are valid for integrands $H\in\mathcal{S}$, also for the resulting stochastic integral of a more general integrand.

It turns out that the adequate space to which stochastic integrals can be smoothly extended is the space $\mathcal{H}$ defined in (29). This fact relies on an approximation result which states that any $H\in\mathcal{H}$ can be approximated by a sequence of simple processes $(H^n)_{n\in\mathbb{N}}\subset\mathcal{S}$ converging to $H$, i.e. approximating $H$ in the norm defined on $\mathcal{H}$.

For completeness, we state this crucial approximation result precisely, without proof, in the next proposition.

Proposition 106 For any process $H\in\mathcal{H}$ there exists a sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{S}$ such that
$$E\left(\int_0^T (H - H^n)^2\,ds\right) \underset{n\to\infty}{\longrightarrow} 0 .$$

Example 107 We illustrate the above approximation result for the case where $H = Z$. We know (see Example 101) that the Brownian motion process $Z := (Z_t)_{0\leq t\leq T}$ is an element of $\mathcal{H}$. $Z$ can be approximated by means of a sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{H}$ given by$^5$
$$H^n_t(\omega) := \sum_{i=0}^{2^n-1} Z_{iT/2^n}(\omega)\,1_{(iT/2^n,(i+1)T/2^n]}(t) .$$

$^5$ Notice that for any fixed $\omega\in\Omega$ this is the same type of approximation procedure we would use to define a standard Lebesgue integral of the function $t\longmapsto Z_t(\omega)$ (see again Remark 45).


Indeed, we have
$$E\int_0^T (Z - H^n)^2\,dt = \int_0^T E\left(Z_t - H^n_t\right)^2 dt = \int_0^T E\left(Z_t - \sum_{i=0}^{2^n-1} Z_{iT/2^n}\,1_{(iT/2^n,(i+1)T/2^n]}(t)\right)^2 dt$$
$$= \int_0^T E\left(\sum_{i=0}^{2^n-1}\left(Z_t - Z_{iT/2^n}\right)1_{(iT/2^n,(i+1)T/2^n]}(t)\right)^2 dt = \int_0^T \sum_{i=0}^{2^n-1} E\left(Z_t - Z_{iT/2^n}\right)^2 1_{(iT/2^n,(i+1)T/2^n]}(t)\,dt$$
$$= \sum_{i=0}^{2^n-1}\int_0^T \left(t - iT/2^n\right)1_{(iT/2^n,(i+1)T/2^n]}(t)\,dt = \sum_{i=0}^{2^n-1}\frac{(T/2^n)^2}{2} = \frac{T^2}{2^{n+1}} \underset{n\to\infty}{\longrightarrow} 0 .$$

Therefore, $H^n \underset{n\to\infty}{\longrightarrow} Z$ in the space $\mathcal{H}$. Remark that, strictly speaking, $H^n\notin\mathcal{S}$ because $Z_t$ is unbounded for any $t$. However, any $H^n$ can be approximated by a sequence $(K^{nk})_{k\in\mathbb{N}}\subset\mathcal{S}$ defined by
$$K^{nk}_t(\omega) = \sum_{i=0}^{2^n-1} Z_{iT/2^n}(\omega)\,1_{(iT/2^n,(i+1)T/2^n]}(t)\,1_{\{|Z_{iT/2^n}|\leq k\}}(\omega) + \sum_{i=0}^{2^n-1} k\,1_{(iT/2^n,(i+1)T/2^n]}(t)\,1_{\{|Z_{iT/2^n}|>k\}}(\omega) .$$

Indeed,
$$E\left(\int_0^T \left(H^n_t - K^{nk}_t\right)^2 dt\right) = \int_0^T E\left(\left(H^n_t - K^{nk}_t\right)^2\right) dt = \int_0^T E\left(\sum_{i=0}^{2^n-1}\left(Z_{iT/2^n} - k\right)1_{\{|Z_{iT/2^n}|>k\}}\,1_{(iT/2^n,(i+1)T/2^n]}(t)\right)^2 dt$$
$$= \int_0^T \left(\sum_{i=0}^{2^n-1} E\left(\left(Z_{iT/2^n} - k\right)^2 1_{\{|Z_{iT/2^n}|>k\}}\right)1_{(iT/2^n,(i+1)T/2^n]}(t)\right) dt .$$
This last integral is finite and non negative, since it is the Lebesgue integral of a simple (deterministic) function of $t$:
$$t\longmapsto \sum_{i=0}^{2^n-1} E\left[\left(Z_{iT/2^n} - k\right)^2 1_{\{|Z_{iT/2^n}|>k\}}\right]1_{(iT/2^n,(i+1)T/2^n]}(t) . \qquad (31)$$


Moreover, by the properties of the normal distribution one has, for any $i = 0,\ldots,2^n-1$ and any $k > k'$, the strict inequality
$$E\left[\left(Z_{iT/2^n} - k\right)^2 1_{\{|Z_{iT/2^n}|>k\}}\right] < E\left[\left(Z_{iT/2^n} - k'\right)^2 1_{\{|Z_{iT/2^n}|>k'\}}\right],$$
implying that, as a function of $k$, the corresponding Lebesgue integrals in (31) form a strictly decreasing, non negative sequence. Moreover, since $\left(Z_{iT/2^n} - k\right)^2 1_{\{|Z_{iT/2^n}|>k\}} \leq 4\,Z_{iT/2^n}^2$ is dominated by an integrable random variable and converges to 0 almost surely as $k\to\infty$, dominated convergence implies that this sequence of integrals converges to 0, i.e.:
$$E\left(\int_0^T \left(H^n_t - K^{nk}_t\right)^2 dt\right) \underset{k\to\infty}{\searrow} 0 .$$

Summarizing, $Z$ can be approximated by the sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{H}$ and any $H^n$ can be approximated by a sequence $(K^{nk})_{k\in\mathbb{N}}\subset\mathcal{S}$. In turn, $Z$ can be approximated by a sequence $(W^n)_{n\in\mathbb{N}}\subset\mathcal{S}$, where $W^n := K^{nn}$.
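The approximation error computed in Example 107 can also be checked numerically. The following Python sketch (ours, not part of the notes; grid and sample sizes are illustrative assumptions) estimates $E\int_0^T (Z_t - H^n_t)^2\,dt$ by Monte Carlo and compares it with the theoretical value $T^2/2^{n+1}$:

import numpy as np

rng = np.random.default_rng(3)
T, n_fine, n_paths = 1.0, 2 ** 10, 5_000
dt = T / n_fine
dZ = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
Z = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dZ, axis=1)])   # Z on a fine grid

for n in (2, 4, 6):
    block = n_fine // 2 ** n                     # fine steps per dyadic interval
    # H^n_t = Z_{iT/2^n} on (iT/2^n, (i+1)T/2^n]
    Hn = np.repeat(Z[:, 0:n_fine:block], block, axis=1)
    err = (((Z[:, 1:] - Hn) ** 2).sum(axis=1) * dt).mean()
    print(f"n = {n}:  Monte Carlo {err:.5f}   theory T^2/2^(n+1) = {T**2 / 2**(n+1):.5f}")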

5.2.3 Squared Integrable Integrands

We can now define the stochastic integral of any process $H\in\mathcal{H}$ by using the approximation result in Proposition 106.

We notice first that for any sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{S}$ converging to a process $H\in\mathcal{H}$, i.e. such that
$$E\int_0^T (H_s - H^n_s)^2\,ds \underset{n\to\infty}{\longrightarrow} 0 ,$$

it follows, for any $0\leq t\leq T$:
$$E\left[\left(\int_0^t H^n_s\,dZ - \int_0^t H^m_s\,dZ\right)^2\right] = E\left[\left(\int_0^t \left(H^n_s - H^m_s\right) dZ\right)^2\right] = E\int_0^t \left(H^n_s - H^m_s\right)^2 ds$$
$$\leq 2\,E\int_0^T (H_s - H^n_s)^2\,ds + 2\,E\int_0^T (H_s - H^m_s)^2\,ds \underset{n,m\to\infty}{\longrightarrow} 0 , \qquad (32)$$
using in the first equality the linearity of stochastic integrals of simple processes, in the second the Ito isometry applied to the simple process $H^n_s - H^m_s$, and in the last inequality the bound $(a+b)^2 \leq 2\left(a^2+b^2\right)$, where $a,b\in\mathbb{R}$.

Therefore, for any $0\leq t\leq T$ the sequence $\left(\int_0^t H^n_s\,dZ\right)_{n\in\mathbb{N}}$ is a Cauchy sequence of random variables with mean 0 and variance $E\left(\int_0^t (H^n)^2\,ds\right) < \infty$, i.e. a Cauchy sequence in the space of squared integrable random variables on $(\Omega,\mathcal{G},P)$. It is well known that in this space any Cauchy sequence converges to a well defined element of the space. Therefore, it is a natural idea to define the limit of the sequence $\left(\int_0^t H^n_s\,dZ\right)_{n\in\mathbb{N}}$ as the stochastic integral $\int_0^t H\,dZ$ of $H$ with respect to $Z$. We state this precisely in the next definition.

Definition 108 Let $H\in\mathcal{H}$ be a squared integrable process and let $(H^n)_{n\in\mathbb{N}}\subset\mathcal{S}$ be any sequence of simple processes approximating $H$, i.e. such that:
$$E\int_0^T (H - H^n)^2\,ds \underset{n\to\infty}{\longrightarrow} 0 .$$
For any $0\leq t\leq T$ the stochastic integral of $H$ with respect to the Brownian motion $Z := (Z_t,\mathcal{G}_t)_{0\leq t\leq T}$ is defined by
$$\int_0^t H_s\,dZ_s := \lim_{n\to\infty} \int_0^t H^n_s\,dZ_s .$$

Example 109 We compute the stochastic integral as the corresponding limit in an example where explicit computations are possible. Recall from Example 107 that the sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{S}$ defined by
$$H^n_t(\omega) := \sum_{i=0}^{2^n-1} Z_{iT/2^n}(\omega)\,1_{(iT/2^n,(i+1)T/2^n]}(t) ,$$
converges in the space $\mathcal{H}$ to the Brownian motion process $Z$. We have:
$$\int_0^T H^n_s\,dZ_s = \sum_{i=0}^{2^n-1} Z_{iT/2^n}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right) = \frac{1}{2}\left(Z_T^2 - Z_0^2\right) - \frac{1}{2}\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 .$$


Moreover, we have
$$E\left(\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2\right) = \sum_{i=0}^{2^n-1} E\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2\right) = \sum_{i=0}^{2^n-1}\frac{T}{2^n} = T ,$$
and
$$E\left[\left(\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - T\right)^2\right] = E\left[\left(\sum_{i=0}^{2^n-1}\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - \frac{T}{2^n}\right)\right)^2\right] = \sum_{i=0}^{2^n-1} E\left[\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - \frac{T}{2^n}\right)^2\right],$$
using in the second equality the independence of Brownian increments, which implies
$$E\left[\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - \frac{T}{2^n}\right)\left(\left(Z_{(j+1)T/2^n} - Z_{jT/2^n}\right)^2 - \frac{T}{2^n}\right)\right] = 0 ,$$
for $i\neq j$. Further we obtain
$$E\left[\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - \frac{T}{2^n}\right)^2\right] = E\left[\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^4\right] + \left(\frac{T}{2^n}\right)^2 - 2\cdot\frac{T}{2^n}\,E\left(\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2\right) = 3\left(\frac{T}{2^n}\right)^2 + \left(\frac{T}{2^n}\right)^2 - 2\left(\frac{T}{2^n}\right)^2 = 2\left(\frac{T}{2^n}\right)^2 .$$

Therefore,
$$E\left[\left(\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 - T\right)^2\right] = \frac{T^2}{2^{n-1}} \underset{n\to\infty}{\longrightarrow} 0 ,$$
i.e.
$$\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2 \underset{n\to\infty}{\longrightarrow} T ,$$
in the space of squared integrable random variables on $(\Omega,\mathcal{G},P)$. Summarizing, this gives
$$\int_0^T H_s\,dZ_s = \lim_{n\to\infty}\int_0^T H^n_s\,dZ_s = \lim_{n\to\infty}\left(\frac{1}{2}Z_T^2 - \frac{1}{2}\sum_{i=0}^{2^n-1}\left(Z_{(i+1)T/2^n} - Z_{iT/2^n}\right)^2\right) = \frac{1}{2}Z_T^2 - \frac{1}{2}T .$$


5.2.4 Properties of Stochastic Integrals

It is immediate from the definition of the stochastic integral of a process $H\in\mathcal{H}$ as the limit of a sequence of stochastic integrals of processes $H^n\in\mathcal{S}$, that linearity is preserved in the limit, i.e.
$$\int_0^t \left(\alpha H_s + \beta H'_s\right) dZ_s = \alpha\int_0^t H_s\,dZ_s + \beta\int_0^t H'_s\,dZ_s ,$$
for any $H, H'\in\mathcal{H}$ and $\alpha,\beta\in\mathbb{R}$.

Furthermore, the key property of the space $\mathcal{H}$ from the perspective of stochastic integration is that convergence of a sequence $(H^n)_{n\in\mathbb{N}}\subset\mathcal{H}$ to some limit $H\in\mathcal{H}$ defines the corresponding stochastic integral as the limit of the sequence $\left(\int H^n\,dZ\right)_{n\in\mathbb{N}}$ in the space of squared integrable random variables on $(\Omega,\mathcal{G},P)$ (see again equation (32) and the following discussion). In fact, this implies convergence of the first two moments and the conditional expectations of the sequence $\left(\int H^n\,dZ\right)_{n\in\mathbb{N}}$ to the first two moments and the conditional expectations of the limit
$$\int H\,dZ = \lim_{n\to\infty}\int H^n\,dZ .$$
For instance, this gives
$$E\left(\left.\int_0^t H_u\,dZ_u\,\right|\mathcal{G}_s\right) = \lim_{n\to\infty} E\left(\left.\int_0^t H^n_u\,dZ_u\,\right|\mathcal{G}_s\right) = \lim_{n\to\infty}\int_0^s H^n_u\,dZ_u = \int_0^s H_u\,dZ_u ,$$
where $t\geq s$, i.e. the martingale property. Therefore, the martingale property and the Ito isometry of stochastic integrals, which have been shown to hold for integrals of simple processes, are maintained for stochastic integrals of integrands $H\in\mathcal{H}$.

Finally, it can also be shown, at the cost of some more technical details, that convergence in the space of squared integrable random variables, together with the martingale property and the continuity of stochastic integrals of simple processes, implies the continuity of the stochastic integral of a process $H\in\mathcal{H}$. For completeness, we summarize the above discussion in the next proposition.

Proposition 110 Let $H, H'\in\mathcal{H}$. It then follows:

1. The stochastic integral is a linear operator, that is:
$$\int_0^t \left(\alpha H + \beta H'\right) dZ = \alpha\int_0^t H\,dZ + \beta\int_0^t H'\,dZ ,$$
for any $\alpha,\beta\in\mathbb{R}$.

2. $\left(\int_0^t H\,dZ,\ \mathcal{G}_t\right)_{0\leq t\leq T}$ is a martingale with continuous trajectories.

3. The Ito isometry holds:
$$E\left[\left(\int_0^t H\,dZ\right)^2\right] = E\left(\int_0^t H^2\,ds\right).$$

5.3 Ito’s Lemma

This section introduces a differential calculus for differentiable functions of stochastic integrals. This calculus is called Ito's calculus and its primary tool is Ito's formula, a version of the fundamental theorem of calculus for stochastic differentials.

5.3.1 Starting Point, Motivation and Some First Examples

To introduce the basic ideas behind stochastic differentials we start with a simple illustrative example. Consider the squared Brownian motion process $Z^2 := (Z_t^2,\mathcal{G}_t)_{t\geq 0}$. The goal is to express $Z_t^2$ by means of an integral form of the type
$$Z_t^2 - Z_0^2 = \int_0^t K_s\,ds + \int_0^t H_s\,dZ_s ,$$
for some suitable adapted integrands $(K_t,\mathcal{G}_t)_{t\geq 0}$ and $(H_t,\mathcal{G}_t)_{t\geq 0}$. Such a representation would motivate the suggestive stochastic differential notation
$$d\left(Z_t^2\right) = K_t\,dt + H_t\,dZ_t .$$

Under the naive assumption that for any $\omega\in\Omega$ the Brownian path $t\longmapsto Z_t(\omega)$ is a differentiable function (we know it is not!) one would be tempted to apply the standard fundamental theorem of calculus and write
$$Z_t^2(\omega) - Z_0^2(\omega) = \int_0^t \frac{d}{ds}Z_s^2(\omega)\,ds = 2\int_0^t Z_s(\omega)\cdot\underbrace{\frac{d}{ds}Z_s(\omega)\,ds}_{=\,dZ_s(\omega)\text{ under the differentiability assumption}} = 2\int_0^t Z_s(\omega)\cdot dZ_s(\omega) . \qquad (33)$$

Unfortunately, this approach cannot work, as we know. In fact, we already developed a particular stochastic integral construction precisely in order to circumvent the fact that Brownian trajectories are of unbounded variation and thus not differentiable.

Indeed, the naive approach (33) leads immediately to an internal inconsistency, which can be highlighted as follows. First, notice that the stochastic integral process $\left(\int_0^t Z\,dZ\right)_{0\leq t\leq T}$ on the RHS of (33) is a martingale, since $(Z_t,\mathcal{G}_t)_{0\leq t\leq T}\in\mathcal{H}$ (cf. again Example 101). At the same time, the process $(Z_t^2,\mathcal{G}_t)_{0\leq t\leq T}$ is a submartingale, since $(Z_t^2 - t,\mathcal{G}_t)_{0\leq t\leq T}$ is a martingale (cf. again Example 98):
$$E\left(\left.Z_s^2 - s\right|\mathcal{G}_t\right) = Z_t^2 - t \iff E\left(\left.Z_s^2\right|\mathcal{G}_t\right) = Z_t^2 + (s-t) > Z_t^2 , \qquad (34)$$
for any $s > t$. Thus, (34) shows that a fundamental theorem of calculus for stochastic integrals has to be of a different form than the standard one. In particular, it appears that the standard theorem neglects a deterministic term in $Z_t^2$, namely the expected value $t = E\left(Z_t^2\right)$ of $Z_t^2$ on the LHS of (33). Therefore, one could be tempted to write
$$Z_t^2 - t = 2\int_0^t Z\,dZ ,$$
i.e.
$$Z_t^2 - Z_0^2 = 2\left(\int_0^t Z\,dZ + \frac{1}{2}t\right), \qquad (35)$$

in order to avoid, at least superficially, the inconsistency behind (33). It turns out that this is the correct guess. Indeed, the structure behind (35) can be highlighted as a special case of Ito's formula by setting $g(Z_t) = Z_t^2$, to get
$$g(Z_t) - g(Z_0) = \int_0^t g'(Z_s)\,dZ_s + \frac{1}{2}\int_0^t g''(Z_s)\,ds . \qquad (36)$$
Notice that in (36) $Z_t^2$ has been decomposed as the sum of a stochastic integral, giving the martingale component of $Z_t^2$, and a standard pathwise Lebesgue integral, describing the deterministic trend of $Z_t^2$. Expression (36) gives the mathematical foundation for the stochastic differential form
$$dg(Z_t) = g'(Z_t)\,dZ_t + \frac{1}{2}g''(Z_t)\,dt ,$$
of Ito's formula.

An immediate extension of Ito's formula (36) arises when $g$ is a function of both $t$ and $Z_t$. In this case, the rule is to apply standard differentials to deterministic variables and stochastic differentials, by means of Ito's formula (36), to stochastic variables, that is:
$$g(t,Z_t) - g(0,Z_0) = \int_0^t \frac{\partial g}{\partial Z}(s,Z_s)\,dZ_s + \int_0^t \left(\frac{\partial g}{\partial t}(s,Z_s) + \frac{1}{2}\frac{\partial^2 g}{\partial Z^2}(s,Z_s)\right) ds , \qquad (37)$$
or, in differential form,
$$dg(t,Z_t) = \frac{\partial g}{\partial Z}(t,Z_t)\,dZ_t + \left(\frac{\partial g}{\partial t}(t,Z_t) + \frac{1}{2}\frac{\partial^2 g}{\partial Z^2}(t,Z_t)\right) dt .$$

Example 111 (Geometric Brownian Motion) A geometric Brownian motion process $(S_t,\mathcal{G}_t)_{t\geq 0}$ is defined by
$$S_t = S_0 \exp\left(\sigma Z_t + \left(\mu - \frac{\sigma^2}{2}\right)t\right) =: g(t, Z_t) , \qquad (38)$$
for given $S_0$, $\sigma > 0$ and $\mu\in\mathbb{R}$. It then follows
$$\frac{\partial g}{\partial t}(t,Z_t) = S_0 \exp\left(\sigma Z_t + \left(\mu - \frac{\sigma^2}{2}\right)t\right)\left(\mu - \frac{\sigma^2}{2}\right) = S_t\left(\mu - \frac{\sigma^2}{2}\right),$$
$$\frac{\partial g}{\partial Z}(t,Z_t) = S_0 \exp\left(\sigma Z_t + \left(\mu - \frac{\sigma^2}{2}\right)t\right)\sigma = S_t\,\sigma ,$$
$$\frac{\partial^2 g}{\partial Z^2}(t,Z_t) = S_t\,\sigma^2 .$$


Ito's formula (37) then gives
$$S_t - S_0 = g(t,Z_t) - g(0,Z_0) = \int_0^t \frac{\partial g}{\partial Z}(s,Z_s)\,dZ_s + \int_0^t \left(\frac{\partial g}{\partial t}(s,Z_s) + \frac{1}{2}\frac{\partial^2 g}{\partial Z^2}(s,Z_s)\right) ds$$
$$= \int_0^t S_s\,\sigma\,dZ_s + \int_0^t \left(S_s\left(\mu - \frac{\sigma^2}{2}\right) + \frac{1}{2}S_s\,\sigma^2\right) ds = \int_0^t S_s\,\mu\,ds + \int_0^t S_s\,\sigma\,dZ_s ,$$
or, in differential form,
$$dS_t = \mu S_t\,dt + \sigma S_t\,dZ_t .$$

Remark 112 Example 111 provides a first example of a solution to a stochastic differential equation. Indeed, if we consider the problem of determining an adapted process $(S_t,\mathcal{G}_t)_{t\geq 0}$ that solves the stochastic differential equation
$$dS_t = \mu S_t\,dt + \sigma S_t\,dZ_t ,$$
for a given $S_0$, Example 111 already gives the answer: $(S_t,\mathcal{G}_t)_{t\geq 0}$ is the geometric Brownian motion process defined in (38).
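As an illustration of Example 111 and Remark 112 (our own sketch; the parameter values are illustrative assumptions), the exact geometric Brownian motion (38) can be compared with an Euler discretisation of the stochastic differential equation $dS_t = \mu S_t\,dt + \sigma S_t\,dZ_t$, built on the same Brownian increments:

import numpy as np

rng = np.random.default_rng(5)
S0, mu, sigma, T, n = 100.0, 0.05, 0.2, 1.0, 1_000
dt = T / n
dZ = rng.normal(0.0, np.sqrt(dt), n)
Z = np.concatenate([[0.0], np.cumsum(dZ)])
t = np.linspace(0.0, T, n + 1)

# Exact solution S_t = S_0 exp(sigma Z_t + (mu - sigma^2/2) t), cf. (38).
S_exact = S0 * np.exp(sigma * Z + (mu - 0.5 * sigma ** 2) * t)

# Euler scheme for dS = mu S dt + sigma S dZ on the same increments.
S_euler = np.empty(n + 1)
S_euler[0] = S0
for k in range(n):
    S_euler[k + 1] = S_euler[k] * (1.0 + mu * dt + sigma * dZ[k])

print("terminal values:  exact", S_exact[-1], "  Euler", S_euler[-1])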

5.3.2 A Simplified Derivation of Ito’s Formula

We illustrate some of the main ideas behind the proof of Ito's formula (37) by providing a proof for the simpler case where
$$g(t,Z) = \frac{1}{2}(t+Z)^2 . \qquad (39)$$
Thus, we are going to show that
$$g(t,Z_t) = \int_0^t \frac{\partial g}{\partial Z}(s,Z_s)\,dZ_s + \int_0^t \left(\frac{\partial g}{\partial t}(s,Z_s) + \frac{1}{2}\frac{\partial^2 g}{\partial Z^2}(s,Z_s)\right) ds ,$$
i.e., for our specific case (apply Ito's formula (37) to (39)),
$$\frac{1}{2}(t+Z_t)^2 = \int_0^t \left(\frac{1}{2} + s + Z_s\right) ds + \int_0^t (s+Z_s)\,dZ_s . \qquad (40)$$


Proof. Let us first fix a partition $t_0,\ldots,t_{2^n}$ of the interval $[0,t]$, given by
$$t_i = \frac{it}{2^n}, \qquad i = 0,\ldots,2^n .$$
We then have
$$\frac{1}{2}(t+Z_t)^2 = \frac{1}{2}\sum_{i=0}^{2^n-1}\left[\left(t_{i+1}+Z_{t_{i+1}}\right)^2 - \left(t_i+Z_{t_i}\right)^2\right]. \qquad (41)$$

We can now apply an exact second order Taylor expansion to any term in the sum (41):
$$\frac{1}{2}\left[\left(t_{i+1}+Z_{t_{i+1}}\right)^2 - \left(t_i+Z_{t_i}\right)^2\right] = g\left(t_{i+1},Z_{t_{i+1}}\right) - g\left(t_i,Z_{t_i}\right)$$
$$= \frac{\partial g(t_i,Z_{t_i})}{\partial t}\left(t_{i+1}-t_i\right) + \frac{\partial g(t_i,Z_{t_i})}{\partial Z}\left(Z_{t_{i+1}}-Z_{t_i}\right) + \frac{1}{2}\left(\frac{\partial^2 g(t_i,Z_{t_i})}{\partial t^2}\left(t_{i+1}-t_i\right)^2 + \frac{\partial^2 g(t_i,Z_{t_i})}{\partial Z^2}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2\right) + \frac{\partial^2 g(t_i,Z_{t_i})}{\partial t\,\partial Z}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right).$$

Explicit computations then give
$$\frac{\partial g(t_i,Z_{t_i})}{\partial t} = t_i + Z_{t_i} = \frac{\partial g(t_i,Z_{t_i})}{\partial Z} ,$$
and
$$\frac{\partial^2 g(t_i,Z_{t_i})}{\partial t\,\partial Z} = \frac{\partial^2 g(t_i,Z_{t_i})}{\partial Z\,\partial t} = \frac{\partial^2 g(t_i,Z_{t_i})}{\partial t^2} = \frac{\partial^2 g(t_i,Z_{t_i})}{\partial Z^2} = 1 .$$

Therefore,
$$\frac{\left(t_{i+1}+Z_{t_{i+1}}\right)^2}{2} - \frac{\left(t_i+Z_{t_i}\right)^2}{2} = \left(t_i+Z_{t_i}\right)\left(t_{i+1}-t_i+Z_{t_{i+1}}-Z_{t_i}\right) + \frac{1}{2}\left(t_{i+1}-t_i\right)^2 + \frac{1}{2}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 + \frac{1}{2}\cdot 2\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right),$$
and
$$\frac{1}{2}(t+Z_t)^2 = \sum_{i=0}^{2^n-1}\left(t_i+Z_{t_i}\right)\left(t_{i+1}-t_i+Z_{t_{i+1}}-Z_{t_i}\right) + \frac{1}{2}\sum_{i=0}^{2^n-1}\left[\left(t_{i+1}-t_i\right)^2 + \left(Z_{t_{i+1}}-Z_{t_i}\right)^2 + 2\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right)\right]. \qquad (42)$$


We now compute the limit as $n\to\infty$ of each term in this expression. We first have
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(t_i+Z_{t_i}\right)\left(t_{i+1}-t_i\right) = \int_0^t (s+Z_s)\,ds ,$$
a pathwise Lebesgue integral. Further,
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(t_i+Z_{t_i}\right)\left(Z_{t_{i+1}}-Z_{t_i}\right) = \int_0^t (s+Z_s)\,dZ_s ,$$
a stochastic integral (see again Example 109). Moreover,
$$\lim_{n\to\infty}\frac{1}{2}\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 = \frac{t}{2} = \frac{1}{2}\int_0^t ds , \qquad (43)$$
again from the computations in Example 109.

In order to prove (40), we thus have to show that all remaining terms in (42) converge to zero. Indeed, since $t_{i+1}-t_i = t/2^n$, we have:
$$\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)^2 = \sum_{i=0}^{2^n-1}\left(\frac{t}{2^n}\right)^2 = \frac{t^2}{2^n} \underset{n\to\infty}{\longrightarrow} 0 .$$
Moreover,
$$\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right) = \frac{t}{2^n}\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right) = \frac{t}{2^n}\left(Z_t - Z_0\right).$$
Hence, for any $\omega\in\Omega$:
$$\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right)(\omega) \underset{n\to\infty}{\longrightarrow} 0 .$$
This concludes the proof.
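The three limits used in this proof are easy to check numerically. The following Python sketch (ours, not part of the notes; the path resolution is an illustrative assumption) evaluates, along refining dyadic partitions of $[0,t]$, the sums $\sum(t_{i+1}-t_i)^2$, $\sum(t_{i+1}-t_i)(Z_{t_{i+1}}-Z_{t_i})$ and the quadratic variation $\sum(Z_{t_{i+1}}-Z_{t_i})^2$:

import numpy as np

rng = np.random.default_rng(6)
t_end, n_max = 1.0, 16
Z = np.concatenate([[0.0],
                    np.cumsum(rng.normal(0.0, np.sqrt(t_end / 2 ** n_max), 2 ** n_max))])

for n in (4, 8, 12, 16):
    step = 2 ** (n_max - n)
    dti = t_end / 2 ** n
    dZi = np.diff(Z[::step])
    print(f"n = {n:2d}:  sum dt^2 = {2**n * dti**2:.2e}   "
          f"sum dt dZ = {dti * dZi.sum(): .2e}   "
          f"sum dZ^2 = {np.sum(dZi ** 2):.4f}  (-> t = {t_end})")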

Remark 113 The above proof has been considerably simplified by the fact that the function
$$g(t,Z) = \frac{1}{2}(t+Z)^2$$
can be locally exactly approximated by a sequence of second order Taylor expansions. In the more general case one will have to work with exact Taylor approximations of the form
$$g\left(t_{i+1},Z_{t_{i+1}}\right) - g\left(t_i,Z_{t_i}\right) = \frac{\partial g(t_i,Z_{t_i})}{\partial t}\left(t_{i+1}-t_i\right) + \frac{\partial g(t_i,Z_{t_i})}{\partial Z}\left(Z_{t_{i+1}}-Z_{t_i}\right) + \frac{1}{2}\left(\frac{\partial^2 g\left(t_i^*,Z_{t_i}^*\right)}{\partial t^2}\left(t_{i+1}-t_i\right)^2 + \frac{\partial^2 g\left(t_i^*,Z_{t_i}^*\right)}{\partial Z^2}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2\right) + \frac{\partial^2 g\left(t_i^*,Z_{t_i}^*\right)}{\partial t\,\partial Z}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right),$$
for some $t_i^*, Z_i^*$ such that $t_i^*\in[t_i,t_{i+1}]$ and $Z_i^*\in\left[Z_{t_i},Z_{t_{i+1}}\right]$, and show that the residual approximation error goes to 0 as $n\to\infty$.

The above simplified proof of Ito's formula shows why the non standard second derivative term
$$\frac{1}{2}\int_0^t \frac{\partial^2 g(s,Z_s)}{\partial Z^2}\,ds$$
appears. In fact, we have shown for our specific example that
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\frac{\partial^2 g(t_i,Z_{t_i})}{\partial Z^2}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 = \lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 = \int_0^t ds = \int_0^t \frac{\partial^2 g(s,Z_s)}{\partial Z^2}\,ds .$$
Notice that
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 = t , \qquad (44)$$
is the quadratic variation of the Brownian motion process on the interval $[0,t]$, which is of order $t$ and thus not$^6$ zero. Therefore, the additional term in Ito's formula derives precisely from the non zero quadratic variation of Brownian motion.

By contrast to that, we have shown that
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\frac{\partial^2 g(t_i,Z_{t_i})}{\partial t^2}\left(t_{i+1}-t_i\right)^2 = \lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)^2 = 0 ,$$
i.e. the second order derivative terms arising from the dependence on the deterministic argument $t$ have zero quadratic variation, and thus do not contribute to Ito's formula.

Similarly, for the mixed second order derivative terms we have shown
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\frac{\partial^2 g(t_i,Z_{t_i})}{\partial t\,\partial Z}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right) = \lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right) = 0 ,$$
i.e. they also do not contribute to Ito's formula. In that case, the contribution is zero because the quadratic cross variation
$$\lim_{n\to\infty}\sum_{i=0}^{2^n-1}\left(t_{i+1}-t_i\right)\left(Z_{t_{i+1}}-Z_{t_i}\right)$$
between $t$ and $Z_t$ is zero.

$^6$ In particular, this implies that Brownian motion has non differentiable trajectories, because otherwise one would get
$$\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 = \sum_{i=0}^{2^n-1}\left(\frac{d}{dt}Z_{t_i^*}\left(t_{i+1}-t_i\right)\right)^2 = \frac{t}{2^n}\sum_{i=0}^{2^n-1}\left(\frac{d}{dt}Z_{t_i^*}\right)^2\left(t_{i+1}-t_i\right),$$
where $t_i^*\in[t_i,t_{i+1}]$, and, in the limit,
$$\sum_{i=0}^{2^n-1}\left(Z_{t_{i+1}}-Z_{t_i}\right)^2 \underset{n\to\infty}{\longrightarrow} \lim_{n\to\infty}\frac{t}{2^n}\cdot\int_0^t\left(\frac{d}{dt}Z_s\right)^2 ds = 0 ,$$
i.e. a contradiction with (43).

Based on these considerations, a simple mechanical rule can be motivated for computing Ito differentials. It consists in computing first a second order Taylor "differential" and then in evaluating the second order differentials in the single variables according to the simple "multiplication rule":
$$\begin{array}{c|cc} & dt & dZ_t \\ \hline dt & 0 & 0 \\ dZ_t & 0 & dt \end{array}$$

In a mechanical way, this gives Ito's formula as
$$dg(t,Z_t) = \frac{\partial g(t,Z_t)}{\partial t}\,dt + \frac{\partial g(t,Z_t)}{\partial Z}\,dZ_t + \frac{1}{2}\frac{\partial^2 g(t,Z_t)}{\partial t^2}\,(dt)^2 + \frac{1}{2}\frac{\partial^2 g(t,Z_t)}{\partial Z^2}\,(dZ_t)^2 + \frac{\partial^2 g(t,Z_t)}{\partial t\,\partial Z}\,dt\,dZ_t$$
$$= \left(\frac{\partial g(t,Z_t)}{\partial t} + \frac{1}{2}\frac{\partial^2 g(t,Z_t)}{\partial Z^2}\right) dt + \frac{\partial g(t,Z_t)}{\partial Z}\,dZ_t ,$$
by using the multiplication rules $(dt)^2 = dt\,dZ_t = 0$ and $(dZ_t)^2 = dt$.

Example 114 Consider a process $X := (X_t,\mathcal{G}_t)_{0\leq t\leq T}$ satisfying the stochastic differential
$$dX_t = K_t\,dt + H_t\,dZ_t ,$$
for given $X_0$ and for some adapted processes $K := (K_t,\mathcal{G}_t)_{0\leq t\leq T}$ and $H := (H_t,\mathcal{G}_t)_{0\leq t\leq T}$ such that $H\in\mathcal{H}$ and $K\in\mathcal{K}$, where
$$\mathcal{K} := \left\{ (\mathcal{G}_t)_{0\leq t\leq T}\text{-adapted processes } (K_t)_{0\leq t\leq T} \;\middle|\; \int_0^T K_t\,dt < \infty \;\; P\text{-a.s.} \right\}.$$
$X$ is called an Ito process. By applying the above multiplication rules it then follows, for any function $f$ of class $C^2$:
$$df(X_t) = f'(X_t)\,dX_t + \frac{1}{2}f''(X_t)\,(dX_t)^2 = f'(X_t)\left(K_t\,dt + H_t\,dZ_t\right) + \frac{1}{2}f''(X_t)\left(K_t^2\,(dt)^2 + H_t^2\,(dZ_t)^2 + 2K_tH_t\,dt\,dZ_t\right)$$
$$= f'(X_t)\left(K_t\,dt + H_t\,dZ_t\right) + \frac{1}{2}f''(X_t)\,H_t^2\,dt .$$


5.4 An Application of Stochastic Calculus: the Black-Scholes Model

By means of Ito's calculus we are now endowed with the analytical tools that permit us to extend the set of self-financing strategies in the Black and Scholes model in a way that will make any European contingent claim in the model perfectly hedgeable.

5.4.1 The Black-Scholes Market

The model structure is:

• $I := [0,T]$ is a continuous time index set representing the available transaction dates in the model.

• The sample space is given by $\Omega := \mathbb{R}^{[0,T]}$, with single outcomes $\omega$ of the form $\omega = (\omega_t)_{t\in[0,T]}$, where $\omega_t\in\mathbb{R}$, $t\in[0,T]$.

• A Brownian motion process $Z := (Z_t,\mathcal{G}_t)_{t\in[0,T]}$ on $(\Omega,\mathcal{G},P)$, where $(\mathcal{G}_t)_{t\in[0,T]}$ is the natural filtration associated to $Z$.

• Dynamics of the stock price and the money market account:
$$S_t = S_0 \exp\left(\sigma Z_t + \left(\mu - \frac{\sigma^2}{2}\right)t\right), \qquad (45)$$
$$B_t = B_0 \exp(rt),$$
for some $\mu, r, \sigma > 0$ and given $B_0 = 1$, $S_0$. In differential form this gives
$$dS_t = \mu S_t\,dt + \sigma S_t\,dZ_t ,$$
$$dB_t = r B_t\,dt .$$

5.4.2 Self Financing Portfolios and Hedging in the Black-Scholes Model

Definition 115 A self-financing strategy in the Black and Scholes model is an adapted process $\Delta := (\Delta_t,\mathcal{G}_t)_{t\in[0,T]}\in\mathcal{H}$ with value process $X := (X_t,\mathcal{G}_t)_{t\in[0,T]}$ such that
$$dX_t = \Delta_t\,dS_t + \frac{X_t - \Delta_t S_t}{B_t}\,dB_t = \Delta_t\,dS_t + \left(X_t - \Delta_t S_t\right) r\,dt ,$$
for given $X_0$. We will implicitly require in the sequel the integrability condition $\Delta\cdot S\in\mathcal{H}$.

Remark 116 The continuous time definition of a self-financing strategy is the direct extension of the one introduced in the discrete time setting. In particular, using the risky asset dynamics, one has:
$$dX_t = \Delta_t\left(dS_t - S_t\,r\,dt\right) + X_t\,r\,dt = \left[rX_t + \Delta_t(\mu - r)S_t\right] dt + \Delta_t\,\sigma S_t\,dZ_t . \qquad (46)$$

Our goal is to hedge European derivatives defined by some $\mathcal{G}_T$-measurable pay-off given by
$$v(T,S_T) = g(S_T) ,$$
for a given continuous function $g$. We denote by $v(t,S_t)$ the price of the derivative at time $t\in[0,T]$ and assume that $v$ is of class $C^{1,2}$ (in order to apply Ito's Lemma). By Ito's Lemma, the dynamics of $v_t := v(t,S_t)$ are
$$dv_t = \partial_t v_t\,dt + \frac{1}{2}\partial^2_{SS} v_t\cdot(dS_t)^2 + \partial_S v_t\cdot dS_t = \left[\partial_t v_t + \frac{\sigma^2 S_t^2}{2}\partial^2_{SS} v_t + \mu S_t\,\partial_S v_t\right] dt + \sigma S_t\,\partial_S v_t\,dZ_t . \qquad (47)$$

In order for a self-financed portfolio $\Delta$ with value process $X$ to be a perfect hedge for $(v_t)_{t\in[0,T]}$, the following hedging condition has to be satisfied:
$$X_t = v_t , \qquad t\in[0,T] . \qquad (48)$$
This imposes a strong restriction on the joint dynamics (46), (47), which have to coincide, implying:
$$\Delta_t\,\sigma S_t = \partial_S v\,\sigma S_t ,$$
$$rX_t + \Delta_t(\mu - r)S_t = \partial_t v + \mu S_t\,\partial_S v + \frac{\sigma^2 S_t^2}{2}\partial^2_{SS} v .$$
Therefore we get, from the first of these two equations,
$$\Delta_t = \partial_S v_t ,$$
i.e. the delta of the portfolio. Inserting the delta into the second equation, together with the perfect hedging condition (48), finally gives the partial differential equation (PDE)
$$\partial_t v + rS\,\partial_S v + \frac{\sigma^2 S^2}{2}\partial^2_{SS} v = r v , \qquad (49)$$
for the function $v(t,S)$, subject to the boundary condition
$$v(T,S) = g(S) , \qquad S > 0 .$$

This is the Black-Scholes partial differential equation for the price of a European derivative. Solving this equation for the case
$$g(S) = (S-K)^+ ,$$
gives the Black-Scholes call pricing formula $v^{BS}(t,S)$ in Proposition 93. Computing $\partial_S v^{BS}$ based on the formula in Proposition 93 gives the delta of the call as
$$\Delta_t = \partial_S v^{BS}(t,S_t) = N(d_{1t}) ,$$
where
$$d_{1t} = \left.\frac{\log\left(\frac{S}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)(T-t)}{\sigma\sqrt{T-t}}\right|_{S=S_t} .$$
We remark that since $|N(d_{1t})|\leq 1$ one has $\left(N(d_{1t})S_t,\mathcal{G}_t\right)_{t\in[0,T]}\in\mathcal{H}$, as initially assumed.
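For concreteness, here is a small Python sketch (ours) of the delta $N(d_{1t})$ together with the standard Black-Scholes call price; the price formula is the one referred to as Proposition 93 in the notes (not reproduced in this section), and all parameter values below are illustrative assumptions:

from math import exp, log, sqrt
from statistics import NormalDist

N = NormalDist().cdf   # standard normal cumulative distribution function

def bs_call(S, K, r, sigma, tau):
    """Black-Scholes call price and delta, with tau = T - t the time to maturity."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    price = S * N(d1) - K * exp(-r * tau) * N(d2)
    delta = N(d1)                    # the hedge ratio Delta_t = N(d_1t) derived above
    return price, delta

price, delta = bs_call(S=100.0, K=100.0, r=0.03, sigma=0.2, tau=1.0)
print("call price:", price, "  delta:", delta)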

5.4.3 Probabilistic Interpretation of Black-Scholes Prices: Girsanov Theorem once more

It is important to remark that the fundamental PDE (49) does not depend on the expected return parameter$^7$ $\mu$. This gives us the possibility to provide a probabilistic interpretation of the Black and Scholes formula, which can be written as a discounted conditional expectation under a risk neutral martingale measure in the model. In other words, this allows us to give a probabilistic interpretation of pricing functions that are solutions of specific PDEs.

$^7$ Therefore, the pricing of a derivative does not depend on the market expectations on the risky asset returns.

To highlight this point, rewrite the stock price dynamics as
$$dS_t = \mu S_t\,dt + \sigma S_t\,dZ_t = r S_t\,dt + \sigma S_t\left(dZ_t + \theta\,dt\right) = r S_t\,dt + \sigma S_t\,d\tilde{Z}_t ,$$
where $\tilde{Z}_t = Z_t + \theta t$ and $\theta = (\mu - r)/\sigma$ is the market price of risk. Notice that if we could find an equivalent probability measure $\tilde{P}$ such that the process $(\tilde{Z}_t,\mathcal{G}_t)_{t\in[0,T]}$ is a Brownian motion under $\tilde{P}$, then we would have that under $\tilde{P}$ the stock price process is a geometric Brownian motion with dynamics
$$dS_t = r S_t\,dt + \sigma S_t\,d\tilde{Z}_t . \qquad (50)$$

By replicating all arguments of the above section using the $\tilde{P}$-dynamics (50) with drift $r$, we would then obtain again precisely the PDE (49) for the price function $v(t,S)$ of a European derivative. Therefore, changing the measure in this way does not alter the functional form of the pricing formula $v(t,S)$.

As usual, the desired change of probability measure is provided by a version of Girsanov's Theorem. We give a version of this theorem for the present setting, which is an immediate consequence of the proofs developed in the semicontinuous model setting.

Corollary 117 In the Black and Scholes model with stock price dynamics (45), a risk neutral martingale measure $\tilde{P}$ on $(\Omega,\mathcal{G}_T)$ is obtained by setting, for any $A\in\mathcal{G}_T$,
$$\tilde{P}(A) := \int_A \exp\left(-\theta Z_T - \frac{\theta^2 T}{2}\right) dP ,$$
where
$$\theta = \frac{\mu - r}{\sigma} ,$$
is the market price of risk in the model.

Proof. The proof follows the same arguments as those for the version of Girsanov's Theorem provided in the semicontinuous setting.

The key feature of risk neutral probabilities is that discounted prices of self-financed portfolios (and thus also discounted prices of hedge portfolios) are martingales. This allows us to write today's price function of a derivative as the discounted risk neutral expectation of its terminal pay-off.

Specifically, let $X_t$ be the $t$-time value of a self-financed portfolio in the Black-Scholes model and define the discounted portfolio value $\tilde{X}_t := X_t/B_t = X_t\exp(-rt)$. By Ito's Lemma we then have under $\tilde{P}$ (cf. also (46)):
$$d\tilde{X}_t = X_t\,\frac{d}{dt}\left(\exp(-rt)\right) dt + \frac{1}{B_t}\,dX_t + \frac{1}{B_t}\cdot 0 = -\frac{X_t}{B_t}\,r\,dt + \frac{1}{B_t}\left(rX_t\,dt + \Delta_t\,\sigma S_t\,d\tilde{Z}_t\right) = \frac{\Delta_t S_t}{B_t}\,\sigma\,d\tilde{Z}_t ,$$
that is, $(\tilde{X}_t,\mathcal{G}_t)_{t\in[0,T]}$ is a martingale under $\tilde{P}$, provided that $\Delta\cdot S\in\mathcal{H}$. For the hedge portfolio
$$\Delta_t = \partial_S v(t,S_t) ,$$

v (0, S0) =v (0, S0)

B0=

X0

B0= E

(XT

BT

∣∣∣∣G0

)= E

(v (T, ST )

BT

)=

1BT

E (g (ST )| G0) , (51)

which writes v (0, S0) as a discounted expectation of the terminal pay-off g (ST ), conditional on

the initial condition S = S0.
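The representation (51) also suggests a direct Monte Carlo valuation. The following Python sketch (ours, not from the notes; sample size and parameters are illustrative assumptions) simulates $S_T$ under the risk neutral dynamics (50) and discounts the average pay-off; the result can be compared with the Black-Scholes call price:

import numpy as np

rng = np.random.default_rng(7)
S0, K, r, sigma, T, n_paths = 100.0, 100.0, 0.03, 0.2, 1.0, 500_000

ZT = rng.normal(0.0, np.sqrt(T), n_paths)                    # terminal value of the Brownian motion under the risk neutral measure
ST = S0 * np.exp(sigma * ZT + (r - 0.5 * sigma ** 2) * T)    # risk neutral dynamics (50)
payoff = np.maximum(ST - K, 0.0)                             # g(S) = (S - K)^+
print("Monte Carlo price v(0, S0) ~", np.exp(-r * T) * payoff.mean())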

For the case $g(S) = (S-K)^+$ we computed this expectation in the semicontinuous model, providing the Black and Scholes call price formula. Moreover, this formula is at the same time the solution of the PDE (49), to which we already gave the probabilistic interpretation (51). Finally, to compute the hedging strategy in the Black and Scholes model we just have to compute the derivative
$$\left.\frac{\partial v(t,S)}{\partial S}\right|_{S=S_t} = \frac{B_t}{B_T}\cdot\frac{\partial\,\tilde{E}\left(\left.g(S_T)\right|S_t = S\right)}{\partial S} .$$
In the call option case, this gives after some algebra
$$\left.\frac{\partial v(t,S)}{\partial S}\right|_{S=S_t} = N(d_{1t}) ,$$
using the explicit expression for
$$\frac{1}{B_T}\,\tilde{E}\left(\left.(S_T - K)^+\right|\mathcal{G}_0\right)$$
obtained in Proposition 93.
